Security, privacy, and trust in crowdsourcing systems
dc.contributor.advisor | Yue, Chuan | |
dc.contributor.author | Pei, Weiping | |
dc.date.accessioned | 2022-11-18T20:17:08Z | |
dc.date.available | 2022-11-18T20:17:08Z | |
dc.date.issued | 2022 | |
dc.identifier | Pei_mines_0052E_12462.pdf | |
dc.identifier | T 9405 | |
dc.identifier.uri | https://hdl.handle.net/11124/15510 | |
dc.description | Includes bibliographical references. | |
dc.description | 2022 Summer. | |
dc.description.abstract | Crowdsourcing has emerged as an efficient way to solve problems that require human wisdom. With their fast, low-cost, and flexible nature, crowdsourcing systems have been widely used by researchers across disciplines for data collection. However, due to vulnerabilities in existing crowdsourcing systems and the prevalence of malicious workers and attackers, security, privacy, and trust issues constantly arise in these systems. Attacks on crowdsourcing have caused substantial damage not only to crowdsourcing systems themselves but also to other systems and applications that rely on data collected from them. Therefore, building secure, privacy-preserving, and trustworthy crowdsourcing systems is of great need and significance. This dissertation concentrates on enhancing security, privacy, and trust in crowdsourcing systems by investigating potential vulnerabilities and risks and exploring defense approaches to prevent attacks. We first investigated vulnerabilities of a widely used quality control mechanism, the attention check mechanism. Specifically, we proposed and designed an attack framework named AC-EasyPass that can automatically evade attention check questions. Then, we focused on addressing limitations of another popular quality control mechanism, coarse-grained behavioral analysis. We proposed and implemented a framework named Fine-grained Behavior-based Quality Control (FBQC) that extracts fine-grained behavioral features to provide three quality control mechanisms: quality prediction for objective tasks, suspicious behavior detection for subjective tasks, and unsupervised worker categorization. Next, we investigated a new type of adversarial attack, the content-preserving and semantics-flipping (CPSF) adversarial attack, against natural language processing (NLP) models, and developed a two-stage approach to generate CPSF adversarial examples. Lastly, we investigated the privacy risks of third-party app users in crowdsourcing. We considered the case of receipt scanning apps and focused on the corresponding receipt transcription tasks in the crowdsourcing system. We designed and conducted an app user study to explore how app users perceive privacy while using receipt scanning apps, and a crowd worker study to investigate crowd workers' experiences with receipt transcription tasks and their attitudes towards such tasks. | |
dc.format.medium | born digital | |
dc.format.medium | doctoral dissertations | |
dc.language | English | |
dc.language.iso | eng | |
dc.publisher | Colorado School of Mines. Arthur Lakes Library | |
dc.relation.ispartof | 2022 - Mines Theses & Dissertations | |
dc.rights | Copyright of the original work is retained by the author. | |
dc.subject | adversarial attack | |
dc.subject | crowdsourcing | |
dc.subject | machine learning | |
dc.subject | quality control | |
dc.subject | security and privacy | |
dc.title | Security, privacy, and trust in crowdsourcing systems | |
dc.type | Text | |
dc.date.updated | 2022-11-05T04:09:37Z | |
dc.contributor.committeemember | Wang, Hua | |
dc.contributor.committeemember | Wu, Bo | |
dc.contributor.committeemember | Camp, Tracy | |
dc.contributor.committeemember | Miller, Hugh B. | |
dcterms.embargo.expires | 2023-11-04 | |
thesis.degree.name | Doctor of Philosophy (Ph.D.) | |
thesis.degree.level | Doctoral | |
thesis.degree.discipline | Computer Science | |
thesis.degree.grantor | Colorado School of Mines | |
dc.rights.access | Embargo Expires: 11/04/2023 |