
dc.contributor.advisor: Yue, Chuan
dc.contributor.author: Pei, Weiping
dc.date.accessioned: 2022-11-18T20:17:08Z
dc.date.available: 2022-11-18T20:17:08Z
dc.date.issued: 2022
dc.identifier: Pei_mines_0052E_12462.pdf
dc.identifier: T 9405
dc.identifier.uri: https://hdl.handle.net/11124/15510
dc.description: Includes bibliographical references.
dc.description: 2022 Summer.
dc.description.abstract: Crowdsourcing has emerged as an efficient way to solve problems that require the wisdom of human beings. With their fast, low-cost, and flexible nature, crowdsourcing systems have been widely used by researchers in a variety of disciplines for data collection. However, due to the vulnerabilities of existing crowdsourcing systems and the prevalence of malicious workers and attackers, issues of security, privacy, and trust in crowdsourcing systems constantly arise. Attacks on crowdsourcing have caused substantial damage not only to crowdsourcing systems but also to other systems and applications that rely on data collected from them. Therefore, building secure, privacy-preserving, and trustworthy crowdsourcing systems is of great importance. This dissertation concentrates on enhancing security, privacy, and trust in crowdsourcing systems by investigating potential vulnerabilities and risks and by exploring defense approaches to prevent attacks. We first investigated vulnerabilities of a widely used quality control mechanism, the attention check mechanism. Specifically, we proposed and designed an attack framework named AC-EasyPass that can automatically evade attention check questions. Then, we addressed the limitations of another popular quality control mechanism, coarse-grained behavioral analysis. We proposed and implemented a framework named Fine-grained Behavior-based Quality Control (FBQC) that extracts fine-grained behavioral features to provide three quality control mechanisms: quality prediction for objective tasks, suspicious behavior detection for subjective tasks, and unsupervised worker categorization. Next, we investigated a new type of adversarial attack, named content-preserving and semantics-flipping (CPSF) adversarial attacks, against natural language processing (NLP) models, and developed a two-stage approach to generate CPSF adversarial examples. Lastly, we investigated the privacy risks of third-party app users in crowdsourcing. We considered the case of receipt scanning apps and focused on the corresponding receipt transcription tasks in the crowdsourcing system. We designed and conducted an app user study to explore how app users perceive privacy while using receipt scanning apps, and a crowd worker study to investigate crowd workers' experiences with receipt transcription tasks and their attitudes towards such tasks.
dc.format.medium: born digital
dc.format.medium: doctoral dissertations
dc.language: English
dc.language.iso: eng
dc.publisher: Colorado School of Mines. Arthur Lakes Library
dc.relation.ispartof: 2022 - Mines Theses & Dissertations
dc.rights: Copyright of the original work is retained by the author.
dc.subject: adversarial attack
dc.subject: crowdsourcing
dc.subject: machine learning
dc.subject: quality control
dc.subject: security and privacy
dc.title: Security, privacy, and trust in crowdsourcing systems
dc.type: Text
dc.date.updated: 2022-11-05T04:09:37Z
dc.contributor.committeemember: Wang, Hua
dc.contributor.committeemember: Wu, Bo
dc.contributor.committeemember: Camp, Tracy
dc.contributor.committeemember: Miller, Hugh B.
dcterms.embargo.expires: 2023-11-04
thesis.degree.name: Doctor of Philosophy (Ph.D.)
thesis.degree.level: Doctoral
thesis.degree.discipline: Computer Science
thesis.degree.grantor: Colorado School of Mines
dc.rights.access: Embargo Expires: 11/04/2023


Files in this item

Name: Pei_mines_0052E_12462.pdf
Size: 9.640 MB
Format: PDF
