Adversarial machine learning in computer vision: attacks and defenses on machine learning models

dc.contributor.advisor: Yue, Chuan
dc.contributor.author: Qin, Yi
dc.contributor.committeemember: Camp, Tracy
dc.contributor.committeemember: Han, Qi
dc.contributor.committeemember: Belviranli, Mehmet E.
dc.contributor.committeemember: Mohagheghi, Salman
dc.date.accessioned: 2021-06-28T10:14:15Z
dc.date.accessioned: 2022-02-03T13:23:53Z
dc.date.available: 2021-06-28T10:14:15Z
dc.date.available: 2022-02-03T13:23:53Z
dc.date.issued: 2021
dc.description: Includes bibliographical references.
dc.description: 2021 Spring.
dc.description.abstract: Machine learning models, including neural networks, have gained great popularity in recent years. Deep neural networks can learn directly from raw data and can outperform traditional machine learning models. As a result, they have been increasingly used in a variety of application domains such as image classification, natural language processing, and malware detection. However, deep neural networks have been shown to be vulnerable to adversarial examples at test time. Adversarial examples are malicious inputs generated from legitimate inputs by adding small perturbations in order to fool machine learning models into misclassifying them. This thesis mainly aims to answer two research questions: How are machine learning models vulnerable to adversarial examples? How can we better defend against adversarial examples? We first improve the effectiveness of adversarial training by designing an experimental framework to study Method-Based Ensemble Adversarial Training (MBEAT) and the Round Gap of Adversarial Training (RGOAT). We then demonstrate the strong distinguishability of adversarial examples and design a simple yet effective approach called defensive distinction, formulated as a multi-label classification problem, to protect against adversarial examples. We also propose fuzzing-based hard-label black-box attacks against machine learning models: we design an AdvFuzzer to explore multiple paths between a source image and a guidance image, and a LocalFuzzer to explore the space around a given input to identify potential adversarial examples. Lastly, we propose a key-based input transformation defense against adversarial examples.
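
The abstract describes adversarial examples as legitimate inputs plus small crafted perturbations. As a minimal illustration of that general idea (not a method from the thesis), the following sketch implements the well-known fast gradient sign method (FGSM) in PyTorch; the model, input batch, and perturbation budget are placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_example(model, x, label, eps=0.03):
        """Craft an adversarial example with one signed-gradient step.

        model: a classifier returning logits; x: an input batch in [0, 1];
        label: the true class indices; eps: the perturbation budget.
        """
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        # Perturb each pixel by eps in the direction that increases the loss.
        x_adv = x + eps * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()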
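The thesis's AdvFuzzer and LocalFuzzer are only named at a high level here, so the sketch below shows just the generic shape of a hard-label black-box query loop, assuming the actual fuzzing strategies in the thesis differ: sample random perturbations around an input, query the model for its predicted label only, and keep any neighbor that flips the prediction. The query_label oracle is a hypothetical stand-in for a deployed model's prediction API.

    import numpy as np

    def local_fuzz(query_label, x, n_trials=1000, radius=0.05, seed=0):
        """Randomly explore the neighborhood of x for label flips.

        query_label: hard-label oracle mapping an input array to a class id.
        Returns the neighbors whose predicted label differs from that of x.
        """
        rng = np.random.default_rng(seed)
        base = query_label(x)
        found = []
        for _ in range(n_trials):
            # Sample a small uniform perturbation within the given radius.
            candidate = np.clip(x + rng.uniform(-radius, radius, x.shape), 0.0, 1.0)
            if query_label(candidate) != base:
                found.append(candidate)  # a potential adversarial example
        return found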
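The key-based input transformation defense is likewise only named, not specified. One plausible instantiation, offered purely as a sketch and not as the thesis's construction, is a secret-keyed pixel permutation applied before classification, so that an attacker who does not know the key optimizes perturbations against the wrong input representation.

    import numpy as np

    def keyed_permutation(x, key):
        """Permute pixel positions using a permutation derived from a secret key.

        x: an image array of shape (H, W, C); key: an integer secret.
        A model trained on transformed inputs is served keyed_permutation(x, key);
        gradients computed on the raw input are misaligned without the key.
        """
        rng = np.random.default_rng(key)  # the key seeds the permutation
        h, w, c = x.shape
        perm = rng.permutation(h * w)
        flat = x.reshape(h * w, c)
        return flat[perm].reshape(h, w, c)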
dc.format.medium: born digital
dc.format.medium: doctoral dissertations
dc.identifier: Qin_mines_0052E_12151.pdf
dc.identifier: T 9114
dc.identifier.uri: https://hdl.handle.net/11124/176450
dc.language: English
dc.language.iso: eng
dc.publisher: Colorado School of Mines. Arthur Lakes Library
dc.relation.ispartof: 2021 - Mines Theses & Dissertations
dc.rights: Copyright of the original work is retained by the author.
dc.title: Adversarial machine learning in computer vision: attacks and defenses on machine learning models
dc.type: Text
dspace.entity.type: Publication
thesis.degree.discipline: Computer Science
thesis.degree.grantor: Colorado School of Mines
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy (Ph.D.)
Files
Original bundle
Name: Qin_mines_0052E_12151.pdf
Size: 7.67 MB
Format: Adobe Portable Document Format