    Adversarial machine learning in computer vision: attacks and defenses on machine learning models

    Name:
    Qin_mines_0052E_12151.pdf
    Size:
    7.67 MB
    Format:
    PDF
    Author
    Qin, Yi
    Advisor
    Yue, Chuan
    Date issued
    2021
    
    URI
    https://hdl.handle.net/11124/176450
    Abstract
    Machine learning models, including neural networks, have gained great popularity in recent years. Deep neural networks can learn directly from raw data and can outperform traditional machine learning models. As a result, they have been increasingly used in a variety of application domains such as image classification, natural language processing, and malware detection. However, deep neural networks have been shown to be vulnerable to adversarial examples at test time. Adversarial examples are malicious inputs generated from legitimate inputs by adding small perturbations in order to fool machine learning models into misclassifying them. We mainly aim to answer two research questions in this thesis: How are machine learning models vulnerable to adversarial examples? How can we better defend against adversarial examples? We first improve the effectiveness of adversarial training by designing an experimental framework to study Method-Based Ensemble Adversarial Training (MBEAT) and Round Gap Of Adversarial Training (RGOAT). We then demonstrate the strong distinguishability of adversarial examples and design a simple yet effective approach called defensive distinction, formulated as multi-label classification, to protect against adversarial examples. We also propose fuzzing-based hard-label black-box attacks against machine learning models. We design an AdvFuzzer to explore multiple paths between a source image and a guidance image, and a LocalFuzzer to explore the nearby space around a given input to identify potential adversarial examples. Lastly, we propose a key-based input transformation defense against adversarial examples.
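
    To illustrate the kind of perturbation-based attack the abstract describes, the sketch below crafts an adversarial example with the standard fast gradient sign method (FGSM). It is a generic illustration under assumed PyTorch conventions, not the MBEAT, defensive distinction, AdvFuzzer/LocalFuzzer, or key-based transformation methods proposed in the thesis; the model, x, label, and epsilon names are hypothetical placeholders.

    # Illustrative sketch only (assumed PyTorch API); generic FGSM, not the thesis's own attacks or defenses.
    import torch
    import torch.nn.functional as F

    def make_adversarial_example(model, x, label, epsilon=0.03):
        """Add a small sign-of-gradient perturbation to a legitimate input to induce misclassification."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)    # loss of the classifier on the legitimate input
        loss.backward()                            # gradient of the loss w.r.t. the input pixels
        x_adv = x + epsilon * x.grad.sign()        # small perturbation that increases the loss
        return x_adv.clamp(0.0, 1.0).detach()      # keep pixels in the valid [0, 1] image range

    In practice, the crafted x_adv would be fed back to the classifier to check whether the small perturbation flips its prediction, which is the behavior the defenses summarized above aim to prevent.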
    Rights
    Copyright of the original work is retained by the author.
    Collections
    2021 - Mines Theses & Dissertations
