On adversarial robustness of deep learning systems/ Akshay Chaturvedi
Material type: Thesis
- Call number: 006.310727 C495 (DDC 23)
- Guided by Prof. Utpal Garain
| Item type | Current library | Call number | Status | Notes | Barcode |
|---|---|---|---|---|---|
| THESIS | ISI Library, Kolkata | 006.310727 C495 | Available | E-Thesis | TH518 |
Browsing ISI Library, Kolkata shelves:
- 006.3101515 Si593 Mathematical analysis for machine learning and data mining/
- 006.31015195 Ar658 A computational approach to statistical learning/
- 006.3101570 R178 Deep learning for the life sciences: applying deep learning to genomics, microscopy, drug discovery and more/
- 006.310727 C495 On adversarial robustness of deep learning systems/
- 006.312 Ad234 Data analysis and pattern recognition in multiple databases/
- 006.312 Ag266 Data mining:
- 006.312 Ah285 Practical guide to data mining for business and industry/
Includes bibliographical references
1 Introduction -- 2 Attacking VQA systems via adversarial background noise -- 3 Mimic and fool: a task-agnostic adversarial attack -- 4 Exploring the robustness of NMT systems to non-sensical inputs -- 5 Ignorance is bliss: Exploring defenses against invariance based attacks on NMT systems -- 6 Generalizability of bruteforce attack: A case-study on TQA and SciQ dataset -- 7 Conclusion
In the past decade, deep learning has become ubiquitous across diverse fields such as natural language processing (NLP), computer vision, and speech processing. Despite achieving state-of-the-art performance, there are ongoing concerns regarding the robustness and explainability of deep-learning systems. These concerns have further gained traction due to the presence of adversarial examples, which make such systems behave in an undesirable fashion. To this end, this thesis explores several adversarial attacks and defenses for deep-learning based vision and NLP systems. For vision and vision-and-language systems, the following two problems are studied in this thesis: (i) Robustness of visual question answering (VQA) systems: We study the robustness of VQA systems to adversarial background noise. The results show that, by adding minimal background noise, such systems can be easily fooled into predicting answers belonging either to the same category as the original answer or to a different one. (ii) Task-agnostic adversarial attack for vision systems: We propose a task-agnostic adversarial attack named Mimic and Fool and show its effectiveness against vision systems designed for different tasks such as image classification, image captioning, and VQA. While the attack relies on the information loss that occurs in a convolutional neural network, we show that invertible architectures such as i-RevNet are also vulnerable to the proposed attack. For NLP systems, the following three problems are studied in this thesis: (i) Invariance-based attack against neural machine translation (NMT) systems: We explore the robustness of NMT systems to non-sensical inputs obtained via an invariance-based attack. Unlike previous adversarial attacks against NMT systems, which make minimal changes to the source sentence in order to change the predicted translation, the invariance-based attack makes multiple changes to the source sentence with the goal of keeping the predicted translation unchanged.
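The idea of adversarial background noise can be illustrated with a minimal sketch: an FGSM-style sign-of-gradient step restricted by a background mask, so that only background pixels are perturbed. This is an illustrative stand-in, not the thesis's actual attack; the function name, the toy image, and the use of FGSM here are all assumptions for the example.

```python
import numpy as np

def background_noise_attack(image, background_mask, grad, epsilon=0.03):
    """Apply an FGSM-style perturbation only to background pixels.

    image: float array with values in [0, 1]
    background_mask: boolean array, True where a pixel is background
    grad: gradient of the model's loss w.r.t. the image (toy stand-in here)
    """
    # Sign-of-gradient step, zeroed out wherever the mask is False
    perturbation = epsilon * np.sign(grad) * background_mask
    return np.clip(image + perturbation, 0.0, 1.0)

# Toy example: 4x4 grayscale image whose top half is treated as background
image = np.full((4, 4), 0.5)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :] = True
grad = np.ones((4, 4))  # stand-in for a real loss gradient

adv = background_noise_attack(image, mask, grad)
```

In this sketch the foreground rows are left exactly unchanged, which is what makes the perturbation "background noise" rather than a generic adversarial perturbation.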
(ii) Defense against invariance-based attack: The non-sensical inputs obtained via the invariance-based attack do not have a ground-truth translation. This makes standard adversarial training infeasible as a defense strategy. In this context, we explore several defense strategies to counteract the invariance-based attack. (iii) Robustness of multiple choice question-answering (MCQ) systems and intervention-based study: We explore the robustness of MCQ systems against the invariance-based attack. Furthermore, we also study the generalizability of MCQ systems to different types of interventions on the input paragraph.
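The core loop of an invariance-based attack can be sketched as a greedy search: try substitutions at each source position and keep a change only if the system's output stays exactly the same. Everything below is a toy illustration; the thesis attacks a real NMT model, whereas here `toy_translate` is just a word lookup, and the function and variable names are hypothetical.

```python
def invariance_attack(sentence, translate, candidate_words, max_changes=3):
    """Greedily replace source words while keeping the translation fixed.

    translate: callable mapping a source string to its translation
    candidate_words: replacement vocabulary to try at each position
    """
    target = translate(sentence)
    tokens = sentence.split()
    changes = 0
    for i in range(len(tokens)):
        if changes >= max_changes:
            break
        original = tokens[i]
        for word in candidate_words:
            tokens[i] = word
            # Keep the substitution only if the output is unchanged
            if translate(" ".join(tokens)) == target:
                changes += 1
                break
        else:
            tokens[i] = original  # no candidate preserved the translation
    return " ".join(tokens), target

# Toy "NMT system": word-by-word lookup; "cat" and "feline" translate alike
lexicon = {"the": "le", "cat": "chat", "feline": "chat", "sat": "assis"}
def toy_translate(s):
    return " ".join(lexicon.get(w, "UNK") for w in s.split())

adv_sentence, fixed_translation = invariance_attack(
    "the cat sat", toy_translate, ["feline", "hound"]
)
```

Note the asymmetry the abstract points out: the modified source may drift far from a sensible sentence, yet since the translation never changes, there is no ground-truth target against which to adversarially train, which is what motivates the alternative defenses studied in chapter 5.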