Adversarial Training in Multi-Exit Networks: Proposed NEO-KD Algorithm and Problem Setup

Written by textmodels | Published 2024/09/30
Tech Story Tags: neural-networks | multi-exit-neural-networks | neural-network-security | adversarial-robustness | neo-kd | knowledge-distillation | neural-network-robustness | adversarial-test-accuracy

TL;DR: This section outlines adversarial training for multi-exit networks, focusing on three attack methods: the single attack, the max-average attack, and the average attack. The generated adversarial examples target different submodels, but correlation among the submodels can increase adversarial transferability, which is a central challenge for multi-exit network robustness.
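As a rough illustration of the "average attack" idea described above, the sketch below perturbs an input along the gradient of the average loss across all exits, so a single adversarial example targets every submodel at once. This is a minimal FGSM-style sketch on a toy model with linear exits, not the paper's actual attack implementation; the function name and the squared-error loss are illustrative assumptions.

```python
import numpy as np

def average_attack_fgsm(x, y, exit_weights, eps=0.1):
    """FGSM-style sketch of an 'average attack' on a toy multi-exit model.

    Each exit k is modeled as a linear scorer w_k; the perturbation follows
    the sign of the gradient of the AVERAGE squared-error loss over all
    exits, so the single adversarial example degrades every submodel.
    This is an illustrative sketch, not the paper's implementation.
    """
    grad = np.zeros_like(x)
    for w in exit_weights:
        score = float(w @ x)
        # gradient of (score - y)^2 with respect to x is 2*(score - y)*w
        grad += 2.0 * (score - y) * w
    grad /= len(exit_weights)          # average over exits
    return x + eps * np.sign(grad)     # FGSM step within an L-infinity ball
```

Because the gradient is averaged over all exits before the sign step, the resulting example tends to transfer across submodels, which is exactly the transferability issue the section highlights.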

Authors:

(1) Seokil Ham, KAIST;

(2) Jungwuk Park, KAIST;

(3) Dong-Jun Han, Purdue University;

(4) Jaekyun Moon, KAIST.

Table of Links

Abstract and 1. Introduction

2. Related Works

3. Proposed NEO-KD Algorithm and 3.1 Problem Setup: Adversarial Training in Multi-Exit Networks

3.2 Algorithm Description

4. Experiments and 4.1 Experimental Setup

4.2. Main Experimental Results

4.3. Ablation Studies and Discussions

5. Conclusion, Acknowledgement and References

A. Experiment Details

B. Clean Test Accuracy and C. Adversarial Training via Average Attack

D. Hyperparameter Tuning

E. Discussions on Performance Degradation at Later Exits

F. Comparison with Recent Defense Methods for Single-Exit Networks

G. Comparison with SKD and ARD and H. Implementations of Stronger Attacker Algorithms

3 Proposed NEO-KD Algorithm

3.1 Problem Setup: Adversarial Training in Multi-Exit Networks

This paper is available on arXiv under a CC 4.0 license.


Published by HackerNoon on 2024/09/30