Distributed Adversarial Training to Robustify Deep Neural Networks at Scale

Gaoyuan Zhang, Songtao Lu, Yihua Zhang, Xiangyi Chen, Pin-Yu Chen, Quanfu Fan, Lee Martie, Lior Horesh, Mingyi Hong, Sijia Liu

Research output: Contribution to journal › Conference article › peer-review

Abstract

Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can alter or manipulate the classification output. To defend against such attacks, an effective and popular approach, known as adversarial training (AT), mitigates their negative impact through a min-max robust training method. While effective, it remains unclear whether AT can successfully be adapted to the distributed learning setting. The power of distributed optimization over multiple machines enables us to scale up robust training over large models and datasets. Spurred by this, we propose distributed adversarial training (DAT), a large-batch adversarial training framework implemented over multiple machines. We show that DAT is general, supporting training over labeled and unlabeled data, multiple types of attack generation methods, and the gradient compression operations favored in distributed optimization. Theoretically, we provide, under standard conditions from optimization theory, the convergence rate of DAT to first-order stationary points in general non-convex settings. Empirically, we demonstrate that DAT either matches or outperforms state-of-the-art robust accuracies and achieves a graceful training speedup (e.g., on ResNet-50 under ImageNet). Code is available at https://github.com/dat-2022/dat.
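
As a concrete illustration of the min-max training loop described in the abstract, the following is a minimal sketch of a single adversarial training step in PyTorch. It is not the authors' DAT implementation (that is available at https://github.com/dat-2022/dat): the PGD attack, the toy model, the hyperparameters (eps, alpha, steps), and the helper names pgd_attack and adv_train_step are illustrative assumptions, and the distributed, large-batch, and gradient-compression aspects of DAT are only indicated in comments.

import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Inner maximization: find an l_inf-bounded perturbation of x that
    # maximizes the classification loss (clamping to the valid pixel
    # range is omitted for brevity).
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).detach()

def adv_train_step(model, optimizer, x, y):
    # Outer minimization: update the model weights on adversarial examples.
    model.eval()                      # freeze batch-norm/dropout behavior while attacking
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # In a distributed setting such as DAT, each worker would compute this
    # gradient on its own data shard, and the (optionally compressed) gradients
    # would be averaged across machines, e.g. via DistributedDataParallel or
    # torch.distributed.all_reduce, before the optimizer step.
    optimizer.step()
    return loss.item()

# Toy usage with synthetic data (illustrative only).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
print(adv_train_step(model, optimizer, x, y))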

Original language: English (US)
Pages (from-to): 2353-2363
Number of pages: 11
Journal: Proceedings of Machine Learning Research
Volume: 180
State: Published - 2022
Event: 38th Conference on Uncertainty in Artificial Intelligence, UAI 2022 - Eindhoven, Netherlands
Duration: Aug 1, 2022 – Aug 5, 2022

Bibliographical note

Publisher Copyright:
© 2022 UAI. All Rights Reserved.
