Deep Learning Architectures for Robust Classification Under Adversarial Noise
Technical Report, 15 Feb 2018 – 15 Feb 2019
Harvard University, Cambridge, United States
This report addresses the problem of designing classifiers that are robust to images distorted by noise. The approach taken was robust optimization, in which the goal is to optimize for the worst case over a class of objective functions. A theoretical framework with strong guarantees was developed. In particular, it was shown that, given a classifier with alpha accuracy over a finite set of attacks, one can construct a robust classifier that is arbitrarily close to an alpha-approximation of the optimal robust classifier. These results were applied to robust neural network training, and the approach was evaluated experimentally on corrupted character classification.
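The worst-case objective over a finite attack class can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the report's actual method: it trains a linear classifier on synthetic data, and at each step replaces every example with the corruption (from a small hypothetical attack set) that maximizes its current loss, then descends on that worst case.

```python
import numpy as np

# Toy sketch of min-max robust training over a finite attack class.
# All data, attacks, and hyperparameters here are illustrative assumptions.

rng = np.random.default_rng(0)

def logistic_loss(w, X, y):
    # Per-example logistic loss for labels y in {-1, +1}.
    return np.log1p(np.exp(-y * (X @ w)))

def grad_logistic(w, X, y):
    # Per-example gradient of the logistic loss w.r.t. w.
    s = -y / (1.0 + np.exp(y * (X @ w)))
    return X * s[:, None]

# A hypothetical finite class of corruptions ("attacks"):
attacks = [
    lambda X: X,          # clean input
    lambda X: X + 0.3,    # additive shift
    lambda X: X * 0.5,    # scaling corruption
]

# Synthetic linearly separable data.
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = np.sign(X @ w_true)

w = np.zeros(5)
lr = 0.1
for _ in range(200):
    # Inner maximization: per example, pick the attack with the largest loss.
    losses = np.stack([logistic_loss(w, a(X), y) for a in attacks])
    worst = losses.argmax(axis=0)
    X_adv = np.stack([attacks[k](X[i]) for i, k in enumerate(worst)])
    # Outer minimization: gradient step on the worst-case inputs.
    w -= lr * grad_logistic(w, X_adv, y).mean(axis=0)

# Robust accuracy: fraction of examples classified correctly under every attack.
preds = np.stack([np.sign(a(X) @ w) for a in attacks])
robust_acc = float(np.all(preds == y, axis=0).mean())
```

This mirrors the structure of the report's objective (optimize the worst case over a class of objectives) at toy scale; the report's guarantees concern neural network classifiers and a principled construction from an alpha-accurate base classifier, which this sketch does not implement.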