AI in Computer Division

A paper by Dan Hendrycks (UC Berkeley), Kimin Lee (KAIST EE), and Mantas Mazeika (University of Chicago) has been accepted at the 36th International Conference on Machine Learning (ICML 2019).

Title: Using Pre-Training Can Improve Model Robustness and Uncertainty

Authors: Dan Hendrycks (UC Berkeley), Kimin Lee (KAIST EE), Mantas Mazeika (University of Chicago)

Abstract: He et al. (2018) have called into question the utility of pre-training by showing that training from scratch can often yield similar performance to pre-training. We show that although pre-training may not improve performance on traditional classification metrics, it improves model robustness and uncertainty estimates. Through extensive experiments on label corruption, class imbalance, adversarial examples, out-of-distribution detection, and confidence calibration, we demonstrate large gains from pre-training and complementary effects with task-specific methods. We show approximately a 10% absolute improvement over the previous state-of-the-art in adversarial robustness. In some cases, using pre-training without task-specific methods also surpasses the state-of-the-art, highlighting the need for pre-training when evaluating future methods on robustness and uncertainty tasks.
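As a rough illustration of the pre-training setup compared above, the following is a minimal PyTorch sketch, not the authors' code: a classifier is initialized from ImageNet pre-trained weights and fine-tuned on the downstream task, rather than trained from scratch. The backbone choice and class count below are assumptions made for illustration.

import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # hypothetical downstream task, e.g. a 10-class image dataset

def build_model(use_pretraining: bool) -> nn.Module:
    # ResNet-50 backbone, either pre-trained on ImageNet or randomly initialized.
    model = models.resnet50(pretrained=use_pretraining)
    # Swap the final classification layer to match the downstream label set,
    # then fine-tune the whole network on the downstream data.
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
    return model

scratch_model = build_model(use_pretraining=False)   # baseline: train from scratch
finetune_model = build_model(use_pretraining=True)   # pre-train, then fine-tune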

Figure 1. Training for longer is not a suitable strategy for label corruption. By training for longer, the network eventually begins to model and memorize label noise, which harms its overall performance. Labels are corrupted uniformly to incorrect classes with 60% probability, and the Wide Residual Network classifier has learning rate drops at epochs 80, 120, and 160.
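The 60% uniform label corruption described in Figure 1 can be sketched as follows. This is a minimal NumPy illustration under our own assumptions (function name, seed, and dataset size are hypothetical), not the authors' released code: each label is replaced, with probability 0.6, by a class drawn uniformly from the remaining incorrect classes.

import numpy as np

def corrupt_labels(labels: np.ndarray, num_classes: int,
                   corruption_prob: float = 0.6, seed: int = 0) -> np.ndarray:
    # Return a copy of `labels` with uniform label noise applied.
    rng = np.random.default_rng(seed)
    corrupted = labels.copy()
    flip_mask = rng.random(len(labels)) < corruption_prob
    for i in np.flatnonzero(flip_mask):
        # Sample a wrong class uniformly among the other num_classes - 1 classes.
        wrong = rng.integers(num_classes - 1)
        corrupted[i] = wrong if wrong < labels[i] else wrong + 1
    return corrupted

# Example: corrupt 10-class labels with 60% noise; roughly 60% of labels change.
rng = np.random.default_rng(1)
clean = rng.integers(0, 10, size=50000)
noisy = corrupt_labels(clean, num_classes=10)
print((noisy != clean).mean())  # close to 0.60 by construction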