Digital / Machine and Deep Learning Seminar / May 04, 2023, 14:00 – 15:00
Robust Models Are Less Over-Confident
Speaker: Julia Grabinski (Fraunhofer ITWM, Division »High Performance Computing«)
Abstract
Despite the success of convolutional neural networks (CNNs) on many academic benchmarks for computer vision tasks, their application in the real world still faces fundamental challenges. One of these open problems is their inherent lack of robustness, unveiled by the striking effectiveness of adversarial attacks. In turn, adversarial training (AT) aims to achieve robustness against such attacks, and ideally better model generalization, by including adversarial samples in the training set. In this talk, we show that adversarial training does not only achieve higher robust accuracies but also has an interesting side effect: it leads to models that are significantly less overconfident in their decisions than non-robust models, even on clean data. Further, we analyze the influence of different building blocks (such as activation functions and pooling) on the models' prediction confidences.
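The core idea of adversarial training mentioned in the abstract can be sketched in a few lines: craft a perturbed sample that increases the loss, then take the training step on that perturbed sample instead of the clean one. The sketch below uses the one-step FGSM attack on a toy logistic-regression model; the attack choice, the model, and all function names are illustrative assumptions, not the speaker's actual setup.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, eps):
    """One-step FGSM attack: perturb x by eps in the sign of the
    input-gradient of the logistic loss, with labels y in {-1, +1}."""
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    coeff = -y * sigmoid(-margin)           # d(loss)/d(margin) chain factor
    grad = [coeff * wi for wi in w]         # gradient of loss w.r.t. x
    sign = lambda g: 1.0 if g > 0 else (-1.0 if g < 0 else 0.0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

def adv_train_step(x, y, w, eps, lr):
    """Adversarial training step: attack first, then do the usual
    gradient update on the adversarial sample instead of the clean one."""
    x_adv = fgsm(x, y, w, eps)
    margin = y * sum(wi * xi for wi, xi in zip(w, x_adv))
    coeff = -y * sigmoid(-margin)
    return [wi - lr * coeff * xi for wi, xi in zip(w, x_adv)]
```

By construction, the FGSM sample has a higher loss (smaller classification margin) than the clean input, so the model is repeatedly trained on its own worst-case local perturbations.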