Machine and Deep Learning Seminar / April 28, 2022, 10:00–11:00 a.m.
Aliasing Coincides With CNNs' Vulnerability Towards Adversarial Attacks
Speaker: Julia Grabinski (Fraunhofer ITWM, »High Performance Computing« division)
Abstract (available in English only):
Many commonly well-performing convolutional neural network (CNN) models have been shown to be susceptible to input data perturbations, indicating low model robustness. While much effort and research have been invested in designing more robust networks and training schedules, research analyzing the source of a model's vulnerability remains scarce.
In this seminar, we analyze adversarially trained, robust models with respect to a particularly suspicious network operation, the downsampling layer, and provide evidence that robust models have learned to downsample more accurately and suffer significantly less from aliasing than baseline models.
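To illustrate the aliasing effect the talk centers on, below is a minimal one-dimensional sketch (my illustration, not code from the talk or the underlying paper): strided downsampling without a low-pass filter folds a high-frequency component onto a lower frequency, while blurring first, as in anti-aliased (blur-pooling) downsampling layers, largely suppresses the artifact. The 3-tap binomial kernel is an assumed stand-in for such blur filters.

```python
import numpy as np

n = 256
t = np.arange(n)
# 0.375 cycles/sample: above the Nyquist limit (0.25) of the stride-2 output.
x = np.sin(2 * np.pi * 0.375 * t)

# Strided downsampling, as in a plain stride-2 convolution or pooling layer.
naive = x[::2]

# 3-tap binomial blur before downsampling, a hypothetical stand-in for the
# anti-aliasing filters used in blur-pooling approaches.
kernel = np.array([0.25, 0.5, 0.25])
antialiased = np.convolve(x, kernel, mode="same")[::2]

def magnitude_at(signal, freq):
    """Normalized DFT magnitude of `signal` at `freq` (cycles/sample)."""
    spectrum = np.abs(np.fft.rfft(signal)) / (len(signal) / 2)
    return spectrum[round(freq * len(signal))]

# After downsampling by 2, the 0.375 cycles/sample tone aliases to 0.25.
print("aliased component, naive:      ", magnitude_at(naive, 0.25))        # ~1.0
print("aliased component, antialiased:", magnitude_at(antialiased, 0.25))  # ~0.15
```

The same mechanism applies per channel in 2-D feature maps; the seminar's claim is that robust models implicitly learn such better-behaved downsampling.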