

Poster

Similarity of Neural Architectures using Adversarial Attack Transferability

Jaehui Hwang · Dongyoon Han · Byeongho Heo · Song Park · Sanghyuk Chun · Jong-Seok Lee

Thu 3 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

In recent years, many deep neural architectures have been developed for image classification. Whether they are similar or dissimilar, and which factors contribute to their (dis)similarity, remains an open question. To address it, we aim to design a quantitative and scalable similarity measure between neural architectures. We propose Similarity by Attack Transferability (SAT), based on the observation that adversarial attack transferability carries information about input gradients and decision boundaries, both of which are widely used to understand model behavior. Using the proposed similarity function, we conduct a large-scale analysis of 69 state-of-the-art ImageNet classifiers. In addition, we provide insights into ML applications that use multiple models, such as model ensembles and knowledge distillation. Our results show that using diverse neural architectures with distinct components can be beneficial in such scenarios.
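To make the idea concrete, here is a minimal toy sketch of measuring similarity via attack transferability. It is not the paper's SAT implementation: it uses hypothetical linear classifiers in place of ImageNet architectures, an FGSM-style attack (which has a closed form for linear models), and a simple symmetrized transfer rate as the similarity score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for neural architectures: binary linear classifiers
# f_w(x) = sign(w . x). These weights are illustrative, not from the paper.
w_a = np.array([1.0, 0.5])
w_b = np.array([0.9, 0.6])   # close to w_a -> expect high transferability
w_c = np.array([-0.4, 1.0])  # very different decision boundary

# Synthetic data labeled by a ground-truth linear rule.
X = rng.normal(size=(500, 2))
w_true = np.array([1.0, 0.55])
y = np.sign(X @ w_true)

def predict(w, X):
    return np.sign(X @ w)

def fgsm(w, X, y, eps=1.0):
    # For a linear model, the loss gradient w.r.t. x points along -y * w,
    # so the FGSM step x + eps * sign(grad) is x - eps * y * sign(w).
    return X - eps * y[:, None] * np.sign(w)[None, :]

def transfer_rate(w_src, w_tgt, X, y, eps=1.0):
    # Fraction of adversarial examples crafted on the source model that
    # also fool the target, among inputs both models classify correctly.
    ok = (predict(w_src, X) == y) & (predict(w_tgt, X) == y)
    X_adv = fgsm(w_src, X[ok], y[ok], eps)
    return np.mean(predict(w_tgt, X_adv) != y[ok])

def sat_similarity(w1, w2, X, y):
    # Symmetrize the two attack directions into one similarity score.
    return 0.5 * (transfer_rate(w1, w2, X, y) + transfer_rate(w2, w1, X, y))

sim_ab = sat_similarity(w_a, w_b, X, y)
sim_ac = sat_similarity(w_a, w_c, X, y)
print(f"sim(a, b) = {sim_ab:.2f}, sim(a, c) = {sim_ac:.2f}")
```

In this sketch the pair with nearly identical decision boundaries (a, b) receives a higher transferability-based similarity than the dissimilar pair (a, c), mirroring the intuition behind SAT.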
