

Poster

AdversariaLeak: External Information Leakage Attack Using Adversarial Samples on Face Recognition Systems

Roye Katzav · Amit Giloni · Edita Grolman · Hiroo Saito · Tomoyuki Shibata · Tsukasa Omino · Misaki Komatsu · Yoshikazu Hanatani · Yuval Elovici · Asaf Shabtai

Strong blind review: This paper was not made available on public preprint services during the review process.
Tue 1 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Face recognition (FR) systems are vulnerable to external information leakage (EIL) attacks, which can reveal sensitive information about the training data, compromising both the confidentiality of the company's proprietary data and the privacy of the individuals concerned. Existing EIL attacks largely rely on unrealistic assumptions, such as a high query budget and massive computational power on the attacker's side, making them impractical. We present AdversariaLeak, a novel and practical query-based EIL attack that targets the face verification model of an FR system using carefully selected adversarial samples. AdversariaLeak uses substitute models to craft adversarial samples, which are then handpicked to infer sensitive information. Our extensive evaluation on the MAAD-Face and CelebA datasets, covering over 200 different target models, shows that AdversariaLeak outperforms state-of-the-art EIL attacks in inferring the property that best characterizes the FR model's training set, while maintaining a small query budget under practical attacker assumptions.
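The abstract's inference step can be sketched as follows. This is an illustrative toy, not the authors' code: all names, thresholds, and the simulated scores are assumptions. The idea is that the attacker trains substitute models on candidate values of the sensitive property, crafts adversarial face pairs on each substitute, queries the black-box target verification model, and infers the property whose substitute's samples transfer best.

```python
import numpy as np

rng = np.random.default_rng(0)

def target_verifies(pair_scores, threshold=0.5):
    """Stand-in for the black-box face verification API: returns True where
    the target model judges a (probe, reference) pair as the same identity.
    (Hypothetical interface; the real attack queries a deployed FR system.)"""
    return pair_scores > threshold

def attack_success_rate(adv_scores):
    """Fraction of adversarial mismatched pairs the target wrongly accepts.
    Each query costs one API call, so sample sets are kept small to respect
    the low query budget the paper emphasizes."""
    return float(np.mean(target_verifies(adv_scores)))

# Simulated similarity scores the target assigns to adversarial pairs crafted
# on each substitute model (higher = more transferable). Substitute A was
# trained with one property value (e.g., one attribute distribution),
# substitute B with the other.
scores_substitute_A = rng.uniform(0.4, 0.9, size=50)
scores_substitute_B = rng.uniform(0.1, 0.6, size=50)

rate_A = attack_success_rate(scores_substitute_A)
rate_B = attack_success_rate(scores_substitute_B)

# Infer the property whose substitute's adversarial samples transfer best.
inferred = "property A" if rate_A > rate_B else "property B"
print(f"transfer rate A={rate_A:.2f}, B={rate_B:.2f} -> inferred {inferred}")
```

The design choice sketched here (comparing transfer success rates across substitutes) is one plausible reading of "handpicked adversarial samples used to infer sensitive information"; the paper's actual sample-selection criterion is more involved.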
