

Poster

Delving into Adversarial Robustness on Document Tampering Localization

Huiru Shao · Zhuang Qian · Kaizhu Huang · Wei Wang · Xiaowei Huang · Qiufeng Wang

Strong Double Blind: This paper was not made available on public preprint services during the review process.
Tue 1 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Recent advances in document forgery techniques produce malicious yet nearly visually untraceable alterations, posing a significant challenge for document tampering localization (DTL). Despite substantial recent progress, adversarial robustness in DTL has received surprisingly little exploration. This paper presents the first effort to uncover the vulnerability of most existing DTL models to adversarial attacks, highlighting the need for greater attention within the DTL community. In pursuit of robust DTL, we demonstrate that adversarial training can improve the model's robustness and effectively protect against adversarial attacks. As a notable advancement, we further introduce a latent manifold adversarial training approach that enhances adversarial robustness in DTL by incorporating perturbations on the latent manifold of adversarial examples, rather than relying exclusively on label-guided information. Extensive experiments on DTL benchmark datasets show the necessity of adversarial training, and our proposed manifold-based method significantly improves adversarial robustness under both white-box and black-box attacks.
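To make the two training regimes in the abstract concrete, below is a minimal PyTorch sketch, not the authors' implementation: a standard label-guided PGD adversarial attack for adversarial training, and a latent-manifold-style objective that additionally perturbs the encoder's latent features of the adversarial example and penalizes the resulting prediction shift. The encoder/head split, the hyperparameters (eps, alpha, z_eps), and the exact form of the manifold objective are illustrative assumptions; the abstract does not specify them.

```python
# Sketch contrasting label-guided adversarial training with a
# latent-manifold variant for pixel-level tampering localization.
# All module names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=5):
    """Label-guided PGD: maximize the per-pixel BCE loss within an L-inf ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.binary_cross_entropy_with_logits(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv


def latent_manifold_loss(encoder, head, x_adv, y, z_eps=0.1):
    """Manifold-style objective (assumed form): perturb the latent features
    of the adversarial example along the loss gradient, then penalize the
    prediction shift, rather than relying on label information alone."""
    z = encoder(x_adv)                  # latent features of the adversarial input
    logits = head(z)
    task_loss = F.binary_cross_entropy_with_logits(logits, y)
    # Gradient of the task loss w.r.t. the latent code drives the
    # perturbation on the latent manifold.
    g = torch.autograd.grad(task_loss, z, retain_graph=True)[0]
    z_adv = z + z_eps * g.sign()
    consistency = F.mse_loss(head(z_adv), logits)
    return task_loss + consistency


if __name__ == "__main__":
    # Toy binary tamper-mask localizer: a conv encoder and a 1-channel head.
    encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
    head = nn.Conv2d(16, 1, 3, padding=1)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)

    x = torch.rand(2, 3, 64, 64)                     # document image crops
    y = torch.randint(0, 2, (2, 1, 64, 64)).float()  # per-pixel tamper labels

    x_adv = pgd_attack(lambda t: head(encoder(t)), x, y)
    loss = latent_manifold_loss(encoder, head, x_adv, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"one adversarial training step, loss = {loss.item():.4f}")
```

In this reading, plain adversarial training would use only task_loss on x_adv, while the manifold variant adds a consistency term computed entirely in latent space; the precise loss the paper uses may differ.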
