

Poster

GeneralAD: Anomaly Detection Across Domains by Attending to Distorted Features

Luc P.J. Sträter · Mohammadreza Salehi · Efstratios Gavves · Cees G.M. Snoek · Yuki M Asano

Strong Double Blind: this paper was not made available on public preprint services during the review process.
Fri 4 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

In the domain of anomaly detection, methods often excel in either semantic or industrial benchmarks, rarely achieving cross-domain proficiency. In this paper, we present GeneralAD, an anomaly detection framework designed to operate in semantic, near-distribution, and industrial settings with minimal per-task adjustments. Our approach capitalizes on the inherent design of Vision Transformers, which are trained on image patches, thereby ensuring that the last hidden states retain a patch-based structure. We propose a novel self-supervised anomaly generation module that applies straightforward operations, such as noise addition and shuffling, to patch features in order to construct pseudo-abnormal samples. These features are fed to an attention-based discriminator, which is trained to score every patch in the image. With this, our method can both accurately identify anomalies at the image level and generate interpretable anomaly maps. We extensively evaluated our approach on 10 benchmarks, achieving state-of-the-art results on 6 datasets and on-par performance on the remaining ones for both localization and detection tasks.
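The abstract describes two components: a self-supervised module that distorts ViT patch features (noise addition and patch shuffling) to produce pseudo-abnormal samples, and an attention-based discriminator that scores every patch. Below is a minimal sketch of these ideas in PyTorch, assuming a frozen ViT supplies the patch tokens. All names and hyperparameters (distort_patch_features, PatchDiscriminator, noise_std, shuffle_ratio) are hypothetical illustrations, not taken from the authors' implementation.

```python
# Minimal sketch of feature distortion + patch-level discrimination, assuming
# PyTorch and a generic ViT feature extractor. Not the authors' code.
import torch
import torch.nn as nn


def distort_patch_features(feats, noise_std=0.25, shuffle_ratio=0.1):
    """Create pseudo-abnormal patch features via noise addition and patch shuffling.

    feats: (B, N, D) last-hidden-state patch tokens from a frozen ViT.
    Returns distorted features and a per-patch anomaly mask of shape (B, N).
    """
    B, N, D = feats.shape
    distorted = feats.clone()
    mask = torch.zeros(B, N, device=feats.device)
    num_distorted = max(1, int(shuffle_ratio * N))

    for b in range(B):
        # Add Gaussian noise to a random subset of patches.
        idx = torch.randperm(N, device=feats.device)[:num_distorted]
        distorted[b, idx] += noise_std * torch.randn(num_distorted, D, device=feats.device)
        mask[b, idx] = 1.0

        # Shuffle another random subset of patches within the same image.
        idx2 = torch.randperm(N, device=feats.device)[:num_distorted]
        perm = idx2[torch.randperm(num_distorted, device=feats.device)]
        distorted[b, idx2] = feats[b, perm]
        mask[b, idx2] = 1.0

    return distorted, mask


class PatchDiscriminator(nn.Module):
    """Attention-based discriminator that scores every patch token."""

    def __init__(self, dim=768, num_heads=8, depth=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, 1)

    def forward(self, feats):
        # feats: (B, N, D) -> per-patch anomaly logits of shape (B, N)
        return self.head(self.encoder(feats)).squeeze(-1)


if __name__ == "__main__":
    feats = torch.randn(2, 196, 768)            # stand-in for ViT patch features
    fake, mask = distort_patch_features(feats)
    disc = PatchDiscriminator()
    logits = disc(torch.cat([feats, fake]))     # score normal and pseudo-abnormal patches
    targets = torch.cat([torch.zeros(2, 196), mask])
    loss = nn.functional.binary_cross_entropy_with_logits(logits, targets)
    loss.backward()
    # Per-patch logits give an anomaly map; an image-level score could be,
    # for example, the maximum patch logit.
```

The per-patch logits directly yield the interpretable anomaly maps mentioned in the abstract, while aggregating them (e.g. by taking the maximum) gives an image-level anomaly score; the exact aggregation used in the paper is not specified here.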
