

Poster

Towards Unified Representation of Invariant-Specific Features in Missing Modality Face Anti-Spoofing

Guanghao Zheng · Yuchen Liu · Wenrui Dai · Chenglin Li · Junni Zou · Hongkai Xiong

# 221
Strong Double Blind: this paper was not made available on public preprint services during the review process.
Thu 3 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

The effectiveness of Vision Transformers (ViTs) diminishes considerably in multi-modal face anti-spoofing (FAS) under missing modality scenarios. Existing approaches rely on modality-invariant features to alleviate this issue but ignore modality-specific features. To address this limitation, we propose a Missing Modality Adapter framework for Face Anti-Spoofing (MMA-FAS), which leverages modality-disentangle adapters and an LBP-guided contrastive loss to explicitly combine modality-invariant and modality-specific features. The modality-disentangle adapters separate features into modality-invariant and modality-specific components from the perspective of frequency decomposition. The LBP-guided contrastive loss, together with batch-level and sample-level modality masking strategies, forces the model to cluster samples according to attack types and modal combinations, which further enhances modality-invariant and modality-specific features. Moreover, we propose an adaptive modal combination sampling strategy, which dynamically adjusts the sampling probability in the masking strategies to balance the training process across different modal combinations. Extensive experiments demonstrate that our proposed method achieves state-of-the-art intra-dataset and cross-dataset performance in all missing-modality scenarios.
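
The adaptive modal combination sampling idea lends itself to a compact illustration. The sketch below is a hypothetical Python implementation, not the authors' code: it enumerates RGB/depth/IR modality combinations, masks the modalities absent from the drawn combination, and re-weights each combination's sampling probability toward the ones the model currently handles worst (approximated here by a running per-combination loss). The class names, the softmax re-weighting rule, and the moving-average statistic are assumptions for illustration only.

```python
# Hypothetical sketch of an adaptive modal-combination sampling strategy.
# Not the authors' implementation; the weighting rule and running-loss
# statistic are assumptions made for illustration.
import itertools
import math
import random
from collections import defaultdict

MODALITIES = ["rgb", "depth", "ir"]

# All non-empty modality combinations (i.e., possible missing-modality cases).
COMBINATIONS = [
    combo
    for r in range(1, len(MODALITIES) + 1)
    for combo in itertools.combinations(MODALITIES, r)
]


class AdaptiveCombinationSampler:
    """Samples a modality combination per batch, favoring combinations
    on which the model currently incurs a higher training loss."""

    def __init__(self, temperature: float = 1.0, momentum: float = 0.9):
        self.temperature = temperature
        self.momentum = momentum
        self.running_loss = defaultdict(lambda: 1.0)  # optimistic initialization

    def probabilities(self):
        # Higher running loss -> higher sampling probability (softmax over losses).
        scores = [self.running_loss[c] / self.temperature for c in COMBINATIONS]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    def sample(self):
        return random.choices(COMBINATIONS, weights=self.probabilities(), k=1)[0]

    def update(self, combo, batch_loss: float):
        # Exponential moving average of the per-combination training loss.
        prev = self.running_loss[combo]
        self.running_loss[combo] = self.momentum * prev + (1.0 - self.momentum) * batch_loss


def apply_modality_mask(batch: dict, combo):
    """Zero out (set to None) the modalities missing from this combination."""
    return {m: (x if m in combo else None) for m, x in batch.items()}
```

In this sketch, combinations that the model already handles well see their sampling probability decay over training, while poorly handled combinations are drawn more often, which is one way to realize the balancing behavior described in the abstract.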
