

Poster

De-confounded Gaze Estimation

Ziyang Liang · Yiwei Bao · Feng Lu

# 254
Strong Double Blind: This paper was not made available on public preprint services during the review process.
Thu 3 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Deep-learning based gaze estimation methods suffer from severe performance degradation in cross-domain settings. One of the primary reasons is that the gaze estimation model is confounded by gaze-irrelevant factors during estimation, such as identity and illumination. In this paper, we propose to tackle this problem with causal intervention, an analytical tool that alleviates the impact of confounding factors by intervening on their distribution. Concretely, we propose the Feature-Separation-based Causal Intervention (FSCI) framework for generalizable gaze estimation. The FSCI framework first separates gaze features from gaze-irrelevant features. To alleviate the impact of gaze-irrelevant factors during training, the FSCI framework further implements causal intervention by averaging gaze-irrelevant features using the proposed Dynamic Confounder Bank strategy. Experiments show that the proposed FSCI framework outperforms SOTA gaze estimation methods in various cross-domain settings, improving the cross-domain accuracy of the baseline by up to 36.2% without touching target domain data.
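The abstract gives only a high-level description of the mechanism (separating gaze features from gaze-irrelevant features, then averaging confounder features from a bank in a backdoor-adjustment style). The sketch below illustrates that general idea only; the module names, fusion by addition, and the FIFO bank update are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the FSCI paper's actual architecture, losses, and
# Dynamic Confounder Bank update rule are not specified in the abstract.
import torch
import torch.nn as nn


class FeatureSeparationGazeNet(nn.Module):
    """Toy encoder that splits an image embedding into gaze and gaze-irrelevant parts."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        # Stand-in for a CNN backbone producing a 2 * feat_dim embedding.
        self.backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(2 * feat_dim), nn.ReLU())
        self.gaze_head = nn.Linear(feat_dim, 2)  # yaw / pitch

    def forward(self, images: torch.Tensor):
        feats = self.backbone(images)
        gaze_feat, irrelevant_feat = feats.chunk(2, dim=-1)
        return gaze_feat, irrelevant_feat

    def predict_with_intervention(self, gaze_feat, confounder_bank):
        # Backdoor-style adjustment: fuse the gaze feature with the *average*
        # confounder feature instead of the sample's own gaze-irrelevant feature,
        # so no single identity or illumination condition dominates the prediction.
        mean_confounder = confounder_bank.mean(dim=0, keepdim=True)
        return self.gaze_head(gaze_feat + mean_confounder)  # assumed fusion: addition


def update_bank(bank: torch.Tensor, new_feats: torch.Tensor, max_size: int = 1024):
    # Hypothetical "dynamic" bank: a FIFO queue of recent gaze-irrelevant features.
    bank = torch.cat([bank, new_feats.detach()], dim=0)
    return bank[-max_size:]


if __name__ == "__main__":
    model = FeatureSeparationGazeNet()
    images = torch.randn(8, 3, 64, 64)  # dummy face crops
    gaze_feat, irrelevant_feat = model(images)
    bank = torch.zeros(0, irrelevant_feat.shape[-1])
    bank = update_bank(bank, irrelevant_feat)
    gaze = model.predict_with_intervention(gaze_feat, bank)
    print(gaze.shape)  # torch.Size([8, 2])
```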
