

Poster

The Devil is in the Statistics: Mitigating and Exploiting Statistics Difference for Generalizable Semi-supervised Medical Image Segmentation

Muyang Qiu · Jian Zhang · Lei Qi · Qian Yu · Yinghuan Shi · Yang Gao

Strong Double Blind: This paper was not made available on public preprint services during the review process.
Tue 1 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Despite the recent success of domain generalization in medical image segmentation, precisely conducting voxel-wise annotation for all available source domains remains a huge burden. To combat this challenge, semi-supervised domain generalization has recently been proposed as a new setting that leverages limited labeled data along with abundant unlabeled data collected from multiple medical institutions to enhance model generalization. Achieving promising results in this setting requires both correctly harnessing the unlabeled data and improving generalization to unseen domains. In this work, we observe that domain shifts between medical institutions (i.e., source domains) cause disparate feature statistics, which significantly deteriorate pseudo-label quality through an unexpected normalization process; nevertheless, this phenomenon can also be exploited to facilitate generalization to unseen domains. Therefore, to fully exploit the potential of the statistics difference while mitigating its negative impact, we propose 1) multiple statistics-individual branches that reduce the interference of domain shifts between source domains and yield reliable pseudo-labels, and 2) one statistics-aggregated branch for domain-invariant feature learning. Furthermore, to simulate unseen domains with the statistics difference, we approach this from two aspects: a histogram-matching-based perturbation at the image level and a random batch normalization selection strategy at the feature level, producing diverse statistics to expand the training distribution. Evaluation results on the Prostate, Fundus, and M&Ms datasets demonstrate the effectiveness of our method compared with recent state-of-the-art methods. The code will be available in the Supplementary Materials.
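
The following is a minimal, hypothetical sketch (not the authors' released code) of the two ideas described above: a block with one BatchNorm branch per source domain (statistics-individual), a shared BatchNorm branch (statistics-aggregated), and a random branch selection that mimics the feature-level statistics perturbation. Names such as `DualBranchBlock` and `num_domains` are assumptions for illustration.

```python
import random
from typing import Optional

import torch
import torch.nn as nn


class DualBranchBlock(nn.Module):
    """Conv block with per-domain BN branches plus a shared (aggregated) BN branch."""

    def __init__(self, in_ch: int, out_ch: int, num_domains: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        # One BN per source domain keeps each domain's feature statistics separate,
        # so pseudo-labels are not degraded by cross-domain normalization.
        self.domain_bns = nn.ModuleList(nn.BatchNorm2d(out_ch) for _ in range(num_domains))
        # A shared BN aggregates statistics across domains for domain-invariant features.
        self.shared_bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor, domain_id: Optional[int] = None,
                random_select: bool = False) -> torch.Tensor:
        x = self.conv(x)
        if random_select:
            # Feature-level perturbation: normalize with a randomly chosen domain's
            # statistics to simulate an unseen domain during training.
            bn = random.choice(list(self.domain_bns))
        elif domain_id is not None:
            bn = self.domain_bns[domain_id]
        else:
            bn = self.shared_bn
        return self.act(bn(x))


# Usage: a batch from source domain 0 uses its own statistics branch, while the
# aggregated branch and random selection provide invariant / perturbed views.
block = DualBranchBlock(in_ch=1, out_ch=16, num_domains=3)
x = torch.randn(2, 1, 64, 64)
y_individual = block(x, domain_id=0)
y_aggregated = block(x)
y_perturbed = block(x, random_select=True)
```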
