

Poster

Diagnosing and Re-learning for Balanced Multimodal Learning

Yake Wei · Siwei Li · Ruoxuan Feng · Di Hu

Strong Double Blind: This paper was not made available on public preprint services during the review process.
Thu 3 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

To overcome the imbalanced multi-modal learning problem, where models favor the training of specific modalities, existing methods control the training of uni-modal encoders from different perspectives, using the inter-modal performance discrepancy as the basis. However, they ignore the intrinsic limitation of modality capacity. Scarcely informative modalities are consistently recognized as "worse-learnt" ones by existing methods, which can force the model to memorize more noise and counterproductively harm the multi-modal model's ability. Moreover, current modality modulation methods narrowly concentrate on the selected worse-learnt modalities, even suppressing the training of others. Hence, it is essential to reasonably assess the learning state of each modality and to take all modalities into account during balancing. To this end, we propose the Diagnosing & Re-learning method. The learning state of each modality is first estimated based on the separability of its uni-modal representation space, and then used to softly re-initialize the corresponding uni-modal encoder. In this way, encoders of worse-learnt modalities are enhanced while the over-training of other modalities is avoided. Accordingly, multi-modal learning is effectively balanced and enhanced. Experiments covering multiple types of modalities and multi-modal frameworks demonstrate the superior performance of our simple-yet-effective method for balanced multi-modal learning.
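The abstract does not spell out the exact diagnosing criterion or re-initialization rule, but the two-step idea can be sketched. In the PyTorch sketch below, the Fisher-style separability proxy in diagnose_separability and the linear interpolation coefficient in soft_reinit are illustrative assumptions, not the authors' exact formulation: a modality whose representation space is highly separable keeps most of its trained weights, while a worse-learnt one is pulled more strongly toward a fresh initialization.

```python
import copy
import torch
import torch.nn as nn

def diagnose_separability(features: torch.Tensor, labels: torch.Tensor) -> float:
    """Hypothetical proxy for the separability of a uni-modal representation
    space: ratio of between-class scatter to total scatter (Fisher-style).
    Returns a value in [0, 1]; higher = more separable = better learnt.
    """
    overall_mean = features.mean(dim=0)
    total = ((features - overall_mean) ** 2).sum().item()
    between = 0.0
    for c in labels.unique():
        cls_feats = features[labels == c]
        between += len(cls_feats) * ((cls_feats.mean(dim=0) - overall_mean) ** 2).sum().item()
    return between / max(total, 1e-12)

def soft_reinit(encoder: nn.Module, learning_state: float) -> None:
    """Softly re-initialize an encoder by interpolating its trained weights
    with a freshly initialized copy of the same architecture (an assumed
    interpolation rule; the paper's exact scheme may differ).

    learning_state in [0, 1]: 1.0 keeps the trained weights intact,
    0.0 resets the encoder toward a random initialization.
    """
    fresh = copy.deepcopy(encoder)
    # Re-run each submodule's default initializer on the copy.
    for m in fresh.modules():
        if hasattr(m, "reset_parameters"):
            m.reset_parameters()
    with torch.no_grad():
        for p_old, p_new in zip(encoder.parameters(), fresh.parameters()):
            # Keep a learning_state fraction of the trained weights,
            # blend in the rest from the fresh initialization.
            p_old.mul_(learning_state).add_((1.0 - learning_state) * p_new)
```

Under this reading, each uni-modal encoder would periodically be scored with diagnose_separability on its own features and then passed to soft_reinit, so that worse-learnt modalities are refreshed and re-trained without suppressing the others.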
