

Poster

Depicting Beyond Scores: Advancing Image Quality Assessment through Multi-modal Language Models

Zhiyuan You · Zheyuan Li · Jinjin Gu · Zhenfei Yin · Tianfan Xue · Chao Dong

# 127
[ Project Page ] [ Paper PDF ]
Tue 1 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

We introduce DepictQA, a Depicted image Quality Assessment method that overcomes the constraints of traditional score-based methods. DepictQA leverages Multi-modal Large Language Models (MLLMs) to enable detailed, language-based, human-like evaluation of image quality. Unlike conventional Image Quality Assessment (IQA) methods that rely on scores, DepictQA interprets image content and distortions descriptively and comparatively, aligning closely with the human reasoning process. To build the DepictQA model, we establish a hierarchical task framework and collect a multi-modal IQA training dataset. To tackle the challenges of limited training data and multi-image processing, we propose to use multi-source training data and specialized image tags. These designs enable DepictQA to outperform score-based approaches on multiple benchmarks. Moreover, compared with general MLLMs, DepictQA generates more accurate descriptive reasoning. Our work demonstrates the research potential of multi-modal IQA tasks. Code and datasets are available at https://depictqa.github.io.
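To give a concrete sense of the image-tag idea for multi-image processing, the sketch below shows how distinct tag tokens could make each image addressable in a comparative-quality prompt. This is a minimal illustration under stated assumptions: the tag format (`<img_ref>`, `<img_A>`, ...), the `TaggedImage` type, and `build_comparison_prompt` are hypothetical names, not DepictQA's actual API.

```python
# Minimal sketch: tagging multiple images so an MLLM can refer to each one
# unambiguously in a comparative IQA prompt. The tag tokens and the prompt
# wording are illustrative assumptions, not the DepictQA implementation.

from dataclasses import dataclass


@dataclass
class TaggedImage:
    tag: str   # special token the model associates with this image's features
    path: str  # path to the image file


def build_comparison_prompt(reference: TaggedImage,
                            candidates: list[TaggedImage]) -> str:
    """Interleave image tags with instructions so each image is addressable."""
    lines = [f"Reference image: {reference.tag}"]
    for i, img in enumerate(candidates, start=1):
        lines.append(f"Candidate {i}: {img.tag}")
    lines.append(
        "Compare the candidates against the reference. Describe the content, "
        "identify the distortions in each candidate, and state which candidate "
        "better preserves quality, with reasoning."
    )
    return "\n".join(lines)


if __name__ == "__main__":
    ref = TaggedImage(tag="<img_ref>", path="ref.png")
    cands = [TaggedImage(tag="<img_A>", path="a.png"),
             TaggedImage(tag="<img_B>", path="b.png")]
    print(build_comparison_prompt(ref, cands))
```

In a real system, each tag would be bound to the corresponding image's visual embedding before being fed to the language model; the point of the tags is simply to let the generated description refer to specific images when several are present.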
