Language-Image Quality Evaluator (LIQE)

The official repository of "Blind Image Quality Assessment via Vision-Language Correspondence: A Multitask Learning Perspective" (CVPR 2023).

Abstract

We aim at advancing blind image quality assessment (BIQA), which predicts the human perception of image quality without any reference information. We develop a general and automated multitask learning scheme for BIQA to exploit auxiliary knowledge from other tasks, in a way that the model parameter sharing and the loss weighting are determined automatically. Specifically, we first describe all candidate label combinations (from multiple tasks) using a textual template, and compute the joint probability from the cosine similarities of the visual-textual embeddings. Predictions of each task can be inferred from the joint distribution, and optimized by carefully designed loss functions. Through comprehensive experiments on learning three tasks - BIQA, scene classification, and distortion type identification, we verify that the proposed BIQA method 1) benefits from the scene classification and distortion type identification tasks and outperforms the state-of-the-art on multiple IQA datasets, 2) is more robust in the group maximum differentiation competition, and 3) realigns the quality annotations from different IQA datasets more effectively.
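The core idea above — one joint distribution over all (scene, distortion, quality) label combinations, computed from visual-textual cosine similarities, with per-task predictions obtained by marginalization — can be sketched as follows. This is a minimal illustration using random stand-ins for the CLIP embeddings; the category counts and the 100x logit scale are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

# Illustrative sizes: scene categories, distortion types, discrete quality
# levels, and embedding dimension (all assumptions for this sketch).
n_scene, n_dist, n_qual, d = 9, 11, 5, 64

rng = np.random.default_rng(0)

# One text embedding per candidate label combination (scene, distortion,
# quality), e.g. from a template like "a photo of a {scene} with {distortion},
# which is of {quality} quality". Random stand-ins replace real CLIP features.
text_emb = rng.normal(size=(n_scene, n_dist, n_qual, d))
text_emb /= np.linalg.norm(text_emb, axis=-1, keepdims=True)

# Visual embedding of one image (random stand-in), L2-normalized.
img_emb = rng.normal(size=d)
img_emb /= np.linalg.norm(img_emb)

# Cosine similarities -> softmax -> joint distribution over all combinations.
logits = 100.0 * (text_emb @ img_emb)      # CLIP-style temperature (assumed)
p_joint = np.exp(logits - logits.max())
p_joint /= p_joint.sum()

# Marginalize the joint distribution to get each task's prediction.
p_scene = p_joint.sum(axis=(1, 2))         # scene classification
p_dist = p_joint.sum(axis=(0, 2))          # distortion type identification
p_qual = p_joint.sum(axis=(0, 1))          # distribution over quality levels

# Quality score as the expectation over the ordered levels (1 = worst).
score = float((p_qual * np.arange(1, n_qual + 1)).sum())
```

Each task head is thus "free": all three predictions fall out of the same joint softmax, which is what lets parameter sharing and loss weighting be handled automatically during multitask training.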

Requirements

torch 1.8+

torchvision

Python 3

pip install ftfy regex tqdm

pip install git+https://github.com/openai/CLIP.git

Training on 10 splits

python train_unique_clip_weight.py

Evaluation on test sets

python BIQA_benchmark.py

Demo: detailed pipeline

python demo.py

Demo 2: import LIQE as a standalone module and run inference

python demo2.py

Pre-trained weights

Google Drive:

https://drive.google.com/file/d/1GoKwUKNR-rvX11QbKRN8MuBZw2hXKHGh/view?usp=sharing

Baidu Netdisk:

Link: https://pan.baidu.com/s/1KHjj7T8y2H_eKE6w7HnWJA (extraction code: 2b8v)

New! IQA-PyTorch implementation

IQA-PyTorch now supports LIQE! It can be used as follows:

import pyiqa
model = pyiqa.create_metric('liqe', as_loss=False)  # re-trained on the official split of KonIQ-10k
score = model(img_path)

or

import pyiqa
model = pyiqa.create_metric('liqe_mix', as_loss=False)  # trained on multiple datasets, as in the paper
score = model(img_path)

New! Training LIQE on a single IQA dataset with quality labels only

python train_liqe_single.py  # trains on the official split of KonIQ-10k for a fair comparison with other work

Zero-shot (cross-database) performance (SRCC) on AIGC (AI-generated content) image datasets.

| BIQA Model   | AGIQA-3K | AGIQA-1K | SJTU-H3D | AIGCIQA2023 | Paper     |
|--------------|----------|----------|----------|-------------|-----------|
| DBCNN        | 0.6454   | 0.5133   | 0.4560   | 0.7301      | TCSVT2020 |
| HyperIQA     | 0.6291   | 0.5253   | 0.2696   | 0.7211      | CVPR2020  |
| TReS         | 0.6460   | 0.5101   | 0.2700   | 0.7410      | WACV2022  |
| UNIQUE       | 0.6659   | 0.4596   | 0.7523   | 0.7605      | TIP2021   |
| MUSIQ        | 0.6294   | 0.5254   | 0.5313   | 0.7358      | ICCV2021  |
| PaQ-2-PiQ    | 0.5023   | 0.5378   | 0.2683   | 0.6425      | CVPR2020  |
| CLIPIQA      | 0.6580   | 0.3411   | -0.0793  | 0.6088      | AAAI2023  |
| CLIPIQA+     | 0.6831   | 0.4461   | 0.5567   | 0.7158      | AAAI2023  |
| MANIQA       | 0.6950   | 0.6180   | 0.4523   | 0.7282      | CVPRW2022 |
| LIQE (Ours)  | 0.7212   | 0.5785   | 0.6716   | 0.7435      | CVPR2023  |
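SRCC in the table above is Spearman's rank correlation between predicted and ground-truth quality scores. A minimal sketch of the metric, assuming no tied scores (in practice, a tie-aware implementation such as scipy.stats.spearmanr is used):

```python
import numpy as np

def srcc(pred, mos):
    """Spearman's rank correlation coefficient (no tie handling)."""
    pred = np.asarray(pred, dtype=float)
    mos = np.asarray(mos, dtype=float)
    n = len(pred)
    # Convert scores to ranks 0..n-1 via argsort.
    r_pred = np.empty(n)
    r_pred[np.argsort(pred)] = np.arange(n)
    r_mos = np.empty(n)
    r_mos[np.argsort(mos)] = np.arange(n)
    # Classical closed form: 1 - 6 * sum(d^2) / (n * (n^2 - 1)).
    diff = r_pred - r_mos
    return 1.0 - 6.0 * (diff * diff).sum() / (n * (n * n - 1))
```

A value of 1 means the model's ranking of images matches human opinion scores exactly; because SRCC depends only on ranks, it is insensitive to any monotonic rescaling of the predictions, which makes it suitable for cross-database (zero-shot) comparison.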

Citation

@inproceedings{zhang2023liqe,  
  title={Blind Image Quality Assessment via Vision-Language Correspondence: A Multitask Learning Perspective},  
  author={Zhang, Weixia and Zhai, Guangtao and Wei, Ying and Yang, Xiaokang and Ma, Kede},  
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition},  
  pages={14071--14081},
  year={2023}
}