UIQA


🏆 🥇 Winning solution of the UHD-IQA Challenge: Pushing the Boundaries of Blind Photo Quality Assessment at the AIM 2024 workshop @ ECCV 2024

Official Code for Assessing UHD Image Quality from Aesthetics, Distortions, and Saliency

Introduction

UHD images, typically with resolutions of 4K or higher, pose a significant challenge for efficient image quality assessment (IQA) algorithms: adopting full-resolution images as inputs leads to overwhelming computational complexity, while commonly used pre-processing methods such as resizing or cropping may cause a substantial loss of detail. To address this problem, we design a multi-branch deep neural network (DNN) to assess the quality of UHD images from three perspectives: global aesthetic characteristics, local technical distortions, and salient content perception. Specifically, aesthetic features are extracted from low-resolution images downsampled from the UHD originals, which lose high-frequency texture information but still preserve the global aesthetic characteristics. Technical distortions are measured using a fragment image composed of mini-patches cropped from the UHD image via the grid mini-patch sampling strategy. The salient content of the UHD image is detected and cropped so that quality-aware features can be extracted from the salient regions. We adopt Swin Transformer Tiny as the backbone network to extract features from these three inputs. The extracted features are concatenated and regressed into a quality score by a two-layer multi-layer perceptron (MLP). We employ the mean squared error (MSE) loss to optimize prediction accuracy and the fidelity loss to optimize prediction monotonicity. Experimental results show that the proposed model achieves the best performance on the UHD-IQA dataset while maintaining the lowest computational complexity, demonstrating its effectiveness and efficiency. Moreover, the proposed model won first prize in the ECCV AIM 2024 UHD-IQA Challenge.
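
To make the training objective concrete, below is a minimal sketch of how the two losses might be combined on a batch of predicted scores. It is a reconstruction under assumptions, not the repository's code: `fidelity_loss` and `total_loss` are hypothetical names, the pairwise fidelity term follows the standard formulation of Tsai et al. with a hard 0/1 ground-truth preference derived from the MOS ordering, and the default weights mirror the `--lr_weight_L2` and `--lr_weight_pair` flags in the training command further below.

```python
import torch
import torch.nn.functional as F

def fidelity_loss(pred: torch.Tensor, mos: torch.Tensor) -> torch.Tensor:
    """Pairwise fidelity loss over all score pairs in a batch.

    pred, mos: 1-D tensors of predicted scores and ground-truth MOS.
    The predicted preference uses a Gaussian CDF on the score difference;
    the ground-truth preference is a hard 0/1 label from the MOS ordering
    (a simplification of Tsai et al.'s original formulation).
    """
    diff = pred.unsqueeze(0) - pred.unsqueeze(1)         # diff[i, j] = pred[j] - pred[i]
    p_hat = 0.5 * (1.0 + torch.erf(diff / 2.0 ** 0.5))   # predicted P(quality_j > quality_i)
    p = (mos.unsqueeze(0) > mos.unsqueeze(1)).float()    # ground-truth preference
    eps = 1e-8                                           # numerical stability
    fid = 1.0 - torch.sqrt(p * p_hat + eps) - torch.sqrt((1.0 - p) * (1.0 - p_hat) + eps)
    return fid.mean()

def total_loss(pred, mos, w_mse=0.1, w_pair=1.0):
    # default weights mirror --lr_weight_L2 0.1 and --lr_weight_pair 1
    return w_mse * F.mse_loss(pred, mos) + w_pair * fidelity_loss(pred, mos)
```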

Image Pre-processing

Image Pre-processing Figure

Different image pre-processing methods for UHD images. (a) is the proposed method, which uses the resized image, the fragment image, and the salient patch to extract aesthetic, distortion, and salient-content features. (b) samples all non-overlapping image patches for feature extraction. (c) selects the three representative patches with the highest texture complexity for feature extraction.
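
The fragment image in (a) can be produced with FAST-VQA-style grid mini-patch sampling. The sketch below is a hedged reconstruction, not the repository's implementation: `fragment_image` is a hypothetical helper, and the fragment size of 32 pixels is inferred from the training defaults (`--n_fragment 15` with a 480x480 input, since 15 × 32 = 480).

```python
import random
import torch
from PIL import Image
from torchvision.transforms.functional import to_tensor

def fragment_image(img: Image.Image, n_fragment: int = 15,
                   fragment_size: int = 32) -> torch.Tensor:
    """Grid mini-patch sampling: split the image into an n_fragment x n_fragment
    grid, crop one fragment_size x fragment_size mini-patch from each cell, and
    splice the mini-patches into a single fragment image.

    Assumes every grid cell is at least fragment_size pixels on each side,
    which holds for 4K-and-above inputs with the default settings.
    """
    x = to_tensor(img)                                   # (3, H, W)
    _, h, w = x.shape
    cell_h, cell_w = h // n_fragment, w // n_fragment
    out = torch.zeros(3, n_fragment * fragment_size, n_fragment * fragment_size)
    for i in range(n_fragment):
        for j in range(n_fragment):
            # random top-left corner of the mini-patch inside grid cell (i, j)
            top = i * cell_h + random.randint(0, cell_h - fragment_size)
            left = j * cell_w + random.randint(0, cell_w - fragment_size)
            out[:, i * fragment_size:(i + 1) * fragment_size,
                   j * fragment_size:(j + 1) * fragment_size] = \
                x[:, top:top + fragment_size, left:left + fragment_size]
    return out
```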

Model

Model Figure

Diagram of the proposed model. It consists of three modules: the image pre-processing module, the feature extraction module, and the quality regression module. We assess the quality of UHD images from three perspectives: global aesthetic characteristics, local technical distortions, and salient content perception, which are evaluated by the aesthetic assessment branch, the distortion measurement branch, and the salient content perception branch, respectively.
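
A minimal sketch of this three-branch design in PyTorch (an illustration under assumptions, not the official implementation): `UIQASketch` is a hypothetical name, the 128-unit hidden layer and GELU activation in the MLP head are guesses, and ImageNet weights stand in for the AVA-pretrained checkpoint used by the repository.

```python
import torch
import torch.nn as nn
from torchvision.models import swin_t

class UIQASketch(nn.Module):
    """Three Swin-T branches for the resized image, the fragment image, and the
    salient patch; concatenated features are regressed by a two-layer MLP."""

    def __init__(self, feat_dim: int = 768, hidden_dim: int = 128):
        super().__init__()

        def backbone() -> nn.Module:
            m = swin_t(weights="IMAGENET1K_V1")  # stand-in for AVA pre-training
            m.head = nn.Identity()               # expose the 768-d pooled features
            return m

        self.aesthetic = backbone()    # resized low-resolution image
        self.distortion = backbone()   # fragment image
        self.saliency = backbone()     # salient patch
        self.regressor = nn.Sequential(
            nn.Linear(3 * feat_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, resized, fragments, salient):
        feats = torch.cat([self.aesthetic(resized),
                           self.distortion(fragments),
                           self.saliency(salient)], dim=1)
        return self.regressor(feats).squeeze(-1)  # one quality score per image
```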


Performance

Compared with state-of-the-art IQA methods

Performance on the validation set of UHD-IQA:

| Methods | SRCC | PLCC | KRCC | RMSE | MAE |
| :--- | :---: | :---: | :---: | :---: | :---: |
| HyperIQA | 0.524 | 0.182 | 0.359 | 0.087 | 0.055 |
| Effnet-2C-MLSP | 0.615 | 0.627 | 0.445 | 0.060 | 0.050 |
| CONTRIQUE | 0.716 | 0.712 | 0.521 | 0.049 | 0.038 |
| ARNIQA | 0.718 | 0.717 | 0.523 | 0.050 | 0.039 |
| CLIP-IQA+ | 0.743 | 0.732 | 0.546 | 0.108 | 0.087 |
| QualiCLIP | 0.757 | 0.752 | 0.557 | 0.079 | 0.064 |
| UIQA | 0.817 | 0.823 | 0.625 | 0.040 | 0.032 |

Performance on the test set of UHD-IQA:

| Methods | SRCC | PLCC | KRCC | RMSE | MAE |
| :--- | :---: | :---: | :---: | :---: | :---: |
| HyperIQA | 0.553 | 0.103 | 0.389 | 0.118 | 0.070 |
| Effnet-2C-MLSP | 0.675 | 0.641 | 0.491 | 0.074 | 0.059 |
| CONTRIQUE | 0.732 | 0.678 | 0.532 | 0.073 | 0.052 |
| ARNIQA | 0.739 | 0.694 | 0.544 | 0.052 | 0.739 |
| CLIP-IQA+ | 0.747 | 0.709 | 0.551 | 0.111 | 0.089 |
| QualiCLIP | 0.770 | 0.725 | 0.570 | 0.083 | 0.066 |
| UIQA | 0.846 | 0.798 | 0.657 | 0.061 | 0.042 |

Performance on ECCV AIM 2024 UHD-IQA Challenge

| Team | SRCC | PLCC | KRCC | RMSE | MAE |
| :--- | :---: | :---: | :---: | :---: | :---: |
| SJTU MMLab (ours) | 0.846 | 0.798 | 0.657 | 0.061 | 0.042 |
| CIPLAB | 0.835 | 0.800 | 0.642 | 0.064 | 0.044 |
| ZX AIE Vector | 0.795 | 0.768 | 0.605 | 0.062 | 0.044 |
| I2Group | 0.788 | 0.756 | 0.598 | 0.066 | 0.046 |
| Dominator | 0.731 | 0.712 | 0.539 | 0.072 | 0.052 |
| ICL | 0.517 | 0.521 | 0.361 | 0.136 | 0.115 |

Usage

Environments

Requirements: torch (>=1.13), torchvision, pandas, ptflops, numpy, Pillow

```
conda create -n UIQA python=3.8
conda activate UIQA
# installs PyTorch 2.4; any PyTorch >= 1.13 works
conda install pytorch torchvision torchaudio pytorch-cuda=12.4 -c pytorch -c nvidia
pip install pandas ptflops numpy Pillow
```

Dataset

Download the UHD-IQA dataset.

Train UIQA

Download the model pre-trained on the AVA dataset.

```
CUDA_VISIBLE_DEVICES=0,1 python -u train.py \
--num_epochs 100 \
--batch_size 12 \
--n_fragment 15 \
--resize 512 \
--crop_size 480 \
--salient_patch_dimension 480 \
--lr 0.00001 \
--lr_weight_L2 0.1 \
--lr_weight_pair 1 \
--decay_ratio 0.9 \
--decay_interval 10 \
--random_seed 1000 \
--snapshot ckpts \
--pretrained_path ckpts/Model_SwinT_AVA_size_480_epoch_10.pth \
--database_dir UHDIQA/challenge/training/ \
--model UIQA \
--multi_gpu True \
--print_samples 20 \
--database UHD_IQA \
>> logfiles/train_UIQA.log
```

Test UIQA

Put your trained model in the ckpts folder, or download the provided model trained on the UHD-IQA dataset (the model weights and the quality alignment profile file) into the ckpts folder.

```
CUDA_VISIBLE_DEVICES=0 python -u test_single_image.py \
--model_path ckpts/ \
--trained_model_file UIQA.pth \
--popt_file UIQA.npy \
--image_path demo/8.jpg \
--resize 512 \
--crop_size 480 \
--n_fragment 15 \
--salient_patch_dimension 480 \
--model UIQA
```
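
The `--popt_file` holds the quality alignment profile: parameters of a monotonic mapping, fitted on the training data, that aligns raw model outputs with the MOS scale. Assuming it stores the coefficients of the four-parameter logistic commonly used in IQA (an assumption; the exact functional form in the repository may differ), applying it would look like this:

```python
import numpy as np

def logistic_4p(x, beta1, beta2, beta3, beta4):
    # four-parameter logistic: monotonically maps raw predictions to the MOS scale
    return (beta1 - beta2) / (1.0 + np.exp(-(x - beta3) / beta4)) + beta2

popt = np.load("ckpts/UIQA.npy")  # fitted alignment parameters (assumed 4 values)
raw_score = 0.62                  # hypothetical raw model output
print(logistic_4p(raw_score, *popt))
```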

Citation

If you find this code useful for your research, please cite:

```
@article{sun2024assessing,
  title={Assessing UHD Image Quality from Aesthetics, Distortions, and Saliency},
  author={Sun, Wei and Zhang, Weixia and Cao, Yuqin and Cao, Linhan and Jia, Jun and Chen, Zijian and Zhang, Zicheng and Min, Xiongkuo and Zhai, Guangtao},
  journal={arXiv preprint arXiv:2409.00749},
  year={2024}
}
```

Acknowledgement

  1. https://github.com/zwx8981/LIQE
  2. https://github.com/VQAssessment/FAST-VQA-and-FasterVQA
  3. https://github.com/imfing/ava_downloader