PKU-AIGIQA-4K

This is the official repo of the paper PKU-AIGIQA-4K: A Perceptual Quality Assessment Database for Both Text-to-Image and Image-to-Image AI-Generated Images, an extension of our previous work PKU-I2IQA:

@misc{yuan2024pkuaigiqa4k,
    title={PKU-AIGIQA-4K: A Perceptual Quality Assessment Database for Both Text-to-Image and Image-to-Image AI-Generated Images}, 
    author={Jiquan Yuan and Fanyi Yang and Jihe Li and Xinyan Cao and Jinming Che and Jinlong Lin and Xixin Cao},
    year={2024},
    eprint={2404.18409},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
<hr />

Abstract: In recent years, image generation technology has advanced rapidly, resulting in a vast array of AI-generated images (AIGIs). However, the quality of these AIGIs is highly inconsistent, and low-quality AIGIs severely impair users' visual experience. Due to the widespread application of AIGIs, AI-generated image quality assessment (AIGIQA), which aims to evaluate the quality of AIGIs from the perspective of human perception, has garnered increasing interest among scholars. Nonetheless, current research has not yet fully explored this field. We observe that existing databases are limited to images generated under a single scenario setting: databases such as AGIQA-1K, AGIQA-3K, and AIGCIQA2023, for example, only include images generated by text-to-image generative models. This highlights a critical gap in the current research landscape, underscoring the need for dedicated databases covering image-to-image scenarios, as well as more comprehensive databases that encompass a broader range of AI-generated image scenarios. Addressing these issues, we establish a large-scale perceptual quality assessment database for both text-to-image and image-to-image AIGIs, named PKU-AIGIQA-4K. We then conduct a well-organized subjective experiment to collect quality labels for the AIGIs and perform a comprehensive analysis of the PKU-AIGIQA-4K database. Depending on how image prompts are used during training, we propose three image quality assessment (IQA) methods based on pre-trained models: a no-reference method NR-AIGCIQA, a full-reference method FR-AIGCIQA, and a partial-reference method PR-AIGCIQA. Finally, leveraging the PKU-AIGIQA-4K database, we conduct extensive benchmark experiments and compare the performance of the proposed methods with current IQA methods.

<hr />

Three IQA methods based on pre-trained models

NR-AIGCIQA

<img src="./Fig/NR.png" width="600" height="200">

FR-AIGCIQA

<img src="./Fig/FR.png" width="600" height="300">

PR-AIGCIQA

<img src="./Fig/PR.png" width="750" height="300">
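The three methods differ mainly in which features are fed to the quality regressor. Below is a shape-level sketch with random stand-in features; the 2048-d feature size, the linear head, and plain concatenation are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 2048  # assumed backbone feature dimension

# Stand-ins for backbone features of a generated image and its image prompt.
f_gen = rng.standard_normal(D)
f_prompt = rng.standard_normal(D)

def linear_head(features):
    """Illustrative regression head mapping a feature vector to a quality score."""
    w = rng.standard_normal(features.shape[0]) / np.sqrt(features.shape[0])
    return float(w @ features)

# NR-AIGCIQA: the generated image alone (no reference, e.g. for T2I images).
score_nr = linear_head(f_gen)

# FR-AIGCIQA: the image prompt serves as a full reference alongside the
# generated image; concatenation here is a simplification of the fusion step.
score_fr = linear_head(np.concatenate([f_gen, f_prompt]))

# PR-AIGCIQA: the image prompt is used as a partial reference; its exact role
# during training follows the paper and is only approximated by this sketch.
score_pr = linear_head(np.concatenate([f_gen, f_prompt]))
```

The point of the sketch is the interface: all three variants reduce to "extract features, optionally fuse with prompt features, regress a score," which is why a shared pre-trained backbone can serve every method.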

Pre-trained visual backbone

For feature extraction from input images, we select several backbone networks pre-trained on the ImageNet dataset.

Installation

# clone this repo
git clone https://github.com/jiquan123/AIGIQA4K.git
cd AIGIQA4K

# create environment
conda create -n iqa python=3.8  # Python version is an example; any recent version should work
conda activate iqa
pip install -r requirements.txt

Database

The constructed PKU-AIGIQA-4K database can be downloaded via the links below: 1. [Baidu Netdisk (extraction code: AIGI)]; 2. [Google Drive].

The data structure used for this repo should be:

├── Dataset
│   ├── PKU-AIGIQA-4K
│   │   ├── All
│   │   │   ├── DALLE_1000_00.jpg
│   │   │   ├── ...
│   │   │   ├── SD_1199_11.jpg
│   │   ├── I2I
│   │   │   ├── Generated_image
│   │   │   │   ├── All
│   │   │   │   │   ├── ....jpg
│   │   │   │   ├── MJ
│   │   │   │   │   ├── ....jpg
│   │   │   │   ├── SD
│   │   │   │   │   ├── ....jpg
│   │   │   ├── Image_prompt
│   │   │   │   ├── 0.jpg
│   │   │   │   ├── ...
│   │   │   │   ├── 199.jpg
│   │   ├── T2I
│   │   │   ├── All
│   │   │   │   ├── ....jpg
│   │   │   ├── DALLE
│   │   │   │   ├── ....jpg
│   │   │   ├── SD
│   │   │   │   ├── ....jpg
│   │   │   ├── MJ
│   │   │   │   ├── ....jpg
│   │   ├── annotation.xlsx
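Assuming the tree above, the key paths can be built with `pathlib` (no files are read; the folder names are taken verbatim from the layout):

```python
from pathlib import Path

root = Path("Dataset") / "PKU-AIGIQA-4K"

# Sub-directories as laid out above.
t2i_all = root / "T2I" / "All"                      # all text-to-image results
i2i_generated = root / "I2I" / "Generated_image" / "All"
i2i_prompts = root / "I2I" / "Image_prompt"         # 0.jpg ... 199.jpg
annotations = root / "annotation.xlsx"              # subjective quality labels

print((i2i_prompts / "0.jpg").as_posix())
# Dataset/PKU-AIGIQA-4K/I2I/Image_prompt/0.jpg
```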

Training

For instructions on training the NR-AIGCIQA/FR-AIGCIQA/PR-AIGCIQA models, refer to ./PKU-AIGIQA-4K/AIGIQA4K/README.md.

For instructions on training the TIER-NR/TIER-FR/TIER-PR models, refer to ./PKU-AIGIQA-4K/TIER-4K/README.md.

Contact

If you have any questions, please contact yuanjiquan@stu.pku.edu.cn.