T2VQA

<p align="center"> <img src="overview.png" /> </p>

This is the official repository for the paper "Subjective-Aligned Dataset and Metric for Text-to-Video Quality Assessment".

With the rapid development of generative models, Artificial Intelligence-Generated Content (AIGC) has become increasingly prevalent in daily life. Among these techniques, Text-to-Video (T2V) generation has received widespread attention. Although many T2V models have been released that generate videos of high perceptual quality, there is still no method for quantitatively evaluating the quality of these videos. To address this issue, we establish the largest-scale Text-to-Video Quality Assessment DataBase (T2VQA-DB) to date. The dataset consists of 10,000 videos generated by 9 different T2V models, and we conduct a subjective study to obtain each video's mean opinion score. Based on T2VQA-DB, we propose a novel transformer-based model for subjective-aligned Text-to-Video Quality Assessment (T2VQA). The model extracts features from the text-video alignment and video fidelity perspectives, then leverages a large language model to predict the quality score. Experimental results show that T2VQA outperforms existing T2V metrics and SOTA video quality assessment models. Quantitative analysis indicates that T2VQA gives subjectively aligned predictions, validating its effectiveness.
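
To give a feel for the two-branch design described above, here is a minimal, illustrative sketch in PyTorch: one branch stands in for the BLIP-based text-video alignment features, the other for the Video Swin Transformer fidelity features, and a small regression head stands in for the LLM-based scorer. All module names, dimensions, and the fusion scheme are placeholders, not the official implementation (see train.py and t2vqa.yml for the real model).

```python
# Illustrative sketch only -- NOT the official T2VQA implementation.
import torch
import torch.nn as nn

class T2VQASketch(nn.Module):
    def __init__(self, align_dim=768, fidelity_dim=768, hidden_dim=512):
        super().__init__()
        # Stand-in for the BLIP-based text-video alignment branch.
        self.align_proj = nn.Linear(align_dim, hidden_dim)
        # Stand-in for the Video Swin Transformer fidelity branch.
        self.fidelity_proj = nn.Linear(fidelity_dim, hidden_dim)
        # Stand-in for the LLM-based quality regression head.
        self.score_head = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, align_feat, fidelity_feat):
        # Fuse the two feature streams and regress a scalar quality score.
        fused = torch.cat(
            [self.align_proj(align_feat), self.fidelity_proj(fidelity_feat)], dim=-1
        )
        return self.score_head(fused).squeeze(-1)

# Example: a batch of 4 videos with pre-extracted branch features.
model = T2VQASketch()
scores = model(torch.randn(4, 768), torch.randn(4, 768))
print(scores.shape)  # torch.Size([4])
```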

Database

Download T2VQA-DB, which contains 10,000 text-generated videos together with their corresponding mean opinion scores (MOSs).
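
The exact packaging of the labels may differ between releases; the snippet below is only a hypothetical example of reading a MOS table, assuming a CSV file named mos.csv with video_name and mos columns.

```python
# Hypothetical MOS loader -- file name and column names are assumptions,
# adjust them to the actual release format of T2VQA-DB.
import csv

def load_mos_labels(csv_path="mos.csv"):
    labels = {}
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            labels[row["video_name"]] = float(row["mos"])
    return labels

labels = load_mos_labels()
print(f"Loaded MOS for {len(labels)} videos")
```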

Pre-trained Weights

The T2VQA model is fine-tuned from the following pre-trained weights:

BLIP w/ ViT-L: model_large.pth

Video Swin Transformer-T: swin_tiny_patch244_window877_kinetics400_1k.pth

BERT-base: https://huggingface.co/google-bert/bert-base-uncased

Vicuna-7b-v1.1: https://huggingface.co/lmsys/vicuna-7b-v1.1

Replace the paths in t2vqa.yml with your local paths.
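
Before launching training, it can help to verify that every path in t2vqa.yml points to an existing file. The helper below is not part of the repository; it is a small sketch that loads the config with PyYAML and warns about path-like string values that are missing locally.

```python
# Optional sanity check (not part of the repo): flag missing weight/config paths.
import os
import yaml

def check_paths(node, prefix=""):
    # Recursively walk the YAML tree and warn about path-like strings
    # that do not exist on the local filesystem.
    if isinstance(node, dict):
        for key, value in node.items():
            check_paths(value, f"{prefix}{key}.")
    elif isinstance(node, str) and ("/" in node or node.endswith((".pth", ".yml"))):
        if not os.path.exists(node):
            print(f"[warn] {prefix[:-1]} -> {node} not found")

with open("t2vqa.yml") as f:
    check_paths(yaml.safe_load(f))
```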

Training

python train.py -o ./t2vqa.yml

Testing

python test.py -o ./t2vqa.yml

Citation

@article{kou2024subjective,
  title={Subjective-Aligned Dataset and Metric for Text-to-Video Quality Assessment},
  author={Kou, Tengchuan and Liu, Xiaohong and Zhang, Zicheng and Li, Chunyi and Wu, Haoning and Min, Xiongkuo and Zhai, Guangtao and Liu, Ning},
  journal={arXiv preprint arXiv:2403.11956},
  year={2024}
}