# Semantically Self-Aligned Network for Text-to-Image Part-aware Person Re-identification
We provide the code for reproducing the results of our paper *Semantically Self-Aligned Network for Text-to-Image Part-aware Person Re-identification*.
## Getting Started
### Dataset Preparation
- **CUHK-PEDES**

  Download the CUHK-PEDES dataset from here and organize it in the `dataset` folder as follows:

  ```
  |-- dataset/
  |   |-- <CUHK-PEDES>/
  |       |-- imgs
  |           |-- cam_a
  |           |-- cam_b
  |           |-- ...
  |       |-- reid_raw.json
  ```

  Then run `process_CUHK_data.py` as follows:

  ```
  cd SSAN
  python ./dataset/process_CUHK_data.py
  ```
- **ICFG-PEDES**

  Please request the ICFG-PEDES database from 272521211@qq.com and organize it in the `dataset` folder as follows:

  ```
  |-- dataset/
  |   |-- <ICFG-PEDES>/
  |       |-- imgs
  |           |-- test
  |           |-- train
  |       |-- ICFG_PEDES.json
  ```

  Note that our ICFG-PEDES is collected from MSMT17, so we keep MSMT17's storage structure to avoid losing information such as camera labels and shooting time. The `test` and `train` folders here therefore do not reflect the actual split of ICFG-PEDES; the exact split is determined by `ICFG_PEDES.json`, which is organized like the `reid_raw.json` in CUHK-PEDES (see the sketch after this list for loading both annotation files).

  Then run `process_ICFG_data.py` as follows:

  ```
  cd SSAN
  python ./dataset/process_ICFG_data.py
  ```
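Since `ICFG_PEDES.json` is organized like `reid_raw.json`, one loader can sanity-check both files after download. The sketch below is illustrative only: it assumes each annotation entry carries the standard CUHK-PEDES fields (`split`, `captions`, `file_path`, `id`) and the folder layout shown above; adjust the paths and keys if your copies differ.

```python
# Sanity-check sketch for the CUHK-PEDES / ICFG-PEDES annotation files.
# ASSUMPTION: each entry has the standard CUHK-PEDES keys "split",
# "captions", "file_path", and "id"; this README does not document the
# schema, so adjust the keys below if your copy differs.
import json
from collections import Counter, defaultdict

def summarize(json_path):
    with open(json_path, 'r') as f:
        annotations = json.load(f)

    images_per_split = Counter(item['split'] for item in annotations)
    ids_per_split = defaultdict(set)
    total_captions = 0
    for item in annotations:
        ids_per_split[item['split']].add(item['id'])
        total_captions += len(item['captions'])

    print(json_path)
    for split, n_images in sorted(images_per_split.items()):
        print(f'  {split}: {n_images} images, {len(ids_per_split[split])} identities')
    print(f'  total captions: {total_captions}')

summarize('dataset/CUHK-PEDES/reid_raw.json')
summarize('dataset/ICFG-PEDES/ICFG_PEDES.json')
```

If the printed split sizes look reasonable, the annotation files are in place and the processing scripts above should run without path errors.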
## Training and Testing

```
sh experiments/CUHK-PEDES/train.sh
sh experiments/ICFG-PEDES/train.sh
```
## Evaluation

```
sh experiments/CUHK-PEDES/test.sh
sh experiments/ICFG-PEDES/test.sh
```
## Results on CUHK-PEDES and ICFG-PEDES

### Our Results on the CUHK-PEDES dataset

<img src="./figure/CUHK-PEDES_result.GIF">

### Our Results on the ICFG-PEDES dataset

<img src="./figure/ICFG-PEDES_result.GIF"/>

## Citation
If this work is helpful for your research, please cite our work:
```bibtex
@article{ding2021semantically,
  title={Semantically Self-Aligned Network for Text-to-Image Part-aware Person Re-identification},
  author={Ding, Zefeng and Ding, Changxing and Shao, Zhiyin and Tao, Dacheng},
  journal={arXiv preprint arXiv:2107.12666},
  year={2021}
}
```