# [NeurIPS-22] Label-Aware Global Consistency for Multi-Label Learning with Single Positive Labels
The implementation for the paper Label-Aware Global Consistency for Multi-Label Learning with Single Positive Labels (NeurIPS 2022).
See many more related works in Awesome Weakly Supervised Multi-Label Learning!
## Preparing Data

See the `README.md` file in the `data` directory for instructions on downloading and preparing the datasets. (The detailed procedures follow Multi-Label Learning from Single Positive Labels.)
## Training Model

To train and evaluate a model, the following two steps are required:

- In the first stage, warm up the model with the AN loss and the PLC regularization. Run:

```bash
python first_stage.py --dataset_name=coco --dataset_dir=./data \
    --lambda_plc=1 --threshold=0.6 \
    --batch_size=32
```

- In the second stage, train the model by adding the LAC regularization. Run:

```bash
python second_stage.py --dataset_name=coco --dataset_dir=./data \
    --lambda_plc=1 --threshold=0.9 \
    --lambda_lac=1 --temperature=0.5 --queue_size=512 \
    --batch_size=32 --is_proj
```
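As a rough illustration of the first-stage objective (a minimal sketch, not the repository's implementation), the AN loss counts the single observed positive label as a positive and treats every unobserved label as a negative in a standard binary cross-entropy. All names below (`an_loss`, `observed_pos`) are hypothetical:

```python
import numpy as np

def an_loss(probs, observed_pos, eps=1e-7):
    """Assume-Negative (AN) loss sketch: the observed positive label
    contributes -log p, while every unobserved label is treated as a
    negative and contributes -log(1 - p)."""
    n, c = probs.shape
    targets = np.zeros((n, c))
    targets[np.arange(n), observed_pos] = 1.0  # one positive label per image
    losses = -(targets * np.log(probs + eps)
               + (1 - targets) * np.log(1 - probs + eps))
    return losses.mean()

# toy example: 2 images, 3 classes, one observed positive each
probs = np.array([[0.9, 0.2, 0.1],
                  [0.3, 0.8, 0.4]])
observed = np.array([0, 1])
print(an_loss(probs, observed))
```

The PLC regularization additionally promotes unobserved labels whose predicted probability exceeds `--threshold` to pseudo positives; that part is omitted here for brevity.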
## Hyper-Parameters

To reproduce the results reported in the paper, adjust the following parameters:

- `dataset_name`: The dataset to use, e.g. 'coco', 'voc', 'nus', 'cub'.
- `dataset_dir`: The directory containing all datasets.
- `batch_size`: The batch size of samples (images).
- `lambda_plc`: The weight of the PLC regularization term.
- `lambda_lac`: The weight of the LAC regularization term.
- `threshold`: The threshold for pseudo positive labels.
- `temperature`: The temperature for the LAC regularization.
- `queue_size`: The size of the memory queue.
- `is_proj`: The switch for the projector that generates label-wise embeddings.
- `is_data_parallel`: The switch for training with multiple GPUs.
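To make `queue_size` concrete: a memory queue of this kind holds the most recent embeddings and evicts the oldest once full. The sketch below is a hypothetical FIFO buffer (the class name `EmbeddingQueue` is an assumption, not code from this repository):

```python
from collections import deque
import numpy as np

class EmbeddingQueue:
    """Minimal FIFO memory-queue sketch: keeps at most `queue_size`
    embeddings, silently discarding the oldest when new ones arrive."""
    def __init__(self, queue_size=512):
        self.buf = deque(maxlen=queue_size)  # deque handles eviction

    def enqueue(self, embeddings):
        for e in embeddings:          # embeddings: array of shape (k, d)
            self.buf.append(e)

    def get(self):
        # stack the stored embeddings into a (current_size, d) array
        return np.stack(self.buf) if self.buf else np.empty((0,))

q = EmbeddingQueue(queue_size=4)
q.enqueue(np.random.randn(3, 8))  # three 8-d embeddings
q.enqueue(np.random.randn(3, 8))  # exceeds capacity: oldest two dropped
print(q.get().shape)              # (4, 8)
```

The queue is bounded so that the contrastive term compares against a fixed-size pool of recent embeddings regardless of how long training runs.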
## Misc

- The ranges of the hyper-parameters can be found in the paper.
- `dataset_dir` should contain four folders: 'coco/', 'voc/', 'nus/', and 'cub/'. Please make sure the dataset paths are correct before training.
- We performed all experiments on two GeForce RTX 3090 GPUs, so the code sets `os.environ["CUDA_VISIBLE_DEVICES"] = "0, 1"`. Multi-GPU training is `False` by default; you can enable it with `--is_data_parallel`.
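If your machine has a different GPU layout, the visible devices can be overridden before training; for example (a sketch, assuming the standard CUDA environment variable):

```python
import os

# Restrict which GPUs CUDA sees; must run before any CUDA context is created.
os.environ["CUDA_VISIBLE_DEVICES"] = "0, 1"

# With --is_data_parallel enabled, the model would typically be wrapped as
# (hypothetical, assuming a PyTorch model):
#   model = torch.nn.DataParallel(model)
print(os.environ["CUDA_VISIBLE_DEVICES"])  # "0, 1"
```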
## Reference

If you find the code useful in your research, please consider citing our paper:

```bibtex
@inproceedings{xie2022labelaware,
  title={Label-Aware Global Consistency for Multi-Label Learning with Single Positive Labels},
  author={Ming-Kun Xie and Jia-Hao Xiao and Sheng-Jun Huang},
  booktitle={Advances in Neural Information Processing Systems},
  year={2022}
}
```