Patron

This repo contains the code for our ACL 2023 paper "Cold-Start Data Selection for Few-shot Language Model Fine-tuning: A Prompt-Based Uncertainty Propagation Approach".

Model Framework

[Figure: overview of the Patron model framework]

Performance

The results on different datasets (using 128 labels as the budget) for fine-tuning are summarized as follows:

| Method | IMDB | Yelp-full | AG News | Yahoo! | DBPedia | TREC | Mean |
|---|---|---|---|---|---|---|---|
| Full Supervision (RoBERTa-base) | 94.1 | 66.4 | 94.0 | 77.6 | 99.3 | 97.2 | 88.1 |
| Random Sampling | 86.6 | 47.7 | 84.5 | 60.2 | 95.0 | 85.6 | 76.7 |
| Best Baseline (Chang et al., 2021) | 88.5 | 46.4 | 85.6 | 61.3 | 96.5 | 87.7 | 77.6 |
| Patron (Ours) | 89.6 | 51.2 | 87.0 | 65.1 | 97.0 | 91.1 | 80.2 |

For prompt-based learning, we use the same pipeline as LM-BFF. The results with 128 labels are shown below.

| Method | IMDB | Yelp-full | AG News | Yahoo! | DBPedia | TREC | Mean |
|---|---|---|---|---|---|---|---|
| Full Supervision (RoBERTa-base) | 94.1 | 66.4 | 94.0 | 77.6 | 99.3 | 97.2 | 88.1 |
| Random Sampling | 87.7 | 51.3 | 84.9 | 64.7 | 96.0 | 85.0 | 78.2 |
| Best Baseline (Yuan et al., 2020) | 88.9 | 51.7 | 87.5 | 65.9 | 96.8 | 86.5 | 79.5 |
| Patron (Ours) | 89.3 | 55.6 | 87.8 | 67.6 | 97.4 | 88.9 | 81.1 |

Dependencies

```
python 3.8
transformers==4.2.0
pytorch==1.8.0
scikit-learn
faiss-cpu==1.6.4
sentencepiece==0.1.96
tqdm>=4.62.2
tensorboardX
nltk
openprompt
```
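
Assuming a standard pip setup (note that PyTorch is published on PyPI as `torch`), the environment can be installed roughly as follows:

```
pip install transformers==4.2.0 torch==1.8.0 scikit-learn \
    faiss-cpu==1.6.4 sentencepiece==0.1.96 "tqdm>=4.62.2" \
    tensorboardX nltk openprompt
```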

Datasets

We use the following six datasets for the main experiments.

| Dataset | Task | Number of Classes | Number of Unlabeled/Test Data |
|---|---|---|---|
| IMDB | Sentiment | 2 | 25k/25k |
| Yelp-full | Sentiment | 5 | 39k/10k |
| AG News | News Topic | 4 | 119k/7.6k |
| Yahoo! Answers | QA Topic | 5 | 180k/30.1k |
| DBPedia | Ontology Topic | 14 | 280k/70k |
| TREC | Question Topic | 6 | 5k/0.5k |

The processed data can be found at this link. Where to place these datasets is described in the following sections.

Data Selection Pipeline

a) Generating Unsupervised Text Embeddings via SimCSE

Run the following command:

```
python gen_embedding_simcse.py --dataset [dataset name] --gpuid [GPU id] --batchsize [batch size]
```
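
For reference, here is a minimal sketch of what this step computes, assuming the public `princeton-nlp/unsup-simcse-roberta-base` checkpoint (the repo's script additionally handles dataset loading, batching, and GPU placement via the CLI flags above):

```python
# Minimal sketch of unsupervised SimCSE embedding generation;
# illustrative only, not the repo's gen_embedding_simcse.py.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/unsup-simcse-roberta-base")
model = AutoModel.from_pretrained("princeton-nlp/unsup-simcse-roberta-base")
model.eval()

texts = ["a movie review ...", "another unlabeled document ..."]

with torch.no_grad():
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    outputs = model(**batch)
    embeddings = outputs.last_hidden_state[:, 0]  # SimCSE uses the [CLS] vector

np.save("embeddings.npy", embeddings.cpu().numpy())
```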

b) Estimating Uncertainty via Prompts

We provide the pseudo predictions obtained via prompts at the above link for all datasets. Please refer to the original papers for details.
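
To illustrate what these pseudo predictions look like, here is a minimal sketch with a hypothetical AG News template and verbalizer (the released files were produced with the referenced pipelines, not with this snippet):

```python
# Illustrative sketch of prompt-based pseudo labeling; the template and
# verbalizer below are hypothetical examples.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
model.eval()

# Hypothetical verbalizer: one label word per AG News class.
label_words = ["world", "sports", "business", "technology"]
label_ids = [tokenizer.encode(" " + w, add_special_tokens=False)[0] for w in label_words]

text = "Wall St. slides as oil prices climb."
prompt = f"{text} This topic is about {tokenizer.mask_token}."

with torch.no_grad():
    inputs = tokenizer(prompt, return_tensors="pt")
    logits = model(**inputs).logits

mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
probs = logits[0, mask_pos, label_ids].softmax(dim=-1)  # pseudo label distribution
entropy = -(probs * probs.log()).sum()                  # one simple uncertainty measure
```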

c) Data Selection with Uncertainty Propagation and PTR

Run the following command (example on the AG News dataset):

```
python patron_sample.py --dataset agnews --k 50 --rho 0.01 --gamma 0.5 --beta 0.5
```

Some important hyperparameters: `--k`, `--rho`, `--gamma`, and `--beta`; please refer to the paper for their exact roles.
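
For intuition only, here is a simplified sketch of the uncertainty-propagation step with `faiss` (the exact kernel and the PTR-based selection stage follow the paper, not this snippet; `uncertainty.npy` is a hypothetical file name):

```python
# Simplified, illustrative sketch of propagating prompt-based uncertainty
# over the SimCSE embedding space via k-nearest neighbors.
import faiss
import numpy as np

embeddings = np.load("embeddings.npy").astype("float32")    # from step (a)
uncertainty = np.load("uncertainty.npy").astype("float32")  # from step (b); name assumed
k, rho = 50, 0.01

faiss.normalize_L2(embeddings)                  # cosine similarity via inner product
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(embeddings)
sims, nbrs = index.search(embeddings, k + 1)    # first hit is the point itself

# One plausible propagation rule: accumulate neighbors' uncertainty,
# weighted by a similarity kernel that decays with rho.
weights = np.exp(-rho * (1.0 - sims[:, 1:]))
propagated = uncertainty + (weights * uncertainty[nbrs[:, 1:]]).sum(axis=1)
```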

Experiments

Running Fine-tuning Experiments

See finetune folder for detailed instructions.

Running Prompt-based Learning Experiments

See prompt_learning folder for detailed instructions.

Running on a New Dataset

See this link for the pipeline for generating prompt-based predictions. Note that you need to customize your own prompt verbalizers and templates.

To generate the document embeddings, follow the SimCSE commands above.

Once you have generated the indices of the selected data, you can use the pipelines in Running Fine-tuning Experiments and Running Prompt-based Learning Experiments for the few-shot fine-tuning and prompt-based learning experiments, as sketched below.
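
As a hypothetical example of that glue step (all file names below are placeholders, not the repo's actual outputs):

```python
# Hypothetical glue code: subset the training set with the selected
# indices before running the few-shot pipelines.
import json

with open("selected_indices.json") as f:   # output of the selection step (name assumed)
    selected = json.load(f)

with open("train.jsonl") as f:             # one example per line (format assumed)
    train = [json.loads(line) for line in f]

with open("train_fewshot.jsonl", "w") as f:
    for i in selected:
        f.write(json.dumps(train[i]) + "\n")
```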

Citation

Please kindly cite the following paper if you find this repo helpful for your research. Thanks in advance!

```
@article{yu2022patron,
  title={Cold-Start Data Selection for Few-shot Language Model Fine-tuning: A Prompt-Based Uncertainty Propagation Approach},
  author={Yue Yu and Rongzhi Zhang and Ran Xu and Jieyu Zhang and Jiaming Shen and Chao Zhang},
  journal={arXiv preprint arXiv:2209.06995},
  year={2022}
}
```

Acknowledgements

We would like to thank the authors of SimCSE and OpenPrompt for their well-organized code.