DiverseEvol: Self-Evolved Diverse Data Sampling for Efficient Instruction Tuning

<p align="center"> šŸ“– <a href="https://arxiv.org/abs/2311.08182" target="_blank">Paper</a> <br> </p>

We introduce DiverseEvol, an efficient instruction-tuning method that allows the model itself to iteratively sample training subsets to improve its own performance, without any external supervision from humans or more advanced LLMs. Central to our data selection technique is the maintenance of high diversity in the chosen subsets, as the model selects new data points most distinct from any existing ones according to its current embedding space. In experiments across three datasets and benchmarks, our models, trained on less than 8% of the original dataset, maintain or improve performance compared with finetuning on full data.
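For intuition, the selection step can be thought of as a greedy farthest-point search in the model's embedding space. The sketch below is illustrative only (the function name, seeding choice, and distance metric are ours, not the repository's API) and assumes you already have an (n, d) array of instruction embeddings produced by the current-round model:

import numpy as np

def select_most_distinct(embeddings, selected_idx, k):
    """Greedily add k points, each time picking the candidate farthest
    from its nearest already-selected neighbor (k-center-style selection)."""
    n = embeddings.shape[0]
    selected = list(selected_idx)
    dists = np.full(n, np.inf)               # distance to the nearest selected point
    for s in selected:
        dists = np.minimum(dists, np.linalg.norm(embeddings - embeddings[s], axis=1))
    for _ in range(k):
        # if selected_idx is empty, the first argmax simply picks index 0 as the seed
        new_idx = int(np.argmax(dists))      # most distinct from everything chosen so far
        selected.append(new_idx)
        dists = np.minimum(dists, np.linalg.norm(embeddings - embeddings[new_idx], axis=1))
    return selected

In DiverseEvol this selection is repeated every iteration, and the embeddings are recomputed with the newly fine-tuned model, so the notion of "distinct" evolves together with the model.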

The figure below shows the overall workflow of DiverseEvol:

<p align="center"> <img src="assets/diverse_evol_model.png" width=100%> </p>

Getting Started

Prerequisites

Make sure you have the following prerequisites installed:

Environment

Or create a Conda environment and activate it with the provided environment.yml file:

conda env create -f environment.yml
source activate diverse_evol

Running DiverseEvol

All configurations are stored in .yml files. We provide two example configs:

You can customize these configurations to suit your experiment needs. Key configurations to consider include:

To start an experiment, use the following command, specifying your configuration file and wandb key:

CUDA_VISIBLE_DEVICES=0,1,2,3 nohup python -m torch.distributed.run --nproc_per_node=4 train.py --config_file {YOUR-CONFIG-FILE} --wandb_key {YOUR-WANDB-KEY} > {YOUR-LOG-FILE} 2>&1 &

Your experiment results will be saved in the following format:

evol_res
└── {YOUR-RESULT-DIR-NAME}
    ├── data  # training data pool and unselected data pool for each iteration
    │   ├── rd_0_labeled.json
    │   ├── rd_0_unlabeled.json
    │   ├── rd_1_labeled.json
    │   ├── rd_1_unlabeled.json
    │   ├── ...
    │   ├── rd_N_labeled.json
    │   └── rd_N_unlabeled.json
    └── output  # instruction-tuned chat model for each iteration
        ├── rd_0
        ├── rd_1
        ├── ...
        └── rd_N
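The per-round pools are plain JSON files, so you can inspect how the training pool grows across rounds. A minimal sketch, assuming each file is a JSON list of instruction records (replace the placeholder directory with your own result directory):

import json, os

data_dir = "evol_res/{YOUR-RESULT-DIR-NAME}/data"        # placeholder path
for rd in range(3):                                      # first few rounds
    with open(os.path.join(data_dir, f"rd_{rd}_labeled.json")) as f:
        labeled = json.load(f)                           # training pool selected up to round rd
    with open(os.path.join(data_dir, f"rd_{rd}_unlabeled.json")) as f:
        unlabeled = json.load(f)                         # remaining candidate pool
    print(f"round {rd}: {len(labeled)} selected, {len(unlabeled)} remaining")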

Generating Answers

With the models trained in each iteration, you can generate responses to questions from different test sets, and customize the range of iterations to consider.

To generate answers, use the following command:

nohup python eval.py --eval_stage generate_answer --device {GPU-IDX} --schedule {YOUR-RESULT-DIR-NAME} --rd_start 0 --rd_end 10 --testsets vicuna koala wizardlm > {YOUR-LOG-FILE} 2>&1 &

This command will create a folder structure for the generated answers in the following format:

evol_answer
├── koala_test
│   └── {YOUR-RESULT-DIR-NAME}
│       ├── rd_0.jsonl
│       ├── rd_1.jsonl
│       ├── ...
│       └── rd_N.jsonl  # koala_bench answers generated by the chat model trained in iteration/round N
├── vicuna_test
│   └── {YOUR-RESULT-DIR-NAME}
│       ├── rd_0.jsonl
│       ├── rd_1.jsonl
│       ├── ...
│       └── rd_N.jsonl
└── wizardlm_test
    └── {YOUR-RESULT-DIR-NAME}
        ├── rd_0.jsonl
        ├── rd_1.jsonl
        ├── ...
        └── rd_N.jsonl

Feel free to customize the test sets (--testsets) and iteration range (--rd_start & --rd_end) to suit your specific analysis needs.
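Each rd_N.jsonl file contains one JSON object per line (one per test question). A quick way to load a round's answers without assuming specific field names (the path below is a placeholder):

import json

path = "evol_answer/vicuna_test/{YOUR-RESULT-DIR-NAME}/rd_3.jsonl"   # placeholder path
with open(path) as f:
    records = [json.loads(line) for line in f if line.strip()]
print(len(records), "answers; keys of first record:", list(records[0].keys()))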

Additionally, we provide all the answers generated by our trained models in evol_answer_reference/; these are exactly the answers behind our reported results. We also include the answers from ChatGPT (GPT-3.5-TURBO), which serves as a general competitor in our paper.

GPT4-Judge

After generating answers to the test sets, you can compare them pairwise using GPT4. Refer to our GPT4_JUDGE_TEMPLATE in consts.py or Appendix A of our paper.
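As a rough illustration of the pairwise protocol (not the repository's judging script), one could send each question with the two competing answers to GPT4 and parse the returned scores. The prompt below is only a stand-in for GPT4_JUDGE_TEMPLATE, and the snippet assumes a recent openai Python package with OPENAI_API_KEY set in the environment:

from openai import OpenAI   # assumes openai>=1.0 and OPENAI_API_KEY in the environment

client = OpenAI()

def judge(question, answer_a, answer_b, model="gpt-4"):
    # Stand-in prompt; the actual template is GPT4_JUDGE_TEMPLATE in consts.py / Appendix A.
    prompt = (
        "You are a helpful and precise assistant for checking the quality of two answers.\n"
        f"[Question]\n{question}\n\n[Answer A]\n{answer_a}\n\n[Answer B]\n{answer_b}\n\n"
        "Rate each answer on a scale of 1-10 and briefly explain your judgment."
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content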

In evol_gpt4_judge_reference/, you can find the GPT4-judge responses to all the comparisons reported in our paper.

Analyzing Dataset Diversity

You can also evaluate the selected data subsets in terms of diversity (measured by the Vendi Score) using the following command:

nohup python eval.py --eval_stage analyse_diversity --device {GPU-IDX} --schedule {YOUR-RESULT-DIR-NAME} --embed_model_path {PATH-TO-THE-MODEL-USED-FOR-EMBEDDING} --rd_start 0 --rd_end 10 > {YOUR-LOG-FILE} 2>&1 &

This command prints the Vendi Score of each round's training data directly in the output. It also produces a .pkl file that stores the reported diversity results:

evol_diversity
└── {YOUR-RESULT-DIR-NAME}_RD={rd_start}-{rd_end}_MEASURES=vendi.pkl

You can also adjust the iteration range (--rd_start & --rd_end).
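For reference, the Vendi Score of a selected subset is the exponential of the Shannon entropy of the eigenvalues of its normalized similarity matrix, so it behaves like an "effective number of distinct samples". A minimal NumPy version, assuming X holds unit-normalized embeddings of the selected instructions (such as those produced by the model passed via --embed_model_path):

import numpy as np

def vendi_score(X):
    """X: (n, d) array of unit-normalized embeddings."""
    n = X.shape[0]
    K = X @ X.T / n                      # cosine-similarity kernel, scaled so eigenvalues sum to 1
    eigvals = np.linalg.eigvalsh(K)
    eigvals = eigvals[eigvals > 1e-12]   # drop numerical zeros / tiny negatives
    return float(np.exp(-np.sum(eigvals * np.log(eigvals))))

The score ranges from 1 (all samples identical) to n (all samples mutually orthogonal); higher means a more diverse subset.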

Acknowledgement

Our implementation is largely inspired by the following codebases:

Citation

If you find our paper and code useful in your research, please consider giving us a star ⭐ and citing us šŸ“:

@article{wu2023self,
  title={Self-Evolved Diverse Data Sampling for Efficient Instruction Tuning},
  author={Wu, Shengguang and Lu, Keming and Xu, Benfeng and Lin, Junyang and Su, Qi and Zhou, Chang},
  journal={arXiv preprint arXiv:2311.08182},
  year={2023}
}

If you have any questions or need further assistance, please don't hesitate to reach out: wushengguang.wsg@alibaba-inc.com or lukeming.lkm@alibaba-inc.com.

Happy experimenting with DiverseEvol ^_^