CoEvol

CoEvol: Constructing Better Responses for Instruction Finetuning through Multi-Agent Cooperation

News

How to run CoEvol?

Installation

  git clone https://github.com/lirenhao1997/CoEvol.git
  cd CoEvol
  pip install -r requirements.txt

Data Preparation

Environment Configuration

Run CoEvol via External API

CoEvol can run on proprietary models via APIs. So far, we have tested our framework on the APIs of OpenAI, GLM, ERNIE, and custom proxies. To run CoEvol via external APIs, you should:

  1. Set your API keys in the appropriate fields within edit/api_keys.json.
  2. Run the example scripts located in the directory scripts/. If you want to run CoEvol based on a custom proxy, remember to set the URL with --proxy_api_url <YOUR_CUSTOM_PROXY>.
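
The exact schema of edit/api_keys.json is defined by the repository itself; as a purely hypothetical illustration (provider field names below are placeholders, not the file's actual keys), a keyed-by-provider layout might look like:

```json
{
  "openai": "<YOUR_OPENAI_API_KEY>",
  "glm": "<YOUR_GLM_API_KEY>",
  "ernie": "<YOUR_ERNIE_API_KEY>"
}
```

Check the shipped edit/api_keys.json for the actual field names before filling in your keys.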

Run CoEvol via Local Deployment

CoEvol can also run on open-source models via local deployment. To speed up agent interactions, we highly recommend using inference acceleration. In this implementation, we use vllm for local inference acceleration, which provides a chat API compatible with the OpenAI service.

  1. Deploy your local model with vllm. For more detailed settings, please refer to the official documents.
python -u -m vllm.entrypoints.openai.api_server \
    --host 0.0.0.0 \
    --model "<YOUR_MODEL_PATH>" \
    --served-model-name "<YOUR_MODEL_ALIAS>" \
    --tensor-parallel-size 4
  2. Run the example script in run_iter_pipeline_local.sh, depending on whether you use single-turn or multi-turn data.
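
Once the server is up, CoEvol reaches it through the OpenAI-compatible chat endpoint. As a minimal sketch (the URL, port, and model alias below are placeholders, not values prescribed by this repository), the request body has the standard chat-completion shape:

```python
import json

# Hypothetical values: adjust to your own deployment.
API_URL = "http://0.0.0.0:8000/v1/chat/completions"
MODEL_ALIAS = "<YOUR_MODEL_ALIAS>"  # must match --served-model-name above


def build_chat_request(user_message: str) -> str:
    """Build the JSON body for an OpenAI-compatible chat completion call."""
    payload = {
        "model": MODEL_ALIAS,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 0.7,
    }
    return json.dumps(payload)


body = build_chat_request("Hello, is the server up?")
```

POSTing this body to API_URL (e.g. with requests or the openai client pointed at your local base URL) is a quick way to verify the deployment before launching the full pipeline.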

Notes

  1. Use the parameters --save_mem and --save_log to save agent memories and running logs, respectively.
  2. Employ the parameters --start_indx and --end_indx to control the range of data evolution. If these parameters are not set, CoEvol will process the entire dataset for data evolution.
  3. Utilize the parameter --num_workers to control the number of worker threads used for concurrent data evolution; adjust it to stay within the rate limit of your APIs or the load capacity of your local server.
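
The slicing and concurrency options above compose naturally. As an illustrative sketch (evolve_one below is a hypothetical stand-in for one agent-evolution call, not a function from this repository), range selection plus a thread pool looks like:

```python
from concurrent.futures import ThreadPoolExecutor


def evolve_one(sample: dict) -> dict:
    # Hypothetical stand-in for one CoEvol agent-evolution round trip.
    return {**sample, "response": sample["response"].strip() + " (evolved)"}


def evolve_range(dataset, start_indx=None, end_indx=None, num_workers=4):
    """Process dataset[start_indx:end_indx] with num_workers threads.

    Keep num_workers compatible with your API rate limit or the load
    capacity of your local server.
    """
    subset = dataset[start_indx:end_indx]  # full dataset if both are None
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        return list(pool.map(evolve_one, subset))


data = [{"instruction": f"q{i}", "response": f"a{i} "} for i in range(10)]
evolved = evolve_range(data, start_indx=2, end_indx=5, num_workers=2)
```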

Data Organization for SFT

Once you successfully run the framework, both intermediate processes and full results will be stored in the directory ./edit/res/<JOB_NAME>. To obtain the evolved SFT data in JSON format, use the appropriate functions within the script edit/data_post_process.py, according to the data you have used.
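
The concrete helpers live in edit/data_post_process.py; as a hypothetical sketch of the final step (field names below are illustrative and should be matched to the actual keys in your results), converting evolved records into instruction-tuning JSON might look like:

```python
import json


def to_sft_format(records):
    """Keep only the fields needed for SFT.

    Field names are illustrative; match them to the actual keys stored
    under ./edit/res/<JOB_NAME>.
    """
    return [
        {
            "instruction": r["instruction"],
            "input": r.get("input", ""),
            "output": r["evolved_response"],
        }
        for r in records
    ]


records = [{"instruction": "Explain SFT.", "evolved_response": "SFT is ..."}]
sft_data = to_sft_format(records)
print(json.dumps(sft_data, indent=2))
```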

Fine-tuning with Evolved Data

For supervised fine-tuning, we utilize llama-factory to train our model. Please consult their repository for detailed instructions.
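
As a rough sketch of what such a run involves (key names follow llama-factory's example configs and may differ across versions; the dataset name and paths are placeholders), an SFT config might look like:

```yaml
# Hypothetical llama-factory SFT config -- verify key names against the
# example configs shipped in their repository before use.
model_name_or_path: <YOUR_BASE_MODEL>
stage: sft
do_train: true
finetuning_type: lora
dataset: coevol_sft        # must be registered in data/dataset_info.json
template: default
output_dir: saves/coevol-sft
per_device_train_batch_size: 4
learning_rate: 1.0e-5
num_train_epochs: 3.0
```

Remember to register the evolved JSON file produced in the previous step under llama-factory's dataset registry before training.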

Citation

If you find the content of this project helpful, please cite our paper as follows:

@misc{li2024coevol,
      title={CoEvol: Constructing Better Responses for Instruction Finetuning through Multi-Agent Cooperation}, 
      author={Renhao Li and Minghuan Tan and Derek F. Wong and Min Yang},
      year={2024},
      eprint={2406.07054},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

Acknowledgement

For conversation prompt templates, we use code from fastchat.