<div align="center"> <img src="https://github.com/InternLM/lmdeploy/assets/36994684/0cf8d00f-e86b-40ba-9b54-dc8f1bc6c8d8" width="600"/> <br /><br />

English | 简体中文

</div>

🚀 Speed Benchmark

<div align=center> <img src="https://github.com/InternLM/xtuner/assets/41630003/9c9dfdf4-1efb-4daf-84bf-7c379ae40b8b" style="width:80%"> </div> <div align=center> <img src="https://github.com/InternLM/xtuner/assets/41630003/5ba973b8-8885-4b72-b51b-c69fa1583bdd" style="width:80%"> </div>

🎉 News

📖 Introduction

XTuner is an efficient, flexible and full-featured toolkit for fine-tuning large models.

- Efficient
- Flexible
- Full-featured

🔥 Supports

<table> <tbody> <tr align="center" valign="middle"> <td> <b>Models</b> </td> <td> <b>SFT Datasets</b> </td> <td> <b>Data Pipelines</b> </td> <td> <b>Algorithms</b> </td> </tr> <tr valign="top"> <td align="left" valign="top"> <ul> <li><a href="https://huggingface.co/internlm">InternLM2 / 2.5</a></li> <li><a href="https://huggingface.co/meta-llama">Llama 2 / 3</a></li> <li><a href="https://huggingface.co/collections/microsoft/phi-3-6626e15e9585a200d2d761e3">Phi-3</a></li> <li><a href="https://huggingface.co/THUDM/chatglm2-6b">ChatGLM2</a></li> <li><a href="https://huggingface.co/THUDM/chatglm3-6b">ChatGLM3</a></li> <li><a href="https://huggingface.co/Qwen/Qwen-7B">Qwen</a></li> <li><a href="https://huggingface.co/baichuan-inc/Baichuan2-7B-Base">Baichuan2</a></li> <li><a href="https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1">Mixtral</a></li> <li><a href="https://huggingface.co/deepseek-ai/DeepSeek-V2-Chat">DeepSeek V2</a></li> <li><a href="https://huggingface.co/google">Gemma</a></li> <li><a href="https://huggingface.co/openbmb">MiniCPM</a></li> <li>...</li> </ul> </td> <td> <ul> <li><a href="https://modelscope.cn/datasets/damo/MSAgent-Bench">MSAgent-Bench</a></li> <li><a href="https://huggingface.co/datasets/fnlp/moss-003-sft-data">MOSS-003-SFT</a> 🔧</li> <li><a href="https://huggingface.co/datasets/tatsu-lab/alpaca">Alpaca en</a> / <a href="https://huggingface.co/datasets/silk-road/alpaca-data-gpt4-chinese">zh</a></li> <li><a href="https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k">WizardLM</a></li> <li><a href="https://huggingface.co/datasets/timdettmers/openassistant-guanaco">oasst1</a></li> <li><a href="https://huggingface.co/datasets/garage-bAInd/Open-Platypus">Open-Platypus</a></li> <li><a href="https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K">Code Alpaca</a></li> <li><a href="https://huggingface.co/datasets/burkelibbey/colors">Colorist</a> 🎨</li> <li><a href="https://github.com/WangRongsheng/ChatGenTitle">Arxiv GenTitle</a></li> <li><a href="https://github.com/LiuHC0428/LAW-GPT">Chinese Law</a></li> <li><a href="https://huggingface.co/datasets/Open-Orca/OpenOrca">OpenOrca</a></li> <li><a href="https://huggingface.co/datasets/shibing624/medical">Medical Dialogue</a></li> <li>...</li> </ul> </td> <td> <ul> <li><a href="docs/zh_cn/user_guides/incremental_pretraining.md">Incremental Pre-training</a> </li> <li><a href="docs/zh_cn/user_guides/single_turn_conversation.md">Single-turn Conversation SFT</a> </li> <li><a href="docs/zh_cn/user_guides/multi_turn_conversation.md">Multi-turn Conversation SFT</a> </li> </ul> </td> <td> <ul> <li><a href="http://arxiv.org/abs/2305.14314">QLoRA</a></li> <li><a href="http://arxiv.org/abs/2106.09685">LoRA</a></li> <li>Full-parameter fine-tuning</li> <li><a href="https://arxiv.org/abs/2305.18290">DPO</a></li> <li><a href="https://arxiv.org/abs/2403.07691">ORPO</a></li> <li>Reward Model</li> </ul> </td> </tr> </tbody> </table>

🛠ī¸ Quick Start

Installation
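XTuner is published on PyPI. A minimal setup sketch, assuming a fresh conda environment; the [deepspeed] extra is optional and only needed for DeepSpeed-accelerated training:

conda create --name xtuner-env python=3.10 -y
conda activate xtuner-env
pip install -U 'xtuner[deepspeed]'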

Fine-tune

XTuner supports efficient fine-tuning (e.g., QLoRA) of LLMs. Dataset preparation guides can be found in dataset_prepare.md.
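For example, InternLM2.5-Chat-7B can be fine-tuned with QLoRA on the Alpaca dataset from a built-in config (a sketch; the config name below is assumed to ship with the installed XTuner version, and the --deepspeed flag is optional):

xtuner list-cfg
xtuner train internlm2_5_chat_7b_qlora_alpaca_e3 --deepspeed deepspeed_zero2

xtuner list-cfg prints all built-in configs, and xtuner copy-cfg can copy one out for customization before training.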

Chat

XTuner provides tools to chat with pretrained / fine-tuned LLMs.

xtuner chat ${NAME_OR_PATH_TO_LLM} --adapter ${NAME_OR_PATH_TO_ADAPTER} [optional arguments]

For example, we can chat with InternLM2.5-Chat-7B:

xtuner chat internlm/internlm2_5-chat-7b --prompt-template internlm2_chat
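To chat through a fine-tuned adapter instead, pass --adapter as well (the adapter path below is a hypothetical placeholder for a local directory or Hub ID):

xtuner chat internlm/internlm2_5-chat-7b --adapter ./work_dirs/my_adapter --prompt-template internlm2_chat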

For more examples, please see chat.md.

Deployment
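After training, the saved PTH checkpoint is converted to the HuggingFace format, and a LoRA/QLoRA adapter can be merged into its base model before serving (a sketch using XTuner's convert subcommands; the ${...} placeholders are illustrative):

xtuner convert pth_to_hf ${CONFIG_NAME_OR_PATH} ${PTH} ${SAVE_PATH}
xtuner convert merge ${NAME_OR_PATH_TO_LLM} ${NAME_OR_PATH_TO_ADAPTER} ${SAVE_PATH}

The merged model is a standard HuggingFace model and can be served with any compatible inference engine, such as LMDeploy.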

Evaluation

🤝 Contributing

We appreciate all contributions to XTuner. Please refer to CONTRIBUTING.md for the contributing guidelines.

🎖️ Acknowledgement

🖊️ Citation

@misc{2023xtuner,
    title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
    author={XTuner Contributors},
    howpublished = {\url{https://github.com/InternLM/xtuner}},
    year={2023}
}

License

This project is released under the Apache License 2.0. Please also adhere to the licenses of any models and datasets being used.