Home

Awesome

<div align="center"> <img src="https://github.com/InternLM/lmdeploy/assets/36994684/0cf8d00f-e86b-40ba-9b54-dc8f1bc6c8d8" width="600"/> <br /><br />


English | įŽ€äŊ“中文

</div>

🎉 News

📖 Introduction

XTuner is an efficient, flexible and full-featured toolkit for fine-tuning large models.

Efficient

Flexible

Full-featured

🌟 Demos

đŸ”Ĩ Supports

<table> <tbody> <tr align="center" valign="middle"> <td> <b>Models</b> </td> <td> <b>SFT Datasets</b> </td> <td> <b>Data Pipelines</b> </td> <td> <b>Algorithms</b> </td> </tr> <tr valign="top"> <td align="left" valign="top"> <ul> <li><a href="https://huggingface.co/internlm">InternLM2</a></li> <li><a href="https://huggingface.co/internlm">InternLM</a></li> <li><a href="https://huggingface.co/meta-llama">Llama</a></li> <li><a href="https://huggingface.co/meta-llama">Llama2</a></li> <li><a href="https://huggingface.co/THUDM/chatglm2-6b">ChatGLM2</a></li> <li><a href="https://huggingface.co/THUDM/chatglm3-6b">ChatGLM3</a></li> <li><a href="https://huggingface.co/Qwen/Qwen-7B">Qwen</a></li> <li><a href="https://huggingface.co/baichuan-inc/Baichuan-7B">Baichuan</a></li> <li><a href="https://huggingface.co/baichuan-inc/Baichuan2-7B-Base">Baichuan2</a></li> <li><a href="https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1">Mixtral 8x7B</a></li> <li><a href="https://huggingface.co/deepseek-ai/deepseek-moe-16b-chat">DeepSeek MoE</a></li> <li><a href="https://huggingface.co/google">Gemma</a></li> <li>...</li> </ul> </td> <td> <ul> <li><a href="https://modelscope.cn/datasets/damo/MSAgent-Bench">MSAgent-Bench</a></li> <li><a href="https://huggingface.co/datasets/fnlp/moss-003-sft-data">MOSS-003-SFT</a> 🔧</li> <li><a href="https://huggingface.co/datasets/tatsu-lab/alpaca">Alpaca en</a> / <a href="https://huggingface.co/datasets/silk-road/alpaca-data-gpt4-chinese">zh</a></li> <li><a href="https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k">WizardLM</a></li> <li><a href="https://huggingface.co/datasets/timdettmers/openassistant-guanaco">oasst1</a></li> <li><a href="https://huggingface.co/datasets/garage-bAInd/Open-Platypus">Open-Platypus</a></li> <li><a href="https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K">Code Alpaca</a></li> <li><a href="https://huggingface.co/datasets/burkelibbey/colors">Colorist</a> 🎨</li> <li><a 
href="https://github.com/WangRongsheng/ChatGenTitle">Arxiv GenTitle</a></li> <li><a href="https://github.com/LiuHC0428/LAW-GPT">Chinese Law</a></li> <li><a href="https://huggingface.co/datasets/Open-Orca/OpenOrca">OpenOrca</a></li> <li><a href="https://huggingface.co/datasets/shibing624/medical">Medical Dialogue</a></li> <li>...</li> </ul> </td> <td> <ul> <li><a href="docs/zh_cn/user_guides/incremental_pretraining.md">Incremental Pre-training</a> </li> <li><a href="docs/zh_cn/user_guides/single_turn_conversation.md">Single-turn Conversation SFT</a> </li> <li><a href="docs/zh_cn/user_guides/multi_turn_conversation.md">Multi-turn Conversation SFT</a> </li> </ul> </td> <td> <ul> <li><a href="http://arxiv.org/abs/2305.14314">QLoRA</a></li> <li><a href="http://arxiv.org/abs/2106.09685">LoRA</a></li> <li>Full parameter fine-tune</li> </ul> </td> </tr> </tbody> </table>

🛠ī¸ Quick Start

Installation
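As a sketch of a typical setup (assuming XTuner is installed from PyPI; the conda environment name below is arbitrary):

```shell
# Create an isolated environment (optional but recommended)
conda create --name xtuner-env python=3.10 -y
conda activate xtuner-env

# Install XTuner from PyPI
pip install -U xtuner
```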

Fine-tune Open In Colab

XTuner supports efficient fine-tuning (e.g., QLoRA) for LLMs. Dataset preparation guides can be found in dataset_prepare.md.
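A minimal sketch of the fine-tuning workflow, using XTuner's CLI (the config name below is one of the predefined configs shipped with XTuner and is used here only as an illustration):

```shell
# List all predefined configs
xtuner list-cfg

# Fine-tune InternLM2-Chat-7B with QLoRA on the oasst1 dataset
xtuner train internlm2_chat_7b_qlora_oasst1_e3

# Convert the saved PTH checkpoint to a Hugging Face adapter
xtuner convert pth_to_hf ${CONFIG_NAME_OR_PATH} ${PTH} ${SAVE_PATH}
```

The resulting adapter directory can then be loaded alongside the base model for chat or deployment.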

Chat Open In Colab

XTuner provides tools to chat with pretrained / fine-tuned LLMs.

xtuner chat ${NAME_OR_PATH_TO_LLM} --adapter ${NAME_OR_PATH_TO_ADAPTER} [optional arguments]

For example, we can start a chat with

InternLM2-Chat-7B with an adapter trained on the oasst1 dataset:

xtuner chat internlm/internlm2-chat-7b --adapter xtuner/internlm2-chat-7b-qlora-oasst1 --prompt-template internlm2_chat

LLaVA-InternLM2-7B:

xtuner chat internlm/internlm2-chat-7b --visual-encoder openai/clip-vit-large-patch14-336 --llava xtuner/llava-internlm2-7b --prompt-template internlm2_chat --image $IMAGE_PATH

For more examples, please see chat.md.

Deployment
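A hedged sketch of the usual deployment path: the trained adapter is first merged into the base model with XTuner's `convert merge` command, and the merged weights can then be served by an inference engine such as LMDeploy (the use of LMDeploy here is an assumption about the deployment target, not a requirement):

```shell
# Merge the LoRA/QLoRA adapter into the base LLM
xtuner convert merge ${NAME_OR_PATH_TO_LLM} ${NAME_OR_PATH_TO_ADAPTER} ${SAVE_PATH}

# Optionally serve the merged model with LMDeploy (assumes lmdeploy is installed)
pip install lmdeploy
lmdeploy chat ${SAVE_PATH}
```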

Evaluation

🤝 Contributing

We appreciate all contributions to XTuner. Please refer to CONTRIBUTING.md for the contributing guidelines.

🎖ī¸ Acknowledgement

🖊ī¸ Citation

@misc{2023xtuner,
    title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
    author={XTuner Contributors},
    howpublished = {\url{https://github.com/InternLM/xtuner}},
    year={2023}
}

License

This project is released under the Apache License 2.0. Please also adhere to the Licenses of models and datasets being used.