LLaMA2-Accessory: An Open-source Toolkit for LLM Development
<p align="center"> <img src="docs/logo.png" width="90%"/> <br> </p>
<p align="center"> <a href="https://llama2-accessory.readthedocs.io" target="_blank">Document</a> </p>
<p align="center"> <a href="https://huggingface.co/Alpha-VLLM/SPHINX" target="_blank">HF Repo</a> • join our <a href="http://imagebind-llm.opengvlab.com/qrcode/" target="_blank">WeChat</a> • <a href="http://imagebind-llm.opengvlab.com/" target="_blank">Demo</a> </p>

LLaMA2-Accessory is an open-source toolkit for pretraining, finetuning, and deployment of Large Language Models (LLMs) and multimodal LLMs. This repo is mainly inherited from LLaMA-Adapter with more advanced features.
Within this toolkit, we present SPHINX, a versatile multimodal large language model (MLLM) that combines a diverse array of training tasks, data domains, and visual embeddings.
News
- [2024-03-07] We release the demos and codebase of Large-DiT-T2I.
- [2024-02-17] We release 3B and 7B Large-DiT models trained on ImageNet. Pretrained checkpoints and the full training codebase are released.
- [2024-01-27] SPHINX-MoE achieves 29.57% and 29.33% accuracy on CMMMU-test and CMMMU-val, respectively.
- [2024-01-24] SPHINX-MoE achieves new SOTA performance (49.33%) on MMVP, higher than GPT-4V!
- [2024-01-20] SPHINX-MoE achieves SOTA performance on AesBench!
- [2024-01-18] LLaMA-Adapter is accepted by ICLR 2024!
- [2024-01-12] We release SPHINX-Tiny, built on the compact 1.1B TinyLlama, that everyone can play with!
- [2024-01-05] OpenCompass now supports seamless evaluation of all LLaMA2-Accessory models (Doc).
- [2024-01-02] We release SPHINX-MoE, an MLLM based on Mixtral-8x7B-MoE.
- [2023-12-12] SPHINX-V2 achieves outstanding results on InfiMM-Eval, ranking just below GPT-4V!
- [2023-12-11] We now support Mixtral-8x7B inference and finetuning!
- [2023-12-08] We release OneLLM, which aligns eight modalities to language using a unified framework!
- [2023-11-17] We release SPHINX-V2, with the same architecture but enhanced capabilities!
- [2023-10-17] We release the demo, code, and model of SPHINX!
- [2023-09-15] We now support Falcon 180B!
- [2023-09-14] WeMix-LLaMA2-70B shows excellent performance on the OpenCompass benchmark!
- [2023-09-02] We now support InternLM.
- [2023-08-28] We release quantized LLMs with OmniQuant, an efficient and accurate quantization algorithm that supports even extremely low-bit settings. A multimodal version is coming soon.
- [2023-08-27] We now support CodeLlama and instruction finetuning on evol-code-alpaca.
- [2023-08-27] We release our documentation in a web-book format. Check it out here.
- [2023-08-21] We release the quantization code and evaluation results.
- [2023-08-05] We release the multimodal finetuning code and checkpoints.
- [2023-07-23] Initial release.
Features
- Support More Datasets and Tasks
  - Pretraining with RefinedWeb and StarCoder.
  - Single-modal finetuning with Alpaca, ShareGPT, LIMA, WizardLM, Flacuna, Platypus, UltraChat, and MOSS.
  - Multi-modal finetuning with image-text pairs (LAION, COYO, and more), interleaved image-text data (MMC4 and OBELISC), and visual instruction data (LLaVA, Shikra, Bard).
  - LLM for API control (GPT4Tools and Gorilla).
- Efficient Optimization and Deployment
  - Parameter-efficient finetuning with Zero-init Attention and Bias-norm Tuning (a minimal sketch follows this list).
  - Fully Sharded Data Parallel (FSDP), Flash Attention 2, and QLoRA.
- Support More Visual Encoders and LLMs
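The zero-init attention mechanism referenced above comes from LLaMA-Adapter: learnable adaptation prompts are attended to by the frozen model's queries, and their contribution is scaled by a gating factor initialized to zero, so training starts exactly from the frozen model's behavior. The sketch below is a minimal, self-contained PyTorch illustration of that idea; the module, shapes, and names are illustrative assumptions, not the toolkit's actual code.

```python
import torch
import torch.nn as nn


class ZeroInitGatedPromptAttention(nn.Module):
    """Minimal sketch: adaptation prompts whose attention output is scaled
    by a per-head gate initialized to zero (illustrative, not the repo's code)."""

    def __init__(self, dim: int, n_heads: int, prompt_len: int = 10):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = dim // n_heads
        # Learnable adaptation prompts, one set shared across the batch.
        self.prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        # Gating factor starts at zero, so the prompts contribute nothing at init.
        self.gate = nn.Parameter(torch.zeros(1, 1, n_heads, 1))
        self.wk = nn.Linear(dim, dim, bias=False)
        self.wv = nn.Linear(dim, dim, bias=False)

    def forward(self, q: torch.Tensor, frozen_out: torch.Tensor) -> torch.Tensor:
        # q, frozen_out: (batch, seq, n_heads, head_dim) from the frozen attention layer.
        bsz = q.shape[0]
        pk = self.wk(self.prompt).view(1, -1, self.n_heads, self.head_dim).expand(bsz, -1, -1, -1)
        pv = self.wv(self.prompt).view(1, -1, self.n_heads, self.head_dim).expand(bsz, -1, -1, -1)
        # Attend the frozen queries over the adaptation prompts only.
        scores = torch.einsum("bqhd,bkhd->bhqk", q, pk) / self.head_dim ** 0.5
        prompt_out = torch.einsum("bhqk,bkhd->bqhd", scores.softmax(dim=-1), pv)
        # Zero-init gate: tanh keeps the adapter branch bounded and exactly zero at start.
        return frozen_out + torch.tanh(self.gate) * prompt_out
```

In LLaMA-Adapter, this gated prompt branch is added only to the topmost transformer layers, with one gate per attention head, which is what makes the finetuning parameter-efficient.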
Setup
:gear: For environment installation, please refer to Environment Setup.
Model Usage
:robot: Instructions for model pretraining, finetuning, inference, and other related topics are all available in the document.
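Before following those instructions, you may want to fetch the SPHINX weights linked in the header. A minimal sketch using huggingface_hub (the local directory name is an arbitrary choice):

```python
from huggingface_hub import snapshot_download

# Download the SPHINX checkpoints from the HF repo linked above.
# local_dir is arbitrary; afterwards, follow the document's inference instructions.
ckpt_dir = snapshot_download(repo_id="Alpha-VLLM/SPHINX", local_dir="checkpoints/SPHINX")
print(f"SPHINX checkpoints downloaded to {ckpt_dir}")
```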
Frequently Asked Questions (FAQ)
:question: Encountering issues or have further questions? Find answers to common inquiries here. We're here to assist you!
Demos
- Instruction-tuned LLaMA2: alpaca & gorilla.
- Chatbot LLaMA2: dialog_sharegpt & dialog_lima & llama2-chat.
- Multimodal LLaMA2: in-context & alpacaLlava_llamaQformerv2_13b
- SPHINX: demo
Our model SPHINX now supports generating high-quality bounding boxes for all objects within an image, driven by input prompts, and then presenting masks created by SAM. Give it a try here!
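The second stage of that pipeline (boxes to masks) can be reproduced with the official segment-anything package. A minimal sketch, assuming you already have SPHINX-predicted boxes in pixel XYXY format and a locally downloaded SAM checkpoint (the checkpoint path, image path, and example boxes below are assumptions):

```python
import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

# Assumption: boxes were parsed from SPHINX's output, in pixel [x0, y0, x1, y1] format.
boxes = [[48, 60, 320, 410], [350, 80, 610, 400]]

# Load SAM (use whichever SAM checkpoint you downloaded).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = np.array(Image.open("example.jpg").convert("RGB"))
predictor.set_image(image)

masks = []
for box in boxes:
    # One mask per SPHINX box; multimask_output=False returns the single best mask.
    mask, score, _ = predictor.predict(box=np.array(box), multimask_output=False)
    masks.append(mask[0])  # boolean H x W array
```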
<img src="./docs/examples/finetune/mm/sphinx_box_0.png" width="90%" />

Core Contributors
Chris Liu, Ziyi Lin, Guian Fang, Jiaming Han, Yijiang Liu, Renrui Zhang, Longtian Qiu, Yichi Zhang, Siyuan Huang
Project Leader
Peng Gao, Wenqi Shao, Shanghang Zhang
Hiring Announcement
We are hiring interns, postdocs, and full-time researchers at the General Vision Group, Shanghai AI Lab, with a focus on multi-modality and vision foundation models. If you are interested, please contact gaopengcuhk@gmail.com.
Citation
If you find our code and paper useful, please kindly cite:
@article{zhang2023llamaadapter,
  title={LLaMA-Adapter: Efficient Finetuning of Language Models with Zero-init Attention},
  author={Zhang, Renrui and Han, Jiaming and Liu, Chris and Gao, Peng and Zhou, Aojun and Hu, Xiangfei and Yan, Shilin and Lu, Pan and Li, Hongsheng and Qiao, Yu},
  journal={arXiv preprint arXiv:2303.16199},
  year={2023}
}

@article{gao2023llamaadapterv2,
  title={LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model},
  author={Gao, Peng and Han, Jiaming and Zhang, Renrui and Lin, Ziyi and Geng, Shijie and Zhou, Aojun and Zhang, Wei and Lu, Pan and He, Conghui and Yue, Xiangyu and Li, Hongsheng and Qiao, Yu},
  journal={arXiv preprint arXiv:2304.15010},
  year={2023}
}
Acknowledgement
<details><summary>Show More</summary>

- @facebookresearch for ImageBind & LIMA & CodeLlama
- @Instruction-Tuning-with-GPT-4 for GPT-4-LLM
- @tatsu-lab for stanford_alpaca
- @tloen for alpaca-lora
- @lm-sys for FastChat
- @domeccleston for sharegpt
- @karpathy for nanoGPT
- @Dao-AILab for flash-attention
- @NVIDIA for apex & Megatron-LM
- @Vision-CAIR for MiniGPT-4
- @haotian-liu for LLaVA
- @huggingface for peft & OBELISC
- @Lightning-AI for lit-gpt & lit-llama
- @allenai for mmc4
- @StevenGrove for GPT4Tools
- @ShishirPatil for gorilla
- @OpenLMLab for MOSS
- @thunlp for UltraChat
- @LAION-AI for LAION-5B
- @shikras for shikra
- @kakaobrain for coyo-dataset
- @salesforce for LAVIS
- @openai for CLIP
- @bigcode-project for starcoder
- @tiiuae for falcon-refinedweb
- @microsoft for DeepSpeed
- @declare-lab for flacuna
- @nlpxucan for WizardLM
- @arielnlee for Platypus
- @InternLM for InternLM
- @Google for Bard

</details>
License
Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.