gpt-accelera

Simple and efficient pytorch-native transformer training and inference (batched).

gpt-accelera is a codebase based on gpt-fast -- the state-of-the-art pytorch-native tensor-parallel implementation of transformer text generation that minimizes latency (i.e., batch size = 1) -- with the following improvements:

Featuring:

Shared features w/ gpt-fast:

Following the spirit of gpt-fast, this repository is NOT intended to be a "framework" or "library", but to show off what kind of performance you can get with native PyTorch. Please copy-paste and fork as you desire.

Installation

Install torch==2.2.0, sentencepiece, and huggingface_hub:

pip install torch==2.2.0 sentencepiece huggingface_hub

Downloading Weights

Models tested/supported

meta-llama/Llama-2-7b-chat-hf
meta-llama/Llama-2-13b-chat-hf
meta-llama/Llama-2-70b-chat-hf
codellama/CodeLlama-7b-Python-hf
codellama/CodeLlama-34b-Python-hf
EleutherAI/llemma_7b
EleutherAI/llemma_34b
deepseek-ai/deepseek-llm-7b-base
deepseek-ai/deepseek-coder-6.7b-base
deepseek-ai/deepseek-math-7b-base
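A minimal sketch of fetching one of the checkpoints above with huggingface_hub (installed in the previous step). The `checkpoint_dir` helper and the `checkpoints/` directory layout are assumptions for illustration, not conventions of this repo; `snapshot_download` is the standard huggingface_hub API for downloading a full model repository. Gated models such as Llama-2 additionally require accepting the license and logging in via `huggingface-cli login`.

```python
def checkpoint_dir(repo_id: str, root: str = "checkpoints") -> str:
    # Map a Hugging Face repo id to a local directory (hypothetical layout).
    return f"{root}/{repo_id}"


if __name__ == "__main__":
    # snapshot_download fetches every file in the repo to local_dir.
    from huggingface_hub import snapshot_download

    repo_id = "meta-llama/Llama-2-7b-chat-hf"  # any model from the list above
    snapshot_download(repo_id=repo_id, local_dir=checkpoint_dir(repo_id))
```

Swap `repo_id` for any other supported model, e.g. `EleutherAI/llemma_7b`.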

Benchmarks

TODO: Add benchmarks

Running reference methods

TODO: Add reference methods

License

Following gpt-fast, gpt-accelera is licensed under the BSD 3-Clause license. See the LICENSE file for details.

Community

The gpt-accelera codebase was developed during the research and development of the Easy-to-Hard Generalization project.

Citation

Please consider citing our work if you use the data or code in this repo.

@misc{gpt_accelera,
  author = {Zhiqing Sun},
  title = {GPT-Accelera: Simple and efficient pytorch-native transformer training and inference (batched)},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/Edward-Sun/gpt-accelera}}
}

Acknowledgements

We thank the authors of the following works for their open-source efforts in democratizing large language models.