Home



AirLLM optimizes inference memory usage, allowing 70B large language models to run inference on a single 4GB GPU card without quantization, distillation, or pruning. You can now also run the 405B Llama 3.1 on 8 GB of VRAM.


This project has moved to: https://github.com/lyogavin/airllm