Alpaca.cpp

Run a fast ChatGPT-like model locally on your device. The screencast below is not sped up and is running on an M2 MacBook Air with 4GB of weights.

[asciicast: screen recording of an interactive chat session]

This combines the LLaMA foundation model with an open reproduction of Stanford Alpaca, a fine-tuning of the base model to follow instructions (akin to the RLHF used to train ChatGPT), and a set of modifications to llama.cpp to add a chat interface.

Consider using llama.cpp instead

The changes from alpaca.cpp have since been upstreamed into llama.cpp.
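
If you do switch, a roughly equivalent invocation of llama.cpp's instruction mode looked like the following around the time of the upstreaming; the --color and -ins flags are an assumption based on llama.cpp's interface at that time and may have changed since, so check the binary's help output:

./main -m ./ggml-alpaca-7b-q4.bin --color -ins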

Get Started (7B)

Download the zip file corresponding to your operating system from the latest release. On Windows, download alpaca-win.zip; on Mac (both Intel and ARM), download alpaca-mac.zip; and on Linux (x64), download alpaca-linux.zip.
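
As a sketch, fetching and unpacking the macOS build from the command line might look like this (the /releases/latest/download/ URL pattern is an assumption; take the exact asset URL from the releases page):

curl -LO https://github.com/antimatter15/alpaca.cpp/releases/latest/download/alpaca-mac.zip
unzip alpaca-mac.zip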

Download ggml-alpaca-7b-q4.bin and place it in the same folder as the chat executable from the zip file. There are several download options for the weights.
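
A minimal sketch of that placement step, assuming the weights landed in ~/Downloads (a hypothetical path):

mv ~/Downloads/ggml-alpaca-7b-q4.bin .
ls   # should now list both chat and ggml-alpaca-7b-q4.bin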

Once you've downloaded the model weights and placed them into the same directory as the chat or chat.exe executable, run:

./chat

The weights are based on the published fine-tunes from alpaca-lora, converted back into a PyTorch checkpoint with a modified script, and then quantized with llama.cpp the regular way.
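
For reference, 4-bit quantization with llama.cpp's quantize tool looked roughly like this in early versions (paths are placeholders, and the trailing 2 selected the q4_0 quantization type at the time; the tool's interface may differ in current llama.cpp):

./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2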

Building from Source (macOS/Linux)

git clone https://github.com/antimatter15/alpaca.cpp
cd alpaca.cpp

make chat
./chat

Building from Source (Windows)

cmake .
cmake --build . --config Release
.\Release\chat.exe

Credit

This combines Facebook's LLaMA, Stanford Alpaca, and alpaca-lora with its corresponding weights by Eric Wang (which uses Jason Phang's implementation of LLaMA on top of Hugging Face Transformers), and llama.cpp by Georgi Gerganov. The chat implementation is based on Matvey Soloviev's Interactive Mode for llama.cpp. Inspired by Simon Willison's getting started guide for LLaMA and by Andy Matuschak's thread on adapting this to the 13B model using fine-tuning weights by Sam Witteveen.

Disclaimer

Note that the model weights are only to be used for research purposes, as they are derivative of LLaMA and use the published instruction data from the Stanford Alpaca project, which was generated with OpenAI's models; OpenAI's terms disallow using its outputs to train competing models.