<div align="center">

<h1>LLaMA Node</h1>

llama-node: Node.js Library for Large Language Models

<img src="https://img.shields.io/github/actions/workflow/status/hlhr202/llama-node/llama-build.yml"> <img src="https://img.shields.io/npm/l/llama-node" alt="NPM License"> <img alt="npm" src="https://img.shields.io/npm/v/llama-node"> <img alt="npm" src="https://img.shields.io/npm/types/llama-node"> <img alt="Discord" src="https://img.shields.io/discord/1106423821700960286"> <img alt="twitter" src="https://img.shields.io/twitter/url?url=https%3A%2F%2Ftwitter.com%2Fhlhr202">

<h3><a href="https://llama-node.vercel.app/">Official Documentation</a></h3>

<img src="./doc/assets/llama.png" width="300px" height="300px" alt="LLaMA generated by Stable Diffusion"/>

<sub>Picture generated by Stable Diffusion.</sub>

</div>

## Introduction
This project is in an early stage and is not production ready; we do not follow semantic versioning. The Node.js API may change in the future, so use it with caution.
This is a Node.js library for inferencing LLaMA, RWKV, and their derived models. It is built on top of llm (originally llama-rs), llama.cpp, and rwkv.cpp. It uses napi-rs to pass messages between the Node.js thread and the inference thread.
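As a quick taste of the high-level API, the sketch below streams a completion through the llama.cpp backend. This is a minimal sketch, assuming a recent llama-node version: the import path, config fields, and the model file name (`./ggml-vic7b-q5_1.bin`) are illustrative and may differ in your setup.

```typescript
import { LLM } from "llama-node";
// Wrapper class for the llama.cpp backend (@llama-node/llama-cpp);
// the llm and rwkv.cpp backends are used the same way.
import { LLamaCpp } from "llama-node/dist/llm/llama-cpp.js";
import path from "path";

// Quantized GGML model file (placeholder name; supply your own).
const model = path.resolve(process.cwd(), "./ggml-vic7b-q5_1.bin");
const llama = new LLM(LLamaCpp);

await llama.load({
    modelPath: model,
    enableLogging: true,
    nCtx: 1024,
    seed: 0,
    f16Kv: false,
    logitsAll: false,
    vocabOnly: false,
    useMlock: false,
    embedding: false,
    useMmap: true,
    nGpuLayers: 0,
});

// Tokens are streamed back through the callback as the
// inference thread produces them.
await llama.createCompletion(
    {
        nThreads: 4,
        nTokPredict: 512,
        topK: 40,
        topP: 0.1,
        temp: 0.2,
        repeatPenalty: 1,
        prompt: "A chat between a user and an assistant.\nUSER: Why is the sky blue?\nASSISTANT:",
    },
    (response) => {
        process.stdout.write(response.token);
    }
);
```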
## Supported models
Models supported by the llama.cpp backend (in GGML format):
- LLaMA 🦙
- Alpaca
- GPT4All
- Chinese LLaMA / Alpaca
- Vigogne (French)
- Vicuna
- Koala
- OpenBuddy 🐶 (Multilingual)
- Pygmalion 7B / Metharme 7B
Models supported by the llm (llama-rs) backend (in GGML format):
- GPT-2
- GPT-J
- LLaMA: LLaMA, Alpaca, Vicuna, Koala, GPT4All v1, GPT4-X, Wizard
- GPT-NeoX: GPT-NeoX, StableLM, RedPajama, Dolly v2
- BLOOM: BLOOMZ
Models supported by the rwkv.cpp backend (in GGML format):

- RWKV
## Supported platforms
- darwin-x64
- darwin-arm64
- linux-x64-gnu (glibc >= 2.31)
- linux-x64-musl
- win32-x64-msvc
Node.js version: >= 16
## Installation
- Install the llama-node npm package:

  ```bash
  npm install llama-node
  ```

- Install one of the inference backends (at least one is required); each package pairs with a wrapper class, as sketched after this list:

  - llama.cpp:

    ```bash
    npm install @llama-node/llama-cpp
    ```

  - or llm:

    ```bash
    npm install @llama-node/core
    ```

  - or rwkv.cpp:

    ```bash
    npm install @llama-node/rwkv-cpp
    ```
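Each backend package has a matching wrapper class exported by llama-node. The import paths below are a sketch of the layout at the time of writing and may change between versions, so verify them against the official documentation:

```typescript
import { LLM } from "llama-node";

// One wrapper class per backend package (illustrative, version-dependent paths):
import { LLamaCpp } from "llama-node/dist/llm/llama-cpp.js"; // @llama-node/llama-cpp
import { LLamaRS } from "llama-node/dist/llm/llama-rs.js";   // @llama-node/core
import { Rwkv } from "llama-node/dist/llm/rwkv-cpp.js";      // @llama-node/rwkv-cpp

// Pick exactly one backend when constructing the wrapper.
const llama = new LLM(LLamaCpp);
```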
## Manual compilation

Please see our contribution guide to get started with manual compilation.
## CUDA support

Please read the documentation on our site to get started with manual compilation for CUDA support.
## Acknowledgments

This library is published under the MIT/Apache-2.0 license. However, we strongly recommend that you cite our work and the work of our dependencies if you wish to reuse code from this library.
Models/inferencing tools dependencies:
- LLaMA models: facebookresearch/llama
- RWKV models: BlinkDL/RWKV-LM
- llama.cpp: ggerganov/llama.cpp
- llm: rustformers/llm
- rwkv.cpp: saharNooby/rwkv.cpp
Some source code comes from:
- llama-cpp bindings: sobelio/llm-chain
- rwkv logits sampling: KerfuffleV2/smolrsrwkv
## Community

Join our Discord community now! Click to join the llama-node Discord.