Fllama

License: MIT pub package

Flutter binding of llama.cpp.

llama.cpp: Inference of LLaMA model in pure C/C++

Installation

Flutter

flutter pub add fllama

iOS

Please run pod install or pod update in your iOS project.

Android

You need CMake 3.31.0, Android SDK 35, and NDK 28.0.12433566 installed. No additional setup is required.

OpenHarmonyOS/HarmonyOS

Installing via ohpm is the fastest and recommended way to add HLlama to your project.

ohpm install hllama

Or, you can add it to your project's dependencies manually:

"dependencies": {
  "hllama": "^0.0.1"
}

Then run ohpm install.

How to use

Flutter

  1. Initializing Llama
import 'package:fllama/fllama.dart';

Fllama.instance()?.initContext("model path", emitLoadProgress: true)
    .then((context) {
  modelContextId = context?["contextId"].toString() ?? "";
  if (modelContextId.isNotEmpty) {
    // A contextId greater than 0 means the model was loaded successfully.
  }
});
  2. Bench model on device
import 'package:fllama/fllama.dart';

// pp: prompt tokens, tg: tokens to generate, pl: parallel sequences,
// nr: repetitions (as in llama.cpp's llama-bench)
Fllama.instance()?.bench(double.parse(modelContextId), pp: 8, tg: 4, pl: 2, nr: 1).then((res) {
  Get.log("[FLlama] Bench Res $res");
});
  3. Tokenize and Detokenize
import 'package:fllama/fllama.dart';

Fllama.instance()?.tokenize(double.parse(modelContextId), text: "What can you do?").then((res) {
  Get.log("[FLlama] Tokenize Res $res");
  Fllama.instance()?.detokenize(double.parse(modelContextId), tokens: res?['tokens']).then((res) {
    Get.log("[FLlama] Detokenize Res $res");
  });
});
  4. Streaming monitoring
import 'package:fllama/fllama.dart';

Fllama.instance()?.onTokenStream?.listen((data) {
  if (data['function'] == "loadProgress") {
    Get.log("[FLlama] loadProgress=${data['result']}");
  } else if (data['function'] == "completion") {
    Get.log("[FLlama] completion=${data['result']}");
    final tempRes = data["result"]["token"];
    // tempRes holds the newly generated token text.
  }
});
  5. Release or stop a context
import 'package:fllama/fllama.dart';

Fllama.instance()?.stopCompletion(contextId: double.parse(modelContextId)); // stop an in-flight completion
Fllama.instance()?.releaseContext(double.parse(modelContextId)); // release one context
Fllama.instance()?.releaseAllContexts(); // release all contexts

OpenHarmonyOS/HarmonyOS

See the usage example file in the HLlama repository.

Support System

| System | Min SDK | Arch | Other |
| --- | --- | --- | --- |
| Android | 23 | arm64-v8a, x86_64, armeabi-v7a | Supports additional optimizations for certain CPUs |
| iOS | 14 | arm64 | Supports Metal |
| OpenHarmonyOS/HarmonyOS | 12 | arm64-v8a, x86_64 | No additional optimizations for certain CPUs |

Obtain the model

You can search HuggingFace for available models (Keyword: GGUF).

To get a GGUF model or quantize one manually, see the Prepare and Quantize section of the llama.cpp documentation.
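As a rough sketch of that workflow using llama.cpp's own tools (the model paths and output file names below are placeholders, not part of fllama):

```shell
# Run inside a llama.cpp checkout. Convert a Hugging Face model
# directory to a full-precision GGUF file.
python convert_hf_to_gguf.py /path/to/hf-model --outfile model-f16.gguf

# Quantize to 4-bit (Q4_K_M) to reduce memory use on mobile devices.
./llama-quantize model-f16.gguf model-q4_k_m.gguf Q4_K_M
```

Smaller quantization types trade answer quality for lower memory and faster inference, which usually matters more on phones than on desktops.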

NOTE

iOS:

Android:

License

MIT