# PocketPal AI 📱🚀
PocketPal AI is a pocket-sized AI assistant powered by small language models (SLMs) that run directly on your phone. Designed for both iOS and Android, PocketPal AI lets you interact with various SLMs without the need for an internet connection.
## 📰 News & Announcements
- 🎨 **New Icon Alert** (Nov 2024): PocketPal AI has a fresh new look! Huge thanks to Chun Te Lee for the design! Read more.
- 🚀 **Hugging Face Public Hub Integration** (v1.5, Nov 2024): PocketPal AI now integrates with the Hugging Face model Hub! Browse, download, and run models directly from the Hugging Face Hub within the app. Read more.
## Features
- Offline AI Assistance: Run language models directly on your device without internet connectivity.
- Model Flexibility: Download and swap between multiple SLMs, including Danube 2 and 3, Phi, Gemma 2, and Qwen.
- Auto Offload/Load: Automatically manage memory by offloading models when the app is in the background.
- Inference Settings: Customize model parameters like system prompt, temperature, BOS token, and chat templates.
- Real-Time Performance Metrics: View tokens per second and milliseconds per token during AI response generation.
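The two performance metrics above are reciprocals of each other: milliseconds per token equals 1000 divided by tokens per second. A minimal sketch of the conversion (the helper functions are illustrative, not part of PocketPal AI's code):

```typescript
// Illustrative helpers for the two throughput metrics shown in the app.
// Not part of PocketPal AI's API — just the underlying arithmetic.

function tokensPerSecond(tokenCount: number, elapsedMs: number): number {
  // Scale the per-millisecond rate up to a per-second rate.
  return (tokenCount * 1000) / elapsedMs;
}

function msPerToken(tokenCount: number, elapsedMs: number): number {
  // Average latency per generated token.
  return elapsedMs / tokenCount;
}

// Example: 120 tokens generated in 6000 ms
console.log(tokensPerSecond(120, 6000)); // 20 tokens/s
console.log(msPerToken(120, 6000)); // 50 ms/token
```

So a model reporting 20 tokens/s is spending, on average, 50 ms on each token.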
## Installation

### iOS

Download PocketPal AI from the App Store:

### Android

Get PocketPal AI on Google Play:
## Usage
For a detailed guide on how to use PocketPal AI, check out the Getting Started Guide.
### Downloading a Model

1. Open the app and tap the Menu icon (☰).
2. Navigate to the **Models** page.
3. Choose a model from the list and tap **Download**.
### Loading a Model

After downloading, tap **Load** next to the model to bring it into memory.
### Advanced Settings

Tap the chevron icon (v) next to a model to access advanced settings such as temperature, BOS token, and chat templates.
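To illustrate what a chat template does, here is a minimal sketch of turning a conversation into a single prompt string. The marker format and function names are illustrative assumptions, not PocketPal AI's actual implementation; real templates (and the BOS token) vary by model family, which is why the app lets you configure them per model:

```typescript
// Illustrative only: a simplified chat template in the style used by many
// local models. PocketPal AI's real templates may differ per model.

type Message = { role: "system" | "user" | "assistant"; content: string };

function applyChatTemplate(messages: Message[]): string {
  // Wrap each turn in role markers so the model can tell speakers apart.
  return messages.map((m) => `<|${m.role}|>\n${m.content}\n`).join("");
}

const prompt = applyChatTemplate([
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Hello!" },
]);
console.log(prompt);
```

Picking the wrong template for a model is a common cause of degraded output, since the model never saw those markers during training.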
### Chatting with the Model

1. Ensure a model is loaded.
2. Navigate to the **Chat** page from the menu.
3. Start conversing with your AI assistant!
### Copying Text

- **Copy Entire Response**: Tap the copy icon at the bottom of the AI's response bubble.
- **Copy a Specific Paragraph**: Long-press a paragraph to copy its content.

> **Note**: Preserving text formatting while copying is currently limited. We're working on improving this feature.
## Development Setup
Interested in contributing or running the app locally? Follow the steps below.
### Prerequisites
- Node.js (version 18 or higher)
- Yarn
- React Native CLI
- Xcode (for iOS development)
- Android Studio (for Android development)
### Getting Started
1. **Fork and Clone the Repository**

   ```bash
   git clone https://github.com/a-ghorbani/pocketpal-ai
   cd pocketpal-ai
   ```

2. **Install Dependencies**

   ```bash
   yarn install
   ```

3. **Install Pod Dependencies (iOS Only)**

   ```bash
   cd ios
   pod install
   cd ..
   ```

4. **Run the App**

   - **iOS Simulator**

     ```bash
     yarn ios
     ```

   - **Android Emulator**

     ```bash
     yarn android
     ```
### Scripts

- **Start Metro Bundler**: `yarn start`
- **Clean Build Artifacts**: `yarn clean`
- **Lint and Type Check**: `yarn lint` and `yarn typecheck`
- **Run Tests**: `yarn test`
## Contributing
We welcome all contributions! Please read our Contributing Guidelines and Code of Conduct before you start.
### Quick Start for Contributors
1. **Fork the Repository**

2. **Create a New Branch**

   ```bash
   git checkout -b feature/your-feature-name
   ```

3. **Make Your Changes**

4. **Test Your Changes**

   - **Run on iOS**

     ```bash
     yarn ios
     ```

   - **Run on Android**

     ```bash
     yarn android
     ```

   - **Lint and Type Check**

     ```bash
     yarn lint
     yarn typecheck
     ```

5. **Commit Your Changes**

   Follow the Conventional Commits format:

   ```bash
   git commit -m "feat: add new model support"
   ```

6. **Push and Open a Pull Request**

   ```bash
   git push origin feature/your-feature-name
   ```
## Roadmap
- Support for More Android Devices: Expand device coverage (the diversity of the Android ecosystem is a challenge, so community help is especially welcome).
- Improved Text Copying: Enhance the ability to copy text while preserving formatting.
- New Models: Add support for more tiny LLMs.
- UI Enhancements: Improve the overall user interface and user experience.
- Improved Documentation: Expand and refine the project documentation.
Feel free to open issues to suggest features or report bugs.
## License
This project is licensed under the MIT License.
## Contact
For questions or feedback, please open an issue.
## Acknowledgements
PocketPal AI is built using the amazing work from:
- **llama.cpp**: Enables efficient inference of LLMs on local devices.
- **llama.rn**: Provides llama.cpp bindings for React Native.
Happy exploring! 🚀📱✨