<div align="center">
  <br>
  <a href="">
    <img src="res/github/ollama-telegram-readme.png" width="200" height="200">
  </a>
  <h1>🦙 Ollama Telegram Bot</h1>
  <p>
    <b>Chat with your LLM, using Telegram bot!</b><br>
    <b>Feel free to contribute!</b><br>
  </p>
  <br>
  <p align="center">
    <img src="https://img.shields.io/docker/pulls/ruecat/ollama-telegram?style=for-the-badge">
  </p>
  <br>
</div>

## Features
Here are the features you get out of the box:
- Fully dockerized bot
- Response streaming without rate limits, using the SentenceBySentence method
- Mention the bot [@] in a group chat to receive an answer
## Roadmap
- Docker config & automated tags by StanleyOneG, ShrirajHegde
- History and `/reset` command by ShrirajHegde
- Add more API-related functions (system prompt editor, Ollama version fetcher, etc.)
- Redis DB integration
- Update bot UI
## Prerequisites

- A Telegram bot token (see the TOKEN parameter in [Environment Configuration](#environment-configuration) below)
## Installation (Non-Docker)

1. Install the latest version of Python.

2. Clone the repository:

    ```bash
    git clone https://github.com/ruecat/ollama-telegram
    ```

3. Install the requirements from requirements.txt:

    ```bash
    pip install -r requirements.txt
    ```

4. Enter all values in .env.example (a sample sketch is shown after these steps).

5. Rename .env.example -> .env

6. Launch the bot:

    ```bash
    python3 run.py
    ```
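For reference, a filled-in .env might look like the sketch below. The token and IDs are placeholders, and the variable names follow the [Environment Configuration](#environment-configuration) table at the end of this README:

```
# Placeholder values for illustration -- replace with your own
TOKEN=123456789:your-telegram-bot-token
ADMIN_IDS=1234567890
USER_IDS=1234567890,0987654321
INITMODEL=llama2
OLLAMA_BASE_URL=localhost
OLLAMA_PORT=11434
```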
## Installation (Docker Image)

The official image is available on Docker Hub: [ruecat/ollama-telegram](https://hub.docker.com/r/ruecat/ollama-telegram)
1. Download the .env.example file, rename it to .env, and populate the variables.

2. Create docker-compose.yml (optionally, uncomment the GPU part of the file to enable Nvidia GPU support):

    ```yaml
    version: '3.8'

    services:
      ollama-telegram:
        image: ruecat/ollama-telegram
        container_name: ollama-telegram
        restart: on-failure
        env_file:
          - ./.env

      ollama-server:
        image: ollama/ollama:latest
        container_name: ollama-server
        volumes:
          - ./ollama:/root/.ollama

        # Uncomment to enable NVIDIA GPU
        # Otherwise runs on CPU only:
        # deploy:
        #   resources:
        #     reservations:
        #       devices:
        #         - driver: nvidia
        #           count: all
        #           capabilities: [gpu]

        restart: always
        ports:
          - '11434:11434'
    ```

3. Start the containers:

    ```bash
    docker compose up -d
    ```
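After the containers are up, the Ollama server starts with no models downloaded, so you will likely need to pull one before the bot can answer. A minimal sketch, assuming the ollama-server container name from the compose file above and the default llama2 model:

```bash
# Pull a model inside the running ollama-server container
docker exec -it ollama-server ollama pull llama2
```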
## Installation (Build your own Docker image)

1. Clone the repository:

    ```bash
    git clone https://github.com/ruecat/ollama-telegram
    ```

2. Enter all values in .env.example.

3. Rename .env.example -> .env

4. Run ONE of the following docker compose commands to start:

    - To run Ollama in a Docker container (optionally, uncomment the GPU part of the docker-compose.yml file to enable Nvidia GPU support):

        ```bash
        docker compose up --build -d
        ```

    - To run Ollama from a locally installed instance (mainly for macOS, since the Docker image doesn't support Apple GPU acceleration yet):

        ```bash
        docker compose up --build -d ollama-tg
        ```
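To confirm the bot came up cleanly, you can tail the container logs; the ollama-tg service name here is taken from the command above:

```bash
# Follow the bot's logs until you see it polling Telegram
docker compose logs -f ollama-tg
```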
## Environment Configuration
| Parameter | Description | Required? | Default Value | Example |
|---|---|---|---|---|
| TOKEN | Your Telegram bot token.<br/>[How to get token?] | Yes | yourtoken | MTA0M****.GY5L5F.****g*****5k |
| ADMIN_IDS | Telegram user IDs of admins.<br/>These can change the model and control the bot. | Yes | | 1234567890<br/>OR<br/>1234567890,0987654321, etc. |
| USER_IDS | Telegram user IDs of regular users.<br/>These can only chat with the bot. | Yes | | 1234567890<br/>OR<br/>1234567890,0987654321, etc. |
| INITMODEL | Default LLM | No | llama2 | mistral:latest<br/>mistral:7b-instruct |
| OLLAMA_BASE_URL | Your OllamaAPI URL | No | | localhost<br/>host.docker.internal |
| OLLAMA_PORT | Your OllamaAPI port | No | 11434 | |
| TIMEOUT | The timeout in seconds for generating responses | No | 3000 | |
| ALLOW_ALL_USERS_IN_GROUPS | Allows all users in group chats to interact with the bot without being added to the USER_IDS list | No | 0 | |
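One combination worth spelling out: when the bot runs in Docker but Ollama runs directly on the host, localhost inside the container will not reach the host's Ollama, which is why the table lists host.docker.internal. A sketch of that setup (values are illustrative):

```
# Bot in Docker, Ollama installed on the host machine
OLLAMA_BASE_URL=host.docker.internal
OLLAMA_PORT=11434
```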