<h1 align="center"> <img src="https://github.com/darrenburns/elia/assets/5740731/4037b91a-1ad8-4d5b-884d-b3f1b495acf4" width="126px"> </h1> <p align="center"> <i align="center">A snappy, keyboard-centric terminal user interface for interacting with large language models.</i><br> <i align="center">Chat with Claude 3, ChatGPT, and local models like Llama 3, Phi 3, Mistral and Gemma.</i> </p>

## Introduction
`elia` is an application for interacting with LLMs which runs entirely in your terminal, and is designed to be keyboard-focused, efficient, and fun to use! It stores your conversations in a local SQLite database, and allows you to interact with a variety of models. Speak with proprietary models such as ChatGPT and Claude, or with local models running through ollama or LocalAI.
## Installation

Install Elia with pipx:

```bash
pipx install --python 3.11 elia-chat
```

Depending on the model you wish to use, you may need to set one or more environment variables (e.g. `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GEMINI_API_KEY`, etc.).
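For example, to use an OpenAI model you could export the key in your shell (or add it to your shell profile) before launching Elia. A minimal sketch, assuming a POSIX shell and a placeholder key:

```bash
# OPENAI_API_KEY is shown here; use ANTHROPIC_API_KEY, GEMINI_API_KEY, etc. for other providers
export OPENAI_API_KEY="sk-..."   # placeholder - substitute your real key
elia
```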
## Quickstart

Launch Elia from the command line:

```bash
elia
```

Launch a new chat inline (under your prompt) with `-i`/`--inline`:

```bash
elia -i "What is the Zen of Python?"
```

Launch a new chat in full-screen mode:

```bash
elia "Tell me a cool fact about lizards!"
```

Specify a model via the command line using `-m`/`--model`:

```bash
elia -m gpt-4o
```

Options can be combined - here's how you launch a chat with Gemini 1.5 Flash in inline mode (requires the `GEMINI_API_KEY` environment variable):

```bash
elia -i -m gemini/gemini-1.5-flash-latest "How do I call Rust code from Python?"
```
## Running local models

1. Install [ollama](https://github.com/ollama/ollama).
2. Pull the model you require, e.g. `ollama pull llama3`.
3. Run the local ollama server: `ollama serve`.
4. Add the model to the config file (see below); an end-to-end sketch follows this list.
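For example, a local session might look like the sketch below, assuming the `ollama/llama3` entry from the configuration example later in this README and that `-m` accepts that model name:

```bash
ollama pull llama3      # download the model weights
ollama serve &          # start the local server (or run it in a separate terminal)
elia -m ollama/llama3 "Summarise the plot of Hamlet in two sentences."
```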
## Configuration

The location of the configuration file is noted at the bottom of the options window (`ctrl+o`).

The example file below shows the available options, as well as examples of how to add new models.
```toml
# the ID or name of the model that is selected by default on launch
default_model = "gpt-4o"
# the system prompt on launch
system_prompt = "You are a helpful assistant who talks like a pirate."
# choose from "nebula", "cobalt", "twilight", "hacker", "alpine", "galaxy", "nautilus", "monokai", "textual"
theme = "galaxy"
# change the syntax highlighting theme of code in messages
# choose from https://pygments.org/styles/
# defaults to "monokai"
message_code_theme = "dracula"

# example of adding local llama3 support
# only the `name` field is required here.
[[models]]
name = "ollama/llama3"

# example of a model running on a local server, e.g. LocalAI
[[models]]
name = "openai/some-model"
api_base = "http://localhost:8080/v1"
api_key = "api-key-if-required"

# example of adding a groq model, showing some other fields
[[models]]
name = "groq/llama2-70b-4096"
display_name = "Llama 2 70B"  # appears in UI
provider = "Groq"  # appears in UI
temperature = 1.0  # high temp = high variation in output
max_retries = 0  # number of retries on failed request

# example of multiple instances of one model, e.g. you might
# have a 'work' OpenAI org and a 'personal' org.
[[models]]
id = "work-gpt-3.5-turbo"
name = "gpt-3.5-turbo"
display_name = "GPT 3.5 Turbo (Work)"

[[models]]
id = "personal-gpt-3.5-turbo"
name = "gpt-3.5-turbo"
display_name = "GPT 3.5 Turbo (Personal)"
```
## Custom themes

Add a custom theme YAML file to the themes directory.

You can find the themes directory location by pressing `ctrl+o` on the home screen and looking for the `Themes directory` line.
Here's an example of a theme YAML file:
```yaml
name: example  # use this name in your config file
primary: '#4e78c4'
secondary: '#f39c12'
accent: '#e74c3c'
background: '#0e1726'
surface: '#17202a'
error: '#e74c3c'  # error messages
success: '#2ecc71'  # success messages
warning: '#f1c40f'  # warning messages
```
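To put the theme into use, drop the YAML file into the themes directory and reference its `name` in the config file. The path below is only an assumption; check the `Themes directory` line in the options window (`ctrl+o`) for the real location:

```bash
# hypothetical themes directory - confirm yours via ctrl+o
cp example.yaml ~/.local/share/elia/themes/
```

Then set `theme = "example"` in the config file.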
## Changing keybindings

Right now, keybinds cannot be changed. Terminals are also rather limited in what keybinds they support. For example, pressing <kbd>Cmd</kbd>+<kbd>Enter</kbd> to send a message is not possible (although we may support a protocol to allow this in some terminals in the future).

For now, I recommend you map whatever key combo you want at the terminal emulator level to send `\n`.

Here's an example using iTerm:

With this mapping in place, pressing <kbd>Cmd</kbd>+<kbd>Enter</kbd> will send a message to the LLM, and pressing <kbd>Enter</kbd> alone will create a new line.
## Import from ChatGPT

Export your conversations to a JSON file using the ChatGPT UI, then import them using the `import` command.

```bash
elia import 'path/to/conversations.json'
```
## Wiping the database

```bash
elia reset
```
## Uninstalling

```bash
pipx uninstall elia-chat
```