# Local GPT plugin for Obsidian

Local GPT assistance for maximum privacy and offline access.
The plugin allows you to open a context menu on selected text to pick an AI-assistant's action.
The most casual AI-assistant for Obsidian.

_Demo: no speedup. MacBook Pro 13, M1, 16GB, Ollama, orca-mini._
Also works with images:

<img width="400" src="https://github.com/pfrankov/obsidian-local-gpt/assets/584632/a05d68fa-5419-4386-ac43-82b9513999ad">

_No speedup. MacBook Pro 13, M1, 16GB, Ollama, bakllava._
It can also use context from links, backlinks, and even PDF files (RAG):
<img width="450" alt="Enhanced Actions" src="https://github.com/user-attachments/assets/5fa2ed36-0ef5-43b0-8f16-07588f76d780">
## Default actions
- Continue writing
- Summarize text
- Fix spelling and grammar
- Find action items in text
- General help (just use selected text as a prompt for any purpose)
- New System Prompt, for creating your own actions
You can also add your own actions, share the best ones, or get actions from the community.
## Supported AI Providers
- Ollama
- OpenAI compatible server (also OpenAI)
## Installation

### 1. Install Plugin

#### Obsidian plugin store (recommended)
This plugin is available in the [Obsidian community plugin store](https://obsidian.md/plugins?id=local-gpt).
#### BRAT

You can also install this plugin via BRAT: `pfrankov/obsidian-local-gpt`
### 2. Install LLM

#### Ollama (recommended)

- Install [Ollama](https://ollama.com/).
- Install Gemma 2 (default) with `ollama pull gemma2`, or any preferred model from the library.
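
To confirm Ollama is reachable before wiring it into the plugin, you can query its local HTTP API. This sketch assumes the default port 11434 and the `gemma2` model pulled above:

```sh
# List the models the local Ollama server knows about
curl http://localhost:11434/api/tags

# One-off, non-streaming generation to check the model responds
curl http://localhost:11434/api/generate -d '{
  "model": "gemma2",
  "prompt": "Reply with one short sentence.",
  "stream": false
}'
```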
Additional: if you want to enable streaming completion with Ollama, you should set the environment variable `OLLAMA_ORIGINS` to `*`:

- For macOS, run `launchctl setenv OLLAMA_ORIGINS "*"`.
- For Linux and Windows, check the docs.
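
For reference, here is a sketch of the Linux case, assuming Ollama was installed as a systemd service (the Ollama docs remain the authoritative source):

```sh
# Open an override file for the Ollama service
sudo systemctl edit ollama.service

# Add these lines in the editor that opens:
#   [Service]
#   Environment="OLLAMA_ORIGINS=*"

# Reload systemd and restart Ollama to apply the change
sudo systemctl daemon-reload
sudo systemctl restart ollama
```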
#### OpenAI compatible server

There are several options for running a local OpenAI-like server (a quick way to test any of them is sketched after this list):
- Open WebUI
- llama.cpp
- llama-cpp-python
- LocalAI
- Oobabooga Text generation web UI
- LM Studio
- ...maybe more
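
Whichever server you pick, it should expose the standard OpenAI-style endpoints. Here is a minimal sanity check, where the host, port, and model name are placeholders to adjust for your setup (LM Studio, for example, defaults to `http://localhost:1234/v1`):

```sh
# Placeholder URL and model name -- substitute your server's values
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "local-model",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```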
### Configure Obsidian hotkey
- Open Obsidian Settings
- Go to Hotkeys
- Filter "Local" and you should see "Local GPT: Show context menu"
- Click on the `+` icon and press a hotkey (e.g. `⌘ + M`)
"Use fallback" option
It is also possible to specify a fallback to handle requests, which allows you to use larger models when you are online and smaller ones when offline.
<img width="626" alt="image" src="https://github.com/user-attachments/assets/5f6855c7-ed10-4d83-91e3-891b99b5a605">
## Using with OpenAI
Since you can provide any OpenAI-like server, it is possible to use OpenAI servers themselves.
Despite the ease of configuration, I do not recommend this method, since the main purpose of the plugin is to work with private LLMs.
- Select `OpenAI compatible server` in `Selected AI provider`
- Set `OpenAI compatible server URL` to `https://api.openai.com/v1`
- Retrieve and paste your `API key` from the API keys page
- Click the "refresh" button and select the model that suits your needs (e.g. `gpt-4o`)
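
To verify the key before using it in the plugin, you can list the models your account can access (this assumes the key is exported as `OPENAI_API_KEY`):

```sh
# List model IDs available to this API key
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```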
## My other Obsidian plugins

- Colored Tags, which colorizes tags in distinguishable colors.