<div align="center">
  <img src="ollama-nextjs-ui.gif">
</div>

<h1 align="center">
  Fully-featured & beautiful web interface for Ollama LLMs
</h1>

Get up and running with Large Language Models quickly, locally, and even offline. This project aims to be the easiest way for you to get started with LLMs. No tedious and annoying setup required!
# Features ✨
- Beautiful & intuitive UI: Inspired by ChatGPT for a familiar user experience.
- Fully local: Stores chats in localStorage for convenience. No need to run a database.
- Fully responsive: Use your phone to chat with the same ease as on desktop.
- Easy setup: No tedious and annoying setup required. Just clone the repo and you're good to go!
- Code syntax highlighting: Messages that include code are highlighted for easy reading.
- Copy codeblocks easily: Copy the highlighted code with one click.
- Download/Pull & Delete models: Easily download and delete models directly from the interface.
- Switch between models: Switch between models fast with a click.
- Chat history: Chats are saved and easily accessed.
- Light & Dark mode: Switch between light and dark mode.
# Preview
# Requisites ⚙️

To use the web interface, the following requisites must be met:

- Download [Ollama](https://ollama.com/) and have it running, or run it in a Docker container (see the sketch after this list). Check the docs for instructions.
- Node.js (18+) and npm are required. [Download](https://nodejs.org/)
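If you go the Docker route, the command below is a minimal sketch for starting the official Ollama image (CPU-only); refer to the Ollama Docker documentation for GPU flags and other options.

```sh
# Pull and run the official Ollama image, exposing the default API port 11434
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```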
# Deploy your own to Vercel or Netlify in one click ✨

You'll need to set the OLLAMA_ORIGINS environment variable on the machine that is running Ollama:

`OLLAMA_ORIGINS="https://your-app.vercel.app/"`
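How you set the variable depends on how Ollama runs on that machine. The snippet below is a minimal sketch for an instance launched from a shell; if Ollama runs as a system service or desktop app, set the variable in that service's environment instead.

```sh
# Allow cross-origin requests from your deployed frontend, then start Ollama
export OLLAMA_ORIGINS="https://your-app.vercel.app/"
ollama serve
```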
# Installation 📖

Use a pre-built package from one of the supported package managers to run a local environment of the web interface. Alternatively, you can install from source with the instructions below.
> [!NOTE]
> If your frontend runs on something other than `http://localhost` or `http://127.0.0.1`, you'll need to set OLLAMA_ORIGINS to your frontend URL.
>
> This is also stated in the documentation:
>
> > Ollama allows cross-origin requests from 127.0.0.1 and 0.0.0.0 by default. Additional origins can be configured with OLLAMA_ORIGINS.
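For example, if the frontend is served from another machine on your network, you can pass the origin inline when starting Ollama from a shell (the address below is a hypothetical example):

```sh
# One-off: allow requests from a frontend at http://192.168.1.20:3000 for this Ollama session
OLLAMA_ORIGINS="http://192.168.1.20:3000" ollama serve
```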
## Install from source
1. Clone the repository to a directory on your PC via the command prompt:

   `git clone https://github.com/jakobhoeg/nextjs-ollama-llm-ui`

2. Open the folder:

   `cd nextjs-ollama-llm-ui`

3. Rename `.example.env` to `.env`:

   `mv .example.env .env`
4. If your instance of Ollama is NOT running on the default IP address and port, change the variable in the `.env` file to fit your use case:

   `NEXT_PUBLIC_OLLAMA_URL="http://localhost:11434"`
5. Install dependencies:

   `npm install`

6. Start the development server:

   `npm run dev`
7. Go to localhost:3000 and start chatting with your favourite model!
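If you want to run the interface as a production build instead of the dev server, the standard Next.js workflow below should work (a sketch, assuming the default `build` and `start` scripts in `package.json`):

```sh
# Build an optimized production bundle, then serve it on port 3000
npm run build
npm start
```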
# Upcoming features

This is a to-do list of upcoming features:
- ✅ Voice input support
- ✅ Code syntax highlighting
- ✅ Ability to send an image in the prompt to utilize vision language models.
- ✅ Ability to regenerate responses
- ⬜️ Import and export chats
# Tech stack

- NextJS - React Framework for the Web
- TailwindCSS - Utility-first CSS framework
- shadcn-ui - UI components built using Radix UI and Tailwind CSS
- shadcn-chat - Chat components for NextJS/React projects
- Framer Motion - Motion/animation library for React
- Lucide Icons - Icon library
# Helpful links

- Medium Article - How to launch your own ChatGPT clone for free on Google Colab. By Bartek Lewicz.
- Lobehub mention - Five Excellent Free Ollama WebUI Client Recommendations