🚀 Ollama-chat.nvim

Chat with Ollama models directly in a Neovim buffer!

✨ Features

This is a simple plugin that allows you to chat with Ollama models:

*(screenshot: ollama chat)*

⌨️ Usage

This plugin adds the following commands, each of which opens an Ollama chat buffer:

- `OllamaQuickChat` – opens the quick chat file (`quick_chat_file` in the `opts`),
- `OllamaCreateNewChat` – creates a new, named chat,
- `OllamaContinueChat` – continues a previously saved chat, picked via Telescope.

The chat buffer is populated with a base prompt and is completely modifiable.

If there is a selection active when the chat buffer is opened, it is copied into the new chat buffer as plain text, or wrapped in a corresponding code block if the source file type is code.
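
As an illustration (a hypothetical sketch, not the plugin's actual code), wrapping a visual selection in a fenced code block based on the source buffer's file type could look like this:

```lua
-- Hypothetical sketch: copy the last visual selection into chat text,
-- fencing it with the source buffer's filetype when the source is code.
local function selection_to_chat_lines()
  local lines = vim.fn.getline("'<", "'>") -- lines of the last visual selection
  local ft = vim.bo.filetype
  if ft == "" or ft == "text" or ft == "markdown" then
    return lines -- plain text is inserted as-is
  end
  local out = { "```" .. ft } -- open a fenced block tagged with the filetype
  for _, line in ipairs(lines) do
    out[#out + 1] = line
  end
  out[#out + 1] = "```"
  return out
end
```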

The Ollama model can then be prompted with the chat buffer via `OllamaChat` and `OllamaChatCode`. Both send the entire buffer to the Ollama server; the difference is that `OllamaChatCode` uses the model set in `model_code` rather than the one set in `model` in the `opts` table.
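
Under the hood this amounts to an HTTP request to the Ollama server. A minimal sketch of such a request, assuming Ollama's standard `/api/generate` endpoint and the bundled `plenary.curl` (an illustration, not the plugin's internals):

```lua
-- Minimal sketch: send a whole buffer as a prompt to an Ollama server.
local curl = require("plenary.curl") -- plenary.nvim is already a dependency

local function prompt_ollama(bufnr, model, url)
  local lines = vim.api.nvim_buf_get_lines(bufnr, 0, -1, false)
  local res = curl.post(url .. "/api/generate", {
    headers = { ["Content-Type"] = "application/json" },
    body = vim.json.encode({
      model = model,
      prompt = table.concat(lines, "\n"),
      stream = false, -- non-streaming keeps this sketch short
    }),
  })
  -- The non-streaming response carries the generated text in `response`.
  return vim.json.decode(res.body).response
end

-- e.g. prompt_ollama(0, "codellama", "http://127.0.0.1:11434")
```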

During generation you can go back to your other buffers. Once generation completes, an INFO-level notification alerts you to return to the chat.
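
An INFO-level notification is the kind produced by Neovim's `vim.notify`; for illustration (the exact message text is hypothetical):

```lua
-- Illustration only; the plugin's actual message text may differ.
vim.notify("Ollama: generation complete", vim.log.levels.INFO)
```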

Generation can also be canceled with `q`.

To yank, delete, or change chat messages, you can use the `i*` text object (e.g. `yi*`, `di*`, `ci*`), since the User and Ollama prompts that delimit the messages are bounded with `*`s.

The spinner animation can be disabled via the `animate_spinner` option. This is handy for keeping the undo history clean, e.g. if you use an undo-tree plugin.

📦 Install

First you need Ollama installed as per their instructions. You'll also want to pull the models you plan to use, e.g. with `ollama pull codellama`.

To use the plugin with `lazy.nvim`, you can add the file `lua/plugins/ollama-chat.lua`:

```lua
return {
  "gerazov/ollama-chat.nvim",
  dependencies = {
    "nvim-lua/plenary.nvim",
    "stevearc/dressing.nvim",
    "nvim-telescope/telescope.nvim",
  },
}
```

⚙️ Configure

Here's how you can add lazy loading, set some keymaps, and configure the options:

```lua
return {
  "gerazov/ollama-chat.nvim",
  dependencies = {
    "nvim-lua/plenary.nvim",
    "stevearc/dressing.nvim",
    "nvim-telescope/telescope.nvim",
  },
  -- lazy load on command
  cmd = {
    "OllamaQuickChat",
    "OllamaCreateNewChat",
    "OllamaContinueChat",
    "OllamaChat",
    "OllamaChatCode",
    "OllamaModel",
    "OllamaServe",
    "OllamaServeStop",
  },

  keys = {
    {
      "<leader>ocq",
      "<cmd>OllamaQuickChat<cr>",
      desc = "Ollama Quick Chat",
      mode = { "n", "x" },
      silent = true,
    },
    {
      "<leader>ocn",
      "<cmd>OllamaCreateNewChat<cr>",
      desc = "Create Ollama Chat",
      mode = { "n", "x" },
      silent = true,
    },
    {
      "<leader>occ",
      "<cmd>OllamaContinueChat<cr>",
      desc = "Continue Ollama Chat",
      mode = { "n", "x" },
      silent = true,
    },
    {
      "<leader>och",
      "<cmd>OllamaChat<cr>",
      desc = "Chat",
      mode = { "n" },
      silent = true,
    },
    {
      "<leader>ocd",
      "<cmd>OllamaChatCode<cr>",
      desc = "Chat Code",
      mode = { "n" },
      silent = true,
    },
  },

  opts = {
    chats_folder = vim.fn.stdpath("data"), -- data folder is ~/.local/share/nvim
    -- you can also choose "current" and "tmp"
    quick_chat_file = "ollama-chat.md",
    animate_spinner = true, -- set this to false to disable spinner animation
    model = "openhermes2-mistral",
    model_code = "codellama",
    url = "http://127.0.0.1:11434",
    serve = {
      on_start = false,
      command = "ollama",
      args = { "serve" },
      stop_command = "pkill",
      stop_args = { "-SIGTERM", "ollama" },
    },
  },
}
```

If you want to override your Markdown `@text.emphasis` highlight for the User and Ollama labels, you can add the following table to your `opts`:

```lua
  opts = {
    highlight = {
      guifg = "#8CAAEE",
      guibg = nil, -- if you want a transparent background
      gui = "bold,italic",
    },
  },
```
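
For reference, the same look can also be set directly with Neovim's built-in highlight API; a rough equivalent (assuming `@text.emphasis` is the group your Markdown highlighting uses):

```lua
-- Roughly equivalent, using Neovim's built-in highlight API.
vim.api.nvim_set_hl(0, "@text.emphasis", {
  fg = "#8CAAEE",
  -- bg left unset for a transparent background
  bold = true,
  italic = true,
})
```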

📋 Similar plugins