ollama.nvim

A plugin for managing and integrating your ollama workflows in neovim.

Designed to be flexible in configuration and extensible with custom functionality.

Features

Planned / Ideas (implemented depending on interest)

Usage

ollama.nvim provides the following user commands, which map to methods exposed by the plugin:

| Command | Description |
|---|---|
| :Ollama | Run a prompt selected from the prompt menu (calls require("ollama").prompt()). |
| :OllamaModel | Select the model to use for subsequent prompts. |
| :OllamaServe | Start the ollama server using the configured serve options. |
| :OllamaServeStop | Stop the ollama server. |

Installation

ollama.nvim uses curl to communicate with your ollama server over HTTP. Please ensure that curl is installed on your system.

Install using lazy.nvim:

return {
  "nomnivore/ollama.nvim",
  dependencies = {
    "nvim-lua/plenary.nvim",
  },

  -- All the user commands added by the plugin
  cmd = { "Ollama", "OllamaModel", "OllamaServe", "OllamaServeStop" },

  keys = {
    -- Sample keybind for prompt menu. Note that the <c-u> is important for selections to work properly.
    {
      "<leader>oo",
      ":<c-u>lua require('ollama').prompt()<cr>",
      desc = "ollama prompt",
      mode = { "n", "v" },
    },

    -- Sample keybind for direct prompting. Note that the <c-u> is important for selections to work properly.
    {
      "<leader>oG",
      ":<c-u>lua require('ollama').prompt('Generate_Code')<cr>",
      desc = "ollama Generate Code",
      mode = { "n", "v" },
    },
  },

  ---@type Ollama.Config
  opts = {
    -- your configuration overrides
  }
}

To get a fuzzy-finding Telescope prompt selector, you can optionally install stevearc/dressing.nvim.
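
With lazy.nvim, a minimal spec could look like the sketch below; the backend priority shown here is just one possible choice and is not required (dressing.nvim works with its defaults, too).

{
  "stevearc/dressing.nvim",
  opts = {
    -- prefer the Telescope backend for vim.ui.select menus, falling back to the
    -- builtin UI when telescope.nvim is not available (assumed preference)
    select = { backend = { "telescope", "builtin" } },
  },
}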

Configuration

Default Options

opts = {
  model = "mistral",
  url = "http://127.0.0.1:11434",
  serve = {
    on_start = false,
    command = "ollama",
    args = { "serve" },
    stop_command = "pkill",
    stop_args = { "-SIGTERM", "ollama" },
  },
  -- View the actual default prompts in ./lua/ollama/prompts.lua
  prompts = {
    Sample_Prompt = {
      prompt = "This is a sample prompt that receives $input and $sel(ection), among others.",
      input_label = "> ",
      model = "mistral",
      action = "display",
    }
  }
}

Docker

Due to ollama.nvim's flexible configuration, Docker support requires only minimal extra effort.

If your container is running on a separate machine, you just need to configure the url option to point to your server.
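
For example (the host name below is a placeholder for your own machine):

opts = {
  -- ollama is reachable over the network; only the url needs to change
  url = "http://my-ollama-host:11434",
}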

For local containers, you can configure the serve options to use the Docker CLI to create and destroy a container. Here's an example configuration that uses the official ollama Docker image to create an ephemeral container with a shared volume:

opts = {
  -- $ docker run -d --rm --gpus=all -v <volume>:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
  url = "http://127.0.0.1:11434",
  serve = {
    command = "docker",
    args = {
      "run",
      "-d",
      "--rm",
      "--gpus=all",
      "-v",
      "ollama:/root/.ollama",
      "-p",
      "11434:11434",
      "--name",
      "ollama",
      "ollama/ollama",
    },
    stop_command = "docker",
    stop_args = { "stop", "ollama" },
  },
}

Writing your own prompts

By default, ollama.nvim comes with a few prompts that are useful for most workflows. However, you can also write your own prompts directly in your config, as shown above.

prompts is a dictionary of prompt names to prompt configurations. The prompt name is shown in the prompt selection menu, with underscores displayed as spaces (for example, "Sample_Prompt" appears as "Sample Prompt").

This dictionary accepts the following keys:

| Key | Type | Description |
|---|---|---|
| prompt | string | The prompt to send to the LLM. Can contain special tokens that are substituted with context before sending. See Tokens. |
| model | string (Optional) | The model to use for the prompt. Defaults to the global opts.model. |
| input_label | string (Optional) | The label to use for the input prompt. Defaults to "> ". |
| action | string or table (Optional) | The action to take with the response from the LLM. See Actions. Defaults to "display". |
| extract | string (Optional) | A Lua match pattern to extract from the response. Used only by certain actions. See Extracting. Set to false if you want to disable this step. |
| options | table (Optional) | Additional model parameter overrides, such as temperature, as listed in the documentation for the Ollama Modelfile. |
| system | string (Optional) | The system prompt to be used in the Modelfile template, if applicable. Overrides what's in the Modelfile. |
| format | string (Optional) | The format to return a response in. Currently the only accepted value is "json". |
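
As an illustration, a custom prompt combining several of these keys might look like the sketch below; the prompt name, text, model, and parameter values are invented for this example.

prompts = {
  Explain_Code = {
    prompt = "Explain the following $ftype code:\n```$ftype\n$sel\n```",
    model = "codellama",             -- assumes this model has been pulled locally
    action = "display",              -- the default action
    options = { temperature = 0.2 }, -- Modelfile parameter override
    system = "You are a concise programming assistant.", -- overrides the Modelfile system prompt
  },
}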

If you'd like to disable a prompt (such as one of the default ones), set the value of the prompt to false.

prompts = {
  Sample_Prompt = false
}

Extracting

When using certain actions (or custom ones you write), you may want to operate on a specific part of the response. To do this, you can use the extract key in your prompt configuration.

extract = "```$ftype\n(.-)```"

ollama.nvim will parse the extract string the same way as a prompt, substituting tokens (see below). The parsed extract pattern will then be sent to the action associated with the prompt.
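
For instance, the sketch below (prompt name and text invented for this example) asks for refactored code and writes only the extracted code block back over the selection, using the actions factory described under Actions:

prompts = {
  Refactor_Selection = {
    prompt = "Refactor the following $ftype code. Reply with a single fenced code block.\n```$ftype\n$sel\n```",
    -- replace the selection with whatever the extract pattern captures
    action = require("ollama.actions.factory").create_action({ replace = true }),
    -- keep only the contents of the first fenced code block in the response
    extract = "```$ftype\n(.-)```",
  },
}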

Tokens

Before sending the prompt, ollama.nvim will replace certain special tokens in the prompt string with context in the following ways:

| Token | Description |
|---|---|
| $input | Prompt the user for input. |
| $sel | The current or previous selection. |
| $ftype | The filetype of the current buffer. |
| $fname | The filename of the current buffer. |
| $buf | The full contents of the current buffer. |
| $before | The contents of the current buffer before the cursor. |
| $after | The contents of the current buffer after the cursor. |
| $line | The current line in the buffer. |
| $lnum | The current line number in the buffer. |
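
For example, prompt strings like the following sketches (written for this page, not shipped defaults) combine several tokens:

-- ask about the line under the cursor, naming the file for context
prompt = "In $fname (filetype: $ftype), explain line $lnum:\n$line"

-- free-form question about the whole buffer; $input asks the user first
prompt = "$input\n\nUse the following buffer contents as context:\n$buf"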

Actions

ollama.nvim ships with built-in actions for common workflows, such as "display" (the default); these are implemented with the actions factory described below.

Sometimes, you may need functionality that is not provided by the built-in actions. In this case, you can write your own Custom Actions with the following interface:

---@type Ollama.PromptAction
action = {
  fn = function(prompt)
    -- This function is called when the prompt is selected
    -- just before sending the prompt to the LLM.
    -- Useful for setting up UI or other state.

    -- Return a function that will be used as a callback
    -- when a response is received.
    ---@type Ollama.PromptActionResponseCallback
    return function(body, job)
      -- body is a table of the json response
      -- body.response is the response text received

      -- job is the plenary.job object when opts.stream = true
      -- job is nil otherwise
    end

  end,

  opts = { stream = true } -- optional, default is false
}

Instead of returning a callback function, you can also return false or nil to indicate that the prompt should be cancelled and not sent to the LLM. This can be useful for actions that require a selection, or when other criteria are not met.

Actions can also be written without the table keys, like so:

action = {
  function(prompt)
    -- ...
    return function(body, job)
      -- ...
    end
  end,
  { stream = true }
}
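
As a small concrete sketch (not a built-in action), here is a custom action that simply copies the finished response into the unnamed register:

action = {
  fn = function(prompt)
    return function(body)
      -- with opts.stream left at its default (false), this callback runs once
      -- with the complete response body
      vim.fn.setreg('"', body.response)  -- yank the generated text
      vim.notify("ollama: response copied to the unnamed register")
    end
  end,
}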

Actions Factory

The built-in actions are implemented using a factory function that takes a table of options and returns a prompt action. You can use this factory to quickly make small adjustments to the built-in actions.

action = require("ollama.actions.factory").create_action({ display = true, replace = true, show_prompt = false })

The following options are available:

| Option | Type | Description |
|---|---|---|
| display | boolean | Whether to display the response (default: true). |
| insert | boolean | Whether to insert the response at the cursor (default: false). |
| replace | boolean | Whether to replace the selection with the response; takes precedence over insert (default: false). |
| show_prompt | boolean | Whether to prepend the display buffer with the parsed prompt (default: false). |
| window | "float"\|"split"\|"vsplit" | The type of window to display the response in (default: "float"). |

Status

The ollama.nvim module exposes a .status() method for checking the status of the ollama server. This is used to see if any jobs are currently running. It returns the type Ollama.StatusEnum, which is one of:

- "IDLE": no prompt jobs are currently running
- "WORKING": one or more prompt jobs are in progress

You can use this to display a prompt running status in your statusline. Here are a few example recipes for lualine:

This first recipe assumes you already have lualine set up in your config and that you are using a package manager that can merge plugin specs (such as lazy.nvim):

{
  "nvim-lualine/lualine.nvim",
  optional = true,

  opts = function(_, opts)
    table.insert(opts.sections.lualine_x, {
      function()
        local status = require("ollama").status()

        if status == "IDLE" then
          return "󱙺" -- nf-md-robot-outline
        elseif status == "WORKING" then
          return "󰚩" -- nf-md-robot
        end
      end,
      cond = function()
        return package.loaded["ollama"] and require("ollama").status() ~= nil
      end,
    })
  end,
},

Alternatively, if you prefer to keep all of the statusline configuration in one file:

-- assuming the following plugin is installed
{
  "nvim-lualine/lualine.nvim",
},

-- Define a function to check that ollama is installed and working
local function get_condition()
    return package.loaded["ollama"] and require("ollama").status ~= nil
end


-- Define a function to check the status and return the corresponding icon
local function get_status_icon()
  local status = require("ollama").status()

  if status == "IDLE" then
    return "OLLAMA IDLE"
  elseif status == "WORKING" then
    return "OLLAMA BUSY"
  end
end

-- Load and configure 'lualine'
require("lualine").setup({
  sections = {
    lualine_a = {},
    lualine_b = { "branch", "diff", "diagnostics" },
    lualine_c = { { "filename", path = 1 } },
    -- show the ollama status icon only when the plugin is available
    lualine_x = { { get_status_icon, cond = get_condition } },
    lualine_y = { "progress" },
    lualine_z = { "location" },
  },
})

Credits