Ollama.ai client for Neovim

Dependencies

  1. A running Ollama server
  2. curl
  3. plenary.nvim

Usage

  1. Run ollama on your machine
  2. Call :Ollama
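The ollama CLI can fetch the model and start the server itself; a typical first run might look like this (a sketch assuming a standard Ollama install and the plugin's default model, codellama):

```shell
# Pull the model the plugin queries by default
ollama pull codellama

# Start the server (listens on localhost:11434 by default);
# on many installs `ollama serve` already runs as a background service
ollama serve
```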

Installation

Using packer:

use {
    "totu/nvim-ollama",
    requires = { { "nvim-lua/plenary.nvim" } }
}
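If you use lazy.nvim instead of packer, the equivalent plugin spec should look like this (an untested sketch following lazy.nvim's conventions):

```
{
    "totu/nvim-ollama",
    dependencies = { "nvim-lua/plenary.nvim" },
}
```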

Configuration / Setup

You can change the model being queried, as well as the address and port of the Ollama server. The defaults are model = "codellama", address = "localhost", and port = 11434.

Here is an example configuration:

local ollama = require("nvim-ollama")
ollama.setup({
    model = "codellama",
    address = "127.0.0.1",
    port = 11434,
})
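Since curl is a dependency, the plugin presumably talks to Ollama's HTTP API under the hood. A quick way to verify that your address, port, and model settings actually reach a working server is to issue a request from the shell yourself (a sketch assuming the standard Ollama /api/generate endpoint and a running server):

```shell
# One-off completion; adjust the address, port, and model to match
# the values you pass to setup(). Requires a running Ollama server.
curl http://127.0.0.1:11434/api/generate -d '{
  "model": "codellama",
  "prompt": "Write a Lua function that reverses a string.",
  "stream": false
}'
```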

You can bind the Ollama commands like this:

vim.keymap.set("n", "<leader>t", ":OllamaToggle<cr>")
vim.keymap.set("n", "<leader>o", ":Ollama<cr>")

Commands

  • Ollama : starts a chat with the Ollama server
  • OllamaHide : hides the Ollama window
  • OllamaShow : shows the Ollama window
  • OllamaToggle : toggles between showing and hiding the Ollama window

Example of use

(Screenshot of Ollama in action)
