ai.nvim

Query LLMs following OpenAI API specification


💡 Idea

LLM providers offer libraries for the most popular programming languages so you can write code that interacts with their APIs. Generally, these are wrappers around HTTPS requests with a mechanism for handling API responses (e.g. using callbacks).

To the best of my knowledge, if you want to build a Neovim plugin that uses LLMs, you have to make requests explicitly with a tool like curl and take care of request and response parsing yourself. This results in a lot of boilerplate code that can be abstracted away.
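
For example, a bare-bones request to an OpenAI-compatible endpoint from Lua looks roughly like this (a sketch assuming Neovim ≥ 0.10 for vim.system; the endpoint, model, and environment variable are illustrative):

```lua
-- The boilerplate that must otherwise be written by hand: shell out
-- to curl, then decode and unpack the JSON response yourself.
-- Assumes Neovim >= 0.10 (vim.system); endpoint/model are examples.
local body = vim.json.encode({
  model = "gpt-4o-mini",
  messages = { { role = "user", content = "Hello!" } },
})

vim.system({
  "curl", "-s", "https://api.openai.com/v1/chat/completions",
  "-H", "Content-Type: application/json",
  "-H", "Authorization: Bearer " .. (os.getenv("OPENAI_API_KEY") or ""),
  "-d", body,
}, { text = true }, function(result)
  vim.schedule(function()
    if result.code ~= 0 then
      vim.notify("request failed: " .. result.stderr, vim.log.levels.ERROR)
      return
    end
    local ok, response = pcall(vim.json.decode, result.stdout)
    if ok and response.choices then
      print(response.choices[1].message.content)
    end
  end)
end)
```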

ai.nvim is an experimental library for building Neovim plugins that interact with LLM providers: it crafts requests, parses responses, invokes callbacks, and handles errors.

⚡️ Requirements

  • Neovim ≥ 0.9
  • curl
  • Access to an LLM provider

🚀 Usage

Read the documentation with :help ai.nvim

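To give a feel for the library, a usage snippet might look like the following. Note that the client constructor and method names here are illustrative assumptions, not ai.nvim's documented API; consult :help ai.nvim for the real interface.

```lua
-- Hypothetical usage sketch: names and signatures are assumptions,
-- not ai.nvim's documented API (see :help ai.nvim).
local ai = require("ai")

-- Point the client at any OpenAI-compatible endpoint.
local client = ai.Client.new(
  "https://api.groq.com/openai/v1",
  os.getenv("GROQ_API_KEY")
)

local request = {
  model = "llama3-70b-8192",
  messages = {
    { role = "user", content = "Explain Lua coroutines in one line." },
  },
}

-- The library crafts the request, parses the response, invokes the
-- callback, and handles errors on your behalf.
client:chat_completion(request, function(response)
  print(response.choices[1].message.content)
end)
```
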
Plugins built with ai.nvim:

  • dante.nvim: An unpolished writing tool powered by LLMs ✍️
  • PR your plugin here ...

✨ LLM Providers

Many providers offer LLM models through an OpenAI-compatible API, and many more can be queried through a LiteLLM proxy. The following is an incomplete list of providers I have experimented with:

| Provider              | Price | Models                | Type   |
|-----------------------|-------|-----------------------|--------|
| OpenAI                | Paid  | GPT family            | Hosted |
| Mistral               | Paid  | Mistral family        | Hosted |
| Cohere (with LiteLLM) | Free  | Command family        | Hosted |
| Groq                  | Free  | Llama, Mixtral, Gemma | Hosted |
| Ollama                | Free  | Open-source models    | Local  |
| llama-cpp             | Free  | Open-source models    | Local  |

At the time of writing, I'm really enjoying Groq thanks to its free tier and its remarkable speed. For highly sensitive data, I use local models through Ollama.
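
Since all of these providers expose the same OpenAI-style API, switching between them usually means changing only the base URL and API key. A minimal sketch (the URLs below are assumptions based on each provider's public documentation; verify them against the provider's docs):

```lua
-- Example OpenAI-compatible endpoints (values are illustrative;
-- check each provider's documentation before use).
local providers = {
  openai = { base_url = "https://api.openai.com/v1",      env_key = "OPENAI_API_KEY" },
  groq   = { base_url = "https://api.groq.com/openai/v1", env_key = "GROQ_API_KEY" },
  ollama = { base_url = "http://localhost:11434/v1",      env_key = nil }, -- local, no key
}

-- All accept the same request shape, e.g.:
-- POST <base_url>/chat/completions with a JSON body
-- { model = ..., messages = { { role = "user", content = ... } } }
```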

🙏 Acknowledgments

This plugin was generated from the template S1M0N38/my-awesome-plugin.nvim.