# 🤖 Awesome vim LLM plugins

Below is an exhaustive list of vim and neovim plugins hosted on GitHub that make use of Large Language Models and have commits after January 1, 2023. To keep the list fresh, plugins are ordered by last commit date.

## Code writing and editing

Mature, fully-featured, configurable plugins are highlighted in bold.

## Conversation-focused

These plugins are all quite similar in functionality. Robitx/gp.nvim stands out with a rich set of configuration options, and it also includes commands for writing and editing code (i.e., it overlaps with the section above).

## Tab completion

These plugins are also largely identical in functionality; the more useful points of comparison are (1) subscription cost and (2) output quality. One plugin that stands out is huggingface/llm.nvim, which uses free inference endpoints hosted on Hugging Face.

## Other

james1236/backseat.nvim provides commentary between lines of code, and svermeulen/text-to-colorscheme helps set the mood while programming.

## Tag explanations

### Functionality

- `#inline`: Writes, edits, or annotates code in the current buffer. A popup, window, or tab might be used in limited circumstances to display information.
- `#chat`: Implements an interface focused on conversation, without significant support for copying to/from buffers.
- `#templates`: Supports building custom commands, prompts, or pipelines.
- `#workflow`: Significant functionality for editing code or viewing diffs before committing changes to the current buffer.
- `#augment`: Augments the programming experience somehow, but does not write or edit code.
- `#other`: Not related to programming, but still uses AI for some purpose within the editor.

### Models

- `model:openai`: OpenAI API.
- `model:chatgpt`: ChatGPT web interface (without API).
- `model:bard`: Google PaLM API.
- `model:huggingface`: Hugging Face inference API.
- `model:local`: Local model (e.g. invokes llama.cpp).
- `model:custom`: Any other model without an officially open API.