Integrates Simon Willison's LLM CLI with the Micro editor. This plugin allows you to leverage Large Language Models directly within Micro for text generation, modification, and custom-defined tasks through templates.
- Micro Text Editor: Installed and working.
- LLM CLI Tool: Installed and configured (e.g., with API keys). See the LLM documentation. Ensure that the `llm` command is accessible in your system's PATH.
To install the `llm-micro` plugin, use Micro's standard plugin directory:

- Open your terminal.
- Ensure Micro's plugin directory exists and clone the repository:

```shell
# Micro typically creates ~/.config/micro/plug/. This command ensures it exists.
mkdir -p ~/.config/micro/plug
# Clone the plugin into the 'llm' subdirectory (or your preferred name)
git clone https://github.com/shamanicvocalarts/llm-micro ~/.config/micro/plug/llm
```
- Restart Micro. It should automatically detect and load the plugin. The plugin registers its options on startup.
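Before restarting Micro, you can sanity-check the prerequisites from the shell. This is a minimal sketch that assumes you cloned into `plug/llm` as shown above; it only prints a status line per check and modifies nothing:

```shell
# Check that the llm CLI is reachable and the plugin files are in place.
if command -v llm >/dev/null 2>&1; then
    llm_status="found"
else
    llm_status="not found"
fi
echo "llm CLI: $llm_status"

if [ -d "$HOME/.config/micro/plug/llm" ]; then
    plugin_status="present"
else
    plugin_status="missing"
fi
echo "plugin directory: $plugin_status"
```

If either check fails, revisit the corresponding prerequisite or clone step before continuing.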
Demo video: Llm.Streamlined.Demo.mp4
This plugin provides commands accessible via Micro's command prompt (`Ctrl+E`):
The `llm` command is the primary way to interact with an LLM. It adapts its behavior based on whether text is selected.
- Syntax: `llm [options] <your request>`
- Behavior:
  - If text is selected, the command operates in "modify" mode.
    - Your request, the selected text, and surrounding context are sent to the LLM.
    - The LLM's response replaces the originally selected text.
    - Example: select faulty code and run `llm fix this code`.
  - If no text is selected, the command operates in "generate" mode.
    - Your request and surrounding context are sent to the LLM.
    - The LLM's response is inserted at the cursor position.
    - Example: place the cursor on an empty line and run `llm write a python function that sums two numbers`.
- How to use:
  - (Optional) Select text if you want to modify it or provide it as specific context for generation.
  - Position your cursor appropriately (start of selection for modify, insertion point for generate).
  - Press `Ctrl+E` to open the command prompt.
  - Type `llm` followed by your specific request (e.g., `llm fix grammar`, `llm write a docstring for the function below`).
  - (Optional) Add options like `-t <template_name>` or `-s "<custom system prompt>"` before your request.
  - Press Enter.
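Putting those steps together, here is what you would type at the `Ctrl+E` prompt in each mode (a modify request with text selected, then a generate request with no selection):

```
llm fix grammar
llm write a docstring for the function below
```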
The `llm` command accepts the following optional flags, which should precede your main textual request:

- `-t <template_name>` or `--template <template_name>`: Use a specific LLM CLI template. The `llm` CLI tool must be aware of this template (see the LLM Templates documentation).
- `-s "<custom_system_prompt>"` or `--system "<custom_system_prompt>"`: Provide a custom system prompt directly, enclosed in quotes. This overrides any default or template-based system prompt.
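For instance, either spelling of each flag works at the prompt; `my_summary_template` here is an illustrative template name, not one shipped with the plugin:

```
llm -t my_summary_template summarize the selection
llm --system "Reply only with the corrected text." fix grammar
```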
The system prompt used for the LLM call is chosen in this order of precedence:

1. Custom system prompt provided via the `-s` or `--system` flag.
2. System prompt from an LLM template specified via the `-t` or `--template` flag.
3. System prompt from the plugin's single default template, set via `llm_template_default` (see below).
4. The plugin's built-in default system prompt for the detected operation mode ("modify" or "generate").
This plugin allows you to manage and use LLM CLI templates. Templates are stored as YAML files in the directory reported by `llm templates path` (typically `~/.config/io.datasette.llm/templates/`).
The `llm_template` command opens an LLM template YAML file for editing directly within Micro.
- How to use:
  - Press `Ctrl+E`.
  - Type `llm_template <template_name>` (e.g., `llm_template my_summary_template`). Do not include `.yaml`.
  - Press Enter.
- Behavior:
  - If `<template_name>.yaml` exists in your `llm` templates directory, it will be opened in a new tab.
  - If it doesn't exist, a new buffer will be opened, and saving it (`Ctrl+S`) will create `<template_name>.yaml` in that directory.
  - You are responsible for the correct YAML structure of the template (e.g., `system: "..."` or `prompt: "..."`). Refer to the LLM Templates documentation.
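If you prefer the shell, a template file can also be created by hand. This sketch assumes the typical default templates directory (run `llm templates path` to confirm yours); the `my_summary_template` name and its system prompt are illustrative:

```shell
# Write a minimal single-key template that the llm CLI can pick up.
tpl_dir="$HOME/.config/io.datasette.llm/templates"
mkdir -p "$tpl_dir"
cat > "$tpl_dir/my_summary_template.yaml" <<'EOF'
system: You are a concise technical summarizer. Reply with the summary only.
EOF
echo "wrote $tpl_dir/my_summary_template.yaml"
```

After this, `llm -t my_summary_template ...` inside Micro (or `llm_template my_summary_template` to edit the file) should find it.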
The `llm_template_default` command manages the single default LLM template used by the `llm` command when no `-t` or `-s` flag is provided.
- Set Default: `llm_template_default <template_name>`
  - Sets `<template_name>.yaml` as the default template for all `llm` command invocations.
  - Example: `llm_template_default my_universal_helper`
  - The plugin verifies that the template file exists and is readable before setting it.
- Clear Default: `llm_template_default --clear`
  - Clears the default template setting. The plugin will revert to its built-in system prompts based on the operation mode ("modify" or "generate") if no `-t` or `-s` flag is used.
- Show Default: `llm_template_default --show`
  - Displays the currently set default template.
  - Output is shown in the infobar, e.g., `Default LLM template: my_universal_helper` or `Default LLM template: Not set`.
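A typical session at the `Ctrl+E` prompt, reusing the template name from the example above (`one_off_template` is hypothetical):

```
llm_template_default my_universal_helper
llm_template_default --show
llm -t one_off_template rewrite this paragraph
llm_template_default --clear
```

On the third line, the `-t` flag takes precedence over the configured default for that single invocation.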
You can configure the plugin by setting global options in Micro's `settings.json` file (`Ctrl+E` then `set`).
- `"llm.default_template": ""` (string):
  - Specifies the name (without `.yaml`) of an LLM template to use by default for the `llm` command if no `-t` or `-s` flag is provided.
  - Set to `""` (empty string) to have no default template (the built-in prompts are used).
  - Example: `"llm.default_template": "my_default"`
- `"llm.context_lines": 100` (number):
  - Specifies how many lines of context before and after the cursor/selection are sent to the LLM.
  - Set to `0` to disable sending context.
  - Default is `100`.
  - Example: `"llm.context_lines": 50`
Example `settings.json`:

```json
{
    "llm.context_lines": 50,
    "llm.default_template": "my_refactor_template"
}
```
- The `-x` flag is automatically added to `llm` CLI calls by the plugin to attempt to extract raw text/code from Markdown fenced blocks if the LLM outputs them.
- You will see messages in Micro's infobar indicating progress (e.g., `LLM (modify): Processing...`) and outcome.
- Debug logs are available via Micro's logging (`Ctrl+E` then `log`). Look for messages prefixed with `LLM_DEBUG:`, `LLM_INFO:`, or `LLM_ERROR:`. These are very helpful for troubleshooting.
Contributions and feedback are welcome!