AI for the command line, built for pipelines.
LLM-based AI is really good at interpreting the output of commands and returning the results in CLI-friendly text formats like Markdown. Mods is a simple tool that makes it super easy to use AI on the command line and in your pipelines. Mods works with OpenAI and LocalAI.
To get started, install Mods and check out some of the examples below. Since Mods has built-in Markdown formatting, you may also want to grab Glow to give the output some pizzazz.
## What Can It Do?
Mods works by reading standard in and prefacing it with a prompt supplied in the `mods` arguments. It sends the input text to an LLM and prints out the result, optionally asking the LLM to format the response as Markdown. This gives you a way to "question" the output of a command. Mods will also work with standard in or an argument-supplied prompt on its own.
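In other words, both of these invocations work (the prompts here are just illustrations):

```bash
# Question the output of a command by piping it in alongside a prompt...
ls -l | mods "which of these files was changed most recently?"

# ...or skip standard in and supply a prompt on its own.
mods "write a haiku about the command line"
```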
For example, you can:
### Improve Your Code
Piping source code to Mods and giving it an instruction on what to do with it gives you a lot of options for refactoring, enhancing or debugging code.
mods -f "what are your thoughts on improving this code?" < main.go | glow
### Come Up With Product Features
Mods can also come up with entirely new features based on source code (or a README file).
mods -f "come up with 10 new features for this tool." < main.go | glow
### Help Write Docs
Mods can quickly give you a first draft for new documentation.
mods "write a new section to this readme for a feature that sends you a free rabbit if you hit r" | glow
### Organize Your Videos
The file system can be an amazing source of input for Mods. If you have music or video files, Mods can parse the output of `ls` and offer really good editorialization of your content.
ls ~/vids | mods -f "organize these by decade and summarize each" | glow
Mods is really good at generating recommendations based on what you have as well, both for similar content and for content in an entirely different medium (like getting music recommendations based on movies you have).
ls ~/vids | mods -f "recommend me 10 shows based on these, make them obscure" | glow
ls ~/vids | mods -f "recommend me 10 albums based on these shows, do not include any soundtrack music or music from the show" | glow
### Read Your Fortune
It's easy to let your downloads folder grow into a chaotic never-ending pit of files, but with Mods you can use that to your advantage!
```bash
ls ~/Downloads | mods -f "tell my fortune based on these files" | glow
```
### Understand APIs

Mods can parse and understand the output of an API call with `curl` and convert it to something human readable.
curl "https://api.open-meteo.com/v1/forecast?latitude=29.00&longitude=-90.00¤t_weather=true&hourly=temperature_2m,relativehumidity_2m,windspeed_10m" 2>/dev/null | mods -f "summarize this weather data for a human." | glow
### Read The Comments (so you don't have to)
Just like with APIs, Mods can read through raw HTML and summarize the contents.
curl "https://news.ycombinator.com/item?id=30048332" 2>/dev/null | mods -f "what are the authors of these comments saying?" | glow
Mods works with OpenAI-compatible endpoints. By default, Mods is configured to support OpenAI's official API and a LocalAI installation running on port 8080. You can configure additional endpoints in your settings file by running `mods -s`.
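The settings file is YAML. A sketch of what an extra endpoint entry can look like (the key names below are illustrative rather than authoritative; run `mods -s` to see the real file and its schema):

```yaml
# Hypothetical extra endpoint entry (verify key names against your settings file).
apis:
  my-endpoint:
    base-url: http://localhost:1234
    models:
      my-model:
        max-input-chars: 12250
```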
Mods uses GPT-4 by default and will fall back to GPT-3.5 Turbo if it's not available. Set the `OPENAI_API_KEY` environment variable to a valid OpenAI key, which you can get from the OpenAI website.
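For example:

```bash
# Make the key available to Mods (replace the placeholder with your key).
export OPENAI_API_KEY="sk-..."
```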
LocalAI allows you to run a multitude of models locally. Mods works with the GPT4ALL-J model as set up in this tutorial. You can define more LocalAI models and endpoints with `mods -s`.
## Installation

Use a package manager:

```bash
# macOS or Linux
brew install charmbracelet/tap/mods

# Arch Linux (btw)
yay -S mods

# Debian/Ubuntu
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://repo.charm.sh/apt/gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/charm.gpg
echo "deb [signed-by=/etc/apt/keyrings/charm.gpg] https://repo.charm.sh/apt/ * *" | sudo tee /etc/apt/sources.list.d/charm.list
sudo apt update && sudo apt install mods

# Fedora/RHEL
echo '[charm]
name=Charm
baseurl=https://repo.charm.sh/yum/
enabled=1
gpgcheck=1
gpgkey=https://repo.charm.sh/yum/gpg.key' | sudo tee /etc/yum.repos.d/charm.repo
sudo yum install mods
```
Or, download it:
- Packages are available in Debian and RPM formats
- Binaries are available for Linux, macOS, and Windows
Or, just install it with `go`:

```bash
go install github.com/charmbracelet/mods@latest
```
## Settings

Mods lets you tune your query with a variety of settings. You can configure Mods by running `mods -s`, or pass the settings as environment variables and flags.
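For instance, the model can be set either way (here `MODS_MODEL` is an assumed name following the `MODS_*` pattern; your settings file lists the exact variable for each option):

```bash
# Set the model with a flag for one invocation...
mods -m gpt-3.5-turbo "say hi"

# ...or with an environment variable (MODS_MODEL is an assumed name).
MODS_MODEL=gpt-3.5-turbo mods "say hi"
```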
### Model

Mods uses `gpt-4` with OpenAI by default, but you can specify any model as long as your account has access to it or you have it installed locally with LocalAI. You can add new models to the settings with `mods -s`. You can also specify a model and an API endpoint with `-m` and `-a` to use models not in the settings file.
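For example, assuming the default `localai` endpoint from above and the GPT4ALL-J model name used in the LocalAI tutorial (both depend on your setup):

```bash
# Query a locally hosted model via the LocalAI endpoint.
mods -a localai -m ggml-gpt4all-j "give me a one-line summary of what LocalAI does"
```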
### Format As Markdown
LLMs are very good at generating their responses in Markdown format. They can even organize their content naturally with headers, bullet lists, and so on. Use this option to append the phrase "Format the response as Markdown." to the prompt.
### Max Tokens

Max tokens tells the LLM to respond in fewer than this number of tokens. LLMs are better at longer responses, so values larger than 256 tend to work best.
### Temperature

Sampling temperature is a number between 0.0 and 2.0 that determines how confident the model is in its choices. Higher values make the output more random and lower values make it more deterministic.
### TopP

Top P is an alternative to sampling temperature. It's a number between 0.0 and 1.0, with smaller numbers narrowing the domain from which the model will create its response.
### Max Input Chars

By default, Mods attempts to size the input to the maximum allowed by the model. You can potentially squeeze a few more tokens into the input by setting this, but you also risk getting a max-token-exceeded error from the OpenAI API.
### Include Prompt

Include prompt will preface the response with the entire prompt: both standard in and the prompt supplied by the arguments.
### Include Prompt Args
Include prompt args will include only the prompt supplied by the arguments. This can be useful if your standard in content is long and you just want a summary before the response.
### Max Retries

The maximum number of times to retry failed API calls. Retries happen with an exponential backoff.
### Fanciness

Your desired level of fanciness.
### Quiet

Output nothing to standard err.
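Several of these settings can be combined as flags on a single invocation. A quick sketch (the long-form flag spellings here are assumptions based on the setting names above; run `mods -h` to confirm them on your install):

```bash
# Summarize a long log quietly, with a capped response length and
# low-randomness sampling (flag names assumed; see `mods -h`).
mods -f --max-tokens 512 --temp 0.3 --quiet "summarize this build log" < build.log | glow
```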
We’d love to hear your thoughts on this project. Feel free to drop us a note.
Part of Charm.
Charm热爱开源 • Charm loves open source