Releases · nalgeon/howto
v0.2.1
Fixed: `HOWTO_AI_TOKEN` is not required when using Ollama, courtesy of @seregayoga. Thank you, Sergey!
v0.2.0
Now you can use local Ollama models with `howto`. Here's how to set it up:
- Download and install Ollama for your operating system.
- Set the environment variables to use less memory:
  `OLLAMA_KEEP_ALIVE=1h`
  `OLLAMA_FLASH_ATTENTION=1`
- Restart Ollama.
- Download the AI model Gemma 2 (or another model of your choice): `ollama pull gemma2:2b`
- Set the `HOWTO_AI_VENDOR` environment variable to `ollama`.
- Set the `HOWTO_AI_MODEL` environment variable to `gemma2:2b` (or another model of your choice).
Gemma 2 is a lightweight model that uses about 1GB of memory and runs well without a GPU. Unfortunately, it's not very smart. You can try more powerful (and resource-hungry) models like `mistral` or `mistral-nemo`.
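Put together, the setup above amounts to a shell session like this (a sketch for macOS/Linux; the example prompt is illustrative):

```shell
# Keep the model loaded longer and enable flash attention
# (restart Ollama after changing these).
export OLLAMA_KEEP_ALIVE=1h
export OLLAMA_FLASH_ATTENTION=1

# Pull a lightweight model.
ollama pull gemma2:2b

# Point howto at the local Ollama instance.
export HOWTO_AI_VENDOR=ollama
export HOWTO_AI_MODEL=gemma2:2b

# Ask away.
howto show the 10 largest files in this directory
```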
v0.1.1
You can install `howto` with Homebrew:
brew tap nalgeon/howto https://github.com/nalgeon/howto
brew install howto
v0.1.0
Howto - a humble command-line assistant.
Describe the task, and `howto` will suggest a solution:
$ howto curl example.org but print only the headers
curl -I example.org
The `curl` command is used to transfer data from or to a server.
The `-I` option tells `curl` to fetch the HTTP headers only, without the body content.
Notable features:
- Works with any OpenAI-compatible provider.
- Follow-up questions.
- Run the suggested command.
Support for local Ollama models coming soon.
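As a sketch of what "any OpenAI-compatible provider" looks like in practice, using the `HOWTO_AI_*` environment variables that appear elsewhere in these notes. The `openai` vendor value, the token format, and the model name are assumptions for illustration; only the `ollama` vendor is confirmed by these notes.

```shell
# Hypothetical configuration for a hosted OpenAI-compatible provider.
export HOWTO_AI_VENDOR=openai        # assumed vendor name
export HOWTO_AI_TOKEN=sk-...         # your provider's API key (placeholder)
export HOWTO_AI_MODEL=gpt-4o-mini    # any model your provider serves

howto extract a tar.gz archive into /tmp
```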