
Releases: nalgeon/howto

v0.2.1

10 Feb 13:56
eafa935

Fixed: HOWTO_AI_TOKEN is not required when using Ollama, courtesy of @seregayoga. Thank you, Sergey!

v0.2.0

09 Feb 22:55

Now you can use local Ollama models with howto. Here's how to set it up:

  1. Download and install Ollama for your operating system.
  2. Set the environment variables to use less memory:
     OLLAMA_KEEP_ALIVE = 1h
     OLLAMA_FLASH_ATTENTION = 1
  3. Restart Ollama.
  4. Download the AI model Gemma 2 (or another model of your choice):
     ollama pull gemma2:2b
  5. Set the HOWTO_AI_VENDOR environment variable to ollama.
  6. Set the HOWTO_AI_MODEL environment variable to gemma2:2b (or another model of your choice).
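The steps above can be condensed into a shell session. The variable names and values come from the list above; adjust the model name to taste:

```shell
# Keep the model loaded for an hour and enable flash attention
# to reduce memory use (step 2 above).
export OLLAMA_KEEP_ALIVE=1h
export OLLAMA_FLASH_ATTENTION=1

# Restart Ollama so the settings take effect, then pull the model (step 4).
ollama pull gemma2:2b

# Point howto at the local Ollama model (steps 5 and 6).
export HOWTO_AI_VENDOR=ollama
export HOWTO_AI_MODEL=gemma2:2b
```

Put the exports in your shell profile if you want the configuration to persist across sessions.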

Gemma 2 is a lightweight model that uses about 1GB of memory and runs well without a GPU. Unfortunately, it's not very smart. You can try more powerful (and resource-hungry) models like mistral or mistral-nemo.

v0.1.1

09 Feb 21:19

You can install howto with Homebrew:

brew tap nalgeon/howto https://github.com/nalgeon/howto
brew install howto

v0.1.0

09 Feb 20:06

Howto - a humble command-line assistant.

Describe the task, and howto will suggest a solution:

$ howto curl example.org but print only the headers
curl -I example.org

The `curl` command is used to transfer data from or to a server.
The `-I` option tells `curl` to fetch the HTTP headers only, without the body
content.

Notable features:

  • Works with any OpenAI-compatible provider.
  • Follow-up questions.
  • Run the suggested command.
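Configuring a provider is a matter of environment variables. A minimal sketch using the variables mentioned in these notes (HOWTO_AI_VENDOR, HOWTO_AI_MODEL, HOWTO_AI_TOKEN); the vendor and model values below are illustrative assumptions, so check the project README for the exact values your provider needs:

```shell
# Hypothetical example values -- substitute your provider's details.
export HOWTO_AI_VENDOR=openai      # assumed vendor name
export HOWTO_AI_MODEL=gpt-4o-mini  # assumed model id
export HOWTO_AI_TOKEN=your-api-key # API key (not required for Ollama, see v0.2.1)

howto curl example.org but print only the headers
```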

Support for local Ollama models coming soon.