CLI tool for working with Ollama

A small Python tool I put together to help me use models through Ollama on my laptop. Nothing fancy.

It uses the OpenAI-compatible Ollama API.
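The tool's own source isn't reproduced here, but a minimal sketch of a direct call against that endpoint might look like this (assuming the openai Python package is installed and Ollama is running on its default port, 11434; the model name and prompt are placeholders):

from openai import OpenAI

# Point the OpenAI client at the local Ollama server.
client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # required by the client, ignored by Ollama
)

response = client.chat.completions.create(
    model="llama2",
    messages=[{"role": "user", "content": "say hello"}],
)
print(response.choices[0].message.content)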

Usage

python3 ./llm.py -m "llama2" -p "say hello" -o "./llm_response_file.txt" -v

or

chmod +x llm.py && mv ./llm.py llm
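(Note: for ./llm to run directly like this, the first line of the script needs a Python shebang; if it's missing, add one:)

#!/usr/bin/env python3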

then

Prompt provided through the -p argument

./llm -m "llama2" -p "say hello" -o "./llm_response_file.txt" -v

Prompt read from stdin (e.g. piped from cat my_prompt_file.txt)

cat my_prompt_file.txt | ./llm -m "llama2" -o "./llm_response_file.txt" -v

Prompt read from a file passed via the -f argument

./llm -m "llama2" -f "./my_prompt_file.txt" -o "./llm_response_file.txt" -v

Args

  • -m Specify the model to use (required)
  • -p Prompt text (required unless the prompt is read from stdin or a file)
  • -f Text file to read the prompt from
  • -tmp Model temperature
  • -t Template text file (see example)
  • -s System message
  • -o Text file to save the response to
  • -v Verbose: print the response to stdout
  • --help Show this message
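Several of these can be combined in one call. For example (assuming -tmp and -s each take a value, as their descriptions suggest):

./llm -m "llama2" -p "say hello" -s "You are terse and helpful." -tmp 0.7 -o "./llm_response_file.txt" -v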
