
Releases: simonw/llm-replicate

0.3.1

18 Apr 17:13
cb81541

0.3

20 Jul 18:36
315d0a0
  • New command: llm replicate fetch-predictions, which fetches every prediction that has been run through Replicate (including predictions for models other than the language models queried with this tool) and stores them in a replicate_predictions table in the logs.db SQLite database; see the query example after this list. Documentation here. #11
  • The replicate-python library is no longer bundled with this package; it is now installed as a dependency instead. #10
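
As a quick sketch of how that table could be inspected (this assumes sqlite-utils is installed separately and that llm logs path prints the location of logs.db; the columns of replicate_predictions are not spelled out here):

llm replicate fetch-predictions
# Peek at the stored rows (illustrative query; table columns not documented above):
sqlite-utils "$(llm logs path)" "select * from replicate_predictions limit 5"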

0.2

18 Jul 18:59
64a91fc

Support for adding chat models using llm replicate add ... --chat. These models will then use the User: ...\nAssistant: prompt format (illustrated after the example below) and can be used for continued conversations.

This means the new Llama 2 model from Meta can be added like this:

llm replicate add a16z-infra/llama13b-v2-chat \
  --chat --alias llama2

Then:

llm -m llama2 "Ten great names for a pet pelican"
# output here, then to continue the conversation:
llm -c "Five more and make them more nautical"
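
For a model added with --chat, the plugin serializes the conversation into that User:/Assistant: format. Roughly, the second call above might send a prompt like the following (the exact serialization, including how the earlier response is included, is an assumption here):

User: Ten great names for a pet pelican
Assistant: <previous model response>
User: Five more and make them more nautical
Assistant: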

0.1

18 Jul 04:07
  • Ability to fetch a collection of models hosted on Replicate using llm replicate fetch-models, then run prompts against them. #1
  • Use llm replicate add joehoover/falcon-40b-instruct --alias falcon to add support for additional models, optionally with aliases (see the combined example after this list). #2
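
Taken together, a minimal end-to-end session might look like this (the prompt text is only an illustration):

llm replicate fetch-models
llm replicate add joehoover/falcon-40b-instruct --alias falcon
llm -m falcon "Three facts about pelicans"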