Releases: simonw/llm-replicate
0.3.1
0.3
- New command: `llm replicate fetch-predictions`, which fetches all predictions that have been run through Replicate (including for models other than language models queried using this tool) and stores them in a `replicate_predictions` table in the `logs.db` SQLite database. Documentation here. #11
- The `replicate-python` library is no longer bundled with this package; it is installed as a dependency instead. #10
0.2
- Support for adding chat models using `llm replicate add ... --chat`. These models will then use the `User: ...\nAssistant:` prompt format and can be used for continued conversations.
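To illustrate what that prompt format looks like, here is a rough sketch of how a conversation might be flattened into a single `User: ...\nAssistant:` string. The function name, the tuple-based message structure, and the exact newline joining are assumptions for illustration, not the plugin's actual implementation:

```python
def build_chat_prompt(exchanges, new_prompt):
    """Flatten a conversation into the User:/Assistant: prompt format.

    `exchanges` is a list of (user, assistant) tuples from earlier turns;
    the trailing "Assistant:" leaves room for the model's next reply.
    This structure is a hypothetical sketch of the --chat format.
    """
    parts = []
    for user_text, assistant_text in exchanges:
        parts.append(f"User: {user_text}\nAssistant: {assistant_text}")
    parts.append(f"User: {new_prompt}\nAssistant:")
    return "\n".join(parts)
```

A fresh conversation would produce just `User: <prompt>\nAssistant:`, while a continued one replays the earlier turns first.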
This means the new Llama 2 model from Meta can be added like this:
```
llm replicate add a16z-infra/llama13b-v2-chat \
  --chat --alias llama2
```
Then:
```
llm -m llama2 "Ten great names for a pet pelican"
# output here, then to continue the conversation:
llm -c "Five more and make them more nautical"
```