Releases · hex/llm-perplexity
0.6
What's Changed
- Add new models, drop old models, remove default_max_tokens by @simonw in #6
Full Changelog: 0.5...0.6
0.5
What's Changed
- Add mixtral-8x22b-instruct, llama-3-8b-instruct, llama-3-70b-instruct by @simonw in #3
Full Changelog: 0.4...0.5
0.4
- Updated system message handling
0.3
- Added default max tokens per model
- Added model options: temperature, top_p, top_k, presence_penalty, frequency_penalty
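With the `llm` CLI, per-request options like these are passed with the `-o` flag. A minimal sketch (the model name and option values are illustrative, not defaults from this plugin):

```shell
# Pass model options with llm's -o flag; values here are only examples
llm -m sonar-small-chat -o temperature 0.2 -o top_p 0.9 "prompt"
```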
0.2
- No changes from v0.1. Just a version number bump to solve some PyPI publishing issues.
0.1
- Initial release. Added support for:
llm -m sonar-small-chat "prompt"
llm -m sonar-small-online "prompt"
llm -m sonar-medium-chat "prompt"
llm -m sonar-medium-online "prompt"
llm -m codellama-70b-instruct "prompt"
llm -m mistral-7b-instruct "prompt"
llm -m mixtral-8x7b-instruct "prompt"
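Before running the commands above, the plugin has to be installed and an API key configured. A sketch assuming the standard `llm` plugin conventions (the install name is taken from the repository name, and the key alias is an assumption; check the project README for the exact name):

```shell
# Install the plugin into an existing llm installation (package name assumed from the repo)
llm install llm-perplexity
# Store the Perplexity API key (key alias assumed; `llm keys` lists the configured names)
llm keys set perplexity
```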