Releases: hex/llm-perplexity

0.6

08 May 07:33
@hex

What's Changed

  • Add new models, drop old models, remove default_max_tokens by @simonw in #6

Full Changelog: 0.5...0.6

0.5

19 Apr 07:29
@hex

What's Changed

  • Add mixtral-8x22b-instruct, llama-3-8b-instruct, llama-3-70b-instruct by @simonw in #3

New Contributors

  • @simonw made their first contribution in #3

Full Changelog: 0.4...0.5

0.4

08 Mar 09:53
@hex
  • Updated system message handling

0.3

08 Mar 09:31
@hex
  • Added a default max tokens value per model
  • Added model options: temperature, top_p, top_k, presence_penalty, frequency_penalty
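The options above are passed with the llm CLI's generic `-o name value` syntax. A sketch of such an invocation, assuming the plugin is installed, a Perplexity API key is configured, and using a model name from the 0.1 list purely for illustration:

```shell
# Set the sampling options added in 0.3 on a single prompt
llm -m mixtral-8x7b-instruct \
  -o temperature 0.7 \
  -o top_p 0.9 \
  -o frequency_penalty 0.5 \
  "Write a haiku about the sea"
```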

0.2

07 Mar 12:37
@hex
fed5c20
  • No changes from v0.1. Just a version number bump to solve some PyPI publishing issues.

0.1

07 Mar 11:53
@hex
  • Initial release. Added support for:

    llm -m sonar-small-chat "prompt"
    llm -m sonar-small-online "prompt"
    llm -m sonar-medium-chat "prompt"
    llm -m sonar-medium-online "prompt"
    llm -m codellama-70b-instruct "prompt"
    llm -m mistral-7b-instruct "prompt"
    llm -m mixtral-8x7b-instruct "prompt"