v0.8.0
⚠️ Notice
- Due to format changes, re-executing `tabby scheduler --now` is required to ensure that the Code Browser functions properly.
🚀 Features
- Introducing a preview release of the Code Browser, featuring visualization of the code snippets used for code completion during inference.
- Added a Windows CPU binary distribution.
- Added a Linux ROCm (AMD GPU) binary distribution.
🧰 Fixes and Improvements
- Fixed an issue with cached permanent redirection in certain browsers (e.g., Chrome) when the `--webserver` flag is disabled.
- Introduced the `TABBY_MODEL_CACHE_ROOT` environment variable to individually override the model cache directory.
- The `/v1beta/chat/completions` API endpoint is now compatible with OpenAI's chat completion API.
- Models from our official registry can now be referred to without the TabbyML prefix. Therefore, the model TabbyML/CodeLlama-7B can simply be referred to as CodeLlama-7B everywhere.
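Since `/v1beta/chat/completions` now follows OpenAI's chat completion schema, any OpenAI-style client can talk to it. Below is a minimal sketch using only the Python standard library; the host/port (`localhost:8080`) and the choice of `CodeLlama-7B` as the model are assumptions for illustration, so adjust them for your deployment.

```python
# Sketch: calling Tabby's OpenAI-compatible chat completion endpoint.
# Assumptions: Tabby is serving on localhost:8080 and CodeLlama-7B is loaded.
import json
from urllib import request

TABBY_URL = "http://localhost:8080/v1beta/chat/completions"  # assumed address

def build_chat_request(messages, model="CodeLlama-7B"):
    """Build an OpenAI-style chat completion payload.

    Note the model is referenced without the TabbyML/ prefix,
    as allowed starting with this release.
    """
    return {"model": model, "messages": messages}

def chat(messages, url=TABBY_URL):
    """POST the payload and return the assistant's reply text."""
    payload = json.dumps(build_chat_request(messages)).encode("utf-8")
    req = request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        body = json.loads(resp.read())
    # OpenAI-style response: first choice carries the assistant message.
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat([{"role": "user", "content": "Write hello world in Rust."}]))
```

Because the request and response shapes match OpenAI's, existing OpenAI SDKs can also be pointed at this endpoint by overriding their base URL.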
💫 New Contributors
- @cljoly made their first contribution in #1017
- @jasonharrison made their first contribution in #1021
- @cromefire made their first contribution in #1012
- @Blackclaws made their first contribution in #1028
- @ichDaheim made their first contribution in #1045
- @hduelme made their first contribution in #1095
- @concretevitamin made their first contribution in #1103
- @Lash-L made their first contribution in #1104
- @appetrosyan made their first contribution in #1251
- @anoldguy made their first contribution in #1244
- @fbagnol made their first contribution in #1243
- @laleph made their first contribution in #1278
Full Changelog: v0.7.0...v0.8.0