
Releases: ViperX7/Alpaca-Turbo

Beta 0.8

27 May 01:41

This is just a maintenance release

  • Adds support for loading the latest models; older models are no longer supported
  • You can now load multiple models and select which one to use in terminal mode
  • Binaries for Mac are not updated; if you want to use Alpaca Turbo on a Mac, please compile the binaries from here

Alpaca Turbo Release v0.7

27 Apr 02:49

A complete overhaul of Alpaca Turbo


Alpaca Turbo Release v0.6

13 Apr 22:32

Alpaca Turbo 0.6 is out

  • Added support for Vicuna
  • Added support for antiprompts (stop strings); see the sketch after these notes
  • Simple one-click launcher for Windows
  • Updated llama.cpp
  • Optimized settings for longer responses and fewer repetitions
  • Changed the default port from 5000 to 7887
  • Added batch size support

and many more bug fixes
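
An antiprompt is essentially a stop string: once it appears in the model's output, generation is cut off so the model doesn't start writing the user's next turn. Below is a minimal Python sketch of that idea; the function name, the example stop string, and the fake token stream are illustrative assumptions, not Alpaca Turbo's or llama.cpp's actual code.

```python
# Minimal sketch of antiprompt (stop-string) handling; illustrative only,
# not Alpaca Turbo's actual implementation.
def apply_antiprompt(stream, antiprompt="### Human:"):
    """Accumulate streamed text chunks and stop once the antiprompt appears."""
    output = ""
    for chunk in stream:
        output += chunk
        if antiprompt in output:
            # Trim everything from the antiprompt onward and stop generating.
            return output.split(antiprompt, 1)[0]
    return output

# Example with a fake token stream standing in for model output:
print(apply_antiprompt(["Sure, ", "here you go.", "\n### Human:", " ignore this"]))
# -> "Sure, here you go.\n"
```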

Alpaca Turbo Release v0.5

03 Apr 17:03

This release adds support for M1/M2 Mac devices
and makes the changes needed for Macs to run Alpaca-Turbo.

  • Windows users: use Miniconda to install
  • Linux users: use conda or Docker to install
  • Mac (M1/M2) users: use conda to install

Alpaca Turbo Release v0.4

03 Apr 09:25
479a78d

This release makes installing Alpaca Turbo on Windows easier.
The last release (0.3.2) refuses to work under Windows + Docker for some reason,
so for now you will need to use pipenv to install Alpaca-Turbo on Windows.

Full Changelog: beta_v0.3.2...beta_v0.4

Alpaca Turbo Release v0.2

02 Apr 15:54
e16bbb6

UI improvements and bug fixes

  • Faster and smarter generation: We have optimized the performance of our engine and UI, making them faster and more responsive.
  • More models to choose from: We have added support for more models, including 7B, 13B and 30B, which are capable of providing more human-like and complex conversations. You can also fine-tune the models to suit your needs and preferences.
  • More prompts to inspire you: We have added a new feature that allows you to choose from a variety of prompts, such as questions, topics, genres, styles, and more. You can also create your own custom prompts and save them for later use. (from advanced mode, using the UI)
  • More options to customize your output: We have added settings such as length, temperature, repetition penalty, top-k, top-p, and more (see the sampling sketch after these notes). You can also edit your output directly in the UI, and copy or share it with others.
  • Better organization: Our UI now offers a cleaner and more organized interface, making it easier to navigate and find what you need.
  • Users can now view word count and generation time for each output, allowing for a more transparent and streamlined text generation process.

and minor improvements
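
Temperature, top-k, and top-p are the usual sampling controls for language models. The sketch below shows, in plain Python with NumPy, how they typically interact when picking the next token; it is a generic illustration under common definitions, not Alpaca Turbo's or llama.cpp's actual code, and repetition penalty is omitted for brevity.

```python
# Generic next-token sampling with temperature, top-k, and top-p (nucleus)
# filtering. Illustrative only; not Alpaca Turbo's actual implementation.
import numpy as np

def sample_next_token(logits, temperature=0.8, top_k=40, top_p=0.95):
    """Pick one token id from raw logits using the common sampling controls."""
    logits = np.asarray(logits, dtype=np.float64).copy()

    # Temperature: lower values sharpen the distribution, higher values flatten it.
    logits /= max(temperature, 1e-8)

    # Top-k: keep only the k most likely tokens.
    k = min(top_k, logits.size)
    if k > 0:
        cutoff = np.sort(logits)[-k]
        logits[logits < cutoff] = -np.inf

    # Softmax over the surviving candidates.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Top-p (nucleus): keep the smallest set of tokens whose cumulative
    # probability reaches top_p, then renormalize.
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    keep = order[: np.searchsorted(cumulative, top_p) + 1]
    nucleus = np.zeros_like(probs)
    nucleus[keep] = probs[keep]
    nucleus /= nucleus.sum()

    return int(np.random.choice(len(nucleus), p=nucleus))

# Example: fake logits for a 6-token vocabulary.
print(sample_next_token([2.0, 1.5, 0.3, -1.0, -2.0, 0.1], top_k=3, top_p=0.9))
```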

Alpaca Turbo Release v0.3

02 Apr 17:23
Pre-release

This release moves from alpaca.cpp to llama.cpp;
as a result, all models that llama.cpp supports are now supported by Alpaca-Turbo.
Added a Pipfile so that it's easier to set up without Docker.

You might have to convert your old models to the newer format; check [here](https://github.com/ggerganov/llama.cpp).

Also, new prompts have been added.

THIS VERSION HAS A MAJOR PROBLEM WITH WINDOWS

USE ONLY WITH LINUX

Alpaca Turbo Release

31 Mar 16:01
8ad1c59

Thanks to all those who tried Alpaca Turbo's alpha phase! We're thrilled to announce that we've made it to beta and can't wait to bring you an even better chat bot experience.

  • Numerous performance improvements for a faster and more efficient experience
  • Even more usability improvements for easier and more intuitive chats
  • All-new UI with a modern and user-friendly design

Get ready for smarter conversations and more fun with Alpaca Turbo!