
v2.11.0

@mudler mudler released this 26 Mar 17:18
· 241 commits to master since this release
1395e50

Introducing LocalAI v2.11.0: All-in-One Images!

Hey everyone! πŸŽ‰ I'm super excited to share what we've been working on at LocalAI - the launch of v2.11.0. This isn't just any update; it's a massive leap forward, making LocalAI easier to use, faster, and more accessible for everyone.

🌠 The Spotlight: All-in-One Images, OpenAI in a box

Imagine having a magic box that, once opened, gives you everything you need to get your AI project off the ground with generative AI. A full clone of OpenAI in a box. That's exactly what our AIO images are! Designed for both CPU and GPU environments, these images come pre-packed with a full suite of models and backends, ready to go right out of the box.

Whether you're using Nvidia, AMD, or Intel, we've got an optimized image for you. If you're running on CPU only, you can enjoy even smaller and lighter images.

To start LocalAI, pre-configured with function calling, an LLM, TTS, speech-to-text, and image generation, just run:

docker run -p 8080:8080 --name local-ai -ti localai/localai:latest-aio-cpu

## Do you have an Nvidia GPU? Use one of these instead
## CUDA 11
# docker run -p 8080:8080 --gpus all --name local-ai -ti localai/localai:latest-aio-gpu-cuda-11
## CUDA 12
# docker run -p 8080:8080 --gpus all --name local-ai -ti localai/localai:latest-aio-gpu-cuda-12
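Once the container is up, the API speaks the OpenAI wire format, so you can talk to it with plain curl. As a sketch (assuming the defaults above: the server on localhost:8080, and `gpt-4` as the alias the AIO images map to their bundled LLM — check the AIO docs if your image uses a different alias):

```shell
# Assumes the AIO container started above is listening on localhost:8080.
# "gpt-4" is the model alias the AIO images pre-configure for the bundled LLM.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Say hello!"}]
  }'
```

Because the endpoint is OpenAI-compatible, existing OpenAI client libraries should work too — just point their base URL at your LocalAI instance.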

❀️ Why You're Going to Love AIO Images:

  • Ease of Use: Say goodbye to the setup blues. With AIO images, everything is configured upfront, so you can dive straight into the fun part - hacking!
  • Flexibility: CPU, Nvidia, AMD, Intel? We support them all. These images are made to adapt to your setup, not the other way around.
  • Speed: Spend less time configuring and more time innovating. Our AIO images are all about getting you across the starting line as fast as possible.

🌈 Jumping In Is a Breeze:

Getting started with AIO images is as simple as pulling one from Docker Hub or Quay and running it. We take care of the rest, downloading all the necessary models for you. For all the details, including how to customize your setup with environment variables, our updated documentation has you covered, and a dedicated page covers the AIO images in depth.

🎈 Vector Store

Thanks to a great contribution from @richiejp, LocalAI now has a new backend type, "vector stores", which lets you use LocalAI as an in-memory vector DB (#1792). You can learn more about it in the documentation!
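For intuition, the core of what an in-memory vector store does — keep (vector, value) pairs and return the stored values whose vectors lie closest to a query vector — can be sketched in a few lines of Python. This illustrates the concept only, not LocalAI's actual stores API (see #1792 for the real endpoints); the class and method names are hypothetical:

```python
import math

class TinyVectorStore:
    """Minimal in-memory vector store: cosine-similarity lookup over (vector, value) pairs."""

    def __init__(self):
        self.entries = []  # list of (vector, value) pairs

    def set(self, vector, value):
        # Store a vector alongside its payload.
        self.entries.append((vector, value))

    def find(self, query, top_k=1):
        # Rank stored entries by cosine similarity to the query vector.
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
            return dot / norm

        ranked = sorted(self.entries, key=lambda e: cosine(e[0], query), reverse=True)
        return [value for _, value in ranked[:top_k]]

store = TinyVectorStore()
store.set([1.0, 0.0], "cats")
store.set([0.0, 1.0], "dogs")
print(store.find([0.9, 0.1]))  # prints ['cats']: its vector points closest to the query
```

A real vector DB adds persistence, batching, and an index (so lookups aren't a linear scan), but the set/find contract is the same.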

πŸ› Bug fixes

This release contains major bug fixes to the watchdog component, and a fix for a regression introduced in v2.10.x that prevented --f16, --threads, and --context-size from being applied as the model's defaults.

πŸŽ‰ New Model defaults for llama.cpp

Model defaults have changed: LocalAI now automatically offloads the maximum number of GPU layers when a GPU is available, and sets saner model defaults to enhance the LLM's output.
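Previously you often had to spell these settings out per model in the YAML configuration. A typical hand-written config looked roughly like the sketch below (field names as in the LocalAI model-configuration docs; the model name and values are illustrative assumptions):

```yaml
# Hypothetical model config showing the settings that now get saner defaults.
name: my-llm
parameters:
  model: my-model.gguf
context_size: 4096
threads: 8
f16: true
gpu_layers: 90   # with v2.11.0, the maximum is offloaded automatically when a GPU is present
```

With the new defaults, a minimal config (or none at all, for pre-configured models) should get you sensible behavior out of the box.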

🧠 New pre-configured models

You can now run llava-1.6-vicuna, llava-1.6-mistral, and hermes-2-pro-mistral; see Run other models for a list of all the pre-configured models available in the release.

πŸ“£ Spread the word!

First off, a massive thank you (again!) to each and every one of you who've chipped in to squash bugs and suggest cool new features for LocalAI. Your help, kind words, and brilliant ideas are truly appreciated - more than words can say!

And to those of you who've been heroes, giving up your own time to help fellow users on Discord and in our repo, you're absolutely amazing. We couldn't have asked for a better community.

Just so you know, LocalAI doesn't have the luxury of big corporate sponsors behind it. It's all us, folks. So, if you've found value in what we're building together and want to keep the momentum going, consider showing your support. A little shoutout on your favorite social platforms using @LocalAI_OSS and @mudler_it or joining our sponsors can make a big difference.

Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy

Every bit of support, every mention, and every star adds up and helps us keep this ship sailing. Let's keep making LocalAI awesome together!

Thanks a ton, and here's to more exciting times ahead with LocalAI!

🎁 What's More in v2.11.0?

Bug fixes πŸ›

  • fix(config): pass by config options, respect defaults by @mudler in #1878
  • fix(watchdog): use ShutdownModel instead of StopModel by @mudler in #1882
  • NVIDIA GPU detection support for WSL2 environments by @enricoros in #1891
  • Fix NVIDIA VRAM detection on WSL2 environments by @enricoros in #1894

Exciting New Features πŸŽ‰

  • feat(functions/aio): all-in-one images, function template enhancements by @mudler in #1862
  • feat(aio): entrypoint, update workflows by @mudler in #1872
  • feat(aio): add tests, update model definitions by @mudler in #1880
  • feat(stores): Vector store backend by @richiejp in #1795
  • ci(aio): publish hipblas and Intel GPU images by @mudler in #1883
  • ci(aio): add latest tag images by @mudler in #1884

🧠 Models

  • feat(models): add phi-2-chat, llava-1.6, bakllava, cerbero by @mudler in #1879

πŸ“– Documentation and examples

  • ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1856
  • docs(mac): improve documentation for mac build by @tauven in #1873
  • docs(aio): Add All-in-One images docs by @mudler in #1887
  • fix(aio): make image-gen for GPU functional, update docs by @mudler in #1895

Full Changelog: v2.10.1...v2.11.0