v3.0.0 (#5689)
Announced by mudler in Announcements
Great! With this new release, the realtime voice transcription mode in https://github.com/richiejp/VoxInput will be functional, because it uses the realtime API. Also, being able to download backends is a big step forward for UX IMO!
🚀 LocalAI 3.0 – A New Era Begins
Say hello to LocalAI 3.0 — our most ambitious release yet!
We’ve taken huge strides toward making LocalAI not just local, but limitless. Whether you're building LLM-powered agents, experimenting with audio pipelines, or deploying multimodal backends at scale — this release is for you.
Let’s walk you through what’s new. (And yes, there’s a lot to love.)
TL;DR – What’s New in LocalAI 3.0.0 🎉
- Backend Gallery: install backends on demand as standard OCI images, with no more -extras images
- Dynamic VRAM estimation: GPU layers are offloaded automatically based on your hardware
- More than 50 new models added to the gallery
- LocalAGI and LocalRecall complete the privacy-first local stack
👉 Dive into the full changelog and docs below to explore more!
🧩 Introducing the Backend Gallery — Plug, Play, Power Up
No more hunting for dependencies or custom hacks.
With the new Backend Gallery, you can now install additional backends on demand, directly from LocalAI, instead of baking them into the image.
Backends are standard OCI images — portable, composable, and totally DIY-friendly. Goodbye to "extras images" — hello to full backend modularity, even with Python-based dependencies.
📖 Explore the Backend Gallery Docs
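For illustration, installing a backend from the gallery might look something like the sketch below; the exact subcommand names and the backend identifier used here are assumptions, so check the Backend Gallery docs above for the canonical syntax.

```bash
# Illustrative sketch only: the subcommand names and the backend name ("vllm")
# are assumptions; refer to the Backend Gallery documentation for exact usage.
local-ai backends list            # browse backends published in the gallery
local-ai backends install vllm    # pull the backend as a standard OCI image
```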
From this release onward we will stop pushing -extras images containing Python backends. You can now use the standard images and simply pick the one suited to your GPU; any additional backends can be installed via the Backend Gallery. Some examples are listed below. Note that the CI is still publishing the images, so they won't be available until the jobs are processed, and the installation scripts will be updated right after the images are publicly available.
CPU only image:
NVIDIA GPU Images:
AMD GPU Images (ROCm):
Intel GPU Images (oneAPI):
Vulkan GPU Images:
AIO Images (pre-downloaded models):
For more information about the AIO images and pre-downloaded models, see Container Documentation.
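As a rough sketch of how the standard images are meant to be used, the command below runs the CPU-only image and exposes the API on port 8080; the tag shown is illustrative, and the exact tags for the NVIDIA, ROCm, oneAPI, Vulkan, and AIO variants are listed in the Container Documentation.

```bash
# Illustrative only: pick the image tag that matches your hardware from the
# Container Documentation (CPU, CUDA, ROCm, oneAPI, Vulkan, or AIO variants).
docker run -d --name local-ai -p 8080:8080 localai/localai:latest
```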
🧠 Smarter Reasoning, Smoother Chat
🧠 Model Power-Up: VRAM Savvy + Multimodal Brains
Dynamic VRAM Estimation: LocalAI now adapts and offloads layers depending on your GPU’s capabilities. Optimal performance, no guesswork.
Llama.cpp upgrades also include:
🧪 New Models!
More than 50 new models joined the gallery, including:
🐞 Bugfixes & Polish
The Complete Local Stack for Privacy-First AI
With LocalAGI rejoining LocalAI alongside LocalRecall, our ecosystem provides a complete, open-source stack for private, secure, and intelligent AI operations:
LocalAI
The free, Open Source OpenAI alternative. Acts as a drop-in replacement REST API compatible with OpenAI specifications for local AI inferencing. No GPU required. (A minimal usage sketch follows this list.)
Link: https://github.com/mudler/LocalAI
LocalAGI
A powerful Local AI agent management platform. Serves as a drop-in replacement for OpenAI's Responses API, supercharged with advanced agentic capabilities and a no-code UI.
Link: https://github.com/mudler/LocalAGI
LocalRecall
A RESTful API and knowledge base management system providing persistent memory and storage capabilities for AI agents. Designed to work alongside LocalAI and LocalAGI.
Link: https://github.com/mudler/LocalRecall
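Because LocalAI exposes an OpenAI-compatible REST API, existing OpenAI clients and tools can be pointed at a local instance just by swapping the base URL. A minimal sketch, assuming a server on the default port 8080 and a placeholder model name standing in for whatever you have installed from the gallery:

```bash
# Minimal sketch: "your-model" is a placeholder for a locally installed model.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "your-model",
        "messages": [{"role": "user", "content": "Hello from LocalAI!"}]
      }'
```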
Join the Movement! ❤️
A massive THANK YOU to our incredible community and our sponsors! LocalAI has over 33,300 stars, and LocalAGI has already rocketed past 750+ stars!
As a reminder, LocalAI is real FOSS (Free and Open Source Software), and its sibling projects are community-driven, not backed by VCs or a company. We rely on contributors donating their spare time and on our sponsors providing the hardware! If you love open-source, privacy-first AI, please consider starring the repos, contributing code, reporting bugs, or spreading the word!
👉 Check out the reborn LocalAGI v2 today: https://github.com/mudler/LocalAGI
LocalAI 3.0.0 is here. What will you build next?
Full changelog 👇
What's Changed
Breaking Changes 🛠
- chore(backends): move bark-cpp to the backend gallery by @mudler in #5682
Bug fixes 🐛
Exciting New Features 🎉
🧠 Models
📖 Documentation and examples
👒 Dependencies
- chore: ⬆️ Update ggml-org/whisper.cpp to e41bc5c61ae66af6be2bd7011769bb821a83e8ae by @localai-bot in #5357
- chore: ⬆️ Update ggml-org/llama.cpp to de4c07f93783a1a96456a44dc16b9db538ee1618 by @localai-bot in #5358
- chore: ⬆️ Update ggml-org/whisper.cpp to f89056057511a1657af90bb28ef3f21e5b1f33cd by @localai-bot in #5364
- chore: ⬆️ Update ggml-org/whisper.cpp to f389d7e3e56bbbfec49fd333551927a0fcbb7213 by @localai-bot in #5367
- chore: ⬆️ Update ggml-org/whisper.cpp to 20a20decd94badfd519a07ea91f0bba8b8fc4dea by @localai-bot in #5374
- chore: ⬆️ Update ggml-org/whisper.cpp to d1f114da61b1ae1e70b03104fad42c9dd666feeb by @localai-bot in #5381
- chore: ⬆️ Update ggml-org/llama.cpp to e3a7cf6c5bf6a0a24217f88607b06e4405a2b5d9 by @localai-bot in #5384
- chore: ⬆️ Update ggml-org/llama.cpp to 6a2bc8bfb7cd502e5ebc72e36c97a6f848c21c2c by @localai-bot in #5390
- chore: ⬆️ Update ggml-org/whisper.cpp to 62dc8f7d7b72ca8e75c57cd6a100712c631fa5d5 by @localai-bot in #5398
- chore: ⬆️ Update ggml-org/llama.cpp to b7a17463ec190aeee7b9077c606c910fb4688b84 by @localai-bot in #5399
- chore: ⬆️ Update ggml-org/llama.cpp to 8e186ef0e764c7a620e402d1f76ebad60bf31c49 by @localai-bot in #5423
- chore: ⬆️ Update ggml-org/whisper.cpp to bd1cb0c8e3a04baa411dc12c1325b6a9f12ee7f4 by @localai-bot in #5424
- chore: ⬆️ Update ggml-org/whisper.cpp to 78b31ca7824500e429ba026c1a9b48e0b41c50cb by @localai-bot in #5439
- chore: ⬆️ Update ggml-org/llama.cpp to 8a1d206f1d2b4e45918b589f3165b4be232f7ba8 by @localai-bot in #5440
- chore: ⬆️ Update ggml-org/whisper.cpp to 13d92d08ae26031545921243256aaaf0ee057943 by @localai-bot in #5449
- chore: ⬆️ Update ggml-org/llama.cpp to d13d0f6135803822ec1cd7e3efb49360b88a1bdf by @localai-bot in #5448
- chore: ⬆️ Update ggml-org/whisper.cpp to ea9f206f18d86c4eb357db9fdc52e4d9dc24435e by @localai-bot in #5464
- chore: ⬆️ Update ggml-org/llama.cpp to a26c4cc11ec7c6574e3691e90ecdbd67deeea35b by @localai-bot in #5500
- chore: ⬆️ Update ggml-org/llama.cpp to a3c30846e410c91c11d7bf80978795a03bb03dee by @localai-bot in #5509
- chore: ⬆️ Update ggml-org/whisper.cpp to 0ed00d9d30e8c984936ff9ed9a4fcd475d6d82e5 by @localai-bot in #5510
- chore: ⬆️ Update ggml-org/llama.cpp to d98f2a35fcf4a8d3e660ad48cd19e2a1f3d5b2ef by @localai-bot in #5514
- chore: ⬆️ Update ggml-org/whisper.cpp to 1f5fdbecb411a61b8576242e5170c5ecef24b05a by @localai-bot in #5515
- chore: ⬆️ Update ggml-org/whisper.cpp to e5e900dd00747f747143ad30a697c8f21ddcd59e by @localai-bot in #5522
- chore: ⬆️ Update ggml-org/whisper.cpp to 98dfe8dc264b7d0d1daccfff9a9c043bcc2ece4b by @localai-bot in #5542
- chore: ⬆️ Update ggml-org/whisper.cpp to 7fd6fa809749078aa00edf945e959c898f2bd1af by @localai-bot in #5556
- chore: ⬆️ Update ggml-org/whisper.cpp to e05af2457b7b4134ee626dc044294a19b096e62f by @localai-bot in #5569
- chore: ⬆️ Update ggml-org/llama.cpp to 7e00e60ef86645a01fda738fef85b74afa016a34 by @localai-bot in #5574
- chore: ⬆️ Update ggml-org/whisper.cpp to 82f461eaa4e6a1ba29fc0dbdaa415a9934ee8a1d by @localai-bot in #5575
- chore: ⬆️ Update ggml-org/llama.cpp to 0d3984424f2973c49c4bcabe4cc0153b4f90c601 by @localai-bot in #5585
- chore: ⬆️ Update ggml-org/whisper.cpp to 799eacdde40b3c562cfce1508da1354b90567f8f by @localai-bot in #5586
- chore: ⬆️ Update ggml-org/llama.cpp to 1caae7fc6c77551cb1066515e0f414713eebb367 by @localai-bot in #5593
- chore: ⬆️ Update ggml-org/whisper.cpp to b175baa665bc35f97a2ca774174f07dfffb84e19 by @localai-bot in #5597
- chore: ⬆️ Update ggml-org/llama.cpp to 745aa5319b9930068aff5e87cf5e9eef7227339b by @localai-bot in #5598
- chore: ⬆️ Update ggml-org/llama.cpp to 5787b5da57e54dba760c2deeac1edf892e8fc450 by @localai-bot in #5601
- chore: ⬆️ Update ggml-org/llama.cpp to 247e5c6e447707bb4539bdf1913d206088a8fc69 by @localai-bot in #5605
- chore: ⬆️ Update ggml-org/whisper.cpp to d78f08142381c1460604713e2f2ddf3331c7d816 by @localai-bot in #5619
- chore: ⬆️ Update ggml-org/llama.cpp to 3678b838bb71eaccbaeb479ff38c2e12bfd2f960 by @localai-bot in #5620
- chore: ⬆️ Update ggml-org/whisper.cpp to 2679bec6e09231c6fd59715fcba3eebc9e2f6076 by @localai-bot in #5625
- chore: ⬆️ Update ggml-org/whisper.cpp to ebbc874e85b518f963a87612f6d79f5c71a55e84 by @localai-bot in #5635
- chore: ⬆️ Update ggml-org/llama.cpp to ed52f3668e633423054a4eab61bb7efee47025ab by @localai-bot in #5636
- chore: ⬆️ Update ggml-org/whisper.cpp to 705db0f728310c32bc96f4e355e2b18076932f75 by @localai-bot in #5643
- chore: ⬆️ Update ggml-org/llama.cpp to 3cb203c89f60483e349f841684173446ed23c28f by @localai-bot in #5644
- chore: ⬆️ Update ggml-org/llama.cpp to 30e5b01de2a0bcddc7c063c8ef0802703a958417 by @localai-bot in #5659
- chore: ⬆️ Update ggml-org/whisper.cpp to 2a4d6db7d90899aff3d58d70996916968e4e0d27 by @localai-bot in #5661
- chore: ⬆️ Update ggml-org/whisper.cpp to f3ff80ea8da044e5b8833e7ba54ee174504c518d by @localai-bot in #5677
- chore: ⬆️ Update ggml-org/llama.cpp to 860a9e4eeff3eb2e7bd1cc38f65787cc6c8177af by @localai-bot in #5678
- chore: ⬆️ Update ggml-org/llama.cpp to 8d947136546773f6410756f37fcc5d3e65b8135d by @localai-bot in #5685
- chore: ⬆️ Update ggml-org/whisper.cpp to ecb8f3c2b4e282d5ef416516bcbfb92821f06bf6 by @localai-bot in #5686
Other Changes
New Contributors
Full Changelog: v2.29.0...v3.0.0-alpha1