chore(deps): update container image docker.io/localai/localai to v2.12.1 by renovate (#20490)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) |
minor | `v2.11.0-cublas-cuda11-ffmpeg-core` ->
`v2.12.1-cublas-cuda11-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) |
minor | `v2.11.0-cublas-cuda11-core` -> `v2.12.1-cublas-cuda11-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) |
minor | `v2.11.0-cublas-cuda12-ffmpeg-core` ->
`v2.12.1-cublas-cuda12-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) |
minor | `v2.11.0-cublas-cuda12-core` -> `v2.12.1-cublas-cuda12-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) |
minor | `v2.11.0-ffmpeg-core` -> `v2.12.1-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) |
minor | `v2.11.0` -> `v2.12.1` |
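
The six tags in the table all follow one composition scheme: a base version, an optional cuBLAS/CUDA variant, an optional ffmpeg variant, and a `-core` suffix. A small illustrative sketch of that scheme — the suffix list is read off the table above, not an official enumeration:

```python
# Rebuild the six image references from the update table above.
# The suffix list is inferred from the listed tags, not from an official spec.
base = "v2.12.1"
suffixes = [
    "",                            # plain CPU image
    "-ffmpeg-core",                # CPU + ffmpeg
    "-cublas-cuda11-core",         # CUDA 11 (cuBLAS)
    "-cublas-cuda11-ffmpeg-core",  # CUDA 11 + ffmpeg
    "-cublas-cuda12-core",         # CUDA 12 (cuBLAS)
    "-cublas-cuda12-ffmpeg-core",  # CUDA 12 + ffmpeg
]
tags = [f"docker.io/localai/localai:{base}{s}" for s in suffixes]
print("\n".join(tags))
```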

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency Dashboard for more information.

---

### Release Notes

<details>
<summary>mudler/LocalAI (docker.io/localai/localai)</summary>

###
[`v2.12.1`](https://togithub.com/mudler/LocalAI/releases/tag/v2.12.1)

[Compare
Source](https://togithub.com/mudler/LocalAI/compare/v2.12.0...v2.12.1)

I'm happy to announce the v2.12.1 LocalAI release is out!

##### 🌠  Landing page and Swagger

Ever wondered what to do once LocalAI is up and running? Integration
with a simple web interface has begun, and you now see a landing page
when you hit the LocalAI front page:

![Screenshot from 2024-04-07
14-43-26](https://togithub.com/mudler/LocalAI/assets/2420543/e7aea8de-4385-45ae-b52e-db8154495493)

You can also now enjoy Swagger to try out the API calls directly:


![swagger](https://togithub.com/mudler/LocalAI/assets/2420543/6405ab11-2908-45ff-b635-38e4456251d6)
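
The Swagger UI documents LocalAI's OpenAI-compatible endpoints. As a sketch of what a `POST /v1/chat/completions` request body looks like (the model name here is a placeholder, not a default shipped by this PR):

```python
import json

# Example request body for LocalAI's OpenAI-compatible chat endpoint.
# "hermes-2-pro-mistral" is a placeholder model name; substitute one
# that your LocalAI instance actually serves.
payload = {
    "model": "hermes-2-pro-mistral",
    "messages": [{"role": "user", "content": "Say hello."}],
    "temperature": 0.7,
}
body = json.dumps(payload)
print(body)
```

This body can then be sent with e.g. `curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d @body.json`, assuming LocalAI's default port 8080.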

##### 🌈 AIO images changes

The default model for CPU images is now
https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF,
pre-configured for functions and tools API support!
If you own an Intel GPU, the Intel profile for AIO images is now
available too!

##### 🚀  OpenVINO and transformers enhancements

OpenVINO is now supported, and the transformers backend gained token
streaming support, thanks to [@fakezeta](https://togithub.com/fakezeta)!

To try OpenVINO, you can use the example available in the documentation:
https://localai.io/features/text-generation/#examples

##### 🎈 Lots of small improvements behind the scenes!

Thanks to our outstanding community, we have enhanced several areas:

- The build time of LocalAI was sped up significantly, thanks to
[@cryptk](https://togithub.com/cryptk) for the efforts in enhancing the
build system
- [@thiner](https://togithub.com/thiner) worked hard to add Vision
support to AutoGPTQ
- ... and much more! See below for the full list, and be sure to star
LocalAI and give it a try!

##### 📣 Spread the word!

First off, a massive thank you (again!) to each and every one of you
who've chipped in to squash bugs and suggest cool new features for
LocalAI. Your help, kind words, and brilliant ideas are truly
appreciated - more than words can say!

And to those of you who've been heroes, giving up your own time to help
out fellow users on Discord and in our repo, you're absolutely amazing.
We couldn't have asked for a better community.

Just so you know, LocalAI doesn't have the luxury of big corporate
sponsors behind it. It's all us, folks. So, if you've found value in
what we're building together and want to keep the momentum going,
consider showing your support. A little shoutout on your favorite social
platforms using @LocalAI_OSS and @mudler_it or joining our sponsors can
make a big difference.

Also, if you haven't yet joined our Discord, come on over! Here's the
link: https://discord.gg/uJAeKSAGDy

Every bit of support, every mention, and every star adds up and helps us
keep this ship sailing. Let's keep making LocalAI awesome together!

Thanks a ton, and here's to more exciting times ahead with LocalAI!

##### What's Changed

##### Bug fixes 🐛

- fix: downgrade torch by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#1902
- fix(aio): correctly detect intel systems by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#1931
- fix(swagger): do not specify a host by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#1930
- fix(tools): correctly render tools response in templates by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#1932
- fix(grammar): respect JSONmode and grammar from user input by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#1935
- fix(hermes-2-pro-mistral): add stopword for toolcall by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#1939
- fix(functions): respect when selected from string by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#1940
- fix: use exec in entrypoint scripts to fix signal handling by [@cryptk](https://togithub.com/cryptk) in mudler/LocalAI#1943
- fix(hermes-2-pro-mistral): correct stopwords by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#1947
- fix(welcome): stable model list by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#1949
- fix(ci): manually tag latest images by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#1948
- fix(seed): generate random seed per-request if -1 is set by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#1952
- fix regression [#1971](https://togithub.com/mudler/LocalAI/issues/1971) by [@fakezeta](https://togithub.com/fakezeta) in mudler/LocalAI#1972

##### Exciting New Features 🎉

- feat(aio): add intel profile by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#1901
- Enhance autogptq backend to support VL models by [@thiner](https://togithub.com/thiner) in mudler/LocalAI#1860
- feat(assistant): Assistant and AssistantFiles api by [@christ66](https://togithub.com/christ66) in mudler/LocalAI#1803
- feat: Openvino runtime for transformer backend and streaming support for Openvino and CUDA by [@fakezeta](https://togithub.com/fakezeta) in mudler/LocalAI#1892
- feat: Token Stream support for Transformer, fix: missing package for OpenVINO by [@fakezeta](https://togithub.com/fakezeta) in mudler/LocalAI#1908
- feat(welcome): add simple welcome page by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#1912
- fix(build): better CI logging and correct some build failure modes in Makefile by [@cryptk](https://togithub.com/cryptk) in mudler/LocalAI#1899
- feat(webui): add partials, show backends associated to models by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#1922
- feat(swagger): Add swagger API doc by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#1926
- feat(build): adjust number of parallel make jobs by [@cryptk](https://togithub.com/cryptk) in mudler/LocalAI#1915
- feat(swagger): update by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#1929
- feat: first pass at improving logging by [@cryptk](https://togithub.com/cryptk) in mudler/LocalAI#1956
- fix(llama.cpp): set better defaults for llama.cpp by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#1961

##### 📖 Documentation and examples

- docs(aio-usage): update docs to show examples by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#1921

##### 👒 Dependencies

- ⬆️ Update docs version mudler/LocalAI by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#1903
- ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#1904
- ⬆️ Update M0Rf30/go-tiny-dream by [@M0Rf30](https://togithub.com/M0Rf30) in mudler/LocalAI#1911
- ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#1913
- ⬆️ Update ggerganov/whisper.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#1914
- ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#1923
- ⬆️ Update ggerganov/whisper.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#1924
- ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#1928
- ⬆️ Update ggerganov/whisper.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#1933
- ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#1934
- ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#1937
- ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#1941
- ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#1953
- ⬆️ Update ggerganov/whisper.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#1958
- ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#1959
- ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#1964

##### Other Changes

- ⬆️ Update ggerganov/whisper.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#1927
- ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#1960
- fix(hermes-2-pro-mistral): correct dashes in template to suppress newlines by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#1966
- ⬆️ Update ggerganov/whisper.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#1969
- ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#1970
- ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#1973

##### New Contributors

- [@thiner](https://togithub.com/thiner) made their first contribution in mudler/LocalAI#1860

**Full Changelog**:
mudler/LocalAI@v2.11.0...v2.12.1

###
[`v2.12.0`](https://togithub.com/mudler/LocalAI/releases/tag/v2.12.0)

[Compare
Source](https://togithub.com/mudler/LocalAI/compare/v2.11.0...v2.12.0)

The release notes for v2.12.0 are identical to those for v2.12.1 above.

**Full Changelog**:
mudler/LocalAI@v2.11.0...v2.12.0

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Enabled.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about these
updates again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR has been generated by [Renovate
Bot](https://togithub.com/renovatebot/renovate).

truecharts-admin committed Apr 9, 2024
1 parent edebb25, commit 90676ee

Showing 2 changed files with 9 additions and 9 deletions.
6 changes: 3 additions & 3 deletions — charts/stable/local-ai/Chart.yaml

```diff
@@ -7,7 +7,7 @@ annotations:
   truecharts.org/min_helm_version: "3.11"
   truecharts.org/train: stable
 apiVersion: v2
-appVersion: 2.11.0
+appVersion: 2.12.1
 dependencies:
   - name: common
     version: 20.3.3
@@ -23,7 +23,7 @@ icon: https://truecharts.org/img/hotlink-ok/chart-icons/local-ai.png
 keywords:
   - local-ai
   - ai
-kubeVersion: ">=1.24.0-0"
+kubeVersion: '>=1.24.0-0'
 maintainers:
   - name: TrueCharts
     email: info@truecharts.org
@@ -34,4 +34,4 @@ sources:
   - https://github.com/truecharts/charts/tree/master/charts/stable/local-ai
   - https://hub.docker.com/r/localai/localai
 type: application
-version: 9.26.1
+version: 9.32.0
```
12 changes: 6 additions & 6 deletions — charts/stable/local-ai/values.yaml

```diff
@@ -1,27 +1,27 @@
 image:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.11.0@sha256:30e75801ab3ef804022490fd41f89258d693981d0e4da201b7d051ab0466fb5c
+  tag: v2.12.1@sha256:70a1323f061340002e7eba535158590016e67a09b8224fbc1e488646bdbf78d8
 ffmpegImage:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.11.0-ffmpeg-core@sha256:177bd1ddabdc58dc7748eed44aa429688462de2a0f14b29c2b427b3aa9687a97
+  tag: v2.12.1-ffmpeg-core@sha256:118256a91b0054b6574a208ac2201ab79f7697840381ce712edc5874c974e143
 cublasCuda12Image:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.11.0-cublas-cuda12-core@sha256:129eedb6be229ab9b91e0a6d2ba653603371dccb1c870339a372d219afc3e199
+  tag: v2.12.1-cublas-cuda12-core@sha256:542f81902ddc9aae8f245c9eb9f45c46afff7bdbe8e17ea23e26f063281e7c3d
 cublasCuda12FfmpegImage:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.11.0-cublas-cuda12-ffmpeg-core@sha256:9b12db20f6b98a04822942df977d61b868e74bd2ddd7255f511c970d4e3a96be
+  tag: v2.12.1-cublas-cuda12-ffmpeg-core@sha256:04e0aedb7eb183654a255a1a3c39bdcd4617d0cc183996a6555587d7cfc8c8ef
 cublasCuda11Image:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.11.0-cublas-cuda11-core@sha256:ce89d16135d9812fd29c503643cec339fc917f4d79b1cabf87f1e99b04c4838b
+  tag: v2.12.1-cublas-cuda11-core@sha256:d037ec250d3ee56f09fda668a66da59502b3a450597ec0eca926ca70dbb0cd18
 cublasCuda11FfmpegImage:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.11.0-cublas-cuda11-ffmpeg-core@sha256:032b455321868d4471a1a0055cef3cccf2eba63eeaf96ad5cc517e85ca521632
+  tag: v2.12.1-cublas-cuda11-ffmpeg-core@sha256:2000825a3cf644dbf7dbe9ecc51f57e9b89e6e93f047776484fcfae2bde8f3eb
 securityContext:
   container:
     runAsNonRoot: false
```
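
Each `tag` in the values above is pinned to a content digest (`tag@sha256:…`), so the pull is immutable even if the tag is later re-pushed. A small sketch of how such a pinned reference splits into its human-readable and immutable parts:

```python
# Split a digest-pinned image tag (as used in values.yaml above) into
# the human-readable tag and the immutable sha256 digest.
pinned = "v2.12.1@sha256:70a1323f061340002e7eba535158590016e67a09b8224fbc1e488646bdbf78d8"
tag, digest = pinned.split("@", 1)
print(tag)     # what humans read
print(digest)  # what the container runtime actually verifies
```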
