
fix: dont commit generated files to git #1993

Merged · 20 commits into mudler:master · Apr 13, 2024

Conversation

cryptk
Collaborator

@cryptk cryptk commented Apr 10, 2024

Description

Generated files should not be committed to version control, only the files that they are generated from. Generated files (especially protobuf/grpc files) cause a large amount of noise in linters and other IDE extensions as they typically don't conform to anything even approaching a standard style and frequently reference things that are not defined until build time.

This commit removes the protobuf generated files and improves the Makefile to generate these files at build time. This has the added benefit of ensuring that the project is always built with the latest version of the protobuf spec.

There are also some documentation improvements to help new users to ensure they have the tooling required to compile the relevant protobuf files.
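To illustrate the approach, a Makefile rule for this kind of build-time generation might look like the sketch below. This is not the actual rule from this PR; the target name, proto path, and output options are illustrative assumptions.

```makefile
# Hypothetical rule: regenerate Go gRPC stubs from the .proto sources at
# build time instead of committing the generated files to the repository.
protogen-go:
	protoc --go_out=. --go_opt=paths=source_relative \
	       --go-grpc_out=. --go-grpc_opt=paths=source_relative \
	       backend/backend.proto

# Making the build target depend on generation ensures the stubs always
# match the current protobuf spec.
build: protogen-go
	go build -o local-ai ./...
```

With a dependency like this, `make build` always regenerates the stubs first, so stale generated files can never ship.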

Notes for Reviewers

Signed commits

  • Yes, I signed my commits.


netlify bot commented Apr 10, 2024

Deploy Preview for localai canceled.

| Name | Link |
|------|------|
| 🔨 Latest commit | 4bab32d |
| 🔍 Latest deploy log | https://app.netlify.com/sites/localai/deploys/66197496793089000845312c |

@cryptk cryptk added enhancement New feature or request area/build labels Apr 10, 2024
@cryptk cryptk marked this pull request as draft April 11, 2024 03:50
@cryptk cryptk changed the title fix: dont commit generated fix: dont commit generated files to git Apr 11, 2024
@mudler
Owner

mudler commented Apr 11, 2024

I think this is overall the direction we should take indeed, thanks for having a look at this.

> There are also some documentation improvements to help new users to ensure they have the tooling required to compile the relevant protobuf files.

This was the main point that discouraged me from switching to in-flight generation of protobuf files to begin with: since some users (especially on Apple) would in some cases need to build from source, the amount of tooling needed would likely be a barrier. But I think we should just go ahead with this and try to tackle that separately (we could be more consistent and offer pre-built binaries, as GitHub now offers Apple arm64 workers! actions/runner-images#9254)

@cryptk
Collaborator Author

cryptk commented Apr 11, 2024

> This was the strongest point that demotivated me to switch to in-flight generation of protobuf file to begin with - since few users (especially Apple) would need to build from source in some cases, the amount of tooling needed would be likely a barrier. But I think we should just go ahead with this and try to tackle that separately

Another idea I had is potentially adding an option to run the build inside a docker container so that you don't have to worry about configuring your environment, instead, there will be an environment configured for you that runs the build and you get the artifact out of it. I think that's a potential future enhancement as well though.

…itory

Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
…eeded

Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
@cryptk cryptk marked this pull request as ready for review April 11, 2024 21:00
@dave-gray101
Collaborator

> since few users (especially Apple) would need to build from source in some cases, the amount of tooling needed would be likely a barrier

I'll verify this next time I'm at my build mac, but as of the last time I checked, the current homebrew versions of protobuf worked for OSX just fine - I don't think we need to consider OSX users a blocker here, as the basic build prereqs should be sufficient.

@cryptk
Collaborator Author

cryptk commented Apr 12, 2024

> I'll verify this next time I'm at my build mac, but as of the last time I checked, the current homebrew versions of protobuf worked for OSX just fine - I don't think we need to consider OSX users a blocker here, as the basic build prereqs should be sufficient.

The only system I had to do anything special with to get it to build was the Ubuntu 22.04 image, mostly because 22.04 ships some very old packages: I needed to update protoc, and I wasn't able to use the Python grpcio tools from the repositories either.

The Apple tests are passing, and they were actually among the easier tests to get working, but if you could run some builds with this change and make sure that everything seems kosher, that would be great!

@golgeek
Collaborator

golgeek commented Apr 12, 2024

> I'll verify this next time I'm at my build mac, but as of the last time I checked, the current homebrew versions of protobuf worked for OSX just fine - I don't think we need to consider OSX users a blocker here, as the basic build prereqs should be sufficient.

Agreed, had to make protobuf yesterday on my arm64 mac for #1990 and had no issue.

Basically:

```bash
$ brew install protobuf
$ go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest
$ pip install grpcio-tools
```

The only thing that didn't go as planned is that I used a conda environment, and somehow `python3` wasn't linked to the conda env, so I had to replace `python3` with `python` in the Makefile. Though this is definitely specific to my setup.

Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
Owner

@mudler mudler left a comment


nice! that's a great cleanup, really liking it!

@mudler mudler merged commit 1981154 into mudler:master Apr 13, 2024
23 checks passed
@cryptk cryptk deleted the fix_dont_commit_generated branch April 15, 2024 19:37
truecharts-admin added a commit to truecharts/charts that referenced this pull request Apr 27, 2024
…3.0 by renovate (#21421)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.12.4-cublas-cuda11-ffmpeg-core` -> `v2.13.0-cublas-cuda11-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.12.4-cublas-cuda11-core` -> `v2.13.0-cublas-cuda11-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.12.4-cublas-cuda12-ffmpeg-core` -> `v2.13.0-cublas-cuda12-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.12.4-cublas-cuda12-core` -> `v2.13.0-cublas-cuda12-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.12.4-ffmpeg-core` -> `v2.13.0-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.12.4` -> `v2.13.0` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
> Dashboard for more information.

---

### Release Notes

<details>
<summary>mudler/LocalAI (docker.io/localai/localai)</summary>

###
[`v2.13.0`](https://togithub.com/mudler/LocalAI/releases/tag/v2.13.0):
🖼️ v2.13.0 - Model gallery edition

[Compare
Source](https://togithub.com/mudler/LocalAI/compare/v2.12.4...v2.13.0)

Hello folks, Ettore here - I'm happy to announce the v2.13.0 LocalAI
release is out, with many features!

Below is a small breakdown of the hottest features introduced in
this release - however, there are many other improvements (especially
from the community) as well, so don't miss the changelog!

Check out the full changelog below for an overview of all the
changes that went into this release (this one is quite packed).

##### 🖼️ Model gallery

This is the first release with the model gallery in the WebUI: you can
now see a "Model" button in the WebUI which lands in a selection of
models:


![output](https://togithub.com/mudler/LocalAI/assets/2420543/7b16676e-d5b1-4c97-89bd-9fa5065c21ad)

You can now choose models between stablediffusion, llama3, tts,
embeddings and more! The gallery is growing steadily and being kept
up-to-date.

The models are simple YAML files which are hosted in this repository:
https://github.com/mudler/LocalAI/tree/master/gallery - you can host
your own repository with your model index, or if you want you can
contribute to LocalAI.

If you want to contribute models, you can do so by opening a PR in
the `gallery` directory:
https://github.com/mudler/LocalAI/tree/master/gallery.

##### Rerankers

I'm excited to introduce a new backend for `rerankers`. LocalAI now
implements the Jina API (https://jina.ai/reranker/#apiform) as a
compatibility layer, so you can use existing Jina clients and point
them to the LocalAI address. Under the hood, it uses
https://github.com/AnswerDotAI/rerankers.


![output](https://togithub.com/mudler/LocalAI/assets/2420543/ede67b25-fac4-4833-ae4f-78290e401e60)

You can test this by using container images with python (this does
**NOT** work with `core` images) and a model config file like this, or
by installing `cross-encoder` from the gallery in the UI:

```yaml
name: jina-reranker-v1-base-en
backend: rerankers
parameters:
  model: cross-encoder
```

and test it with:

```bash
curl http://localhost:8080/v1/rerank \
  -H "Content-Type: application/json" \
  -d '{
    "model": "jina-reranker-v1-base-en",
    "query": "Organic skincare products for sensitive skin",
    "documents": [
      "Eco-friendly kitchenware for modern homes",
      "Biodegradable cleaning supplies for eco-conscious consumers",
      "Organic cotton baby clothes for sensitive skin",
      "Natural organic skincare range for sensitive skin",
      "Tech gadgets for smart homes: 2024 edition",
      "Sustainable gardening tools and compost solutions",
      "Sensitive skin-friendly facial cleansers and toners",
      "Organic food wraps and storage solutions",
      "All-natural pet food for dogs with allergies",
      "Yoga mats made from recycled materials"
    ],
    "top_n": 3
  }'
```
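The same rerank request can also be issued from Python. This is a minimal sketch, not part of the release itself: it builds a shortened version of the payload from the curl call above and assumes the third-party `requests` library plus a LocalAI instance listening on localhost:8080.

```python
import json

# Shortened version of the rerank payload from the curl example above.
payload = {
    "model": "jina-reranker-v1-base-en",
    "query": "Organic skincare products for sensitive skin",
    "documents": [
        "Eco-friendly kitchenware for modern homes",
        "Natural organic skincare range for sensitive skin",
        "Yoga mats made from recycled materials",
    ],
    "top_n": 3,
}

# Serialize exactly what would be sent as the request body.
body = json.dumps(payload)

if __name__ == "__main__":
    # Assumption: a LocalAI instance is running locally and `requests`
    # is installed (pip install requests).
    import requests

    resp = requests.post(
        "http://localhost:8080/v1/rerank",
        headers={"Content-Type": "application/json"},
        data=body,
    )
    print(resp.json())
```

The payload is the same shape as the curl example, so any Jina-compatible client can be swapped in the same way.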

##### Parler-tts

There is a new TTS backend available now, `parler-tts`
(https://github.com/huggingface/parler-tts). It is possible to install
and configure the model directly from the gallery.

##### 🎈 Lot of small improvements behind the scenes!

Thanks to our outstanding community, we have enhanced the performance
and stability of LocalAI across various modules. From backend
optimizations to front-end adjustments, every tweak helps make LocalAI
smoother and more robust.

##### 📣 Spread the word!

First off, a massive thank you (again!) to each and every one of you
who've chipped in to squash bugs and suggest cool new features for
LocalAI. Your help, kind words, and brilliant ideas are truly
appreciated - more than words can say!

And to those of you who've been heroes, giving up your own time to help
out fellow users on Discord and in our repo, you're absolutely amazing.
We couldn't have asked for a better community.

Just so you know, LocalAI doesn't have the luxury of big corporate
sponsors behind it. It's all us, folks. So, if you've found value in
what we're building together and want to keep the momentum going,
consider showing your support. A little shoutout on your favorite social
platforms using @&#8203;LocalAI_OSS and @&#8203;mudler_it or joining our
sponsors can make a big difference.

Also, if you haven't yet joined our Discord, come on over! Here's the
link: https://discord.gg/uJAeKSAGDy

Every bit of support, every mention, and every star adds up and helps us
keep this ship sailing. Let's keep making LocalAI awesome together!

Thanks a ton, and here's to more exciting times ahead with LocalAI!

##### What's Changed

##### Bug fixes 🐛

- fix(autogptq): do not use_triton with qwen-vl by
[@&#8203;thiner](https://togithub.com/thiner) in
[mudler/LocalAI#1985
- fix: respect concurrency from parent build parameters when building
GRPC by [@&#8203;cryptk](https://togithub.com/cryptk) in
[mudler/LocalAI#2023
- ci: fix release pipeline missing dependencies by
[@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#2025
- fix: remove build path from help text documentation by
[@&#8203;cryptk](https://togithub.com/cryptk) in
[mudler/LocalAI#2037
- fix: previous CLI rework broke debug logging by
[@&#8203;cryptk](https://togithub.com/cryptk) in
[mudler/LocalAI#2036
- fix(fncall): fix regression introduced in
[#&#8203;1963](https://togithub.com/mudler/LocalAI/issues/1963) by
[@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#2048
- fix: adjust some sources names to match the naming of their
repositories by [@&#8203;cryptk](https://togithub.com/cryptk) in
[mudler/LocalAI#2061
- fix: move the GRPC cache generation workflow into it's own concurrency
group by [@&#8203;cryptk](https://togithub.com/cryptk) in
[mudler/LocalAI#2071
- fix(llama.cpp): set -1 as default for max tokens by
[@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#2087
- fix(llama.cpp-ggml): fixup `max_tokens` for old backend by
[@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#2094
- fix missing TrustRemoteCode in OpenVINO model load by
[@&#8203;fakezeta](https://togithub.com/fakezeta) in
[mudler/LocalAI#2114
- Incl ocv pkg for diffsusers utils by
[@&#8203;jtwolfe](https://togithub.com/jtwolfe) in
[mudler/LocalAI#2115

##### Exciting New Features 🎉

- feat: kong cli refactor fixes
[#&#8203;1955](https://togithub.com/mudler/LocalAI/issues/1955) by
[@&#8203;cryptk](https://togithub.com/cryptk) in
[mudler/LocalAI#1974
- feat: add flash-attn in nvidia and rocm envs by
[@&#8203;golgeek](https://togithub.com/golgeek) in
[mudler/LocalAI#1995
- feat: use tokenizer.apply_chat_template() in vLLM by
[@&#8203;golgeek](https://togithub.com/golgeek) in
[mudler/LocalAI#1990
- feat(gallery): support ConfigURLs by
[@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#2012
- fix: dont commit generated files to git by
[@&#8203;cryptk](https://togithub.com/cryptk) in
[mudler/LocalAI#1993
- feat(parler-tts): Add new backend by
[@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#2027
- feat(grpc): return consumed token count and update response
accordingly by [@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#2035
- feat(store): add Golang client by
[@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#1977
- feat(functions): support models with no grammar, add tests by
[@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#2068
- refactor(template): isolate and add tests by
[@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#2069
- feat: fiber logs with zerlog and add trace level by
[@&#8203;cryptk](https://togithub.com/cryptk) in
[mudler/LocalAI#2082
- models(gallery): add gallery by
[@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#2078
- Add tensor_parallel_size setting to vllm setting items by
[@&#8203;Taikono-Himazin](https://togithub.com/Taikono-Himazin) in
[mudler/LocalAI#2085
- Transformer Backend: Implementing use_tokenizer_template and
stop_prompts options by
[@&#8203;fakezeta](https://togithub.com/fakezeta) in
[mudler/LocalAI#2090
- feat: Galleries UI by [@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#2104
- Transformers Backend: max_tokens adherence to OpenAI API by
[@&#8203;fakezeta](https://togithub.com/fakezeta) in
[mudler/LocalAI#2108
- Fix cleanup sonarqube findings by
[@&#8203;cryptk](https://togithub.com/cryptk) in
[mudler/LocalAI#2106
- feat(models-ui): minor visual enhancements by
[@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#2109
- fix(gallery): show a fake image if no there is no icon by
[@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#2111
- feat(rerankers): Add new backend, support jina rerankers API by
[@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#2121

##### 🧠 Models

- models(llama3): add llama3 to embedded models by
[@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#2074
- feat(gallery): add llama3, hermes, phi-3, and others by
[@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#2110
- models(gallery): add new models to the gallery by
[@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#2124
- models(gallery): add more models by
[@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#2129

##### 📖 Documentation and examples

- ⬆️ Update docs version mudler/LocalAI by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[mudler/LocalAI#1988
- docs: fix stores link by
[@&#8203;adrienbrault](https://togithub.com/adrienbrault) in
[mudler/LocalAI#2044
- AMD/ROCm Documentation update + formatting fix by
[@&#8203;jtwolfe](https://togithub.com/jtwolfe) in
[mudler/LocalAI#2100

##### 👒 Dependencies

- deps: Update version of vLLM to add support of Cohere Command_R model
in vLLM inference by
[@&#8203;holyCowMp3](https://togithub.com/holyCowMp3) in
[mudler/LocalAI#1975
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[mudler/LocalAI#1991
- build(deps): bump google.golang.org/protobuf from 1.31.0 to 1.33.0 by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[mudler/LocalAI#1998
- build(deps): bump github.com/docker/docker from 20.10.7+incompatible
to 24.0.9+incompatible by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[mudler/LocalAI#1999
- build(deps): bump github.com/gofiber/fiber/v2 from 2.52.0 to 2.52.1 by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[mudler/LocalAI#2001
- build(deps): bump actions/checkout from 3 to 4 by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[mudler/LocalAI#2002
- build(deps): bump actions/setup-go from 4 to 5 by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[mudler/LocalAI#2003
- build(deps): bump peter-evans/create-pull-request from 5 to 6 by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[mudler/LocalAI#2005
- build(deps): bump actions/cache from 3 to 4 by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[mudler/LocalAI#2006
- build(deps): bump actions/upload-artifact from 3 to 4 by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[mudler/LocalAI#2007
- build(deps): bump github.com/charmbracelet/glamour from 0.6.0 to 0.7.0
by [@&#8203;dependabot](https://togithub.com/dependabot) in
[mudler/LocalAI#2004
- build(deps): bump github.com/gofiber/fiber/v2 from 2.52.0 to 2.52.4 by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[mudler/LocalAI#2008
- build(deps): bump github.com/opencontainers/runc from 1.1.5 to 1.1.12
by [@&#8203;dependabot](https://togithub.com/dependabot) in
[mudler/LocalAI#2000
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[mudler/LocalAI#2014
- build(deps): bump the pip group across 4 directories with 8 updates by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[mudler/LocalAI#2017
- build(deps): bump follow-redirects from 1.15.2 to 1.15.6 in
/examples/langchain/langchainjs-localai-example by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[mudler/LocalAI#2020
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[mudler/LocalAI#2024
- build(deps): bump softprops/action-gh-release from 1 to 2 by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[mudler/LocalAI#2039
- build(deps): bump dependabot/fetch-metadata from 1.3.4 to 2.0.0 by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[mudler/LocalAI#2040
- build(deps): bump github/codeql-action from 2 to 3 by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[mudler/LocalAI#2041
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[mudler/LocalAI#2043
- ⬆️ Update ggerganov/whisper.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[mudler/LocalAI#2042
- build(deps): bump the pip group across 4 directories with 8 updates by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[mudler/LocalAI#2049
- ⬆️ Update ggerganov/whisper.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[mudler/LocalAI#2050
- ⬆️ Update ggerganov/whisper.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[mudler/LocalAI#2060
- build(deps): bump aiohttp from 3.9.2 to 3.9.4 in
/examples/langchain/langchainpy-localai-example in the pip group across
1 directory by [@&#8203;dependabot](https://togithub.com/dependabot) in
[mudler/LocalAI#2067
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[mudler/LocalAI#2089
- deps(llama.cpp): update, use better model for function call tests by
[@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#2119
- ⬆️ Update ggerganov/whisper.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[mudler/LocalAI#2122
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[mudler/LocalAI#2123
- build(deps): bump pydantic from 1.10.7 to 1.10.13 in
/examples/langchain/langchainpy-localai-example in the pip group across
1 directory by [@&#8203;dependabot](https://togithub.com/dependabot) in
[mudler/LocalAI#2125
- feat(swagger): update swagger by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[mudler/LocalAI#2128

##### Other Changes

- ci: try to build on macos14 by
[@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#2011
- ⬆️ Update docs version mudler/LocalAI by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[mudler/LocalAI#2013
- refactor: backend/service split, channel-based llm flow by
[@&#8203;dave-gray101](https://togithub.com/dave-gray101) in
[mudler/LocalAI#1963
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[mudler/LocalAI#2028
- fix - correct checkout versions by
[@&#8203;dave-gray101](https://togithub.com/dave-gray101) in
[mudler/LocalAI#2029
- Revert "build(deps): bump the pip group across 4 directories with 8
updates" by [@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#2030
- ⬆️ Update docs version mudler/LocalAI by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[mudler/LocalAI#2032
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[mudler/LocalAI#2033
- fix: action-tmate back to upstream, dead code removal by
[@&#8203;dave-gray101](https://togithub.com/dave-gray101) in
[mudler/LocalAI#2038
- Revert [#&#8203;1963](https://togithub.com/mudler/LocalAI/issues/1963)
by [@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#2056
- feat: refactor the dynamic json configs for api_keys and
external_backends by [@&#8203;cryptk](https://togithub.com/cryptk) in
[mudler/LocalAI#2055
- tests: add template tests by
[@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#2063
- feat: better control of GRPC docker cache by
[@&#8203;cryptk](https://togithub.com/cryptk) in
[mudler/LocalAI#2070
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[mudler/LocalAI#2051
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[mudler/LocalAI#2080
- feat: enable polling configs for systems with broken fsnotify (docker
volumes on windows) by [@&#8203;cryptk](https://togithub.com/cryptk) in
[mudler/LocalAI#2081
- fix: action-tmate: use connect-timeout-sections and
limit-access-to-actor by
[@&#8203;dave-gray101](https://togithub.com/dave-gray101) in
[mudler/LocalAI#2083
- refactor(routes): split routes registration by
[@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#2077
- fix: action-tmate detached by
[@&#8203;dave-gray101](https://togithub.com/dave-gray101) in
[mudler/LocalAI#2092
- fix: rename fiber entrypoint from http/api to http/app by
[@&#8203;mudler](https://togithub.com/mudler) in
[mudler/LocalAI#2096
- fix: typo in models.go by
[@&#8203;eltociear](https://togithub.com/eltociear) in
[mudler/LocalAI#2099
- Update text-generation.md by
[@&#8203;Taikono-Himazin](https://togithub.com/Taikono-Himazin) in
[mudler/LocalAI#2095
- ⬆️ Update docs version mudler/LocalAI by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[mudler/LocalAI#2105
- ⬆️ Update docs version mudler/LocalAI by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[mudler/LocalAI#2113

##### New Contributors

- [@&#8203;holyCowMp3](https://togithub.com/holyCowMp3) made their first
contribution in
[mudler/LocalAI#1975
- [@&#8203;dependabot](https://togithub.com/dependabot) made their first
contribution in
[mudler/LocalAI#1998
- [@&#8203;adrienbrault](https://togithub.com/adrienbrault) made their
first contribution in
[mudler/LocalAI#2044
- [@&#8203;Taikono-Himazin](https://togithub.com/Taikono-Himazin) made
their first contribution in
[mudler/LocalAI#2085
- [@&#8203;eltociear](https://togithub.com/eltociear) made their first
contribution in
[mudler/LocalAI#2099
- [@&#8203;jtwolfe](https://togithub.com/jtwolfe) made their first
contribution in
[mudler/LocalAI#2100

**Full Changelog**:
mudler/LocalAI@v2.12.4...v2.13.0

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Enabled.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about these
updates again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR has been generated by [Renovate
Bot](https://togithub.com/renovatebot/renovate).

Labels
area/build enhancement New feature or request

4 participants