
feat(sycl): Add support for Intel GPUs with sycl (#1647) #1660

Merged: 13 commits merged into master from the SYCL branch on Feb 1, 2024

Conversation


@mudler mudler commented Jan 29, 2024

Part of #1647

Based on: ggerganov/llama.cpp#2690

Note on exposing the GPU with Docker: ggerganov/llama.cpp#2690 (comment)

Testing with:

$ docker build --build-arg BUILD_TYPE=sycl_f32 --build-arg IMAGE_TYPE=core -t local-ai .
# Build only llama.cpp
$ docker build --build-arg BUILD_TYPE=sycl_f32 --build-arg IMAGE_TYPE=core --build-arg GRPC_BACKENDS=backend-assets/grpc/llama-cpp -t local-ai .
$ docker run -ti -p 8080:8080 -v $PWD/models:/build/models \
   --rm -e DEBUG=1 -e GGML_SYCL_DEVICE=0 \
   -e BUILD_TYPE=sycl_f32 --device /dev/dri/renderD128:/dev/dri/renderD128 \
   --device /dev/dri/card1:/dev/dri/card1 -t \
   local-ai https://gist.githubusercontent.com/mudler/55805db5a3675ddb877630a47d25bef3/raw/5a3a33447efb5b422793ae1ff130020d0c7876c9/llava-1.5
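Before passing devices into the container as above, it can help to confirm the host actually exposes them; a minimal sketch, assuming the Intel oneAPI tools may or may not be installed (device node names like `renderD128` are machine-dependent):

```shell
# Sketch: sanity-check the host before using --device in docker run.
# The contents of /dev/dri vary per machine; list what actually exists.
dri_nodes=$(ls /dev/dri/ 2>/dev/null || echo "none")
echo "DRI nodes: $dri_nodes"
# If the oneAPI environment is active, sycl-ls should show a gpu entry
# (e.g. level_zero:gpu or opencl:gpu) when the driver stack is working;
# guard the call in case the tool is not installed.
sycl_out=$( { command -v sycl-ls >/dev/null 2>&1 && sycl-ls; } || echo "sycl-ls not available" )
echo "$sycl_out"
```

If no `*:gpu` entry appears here, the container will not see a GPU either, regardless of the `--device` flags.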


netlify bot commented Jan 29, 2024

Deploy Preview for localai ready!

🔨 Latest commit: fd13312
🔍 Latest deploy log: https://app.netlify.com/sites/localai/deploys/65bbcaf48a8eff0008c7005f
😎 Deploy Preview: https://deploy-preview-1660--localai.netlify.app

Dockerfile Outdated
# oneapi requirements
RUN if [ "${BUILD_TYPE}" = "sycl_f16" ] || [ "${BUILD_TYPE}" = "sycl_f32" ]; then \
wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/163da6e4-56eb-4948-aba3-debcec61c064/l_BaseKit_p_2024.0.1.46_offline.sh && \
sh ./l_BaseKit_p_2024.0.1.46_offline.sh \
@mudler (Owner, Author) commented:

I'm expecting some fun here as this is going to be interactive
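The excerpt above ends mid-command. For context only, a hypothetical completed version of that conditional step is sketched below; the installer flags and cleanup are assumptions, not the PR's actual Dockerfile, and the silent/EULA flags are what typically avoid the interactive installer behaviour mentioned here:

```Dockerfile
# Hypothetical sketch: install the Intel oneAPI Base Toolkit only for SYCL builds.
# The -a --silent --eula accept flags are assumptions to run non-interactively.
ARG BUILD_TYPE
RUN if [ "${BUILD_TYPE}" = "sycl_f16" ] || [ "${BUILD_TYPE}" = "sycl_f32" ]; then \
        wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/163da6e4-56eb-4948-aba3-debcec61c064/l_BaseKit_p_2024.0.1.46_offline.sh && \
        sh ./l_BaseKit_p_2024.0.1.46_offline.sh -a --silent --eula accept && \
        rm -f l_BaseKit_p_2024.0.1.46_offline.sh ; \
    fi
```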

@mudler mudler added enhancement New feature or request high prio roadmap labels Jan 31, 2024
@mudler mudler changed the title feat(sycl): Add sycl support (#1647) feat(sycl): Add support for Intel GPUs with sycl (#1647) Jan 31, 2024
@mudler mudler self-assigned this Jan 31, 2024
@mudler commented Jan 31, 2024

it fails here:

11:31PM DBG GRPC(c0c3c83d0ec33ffe925657a56b06771b-127.0.0.1:33397): stderr The program was built for 1 devices                                                                            
11:31PM DBG GRPC(c0c3c83d0ec33ffe925657a56b06771b-127.0.0.1:33397): stdout GGML_SYCL_DEBUG=0                                                                                              
11:31PM DBG GRPC(c0c3c83d0ec33ffe925657a56b06771b-127.0.0.1:33397): stderr Build program log for '12th Gen Intel(R) Core(TM) i7-1280P':                                                   
11:31PM DBG GRPC(c0c3c83d0ec33ffe925657a56b06771b-127.0.0.1:33397): stderr Compilation started                                                                                            
11:31PM DBG GRPC(c0c3c83d0ec33ffe925657a56b06771b-127.0.0.1:33397): stderr Compilation done                                                                                               
11:31PM DBG GRPC(c0c3c83d0ec33ffe925657a56b06771b-127.0.0.1:33397): stderr Linking started                                                                                                
11:31PM DBG GRPC(c0c3c83d0ec33ffe925657a56b06771b-127.0.0.1:33397): stderr Linking done                                                                                                   
11:31PM DBG GRPC(c0c3c83d0ec33ffe925657a56b06771b-127.0.0.1:33397): stderr Device build started                                                                                           
11:31PM DBG GRPC(c0c3c83d0ec33ffe925657a56b06771b-127.0.0.1:33397): stderr Options used by backend compiler:                                                                              
11:31PM DBG GRPC(c0c3c83d0ec33ffe925657a56b06771b-127.0.0.1:33397): stderr Failed to build device program                                                                                 
11:31PM DBG GRPC(c0c3c83d0ec33ffe925657a56b06771b-127.0.0.1:33397): stderr CompilerException Failed to lookup symbol _ZTSZZL13norm_f32_syclPKfPfiifPN4sycl3_V15queueEENKUlRNS3_7handlerEE0
_clES7_EUlNS3_7nd_itemILi3EEEE_                                                                                                                                                           
11:31PM DBG GRPC(c0c3c83d0ec33ffe925657a56b06771b-127.0.0.1:33397): stderr JIT session error: Symbols not found: [ _Z11fmax_commonDv32_fS_S_ ]                                            
11:31PM DBG GRPC(c0c3c83d0ec33ffe925657a56b06771b-127.0.0.1:33397): stderr Failed to materialize symbols: { (main, { _ZTSZZL17soft_max_f32_syclPKfS0_PfiiifPN4sycl3_V15queueEENKUlRNS3_7ha
ndlerEE_clES7_EUlNS3_7nd_itemILi3EEEE_, _ZGVdN32uuuuuuu__ZTSZZL17soft_max_f32_syclPKfS0_PfiiifPN4sycl3_V15queueEENKUlRNS3_7handlerEE_clES7_EUlNS3_7nd_itemILi3EEEE_, _ZTSZZL17rms_norm_f32
_syclPKfPfiifPN4sycl3_V15queueEENKUlRNS3_7handlerEE_clES7_EUlNS3_7nd_itemILi3EEEE_, _ZTSZZL19group_norm_f32_syclPKfPfiiiPN4sycl3_V15queueEENKUlRNS3_7handlerEE0_clES7_EUlNS3_7nd_itemILi3E
EEE_, _ZTSZZL19group_norm_f32_syclPKfPfiiiPN4sycl3_V15queueEENKUlRNS3_7handlerEE_clES7_EUlNS3_7nd_itemILi3EEEE_, _ZTSZZL17rms_norm_f32_syclPKfPfiifPN4sycl3_V15queueEENKUlRNS3_7handlerEE0
_clES7_EUlNS3_7nd_itemILi3EEEE_, _ZGVdN32uuuuuu__ZTSZZL19group_norm_f32_syclPKfPfiiiPN4sycl3_V15queueEENKUlRNS3_7handlerEE0_clES7_EUlNS3_7nd_itemILi3EEEE_, _ZTSZL17sum_rows_f32_syclPKfPf
iiPN4sycl3_V15queueEEUlNS3_7nd_itemILi3EEEE_, _ZTSZZL13norm_f32_syclPKfPfiifPN4sycl3_V15queueEENKUlRNS3_7handlerEE_clES7_EUlNS3_7nd_itemILi3EEEE_, _ZGVdN32uuuuuu__ZTSZZL17rms_norm_f32_sy
clPKfPfiifPN4sycl3_V15queueEENKUlRNS3_7handlerEE0_clES7_EUlNS3_7nd_itemILi3EEEE_, _ZGVdN32uuuuuu__ZTSZZL13norm_f32_syclPKfPfiifPN4sycl3_V15queueEENKUlRNS3_7handlerEE0_clES7_EUlNS3_7nd_it
emILi3EEEE_, _ZTSZZL13norm_f32_syclPKfPfiifPN4sycl3_V15queueEENKUlRNS3_7handlerEE0_clES7_EUlNS3_7nd_itemILi3EEEE_ }) }                                                                    
11:31PM DBG GRPC(c0c3c83d0ec33ffe925657a56b06771b-127.0.0.1:33397): stderr                                                                                                                
11:31PM DBG GRPC(c0c3c83d0ec33ffe925657a56b06771b-127.0.0.1:33397): stderr  -11 (PI_ERROR_BUILD_PROGRAM_FAILURE)Exception caught at file:/build/backend/cpp/llama/llama.cpp/ggml-sycl.cpp,
 line:12644                                                                    

Friendly ping @abhilash1910 @NeoZhangJianyu, any chance you can help here? Any pointers would be appreciated, thanks!

It looks like something related to linking, but I thought the installation steps were general enough to apply to all the binaries in the example folder?

@mudler commented Feb 1, 2024


Never mind: it turns out I'm somehow unable to select the GPU, and that's what causes this (the error, even if cryptic, is not about build-time linking issues, but about ops not supported by the device that was selected).

@NeoZhangJianyu

> JIT session error:

Could you share the whole log? I guess your device id is wrong: you set a non-GPU device id.

@mudler commented Feb 1, 2024

> > JIT session error:
>
> Could you share the whole log? I guess your device id is wrong: you set a non-GPU device id.

It looks like I cannot see the GPU at all; I was confused, which is why I picked the first device (listed as acc):

$ ./bin/ls-sycl-device 
found 2 SYCL devices:
  Device 0: 12th Gen Intel(R) Core(TM) i7-1280P,        compute capability 3.0,
        max compute_units 20,   max work group size 8192,       max sub group size 64,  global mem size 67084083200
  Device 1: Intel(R) FPGA Emulation Device,     compute capability 1.2,
        max compute_units 20,   max work group size 67108864,   max sub group size 64,  global mem size 67084083200
$ sycl-ls
[opencl:acc:0] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device OpenCL 1.2  [2023.16.12.0.12_195853.xmain-hotfix]
[opencl:cpu:1] Intel(R) OpenCL, 12th Gen Intel(R) Core(TM) i7-1280P OpenCL 3.0 (Build 0) [2023.16.12.0.12_195853.xmain-hotfix]
$ clinfo -l                          
Platform #0: Intel(R) OpenCL
 `-- Device #0: 12th Gen Intel(R) Core(TM) i7-1280P
Platform #1: Intel(R) FPGA Emulation Platform for OpenCL(TM)
 `-- Device #0: Intel(R) FPGA Emulation Device
$ hwinfo --display
32: PCI 02.0: 0300 VGA compatible controller (VGA)              
  [Created at pci.386]
  Unique ID: _Znp.usB9nIk3U2E
  SysFS ID: /devices/pci0000:00/0000:00:02.0
  SysFS BusID: 0000:00:02.0
  Hardware Class: graphics card
  Model: "Intel VGA compatible controller"
  Vendor: pci 0x8086 "Intel Corporation"
  Device: pci 0x46a6 
  SubVendor: pci 0x1028 "Dell"
  SubDevice: pci 0x0b08 
  Revision: 0x0c
  Driver: "i915"
  Driver Modules: "i915"
  Memory Range: 0x6054000000-0x6054ffffff (rw,non-prefetchable)
  Memory Range: 0x4000000000-0x400fffffff (ro,non-prefetchable)
  I/O Ports: 0x3000-0x303f (rw)
  Memory Range: 0x000c0000-0x000dffff (rw,non-prefetchable,disabled)
  IRQ: 187 (44937037 events)
  Module Alias: "pci:v00008086d000046A6sv00001028sd00000B08bc03sc00i00"
  Driver Info #0:
    Driver Status: i915 is active
    Driver Activation Cmd: "modprobe i915"
  Config Status: cfg=new, avail=yes, need=no, active=unknown

Primary display adapter: #32
$ groups
mudler vboxusers video render

This must be related to my drivers; likely the ones on my openSUSE box don't work out of the box with SYCL.

@mudler commented Feb 1, 2024

OK, going to follow up with images built from master. I think the changes here are correct, as I see SYCL output all over; the problem is that I cannot select my GPU device.

@mudler mudler merged commit 1c57f8d into master Feb 1, 2024
24 of 25 checks passed
@mudler mudler deleted the SYCL branch February 1, 2024 18:21
@NeoZhangJianyu

> OK, going to follow up with images built from master. I think the changes here are correct, as I see SYCL output all over; the problem is that I cannot select my GPU device.

You can choose the GPU via the environment variable GGML_SYCL_DEVICE.
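As a sketch of that suggestion (the index 0 below is an assumption; use whichever index `ls-sycl-device` or `sycl-ls` reports for your GPU):

```shell
# Select the SYCL device index the backend should use (0 is illustrative).
export GGML_SYCL_DEVICE=0
# Forward it into the container, e.g. (image name and devices as elsewhere
# in this thread):
#   docker run -e GGML_SYCL_DEVICE=$GGML_SYCL_DEVICE --device /dev/dri ... local-ai
echo "GGML_SYCL_DEVICE=$GGML_SYCL_DEVICE"
```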

@mudler commented Feb 2, 2024

> > OK, going to follow up with images built from master. I think the changes here are correct, as I see SYCL output all over; the problem is that I cannot select my GPU device.
>
> You can choose the GPU via the environment variable GGML_SYCL_DEVICE.

Correct, I've tried that, but in my case (see #1660 (comment)) there is no iGPU detected.

@NeoZhangJianyu

> > > OK, going to follow up with images built from master. I think the changes here are correct, as I see SYCL output all over; the problem is that I cannot select my GPU device.
> >
> > You can choose the GPU via the environment variable GGML_SYCL_DEVICE.
>
> Correct, I've tried that, but in my case (see #1660 (comment)) there is no iGPU detected.

Currently, the GGML SYCL backend only supports GPUs; if there is no GPU, it can't work well. If you want to run on an Intel GPU, you could use the GGML oneMKL backend.

@mudler mudler mentioned this pull request Feb 8, 2024
truecharts-admin referenced this pull request in truecharts/public Feb 12, 2024
….0 by renovate (#18178)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.7.0-cublas-cuda11-ffmpeg-core` -> `v2.8.0-cublas-cuda11-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.7.0-cublas-cuda11-core` -> `v2.8.0-cublas-cuda11-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.7.0-cublas-cuda12-ffmpeg-core` -> `v2.8.0-cublas-cuda12-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.7.0-cublas-cuda12-core` -> `v2.8.0-cublas-cuda12-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.7.0-ffmpeg-core` -> `v2.8.0-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.7.0` -> `v2.8.0` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency Dashboard for more information.

---

### Release Notes

<details>
<summary>mudler/LocalAI (docker.io/localai/localai)</summary>

### [`v2.8.0`](https://togithub.com/mudler/LocalAI/releases/tag/v2.8.0)

[Compare Source](https://togithub.com/mudler/LocalAI/compare/v2.7.0...v2.8.0)

This release adds support for Intel GPUs, and it deprecates old
ggml-based backends which are by now superseded by llama.cpp (that now
supports more architectures out-of-the-box). See also
[https://github.com/mudler/LocalAI/issues/1651](https://togithub.com/mudler/LocalAI/issues/1651).

Images are now based on Ubuntu 22.04 LTS instead of Debian bullseye.

##### Intel GPUs

There are now images tagged with "sycl", in sycl-f16 and sycl-f32 variants indicating f16 or f32 support.

For example, to start phi-2 with an Intel GPU it is enough to use the
container image like this:

docker run -e DEBUG=true -ti -v $PWD/models:/build/models -p 8080:8080 -v /dev/dri:/dev/dri --rm quay.io/go-skynet/local-ai:master-sycl-f32-ffmpeg-core phi-2

##### What's Changed

##### Exciting New Features 🎉

- feat(sycl): Add support for Intel GPUs with sycl
([#&#8203;1647](https://togithub.com/mudler/LocalAI/issues/1647)) by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/1660](https://togithub.com/mudler/LocalAI/pull/1660)
- Drop old falcon backend (deprecated) by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/1675](https://togithub.com/mudler/LocalAI/pull/1675)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/1678](https://togithub.com/mudler/LocalAI/pull/1678)
- Drop ggml-based gpt2 and starcoder (supported by llama.cpp) by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/1679](https://togithub.com/mudler/LocalAI/pull/1679)
- fix(Dockerfile): sycl dependencies by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/1686](https://togithub.com/mudler/LocalAI/pull/1686)
- feat: Use ubuntu as base for container images, drop deprecated
ggml-transformers backends by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/1689](https://togithub.com/mudler/LocalAI/pull/1689)

##### 👒 Dependencies

- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/1656](https://togithub.com/mudler/LocalAI/pull/1656)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/1665](https://togithub.com/mudler/LocalAI/pull/1665)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/1669](https://togithub.com/mudler/LocalAI/pull/1669)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/1673](https://togithub.com/mudler/LocalAI/pull/1673)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/1683](https://togithub.com/mudler/LocalAI/pull/1683)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/1688](https://togithub.com/mudler/LocalAI/pull/1688)
- ⬆️ Update mudler/go-stable-diffusion by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/1674](https://togithub.com/mudler/LocalAI/pull/1674)

##### Other Changes

- ⬆️ Update docs version mudler/LocalAI by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/1661](https://togithub.com/mudler/LocalAI/pull/1661)
- feat(mamba): Add bagel-dpo-2.8b by
[@&#8203;richiejp](https://togithub.com/richiejp) in
[https://github.com/mudler/LocalAI/pull/1671](https://togithub.com/mudler/LocalAI/pull/1671)
- fix (docs): fixed broken links `github/` -> `github.com/` by
[@&#8203;Wansmer](https://togithub.com/Wansmer) in
[https://github.com/mudler/LocalAI/pull/1672](https://togithub.com/mudler/LocalAI/pull/1672)
- Fix HTTP links in README.md by
[@&#8203;vfiftyfive](https://togithub.com/vfiftyfive) in
[https://github.com/mudler/LocalAI/pull/1677](https://togithub.com/mudler/LocalAI/pull/1677)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/1681](https://togithub.com/mudler/LocalAI/pull/1681)
- ci: cleanup worker before run by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/1685](https://togithub.com/mudler/LocalAI/pull/1685)
- Revert "fix(Dockerfile): sycl dependencies" by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/1687](https://togithub.com/mudler/LocalAI/pull/1687)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/1691](https://togithub.com/mudler/LocalAI/pull/1691)

##### New Contributors

- [@&#8203;richiejp](https://togithub.com/richiejp) made their first
contribution in
[https://github.com/mudler/LocalAI/pull/1671](https://togithub.com/mudler/LocalAI/pull/1671)
- [@&#8203;Wansmer](https://togithub.com/Wansmer) made their first
contribution in
[https://github.com/mudler/LocalAI/pull/1672](https://togithub.com/mudler/LocalAI/pull/1672)
- [@&#8203;vfiftyfive](https://togithub.com/vfiftyfive) made their first
contribution in
[https://github.com/mudler/LocalAI/pull/1677](https://togithub.com/mudler/LocalAI/pull/1677)

**Full Changelog**:
mudler/LocalAI@v2.7.0...v2.8.0

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 10pm on monday" in timezone
Europe/Amsterdam, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Enabled.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about these
updates again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR has been generated by [Renovate
Bot](https://togithub.com/renovatebot/renovate).
