From 7032206e3f160b9eb72595dc1f62a8cfecb03402 Mon Sep 17 00:00:00 2001
From: Claude
Date: Sat, 18 Apr 2026 12:53:08 +0000
Subject: [PATCH] Upgrade llama.cpp from b8831 to b8838
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

No C++ changes needed: b8831→b8838 contains only internal ggml/CUDA/WebGPU
refactoring, CI workflow additions, and CPU-only memory-fitting improvements
in src/llama.cpp. No public header API changes affect this project.

https://claude.ai/code/session_011Ng1rVpZswncx1hWjjoHB1
---
 CLAUDE.md      | 2 +-
 CMakeLists.txt | 2 +-
 README.md      | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/CLAUDE.md b/CLAUDE.md
index b08ea58a..d2bf01ed 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -6,7 +6,7 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
 
 Java bindings for [llama.cpp](https://github.com/ggerganov/llama.cpp) via JNI, providing a high-level API for LLM inference in Java. The Java layer communicates with a native C++ library through JNI.
 
-Current llama.cpp pinned version: **b8831**
+Current llama.cpp pinned version: **b8838**
 
 ## Upgrading CUDA Version
 
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 79c8e5db..20c62c24 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -97,7 +97,7 @@ set(GGML_AVX512 OFF CACHE BOOL "" FORCE)
 FetchContent_Declare(
     llama.cpp
     GIT_REPOSITORY https://github.com/ggerganov/llama.cpp.git
-    GIT_TAG b8831
+    GIT_TAG b8838
 )
 FetchContent_MakeAvailable(llama.cpp)
diff --git a/README.md b/README.md
index e81df1e7..0b897aff 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,5 @@
 ![Java 8+](https://img.shields.io/badge/Java-8%2B-informational)
-[![llama.cpp b8831](https://img.shields.io/badge/llama.cpp-%23b8831-informational)](https://github.com/ggml-org/llama.cpp/releases/tag/b8831)
+[![llama.cpp b8838](https://img.shields.io/badge/llama.cpp-%23b8838-informational)](https://github.com/ggml-org/llama.cpp/releases/tag/b8838)
 
 # Java Bindings for [llama.cpp](https://github.com/ggerganov/llama.cpp)