
chore: ⬆️ Update ikawrakow/ik_llama.cpp to d4824131580b94ffa7b0e91c955e2b237c2fe16e #9447

Merged
mudler merged 1 commit into mudler:master from ci-forks:update/IK_LLAMA_VERSION
Apr 20, 2026
Conversation

@localai-bot
Collaborator

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
@localai-bot force-pushed the update/IK_LLAMA_VERSION branch from d1d4370 to e917e86 on April 20, 2026 at 20:19
@mudler merged commit 5973c0a into mudler:master on Apr 20, 2026
38 checks passed
contrapuntal added a commit to contrapuntal/LocalAI that referenced this pull request Apr 23, 2026
Two related but independent upstream build issues surfaced while
rebuilding ik-llama-cpp-fallback on Apple Silicon:

1. prepare.sh BSD-sed compatibility — fixed locally in e9a45fe,
   upstream PR pending
2. llama_batch struct initializer mismatch in the grpc-server
   wrapper llava.cpp, introduced by two consecutive IK_LLAMA_VERSION
   pin bumps (PRs mudler#9430 and mudler#9447) without updating the wrapper

The document added by this commit records the current status, symptoms,
last-known-working pin, and fix options for each issue.
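On the first issue: the usual BSD-sed incompatibility is that macOS (BSD) sed requires an argument to `-i` (an empty string for "no backup"), while GNU sed treats a separate following word as the script, so neither `sed -i` form is portable across both. A minimal sketch of a portable workaround that avoids `-i` altogether (the temp-file approach and the `VERSION=` line are illustrative, not the actual prepare.sh fix):

```shell
set -eu

# Illustrative input file standing in for a pinned-version line.
f=$(mktemp)
printf 'VERSION=old\n' > "$f"

# Non-portable forms:
#   sed -i 's/old/new/' "$f"      # GNU only; BSD sed errors out
#   sed -i '' 's/old/new/' "$f"   # BSD only; GNU sed reads '' as the script
# Portable alternative: write to a temp file, then move it into place.
sed 's/old/new/' "$f" > "$f.tmp" && mv "$f.tmp" "$f"

cat "$f"   # VERSION=new
rm -f "$f"
```

The temp-file pattern also keeps the script usable on systems where sed lacks `-i` entirely, at the cost of not preserving hard links to the original file.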
