Is it possible to use GitHub Actions to compile the llama.cpp libraries for CPU-only inference (to avoid needing multiple different variants per architecture and to avoid build-dependency issues in Actions) and include them in the built jar? This project has a workflow that could be built off of. It uses JNI instead of JNA, but the same approach should work.

One caveat: for JNA to find the bundled libraries, the resource folders containing the architecture-specific binaries have to be named a specific way: {os-name}-{arch}. The docs aren't very helpful on this; they imply it should be linux-amd64 on Linux, but I could only get it to work by naming the resource folder linux-x86-64.
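For what it's worth, JNA exposes the prefix it searches for as `Platform.RESOURCE_PREFIX`, which explains the naming: on a 64-bit Linux JVM it evaluates to `linux-x86-64`, not `linux-amd64`. A minimal sketch (assuming JNA 5.x; the `Llama` interface and the library name `llama` are placeholders, not this project's actual binding):

```java
import com.sun.jna.Library;
import com.sun.jna.Native;
import com.sun.jna.Platform;

public class ResourcePrefixCheck {
    // Placeholder mapping; the real binding would declare llama.cpp functions here.
    public interface Llama extends Library {
    }

    public static void main(String[] args) {
        // JNA looks for the bundled binary on the classpath at
        // /{Platform.RESOURCE_PREFIX}/libllama.so (.dll/.dylib on other OSes).
        // On 64-bit Linux this prints "linux-x86-64", which is why the
        // resource folder has to use that name rather than "linux-amd64".
        System.out.println(Platform.RESOURCE_PREFIX);

        // If the resource folder is named correctly, JNA extracts the library
        // to a temp directory and loads it from there.
        Llama llama = Native.load("llama", Llama.class);
    }
}
```

So as long as the Actions workflow drops each runner's compiled binary into the matching resource folder before packaging (e.g. src/main/resources/linux-x86-64/ in a Maven layout), JNA should pick it up from inside the jar.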
I'm happy to help with this if you'd like, but I can't test the workflow myself, for obvious reasons.