Plugin 'UELlama' failed to load because module 'UELlama' could not be loaded #7
If you're using this fork I recommend to start by using the plugin releases found here: https://github.com/getnamo/Llama-Unreal/releases/tag/v0.3.0 they contain the correct compiled dlls that should make it drag and drop to use in blueprint only. Use the .7z link. If you're trying to build your own dlls for each platform, those build commands are meant to be run from within llama.cpp cloned root directory, not the plugin one. |
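For reference, a minimal sketch of what those llama.cpp-root build commands look like (the exact CMake flags may differ from the plugin's readme and by llama.cpp version; treat the options here as assumptions):

```shell
# Run from the llama.cpp checkout root, NOT the plugin directory.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# CPU-only static build (flag names vary between llama.cpp versions)
cmake -B build -DBUILD_SHARED_LIBS=OFF
cmake --build build --config Release
```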
So I tried that release, followed the instructions and just put it in the plugins folder, but still the same issue. Is there anything else I could be missing? |
If you're building from source (Oculus 5.3), you may need to recompile the project with the plugin in it so it generates new binaries matching your custom engine. The release is meant for the canonical 5.3 engine release. |
NB: if you're using the CUDA branch, you may be missing CUDA 12.2 runtimes. CPU-only should work though. |
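If missing CUDA runtimes are the suspect, a quick check on Windows is whether the runtime DLLs resolve on PATH (the 12.x DLL names below are what the CUDA 12 toolkit typically ships; verify against your install):

```shell
:: Each should print a path if the CUDA 12.x runtime is installed and on PATH
where cudart64_12.dll
where cublas64_12.dll
```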
I am having the same exact error. What I did was download https://github.com/getnamo/Llama-Unreal/releases/tag/v0.3.0
From the log:
My specs: Windows 10 and a GTX 4080 |
Same issue here; I tried the same steps. Let me know if there is a fix. For now I'm going to try to implement some of the changes myself, but it would be great if this could get resolved. |
Does the windows CPU only build work for anyone? |
Yeah, I tried it. It's an issue loading the module, by the looks of it. At the moment I'm in the process of extracting it from the module to see if it can build then. |
It's a static build, so it's possible it has a hidden DLL dependency that happens to be satisfied on my system. Need to do a DLL build for CPU and CUDA and try again. |
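One way to surface a hidden dependency like that is MSVC's `dumpbin` (the module DLL name below is an assumption; use whatever is actually in your plugin's `Binaries/Win64`):

```shell
:: From a VS Developer Command Prompt; lists the DLLs the module links against
dumpbin /dependents Binaries\Win64\UnrealEditor-LlamaCore.dll
```

Any listed DLL that isn't present on the target machine would explain a "module could not be loaded" error.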
What version of CUDA are you using? I have a feeling I'm not on the right version. |
I've gotta go to work now :) Night shift. I will try the CPU one tomorrow ASAP. The best error I got so far is: |
Update: Got CPU working with a little workaround :), took it out of the module into the main source files. @oivio @aleistor222 would you like the version I've modified? Going to do the same with the GPU one, most likely tomorrow evening. I'll let you know if I can get it to work. |
Doing a bit of a refactor before I make another build. Will test that one on other PCs to debug what's failing on startup. |
Did a little bit of poking around. Found the module loading issue is coming from common.cpp and identified some of the functions that are causing it. Linked a video of me showing all the functions that are causing the issues: https://www.youtube.com/watch?v=v1Mr1am2Zp8 These are the ones I found instantly, but there could be some more. I haven't a clue why it's happening, but I hope it helps in any way. |
Try https://github.com/getnamo/Llama-Unreal/releases/tag/v0.4.0 with the CPU only build to see if it works out of the box. NB this has the refactor so if you used blueprints off of the old api you'll need to re-wire those. Will have to address precise CUDA build dependencies at another date. |
v0.4.0 has the same error message when launching |
Yeah, same for me. I can confirm that with the log:
|
I'm just in the process of using GFlags to see which DLL isn't getting loaded. |
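For anyone else trying this: GFlags' "show loader snaps" mode logs every DLL load attempt to an attached debugger, which pinpoints the exact missing import (the executable name below is an assumption; use whichever binary fails to start):

```shell
:: From an elevated VS Developer Command Prompt
gflags /i UnrealEditor.exe +sls
:: Launch the editor under WinDbg or Visual Studio and read the loader output,
:: then remove the flag afterwards:
gflags /i UnrealEditor.exe -sls
```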
Just to confirm: v0.4 is failing with cuda = false in Build.cs for you guys? |
Yeah, CUDA is false. Again, I think it links back to the changes to the functions now being in the commons folder. I don't know; it seems to me like it is missing a Windows DLL file. I'll find some time to boot it up on a second PC and check it, and using the GFlags tool for VS to see which DLL isn't getting loaded is my current plan. |
OK, from my investigating it isn't a Windows DLL. On the call stack it says it's loading the correct module, "LlamaCore.dll"; that's what the logs say. I have a feeling it was from when you refactored the name to LlamaCore, as in v.2.4 it hadn't been done yet and that's the version I can get running. I'm currently looking into how the plugin is set up. Update: v0.2 also has the issue. I got it working by extracting the code from the plugin, so it does go back to the first version you released. |
Apparently building llama.cpp yourself locally can resolve this. Hints at a static lib config issue, maybe? |
See #10 for an alternative path. Thanks to @ellllie-42's PR, you can now specify a LLAMA_PATH and use your standard CUDA_PATH if you want a dev-friendly custom build env. If those fail, or you have local CUDA libs, it will default to local paths as before. This doesn't solve the portability problem of builds yet. |
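A sketch of that workflow (the paths below are placeholders; the variable names LLAMA_PATH and CUDA_PATH come from the PR referenced above):

```shell
:: Point the plugin's build at your own llama.cpp checkout and CUDA install
set LLAMA_PATH=C:\dev\llama.cpp
set CUDA_PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2
:: Then regenerate project files and rebuild the project as usual
```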
I built llama.cpp using the same settings as the CUDA and CPU builds in the readme and tested it on my system; it's working with CUDA and I can use GPU offloading by itself. What do I need to copy to the plugin for it to work? I tried copying cudart.lib, cublas.lib, and cuda.lib to the cuda folder, editing the Build.cs to enable it, and replacing ggml_static.lib and llama.lib with the ones from my build/Release folder, but I am still getting the same error. I tried rebuilding the solution and deleting the plugin's binaries, but that didn't work either. I was looking at a Unity version (LLMUnity); they use llamafile (https://github.com/Mozilla-Ocho/llamafile). Would that help this plugin too? |
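For anyone attempting the same lib swap, the copy steps look roughly like this (the plugin's actual ThirdParty layout is an assumption here; check the plugin's Build.cs for the real lib search paths):

```shell
:: From the llama.cpp checkout after a Release build (destination paths hypothetical)
copy build\Release\llama.lib <Plugin>\ThirdParty\LlamaCpp\Lib\Win64\
copy build\Release\ggml_static.lib <Plugin>\ThirdParty\LlamaCpp\Lib\Win64\
:: Then delete the plugin's Binaries\ and Intermediate\ folders and rebuild the project
```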
I'm also having the same issue. |
Did you use the tempfix associated with this issue?
|
I didn't know there was one; where/what is it? |
#10
|
I'm having issues implementing the plugin. I've come from using the original UELlama plugin, which worked, but I had issues with packaging the project. I'm pretty novice and a bit confused on things.
I'm running the Oculus source build of UE5.3 on Windows, and when I try to follow the instructions to build using CMake I get the error that CMakeLists.txt doesn't exist.
So I built through the Visual Studio IDE, but when I try to run UE I get: