
Plugin 'UELlama' failed to load because module 'UELlama' could not be loaded #7

Open
aleistor222 opened this issue Jan 12, 2024 · 28 comments
Labels
bug Something isn't working

Comments

@aleistor222

I'm having issues implementing the plugin. I've come from using the original UELlama plugin, which worked, but I had issues with packaging the project. I'm pretty new to this and a bit confused.

I'm running the Oculus source build of UE5.3 on Windows, and when I try to follow the instructions to build using cmake I get an error that CMakeLists.txt doesn't exist.
So I build through the Visual Studio IDE instead, but when I try to run UE I get:

  • Plugin 'UELlama' failed to load because module 'UELlama' could not be loaded. There may be an operating system error or the module may not be properly set up.
@getnamo
Owner

getnamo commented Jan 21, 2024

If you're using this fork, I recommend starting with the plugin releases found here: https://github.com/getnamo/Llama-Unreal/releases/tag/v0.3.0. They contain the correct compiled DLLs, which should make the plugin drag-and-drop for blueprint-only use. Use the .7z link.

If you're trying to build your own DLLs for each platform, those build commands are meant to be run from within the cloned llama.cpp root directory, not the plugin one.
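(For reference, the flow being described would look something like the sketch below; the exact cmake flags depend on your llama.cpp checkout and target, so treat these as placeholders.)

```shell
# Run from the llama.cpp checkout, NOT from the plugin folder --
# the plugin repo has no CMakeLists.txt, which explains the
# "CMakeLists.txt doesn't exist" error above.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
mkdir build && cd build
cmake ..                          # CMakeLists.txt lives in the llama.cpp root
cmake --build . --config Release  # produces the libs the plugin links against
```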

@aleistor222
Author

So I tried that release, followed the instructions, and just put it in the Plugins folder, but I still get the same issue.

Is there anything else I could be missing?
Is it because my project is being built in the Visual Studio IDE?
I have tried building and rebuilding the project, etc.
I have also tried a different PC with a different project; all give the same error.

@getnamo
Owner

getnamo commented Jan 27, 2024

If you're building from source (Oculus 5.3), you may need to recompile the project with the plugin in it so it generates new binaries that match your custom engine. The release is meant for the canonical 5.3 engine release.
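(For anyone following along, the standard way to rebuild a plugin against a source-built engine is the AutomationTool BuildPlugin step; the paths below are placeholders for your setup.)

```shell
REM Run from your custom (Oculus) engine checkout; paths are placeholders.
REM This recompiles the plugin's binaries against YOUR engine build.
Engine\Build\BatchFiles\RunUAT.bat BuildPlugin ^
  -Plugin="C:\Path\To\UELlama.uplugin" ^
  -Package="C:\Path\To\PackagedPlugin" ^
  -TargetPlatforms=Win64
```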

@getnamo
Owner

getnamo commented Feb 2, 2024

NB: if you're using the CUDA branch, you may be missing the CUDA 12.2 runtimes. The CPU-only build should work though.
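(A quick way to check whether the CUDA 12.x runtimes are reachable on PATH; DLL names follow the CUDA 12 naming scheme, so adjust if your toolkit differs.)

```shell
REM Each command prints the DLL's location if found, or an error if not.
where cudart64_12.dll
where cublas64_12.dll
```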

@oivio

oivio commented Feb 3, 2024

I am having the same exact error.

What I did: downloaded https://github.com/getnamo/Llama-Unreal/releases/tag/v0.3.0, added it to a clean UE5.3 project's Plugins folder, and as soon as I run the project this error pops up:

Plugin 'UELlama' failed to load because module 'UELlama' could not be loaded. There may be an operating system error or the module may not be properly set up.

From LOG:

[2024.02.03-00.23.06:881][ 0]LogWindows: Failed to load 'H:/Unreal Projects/Personal/CodeProject53/Plugins/Llama-Unreal/Binaries/Win64/UnrealEditor-UELlama.dll' (GetLastError=1114)
[2024.02.03-00.23.06:881][ 0]LogPluginManager: Error: Plugin 'UELlama' failed to load because module 'UELlama' could not be loaded. There may be an operating system error or the module may not be properly set up.
[2024.02.03-00.23.08:994][ 0]Message dialog closed, result: Ok, title: Message, text: Plugin 'UELlama' failed to load because module 'UELlama' could not be loaded. There may be an operating system error or the module may not be properly set up.
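(Side note on that log: GetLastError=1114 is ERROR_DLL_INIT_FAILED, meaning Windows found the DLL but its initialization failed, which is commonly caused by a missing dependent DLL. One way to inspect the dependency list, from a Visual Studio developer prompt; the path is a placeholder for wherever your plugin binary lives.)

```shell
REM Lists the DLLs the plugin binary expects to find at load time.
dumpbin /dependents "Plugins\Llama-Unreal\Binaries\Win64\UnrealEditor-UELlama.dll"
```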

My specs

Windows 10 and an RTX 4080
Cuda compilation tools, release 12.2, V12.2.140
Build cuda_12.2.r12.2/compiler.33191640_0

@chris-boyce

Same issue here; I tried the same steps. Let me know if there is a fix. For now I'm going to try to implement some of the changes myself, but it would be great if this could get resolved.

@getnamo
Owner

getnamo commented Feb 3, 2024

Does the Windows CPU-only build work for anyone?

@chris-boyce

Does the Windows CPU-only build work for anyone?

Yeah, I tried it; it looks like an issue with loading the module. At the moment I'm in the process of extracting the code from the module to see if it builds then.

@getnamo
Owner

getnamo commented Feb 3, 2024

It's a static build, so it's possible it has a hidden DLL dependency that happens to be present on my system. I need to do a DLL build for CPU and CUDA and try again.

@chris-boyce

What version of CUDA are you using? I have a feeling I'm not on the right version.

@chris-boyce

What version of CUDA are you using? I have a feeling I'm not on the right version.

I've got to go to work now :) night shift. I will try the CPU one tomorrow ASAP. The best error I've gotten so far is:
TL;DR: I don't think I'm getting a .lib file from somewhere.
C:\Users\skyog\Documents\GitHub\ToTheMoonReLlama\Binaries\Win64\UnrealEditor-ToTheMoon.patch_1.lib and object C:\Users\skyog\Documents\GitHub\ToTheMoonReLlama\Binaries\Win64\UnrealEditor-ToTheMoon.patch_1.exp
llama.lib(ggml.obj) : error LNK2019: unresolved external symbol __imp_strdup referenced in function gguf_add_tensor
C:\Users\skyog\Documents\GitHub\ToTheMoonReLlama\Binaries\Win64\UnrealEditor-ToTheMoon.patch_1.exe : fatal error LNK1120: 1 unresolved externals.
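(A guess at that LNK2019: an unresolved __imp_strdup usually means the static lib and the consuming project were built against different MSVC C runtimes, since strdup only resolves as a DLL import under one of them. If that's the cause here, rebuilding llama.cpp pinned to a matching runtime is worth trying; the flag below is CMake's standard runtime selector from policy CMP0091.)

```shell
REM From the llama.cpp build directory. Pins the static (/MT) runtime;
REM drop the flag or use "MultiThreadedDLL" for the /MD runtime instead.
cmake .. -DCMAKE_MSVC_RUNTIME_LIBRARY="MultiThreaded$<$<CONFIG:Debug>:Debug>"
cmake --build . --config Release
```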

@chris-boyce

Update: Got the CPU version working with a little workaround :). I took it out of the module into the main source files. @oivio @aleistor222 would you like the version I've modified? Going to do the same with the GPU one, most likely tomorrow evening; I'll let you know if I can get it to work.

@getnamo getnamo added the bug Something isn't working label Feb 4, 2024
@getnamo
Owner

getnamo commented Feb 5, 2024

Doing a bit of a refactor before I make another build. Will test that one on other PCs to debug what's failing on startup.

@chris-boyce

Did a little bit of poking around. Found that the module loading issue is coming from common.cpp and identified some of the functions that are causing it. Linked a video of me showing the functions that are causing the issues.

https://www.youtube.com/watch?v=v1Mr1am2Zp8

These are the ones I found immediately, but there could be more. I haven't a clue why it's happening, but I hope it helps in some way.
llama_tokenize
llama_token_to_piece
llama_detokenize_spm
llama_detokenize_bpe
llama_should_add_bos_token

@getnamo
Owner

getnamo commented Feb 6, 2024

Try https://github.com/getnamo/Llama-Unreal/releases/tag/v0.4.0 with the CPU-only build to see if it works out of the box. NB: this includes the refactor, so if you made blueprints off of the old API you'll need to re-wire them.

Will have to address precise CUDA build dependencies at another date.

@chris-boyce

v0.4.0 has the same error message when launching

@oivio

oivio commented Feb 7, 2024

Yeah, same for me. I can confirm with this log:

[2024.02.07-16.10.49:730][  0]LogWindows: Failed to load 'H:/Unreal Projects/Llama/Plugins/Llama-Unreal/Binaries/Win64/UnrealEditor-LlamaCore.dll' (GetLastError=1114)
[2024.02.07-16.10.49:730][  0]LogPluginManager: Error: Plugin 'Llama' failed to load because module 'LlamaCore' could not be loaded.  There may be an operating system error or the module may not be properly set up.
[2024.02.07-16.11.27:950][  0]Message dialog closed, result: Ok, title: Message, text: Plugin 'Llama' failed to load because module 'LlamaCore' could not be loaded.  There may be an operating system error or the module may not be properly set up.

@chris-boyce

I'm just in the process of using GFlags to see which DLL isn't getting loaded.
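(For anyone else trying this: the relevant GFlags switch is "loader snaps", enabled per-executable as below; the resolution log for each dependent DLL then shows up in the debugger output when you launch the editor under Visual Studio or WinDbg.)

```shell
REM +sls = Show Loader Snaps. Run from an elevated prompt with the
REM Windows SDK's gflags.exe on PATH; use -sls to turn it off again.
gflags /i UnrealEditor.exe +sls
```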

@getnamo
Owner

getnamo commented Feb 7, 2024

Just to confirm: v0.4 is failing with cuda = false in build.cs for you guys?

@chris-boyce

chris-boyce commented Feb 7, 2024

Yeah, CUDA is false. Again, I think it links back to the functions now being in the common folder, though it seems to me like it's missing a Windows DLL file. My current plan is to find some time to boot it up on a second PC and check, and also use the GFlags tool with VS to see which DLL isn't getting loaded.

@chris-boyce

chris-boyce commented Feb 7, 2024

Ok, from my investigating it isn't a Windows DLL. The call stack says it's loading the module "LlamaCore.dll", which matches the logs. I have a feeling it dates from when you refactored the name to LlamaCore; as of v.2.4 it hadn't been done yet, and that's the version I can get running. I'm currently looking into how the plugin is set up.

Update: v0.2 also has the issue. I got it working by extracting the code from the plugin. So it does go back to the first version you released.

getnamo added a commit that referenced this issue Feb 13, 2024
@getnamo
Owner

getnamo commented Feb 13, 2024

Apparently building llama.cpp yourself locally can resolve this. That hints at a static lib config issue, maybe?

@getnamo
Owner

getnamo commented Feb 16, 2024

See #10 for an alternative path. Thanks to @ellllie-42's PR you can now specify a LLAMA_PATH and use your standard CUDA_PATH if you want a dev-friendly custom build env. If those fail, or if you have local CUDA libs, it will default to local paths as before.

This doesn't solve the portability problem of builds yet.
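(The env-var route from #10, sketched; both values are placeholders for your own installs.)

```shell
REM Point LLAMA_PATH at your llama.cpp checkout before regenerating
REM project files; CUDA_PATH is normally set by the CUDA installer.
set LLAMA_PATH=C:\dev\llama.cpp
set CUDA_PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2
```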

@jm18499

jm18499 commented Mar 4, 2024

I built llama.cpp using the same settings as the CUDA and CPU builds in the README and tested it on my system; it's working with CUDA and I can use GPU offloading by itself. What do I need to copy to the plugin for it to work?

I tried copying cudart.lib, cublas.lib, and cuda.lib to the CUDA folder, editing the build.cs to enable it, and replacing ggml_static.lib and llama.lib with the ones from my build/release folder, but I am still getting the same error. I tried rebuilding the solution and deleting the plugin's binaries, but that didn't work either.

I was looking at a Unity version (LLMUnity); they use llamafile (https://github.com/Mozilla-Ocho/llamafile). Would that help this plugin too?

@SlySeanDaBomb

I'm also having the same issue.

@ellllie-42

ellllie-42 commented May 22, 2024 via email

@SlySeanDaBomb

Did you use the tempfix associated with this issue?


I didn't know there was one; where/what is it?

@ellllie-42

ellllie-42 commented May 22, 2024 via email

7 participants