
Crash when trying to load Model #84

Closed
Nilox42 opened this issue Jul 18, 2023 · 12 comments

Comments

@Nilox42

Nilox42 commented Jul 18, 2023

I run my instance on a brand-new Windows VM with 64 GB of RAM and 22 vCPU cores, but when I try to load a model, the application crashes after a few seconds. I couldn't find any logs, so I don't know what to do.

@louisgv
Owner

louisgv commented Jul 18, 2023

@Nilox42 which version did you download, btw? There's a bit of a GPU refactor going on in #62. Also, which model did you try?

@Nilox42
Author

Nilox42 commented Jul 18, 2023

I reinstalled everything down to the hypervisor, and now the app doesn't start at all.

I tried several different models.

I don't use a GPU.

@LLukas22
Collaborator

Which CPU architecture does your Windows VM have, and does it support AVX2? Currently the main Windows target for rustformers is an x86-64 machine with AVX2 support.
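
For reference, here is a minimal Rust sketch (purely illustrative, not code from local.ai or rustformers) showing how AVX2 support can be checked at runtime with the standard library; it's a quick way to confirm whether a VM's virtual CPU actually exposes the feature:

```rust
// Minimal sketch: runtime AVX2 detection on x86/x86_64 using the
// standard library's feature-detection macro.
fn main() {
    #[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
    {
        if std::arch::is_x86_feature_detected!("avx2") {
            println!("AVX2: supported");
        } else {
            // An AVX2-only binary would die with an illegal-instruction
            // error on this CPU, which matches a silent crash at startup.
            println!("AVX2: NOT supported");
        }
    }
    #[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))]
    println!("Not an x86 CPU");
}
```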

@Nilox42
Author

Nilox42 commented Jul 19, 2023

I used the host CPU type, which is a Xeon E5-2640, and also tried x86-64-v2-AES. The Xeon does not support AVX2, so that's something I will look into.

@navr32

navr32 commented Aug 15, 2023

Hello! It crashes when I try to run inference, or when I click load. I have 2x X5675 processors (24 threads) and 96 GB of RAM, so no AVX at all. Could you make a build that can run without AVX, like llama.cpp and others (Stable Diffusion, InvokeAI, and so on) do? Many thanks, because the project looks very, very good.
Have a nice day.

@LLukas22
Collaborator

If you compile the app on a device without AVX support, it should automatically be excluded from the build.
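
A rough sketch of how that mechanism works in Rust, assuming the crate selects code paths via the `target_feature` cfg (the `dot` function here is hypothetical, not local.ai code): when the compiler isn't told the target supports AVX2 (for example with `-C target-cpu=native` on a non-AVX2 host), only the portable branch ends up in the binary.

```rust
// Sketch: compile-time gating on AVX2. `cfg!(target_feature = "avx2")`
// is resolved when the crate is compiled, so building on a host
// without AVX2 keeps only the portable path.
fn dot(a: &[f32], b: &[f32]) -> f32 {
    if cfg!(target_feature = "avx2") {
        // A real build would dispatch to an AVX2 kernel here.
        a.iter().zip(b).map(|(x, y)| x * y).sum()
    } else {
        // Portable scalar fallback, safe on any x86-64 CPU.
        a.iter().zip(b).map(|(x, y)| x * y).sum()
    }
}

fn main() {
    println!("AVX2 compiled in: {}", cfg!(target_feature = "avx2"));
    println!("dot = {}", dot(&[1.0, 2.0], &[3.0, 4.0]));
}
```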

@navr32

navr32 commented Aug 17, 2023

OK! I gave compiling a try, but on Manjaro Linux I had to hunt for the right packages, and after some time the window came up but stayed white.

[Screenshot from 2023-08-18 01-46-05]

The interface doesn't display at all, but at http://localhost:3047/ the local.ai web UI comes up, and on port 1407 I see the error message:

[Screenshots from 2023-08-18: 01-40-12, 01-42-34, 01-43-04, 01-43-38]

I then tried on another, older computer (a 4-thread processor, 8 GB of RAM, a fresh Manjaro install). Again there were some package problems to sort out, but the app came to life, and with the Dolly 2 model I can chat; it is very slow, but it works. So why the white window on the first system?

I thought the problem came from Tauri, so I tested the Tauri demo app and the Tauri + Next.js template; both of them render fine in their windows. So it's a curious problem.

I searched for known tricks for Tauri apps showing a white screen, but have found nothing so far.

Thanks!

@navr32

navr32 commented Aug 19, 2023

I cleaned everything and tried building again, but hit the same problem. Curiously, the window now comes up grey with nothing drawn in it. If I resize it several times, the labels, text, and buttons come into view, but some buttons are still not drawn: in the model view there is no delete button or cloud icon on the left. And when I move between the thread view and the model manager, I have to resize the window to make the interface draw again. Does anyone have advice? A Wayland problem? GTK? My AMD RX 580 graphics card? Node.js? Thanks again. Inference is now working on this system, but the interface drawing is "lazy".
I also just noticed that when local.ai is started, a WebKitWebProcess process runs at 107% of the core it sits on...

After some more searching I tried a release build (pnpm build) and then launched ./apps/desktop/src-tauri/target/release/local-ai. It now starts faster and the UI is fully drawn, and when I hover the mouse over a model the trash and cloud icons display. But WebKitWebProcess is still at ~106%...
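
One mitigation often suggested for blank or partially drawn Tauri windows on Linux/WebKitGTK (an assumption here, not a confirmed fix for this particular setup) is to disable WebKit's compositing mode before the webview is created. A minimal Tauri v1 main.rs sketch:

```rust
// Sketch of a commonly suggested workaround for white/undrawn
// WebKitGTK windows in Tauri apps on Linux. Whether it helps with
// the RX 580 / Wayland rendering issue described above is untested.
fn main() {
    #[cfg(target_os = "linux")]
    std::env::set_var("WEBKIT_DISABLE_COMPOSITING_MODE", "1");

    tauri::Builder::default()
        .run(tauri::generate_context!())
        .expect("error while running tauri application");
}
```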

@navr32

navr32 commented Aug 19, 2023

I just tried to trace the calls with ltrace, and I get:

% time     seconds  usecs/call     calls  function
------ ----------- ----------- ---------  -------------
 66.93  320.562901        2938    109106  SYS_getegid32
 22.55  107.991411        2577     41900  SYS_waitpid
66% of the time in getegid32, and 22% in waitpid? I need to investigate more...

@louisgv
Owner

louisgv commented Aug 19, 2023

Hmm... it compiles locally but doesn't work when bundled? Which distro are you running, btw? You might need to follow the Linux setup guide: https://tauri.app/v1/guides/getting-started/prerequisites#setting-up-linux

@navr32

navr32 commented Aug 20, 2023

No, no! It compiles locally; both the dev and bundled builds work. But I should open another issue, because the problem is a different one now:
it uses too much CPU in WebKitWebProcess...

@navr32

navr32 commented Aug 20, 2023

I think this issue can be closed. If I build local.ai with pnpm dev or pnpm build, it starts, but other problems appear: a blank window, although resizing the window forces a refresh and the text and buttons come into view, so it is possible to use, just hard to use. Inference works too, on different models. So perhaps the problem is in Tauri, but when I build other Tauri + Next.js projects the app works and there is no WebKitWebProcess problem, so the issue seems to come only from local.ai's Tauri app.

@louisgv louisgv closed this as completed Sep 21, 2023