
v2.6.0 cannot load any models #1822

Closed
1 of 2 tasks
DamirTenishev opened this issue Jan 10, 2024 · 17 comments

@DamirTenishev

System Info

GPT4All Chat Client 2.6.0

Windows 10 21H2 OS Build 19044.1889
CPU: AMD Ryzen 9 3950X 16-Core Processor 3.50 GHz
RAM: 64 GB
GPU: NVIDIA RTX 2080 Super, 8 GB

Information

  • The official example notebooks/scripts
  • My own modified scripts

Reproduction

  1. Download the client from the website
  2. Install on Windows
  3. Run the Client
  4. Download any model (double checked that model is the same as if downloaded from browser, passes MD5 check)
  5. Get the error: could not load model due to invalid format for .gguf
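
The MD5 check in step 4 can be scripted; a minimal sketch (the model filename in the example is only illustrative):

```python
import hashlib

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 hex digest of a file, reading it in chunks
    so large model files don't have to fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Example: compare the digest against the published checksum
# print(md5_of("mistral-7b-instruct-v0.1.Q4_0.gguf"))
```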

Expected behavior

There shouldn't be an error; the model should load.

@Aenon1

Aenon1 commented Jan 10, 2024

Same as above, replying so I will be notified when a response is posted.

@mikicvi

mikicvi commented Jan 10, 2024

Same as @DamirTenishev, occurring directly after an update to v2.6.0 from the previous version.
Host type: (Macbook Pro 14 M2 Pro, 16GB memory, 10/16)

@DamirTenishev
Author

I tried removing:

  • \AppData\Roaming\nomic.ai
  • \AppData\Local\nomic.ai
    (one per step) and re-downloading.

Nothing helps.

@mikicvi

mikicvi commented Jan 10, 2024

I have just noticed in the commits that the 2.6.0 release has been reverted. After re-installing 2.5.4/2.5.5, everything runs fine.
@DamirTenishev you could try this...
Edit: you can get the release from GitHub Releases on the repo.

@DamirTenishev
Author

@mikicvi , thank you. Release 2.5.4 taken from https://github.com/nomic-ai/gpt4all/releases/ worked like a charm!
Am I correct in thinking that release 2.5.5 is not available on release repos and the only way to get it is to build it on my own?

@mikicvi

mikicvi commented Jan 10, 2024

@DamirTenishev Glad that worked for you. The release on Github is titled 2.5.4, but once the app is launched it actually says 2.5.5. I assume it's just a versioning mismatch and nothing to be worried about.
[screenshot: the app reporting version 2.5.5 after installing the 2.5.4 release]

@GithubGey

I also have the same error message

Encountered an error loading model:
"Could not load model due to invalid format for all-MiniLM-L6-v2-f16.gguf"
Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM, or an incompatible model type. Here are some suggestions for resolving the problem:
• Ensure the model file has a compatible format and type
• Check the model file is complete in the download folder
• You can find the download folder in the settings dialog
• If you've sideloaded the model ensure the file is not corrupt by checking md5sum
• Read more about what models are supported in our documentation for the gui
• Check out our discord channel for help
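
For the "invalid format" case specifically, one quick sanity check is to read the file header: per the GGUF specification, a valid file starts with the 4-byte magic b"GGUF" followed by a little-endian uint32 format version. A minimal sketch:

```python
import struct

def gguf_header(path: str):
    """Return (magic, version) from a GGUF file header.

    Per the GGUF spec, a valid file starts with the 4-byte magic
    b"GGUF" followed by a little-endian uint32 format version.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            return magic, None  # wrong file type or truncated download
        (version,) = struct.unpack("<I", f.read(4))
        return magic, version

# Example:
# magic, version = gguf_header("all-MiniLM-L6-v2-f16.gguf")
```

If the magic bytes are wrong, the download is the likely culprit; if the magic is fine but the version is unexpected, the file may predate (or postdate) what the installed client's llama.cpp supports.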

@syrabo

syrabo commented Jan 11, 2024

I also have the same error message

macos 14.2.1

older Version works

@m1zar

m1zar commented Jan 11, 2024

Same as everyone else here: 2.6.0 no longer loads the gguf models (I have many). I tried complete removal and re-download within 2.6.0 and still got the same error, so something is up with 2.6.0. Going to use the link above to get back to 2.5.4 and hopefully it works.

@ThiloteE
Collaborator

News / Problem

2.6 is bugged and the devs are working on a new release, as announced in the GPT4All Discord announcements channel.

Solution:

  • For now, going back to 2.5.4 is advised.
  • You may have to remove \AppData\Roaming\nomic.ai or \AppData\Local\nomic.ai after you remove 2.6 and before you install 2.5.4, for things to work.
  • All models you downloaded within 2.6 will probably not work with version 2.5.4: for version 2.6, the underlying llama.cpp core was updated to a newer version, which required a breaking change for previously downloaded models. The download dialog has been updated to provide newer versions of the models that will work with 2.6.
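
The AppData cleanup step above can be scripted; a hedged sketch (`remove_nomic_dirs` is a hypothetical helper, not part of GPT4All, and the paths assume a default Windows install):

```python
import shutil
from pathlib import Path

def remove_nomic_dirs(home: Path) -> list:
    """Delete GPT4All's settings/cache folders under a Windows home
    directory, returning the paths that were actually removed."""
    removed = []
    for rel in ("AppData/Roaming/nomic.ai", "AppData/Local/nomic.ai"):
        p = home / rel
        if p.exists():
            shutil.rmtree(p)  # irreversible: removes settings and models
            removed.append(p)
    return removed

# Example:
# remove_nomic_dirs(Path.home())
```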

@cebtenzzre
Member

  • All models you downloaded within 2.6 will probably not work with version 2.5.4

Actually, Llama/Mistral/Orca/Hermes/Wizard/etc. haven't changed, only Falcon and MPT have.

@cebtenzzre cebtenzzre changed the title GPT4All could not load model due to invalid format for <name>.gguf v2.6.0 cannot load any models Jan 11, 2024
@cebtenzzre
Member

The release on Github is titled 2.5.4, but once the app is launched it actually says 2.5.5. I assume it's just a versioning mismatch and nothing to be worried about.

Yeah, I must have built the installer from the commit after the version number changed - the main branch was on 2.5.5 for a while. I reuploaded installers that should correctly report version 2.5.4.

@cebtenzzre cebtenzzre added the awaiting-release issue is awaiting next release label Jan 11, 2024
@teyssieuman

How can I download the installer for version 2.5.4 ?

@cebtenzzre
Member

v2.6.1 was released with a fix.

@cebtenzzre cebtenzzre removed the awaiting-release issue is awaiting next release label Jan 19, 2024
@webghostx

webghostx commented Apr 9, 2024

Is this the same error? I have the latest version 2.7.2 on Windows 10 and cannot load any model. Everything runs smoothly on Ubuntu.

[Debug] (Sat Apr 6 18:25:39 2024): deserializing chats took: 2 ms
[Warning] (Sat Apr 6 18:25:54 2024): ERROR: Could not load model due to invalid format for Nous-Hermes-2-Mistral-7B-DPO.Q4_0.gguf id "1e85b6e0-2e26-4055-ad15-c68bd95f5c71"

[Warning] (Sat Apr 6 18:25:56 2024): ERROR: Could not load model due to invalid format for mistral-7b-instruct-v0.1.Q4_0.gguf id "21b66e54-e318-4653-a533-2f016a4ec5a9"

[Warning] (Sat Apr 6 18:25:57 2024): ERROR: Could not load model due to invalid format for gpt4all-falcon-newbpe-q4_0.gguf id "53455340-589f-44ae-8411-b5a3b566ec51"

[Warning] (Sat Apr 6 18:25:58 2024): ERROR: Could not load model due to invalid format for wizardlm-13b-v1.2.Q4_0.gguf id "651900ae-f896-47af-bfc6-ee1d1c717e69"

[Warning] (Sat Apr 6 18:26:00 2024): ERROR: Could not load model due to invalid format for mpt-7b-chat.gguf4.Q4_0.gguf id "7c0dd5c0-aa28-44d3-a90b-f04b43ec3b2c"

[Warning] (Sat Apr 6 18:26:02 2024): ERROR: Could not load model due to invalid format for orca-mini-3b-gguf2-q4_0.gguf id "917cc57c-4d1f-445c-9bb7-65c1f5f265ef"

[Warning] (Sat Apr 6 18:26:04 2024): ERROR: Could not load model due to invalid format for em_german_mistral_v01.Q4_0.gguf id "fd56fdfe-ea7e-4e3c-925c-ff2a904fb966"

[Warning] (Sat Apr 6 18:26:05 2024): ERROR: Could not load model due to invalid format for kafkalm-7b-german-v0.1.Q4_0.gguf id "07141e0a-855f-45a2-aea4-19ac092e45eb"

@cebtenzzre
Member

I have the latest version 2.7.2 on Windows 10 and cannot load any model.

What CPU does your Windows machine have? If it's old or a low-end CPU (e.g. Pentium) then it may not support AVX instructions, which caused silent issues on Windows until #2141, which is not in a released version yet but causes an error to be clearly displayed. The tracking issue for these CPUs is #1540.

Otherwise, you should open a new issue. Btw, are you using WSL?
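
One way to check for AVX support without third-party tools is to parse /proc/cpuinfo; a minimal sketch, Linux-only by assumption (on Windows, a CPUID-based utility such as CPU-Z would be needed instead):

```python
def cpu_has_avx():
    """Check the CPU flags line in /proc/cpuinfo for AVX support.

    Returns True/False on Linux, or None if /proc/cpuinfo is not
    available (e.g. on Windows or macOS).
    """
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return "avx" in line.split()
    except OSError:
        return None  # not a Linux system
    return False

# Example:
# print("AVX supported:", cpu_has_avx())
```

For the i7-860 mentioned below this would report False: it is a Nehalem-era CPU, which predates AVX.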

@webghostx

Okay. After 12 years, I should perhaps buy a new machine. It's an i7-860 2.80GHz. I was hoping it would run a bit faster with the GeForce GTX 1050 Ti than on my Acer Swift Notebook. There it runs on Ubuntu. No WSL.
