fix non-AVX CPU detection #2141

Merged
merged 9 commits from fix-noavx-error into main
Mar 19, 2024

Conversation

cebtenzzre (Member) commented Mar 18, 2024

Key Changes

  • Fix a missing defined(_M_X64) check that caused the "you don't have AVX" screen to not appear on Windows (see the sketch after this list)
  • Refactor CPU feature detection so this kind of mistake can't happen again
  • Improve how we report this condition to the bindings
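
As a minimal sketch of why the missing check matters (illustrative only, not the actual gpt4all source; the function name cpuSupportsAvx is made up): MSVC defines _M_X64 rather than __x86_64__ on 64-bit Windows, so an x86-64 guard written only in terms of __x86_64__ silently compiles the AVX check away under MSVC.

// Minimal sketch of cpuid-based AVX detection; not the gpt4all source.
#include <cstdio>

#if defined(__x86_64__) || defined(_M_X64)  // the defined(_M_X64) half is what was missing
#  ifdef _MSC_VER
#    include <intrin.h>
#  else
#    include <cpuid.h>
#  endif

static bool cpuSupportsAvx() {
    // CPUID leaf 1 reports AVX support in bit 28 of ECX.
#  ifdef _MSC_VER
    int regs[4];
    __cpuid(regs, 1);
    return (regs[2] >> 28) & 1;
#  else
    unsigned eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return false;
    return (ecx >> 28) & 1;
#  endif
}
#else
// Non-x86 targets have no AVX to detect; treat the requirement as satisfied.
static bool cpuSupportsAvx() { return true; }
#endif

int main() {
    std::printf("AVX supported: %s\n", cpuSupportsAvx() ? "yes" : "no");
}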

Chat

This message can now be shown on Windows. The screenshot below is only an example; I haven't actually tested this on a Windows machine without AVX:

(Screenshot: the unsupported-CPU message, taken 2024-03-18)

Bindings

Before (the exception is generic and misleading; the actual error is hidden above the traceback):

LLModel ERROR: CPU does not support AVX
Traceback (most recent call last):
  File "/home/jared/src/own/gpt4all-scripts/test_any.py", line 18, in <module>
    main()
  File "/home/jared/src/own/gpt4all-scripts/test_any.py", line 10, in main
    x = GPT4All(model_name=path.name, model_path=path.parent, allow_download=False, device="Tesla P40")
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jared/src/forks/gpt4all/gpt4all-bindings/python/gpt4all/gpt4all.py", line 132, in __init__
    self.model = _pyllmodel.LLModel(self.config["path"], n_ctx, ngl)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jared/src/forks/gpt4all/gpt4all-bindings/python/gpt4all/_pyllmodel.py", line 188, in __init__
    raise ValueError(f"Unable to instantiate model: {'null' if s is None else s.decode()}")
ValueError: Unable to instantiate model: Model format not supported (no matching implementation found)

After:

Traceback (most recent call last):
  File "/home/jared/src/own/gpt4all-scripts/test_any.py", line 18, in <module>
    main()
  File "/home/jared/src/own/gpt4all-scripts/test_any.py", line 10, in main
    x = GPT4All(model_name=path.name, model_path=path.parent, allow_download=False, device="Tesla P40")
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jared/src/forks/gpt4all/gpt4all-bindings/python/gpt4all/gpt4all.py", line 132, in __init__
    self.model = _pyllmodel.LLModel(self.config["path"], n_ctx, ngl)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jared/src/forks/gpt4all/gpt4all-bindings/python/gpt4all/_pyllmodel.py", line 188, in __init__
    raise RuntimeError(f"Unable to instantiate model: {'null' if s is None else s.decode()}")
RuntimeError: Unable to instantiate model: CPU does not support AVX
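
The shape of that improvement, as a hedged sketch (model_create and its signature below are illustrative, not the actual llmodel C API): the loader hands the real failure reason back through an out-parameter instead of only printing it to stderr, so the bindings can raise it verbatim.

// Hedged sketch; names are illustrative, not the actual llmodel C API.
// Before: the reason went to stderr and the loader returned null, leaving
// the bindings with only the generic "no matching implementation found"
// message. After: the real cause travels back through an out-parameter.
struct ModelHandle;

extern "C" ModelHandle *model_create(const char *path, const char **error) {
    const bool have_avx = false;  // stand-in for the CPUID check sketched earlier
    if (!have_avx) {
        if (error)
            *error = "CPU does not support AVX";  // static string; caller must not free it
        return nullptr;
    }
    (void)path;  // ... actual model loading would proceed here ...
    return nullptr;
}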

9 commits, each signed off by Jared Van Bortel <jared@nomic.ai>. One commit message notes: "This allows the bindings to have a more accurate error message in this case."
cebtenzzre requested a review from manyoso on Mar 18, 2024 at 22:45
cebtenzzre linked an issue on Mar 18, 2024 that may be closed by this pull request
cebtenzzre merged commit 6994100 into main on Mar 19, 2024
6 of 19 checks passed
cebtenzzre deleted the fix-noavx-error branch on Mar 19, 2024 at 14:56
cebtenzzre mentioned this pull request on Apr 10, 2024
Development

Successfully merging this pull request may close this issue:

GPT4All 2.6.1: Could not load model due to invalid format. Checksum is OK