local build error: ValueError: Unable to instantiate model: Model format not supported (no matching implementation found) #2133

Open
pxysea opened this issue Mar 15, 2024 · 0 comments
Labels
bindings gpt4all-binding issues bug-unconfirmed python-bindings gpt4all-bindings Python specific issues

pxysea commented Mar 15, 2024

Error info:

```
Traceback (most recent call last):
  File "D:\WorkProject\gpt4all\gpt4all\test.py", line 73, in <module>
    model = GPT4All(model_name=model_name, model_path=model_path)
  File "d:\workproject\gpt4all\gpt4all\gpt4all-bindings\python\gpt4all\gpt4all.py", line 137, in __init__
    self.model = _pyllmodel.LLModel(self.config["path"], n_ctx, ngl)
  File "d:\workproject\gpt4all\gpt4all\gpt4all-bindings\python\gpt4all\_pyllmodel.py", line 191, in __init__
    raise ValueError(f"Unable to instantiate model: {'null' if s is None else s.decode()}")
ValueError: Unable to instantiate model: Model format not supported (no matching implementation found)
```

I tried to build gpt4all-backend, but the build fails with errors. `cmake -B build` output:

```
cmake -B build
-- Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.22621.
-- Interprocedural optimization support detected
-- Kompute found
-- General purpose GPU compute framework built on Vulkan
-- =======================================================
-- KOMPUTE_OPT_LOG_LEVEL: Critical
-- KOMPUTE_OPT_USE_SPDLOG: OFF
-- KOMPUTE_OPT_DISABLE_VK_DEBUG_LAYERS: ON
-- KOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK: ON
-- KOMPUTE_OPT_BUILD_SHADERS: OFF
-- KOMPUTE_OPT_USE_BUILT_IN_SPDLOG: ON
-- KOMPUTE_OPT_SPDLOG_ASYNC_MODE: OFF
-- KOMPUTE_OPT_USE_BUILT_IN_FMT: ON
-- KOMPUTE_OPT_USE_BUILT_IN_VULKAN_HEADER: ON
-- KOMPUTE_OPT_BUILT_IN_VULKAN_HEADER_TAG: v1.3.231
-- =======================================================
-- Version: 10.0.0
-- Build type:
-- Using log level Critical
-- shaderop_scale.h generating SHADEROP_SCALE_H
-- shaderop_scale_8.h generating SHADEROP_SCALE_8_H
-- shaderop_add.h generating SHADEROP_ADD_H
-- shaderop_addrow.h generating SHADEROP_ADDROW_H
-- shaderop_mul.h generating SHADEROP_MUL_H
-- shaderop_silu.h generating SHADEROP_SILU_H
-- shaderop_relu.h generating SHADEROP_RELU_H
-- shaderop_gelu.h generating SHADEROP_GELU_H
-- shaderop_softmax.h generating SHADEROP_SOFTMAX_H
-- shaderop_norm.h generating SHADEROP_NORM_H
-- shaderop_rmsnorm.h generating SHADEROP_RMSNORM_H
-- shaderop_diagmask.h generating SHADEROP_DIAGMASK_H
-- shaderop_mul_mat_mat_f32.h generating SHADEROP_MUL_MAT_MAT_F32_H
-- shaderop_mul_mat_f16.h generating SHADEROP_MUL_MAT_F16_H
-- shaderop_mul_mat_q8_0.h generating SHADEROP_MUL_MAT_Q8_0_H
-- shaderop_mul_mat_q4_0.h generating SHADEROP_MUL_MAT_Q4_0_H
-- shaderop_mul_mat_q4_1.h generating SHADEROP_MUL_MAT_Q4_1_H
-- shaderop_mul_mat_q6_k.h generating SHADEROP_MUL_MAT_Q6_K_H
-- shaderop_getrows_f16.h generating SHADEROP_GETROWS_F16_H
-- shaderop_getrows_q4_0.h generating SHADEROP_GETROWS_Q4_0_H
-- shaderop_getrows_q4_1.h generating SHADEROP_GETROWS_Q4_1_H
-- shaderop_getrows_q6_k.h generating SHADEROP_GETROWS_Q6_K_H
-- shaderop_rope_f16.h generating SHADEROP_ROPE_F16_H
-- shaderop_rope_f32.h generating SHADEROP_ROPE_F32_H
-- shaderop_cpy_f16_f16.h generating SHADEROP_CPY_F16_F16_H
-- shaderop_cpy_f16_f32.h generating SHADEROP_CPY_F16_F32_H
-- shaderop_cpy_f32_f16.h generating SHADEROP_CPY_F32_F16_H
-- shaderop_cpy_f32_f32.h generating SHADEROP_CPY_F32_F32_H
-- CMAKE_SYSTEM_PROCESSOR: AMD64
-- CMAKE_GENERATOR_PLATFORM:
-- x86 detected
-- Configuring ggml implementation target llama-mainline-default in D:/WorkProject/gpt4all/gpt4all/gpt4all-backend/llama.cpp-mainline
-- x86 detected
-- Configuring model implementation target llamamodel-mainline-default
-- Configuring model implementation target gptj-default
-- Configuring ggml implementation target llama-mainline-avxonly in D:/WorkProject/gpt4all/gpt4all/gpt4all-backend/llama.cpp-mainline
-- x86 detected
-- Configuring model implementation target llamamodel-mainline-avxonly
-- Configuring model implementation target gptj-avxonly
-- Configuring done (7.0s)
-- Generating done (0.9s)
-- Build files have been written to: D:/WorkProject/gpt4all/gpt4all/gpt4all-backend/build
```

`cmake --build build --parallel --config RelWithDebInfo` output:

```
D:\WorkProject\gpt4all\gpt4all\gpt4all-backend>cmake --build build --parallel --config RelWithDebInfo

Microsoft (R) Build Engine version 17.1.0+ae57d105c for .NET Framework
Copyright (C) Microsoft Corporation. All rights reserved.

xxd.vcxproj -> D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\build\bin\RelWithDebInfo\xxd.exe
Auto build dll exports
fmt.vcxproj -> D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\build\bin\RelWithDebInfo\fmt.dll
kp_logger.vcxproj -> D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\build\llama.cpp-mainline\kompute\src\logger\RelWithDebInfo\kp_logger.lib
kompute.vcxproj -> D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\build\llama.cpp-mainline\kompute\src\RelWithDebInfo\kompute.lib
ggml-mainline-avxonly.vcxproj -> D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\build\ggml-mainline-avxonly.dir\RelWithDebInfo\ggml-mainline-avxonly.lib
ggml-mainline-default.vcxproj -> D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\build\ggml-mainline-default.dir\RelWithDebInfo\ggml-mainline-default.lib
llama-mainline-default.vcxproj -> D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\build\RelWithDebInfo\llama-mainline-default.lib
Auto build dll exports
llama-mainline-avxonly.vcxproj -> D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\build\RelWithDebInfo\llama-mainline-avxonly.lib
llmodel.vcxproj -> D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\build\bin\RelWithDebInfo\llmodel.dll
Auto build dll exports
Auto build dll exports
gptj-avxonly.vcxproj -> D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\build\bin\RelWithDebInfo\gptj-avxonly.dll
Microsoft (R) C/C++ Optimizing Compiler Version 19.31.31106.2 for x64
Copyright (C) Microsoft Corporation. All rights reserved.
llamamodel.cpp
cl /c /I"D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\build" /I"D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\llama.cpp-mainline" /Zi /W1 /WX- /diagnostics:column /O2 /Ob1 /D _WINDLL /D _MBCS /D WIN32 /D _WINDOWS /D NDEBUG /D "LLAMA_VERSIONS=>=3" /D LLAMA_DATE=999999 /D "GGML_BUILD_VARIANT="avxonly"" /D VULKAN_HPP_DISPATCH_LOADER_DYNAMIC=1 /D GGML_USE_KOMPUTE /D _CRT_SECURE_NO_WARNINGS /D _XOPEN_SOURCE=600 /D "CMAKE_INTDIR="RelWithDebInfo"" /D llamamodel_mainline_avxonly_EXPORTS /Gm- /EHsc /MD /GS /fp:precise /Zc:wchar_t /Zc:forScope /Zc:inline /GR /std:c++20 /Fo"llamamodel-mainline-avxonly.dir\RelWithDebInfo\" /Fd"llamamodel-mainline-avxonly.dir\RelWithDebInfo\vc143.pdb" /external:W1 /Gd /TP /errorReport:queue "D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\llamamodel.cpp"
gptj-default.vcxproj -> D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\build\bin\RelWithDebInfo\gp
tj-default.dll
Microsoft (R) C/C++ Optimizing Compiler Version 19.31.31106.2 for x64
llamamodel.cpp
Copyright (C) Microsoft Corporation. All rights reserved.
cl /c /I"D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\build" /I"D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\llama.cpp-mainline" /Zi /W1 /WX- /diagnostics:column /O2 /Ob1 /D _WINDLL /D _MBCS /D WIN32 /D _WINDOWS /D NDEBUG /D "LLAMA_VERSIONS=>=3" /D LLAMA_DATE=999999 /D "GGML_BUILD_VARIANT="default"" /D VULKAN_HPP_DISPATCH_LOADER_DYNAMIC=1 /D GGML_USE_KOMPUTE /D _CRT_SECURE_NO_WARNINGS /D _XOPEN_SOURCE=600 /D "CMAKE_INTDIR="RelWithDebInfo"" /D llamamodel_mainline_default_EXPORTS /Gm- /EHsc /MD /GS /fp:precise /Zc:wchar_t /Zc:forScope /Zc:inline /GR /std:c++20 /Fo"llamamodel-mainline-default.dir\RelWithDebInfo\" /Fd"llamamodel-mainline-default.dir\RelWithDebInfo\vc143.pdb" /external:W1 /Gd /TP /errorReport:queue "D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\llamamodel.cpp"
D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\llamamodel.cpp(726,39): error C2039: "inner_product": is not a member of "std" [D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\build\llamamodel-mainline-avxonly.vcxproj]
D:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\14.31.31103\include\unordered_set(24): message : see declaration of "std" [D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\build\llamamodel-mainline-avxonly.vcxproj]
D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\llamamodel.cpp(839,36): error C2039: "accumulate": is not a member of "std" [D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\build\llamamodel-mainline-avxonly.vcxproj]
D:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\14.31.31103\include\unordered_set(24): message : see declaration of "std" [D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\build\llamamodel-mainline-avxonly.vcxproj]
D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\llamamodel.cpp(857,5): error C3861: "accumulate": identifier not found [D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\build\llamamodel-mainline-avxonly.vcxproj]
D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\llamamodel.cpp(843,40): error C2039: "inner_product": is not a member of "std" [D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\build\llamamodel-mainline-avxonly.vcxproj]
D:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\14.31.31103\include\unordered_set(24): message : see declaration of "std" [D:\WorkProject\gpt4all\gpt4all\gpt4all-backend\build\llamamodel-mainline-avxonly.vcxproj]
.......
.......
```
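The C2039/C3861 errors above are characteristic of calling `std::accumulate` and `std::inner_product` without including `<numeric>`: MSVC's standard library does not pull `<numeric>` in transitively through headers like `<unordered_set>`, while libstdc++ often does, so the same source can build with GCC/Clang and fail with MSVC. A minimal sketch of the pattern (the helper names here are illustrative, not the actual code in `llamamodel.cpp`):

```cpp
// Sketch of the likely fix for error C2039 ("inner_product" is not a member
// of "std"): both algorithms are declared in <numeric>, which has to be
// included explicitly -- relying on transitive includes is non-portable.
#include <numeric>  // std::accumulate, std::inner_product
#include <vector>

// Illustrative helpers (hypothetical, not from llamamodel.cpp).
float dot(const std::vector<float>& a, const std::vector<float>& b) {
    return std::inner_product(a.begin(), a.end(), b.begin(), 0.0f);
}

float sum(const std::vector<float>& v) {
    return std::accumulate(v.begin(), v.end(), 0.0f);
}
```

If adding `#include <numeric>` near the top of `llamamodel.cpp` lets the avxonly/default targets compile, the Python-side `ValueError` should also go away, since it is raised when no successfully built backend implementation matches the model.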

Example Code

test.py:

```python
from gpt4all import GPT4All

model_name = "gpt4all-falcon-newbpe-q4_0.gguf"
model_path = r"C:\OpenAi\GPT4All\Models"

model = GPT4All(model_name=model_name, model_path=model_path)
```

Steps to Reproduce

1. `cd gpt4all\gpt4all-backend & cmake -B build & cmake --build build --parallel --config RelWithDebInfo`
2. `cd ..\gpt4all-bindings\python & pip install -e .`
3. `python test.py`

Expected Behavior

Your Environment

Output of `pip show gpt4all`:

```
Name: gpt4all
Version: 2.3.0
Summary: Python bindings for GPT4All
Home-page: https://gpt4all.io/
Author: Nomic and the Open Source Community
Author-email: support@nomic.ai
License:
Location: d:\workproject\gpt4all\gpt4all\gpt4all-bindings\python
Editable project location: d:\workproject\gpt4all\gpt4all\gpt4all-bindings\python
Requires: requests, tqdm
Required-by:
```

  • Bindings version (e.g. "Version" from pip show gpt4all): 2.3.0
  • Operating System: Windows
  • Chat model used (if applicable): gpt4all-falcon-newbpe-q4_0.gguf
@pxysea pxysea added bindings gpt4all-binding issues bug-unconfirmed labels Mar 15, 2024
@cebtenzzre cebtenzzre added the python-bindings gpt4all-bindings Python specific issues label Jul 5, 2024