
cannot instantiate local gpt4all model in chat #348

Open
h3jia opened this issue Aug 17, 2023 · 6 comments
Labels
bug Something isn't working project:extensibility Extension points, routing, configuration

Comments


h3jia commented Aug 17, 2023

Hello, could you help me figure out why I cannot use the local gpt4all model? I'm using the ggml-gpt4all-l13b-snoozy language model without an embedding model, and have downloaded the model to .cache/gpt4all/ (although via a symbolic link, since I'm on a cluster with a limited home directory quota). When I type /ask hello world in the chat, it gives me the following error:

Sorry, something went wrong and I wasn't able to index that path.

Traceback (most recent call last):
  File "/home/hejia/.conda/envs/hejia@stellar-2/lib/python3.11/site-packages/jupyter_ai/chat_handlers/base.py", line 38, in process_message
    await self._process_message(message)
  File "/home/hejia/.conda/envs/hejia@stellar-2/lib/python3.11/site-packages/jupyter_ai/chat_handlers/ask.py", line 44, in _process_message
    self.get_llm_chain()
  File "/home/hejia/.conda/envs/hejia@stellar-2/lib/python3.11/site-packages/jupyter_ai/chat_handlers/base.py", line 83, in get_llm_chain
    self.create_llm_chain(lm_provider, lm_provider_params)
  File "/home/hejia/.conda/envs/hejia@stellar-2/lib/python3.11/site-packages/jupyter_ai/chat_handlers/ask.py", line 29, in create_llm_chain
    self.llm = provider(**provider_params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hejia/.conda/envs/hejia@stellar-2/lib/python3.11/site-packages/jupyter_ai_magics/providers.py", line 270, in __init__
    super().__init__(**kwargs)
  File "/home/hejia/.conda/envs/hejia@stellar-2/lib/python3.11/site-packages/jupyter_ai_magics/providers.py", line 178, in __init__
    super().__init__(*args, **kwargs, **model_kwargs)
  File "/home/hejia/.conda/envs/hejia@stellar-2/lib/python3.11/site-packages/langchain/load/serializable.py", line 74, in __init__
    super().__init__(**kwargs)
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for GPT4AllProvider
__root__
  Unable to instantiate model (type=value_error)

Below is some system info FYI:

(hejia@stellar-2) [hejia@stellar-intel ~]$ ll -a .cache/gpt4all/
total 0
drwxr-xr-x. 2 hejia astro  50 Aug 17 04:12 .
drwxr-xr-x. 8 hejia astro 123 Aug 17 04:12 ..
lrwxrwxrwx. 1 hejia astro  48 Aug 17 04:12 ggml-gpt4all-l13b-snoozy.bin -> /home/hejia/tigress/ggml-gpt4all-l13b-snoozy.bin
(hejia@stellar-2) [hejia@stellar-intel ~]$ 
(hejia@stellar-2) [hejia@stellar-intel ~]$ 
(hejia@stellar-2) [hejia@stellar-intel ~]$ conda list | grep "jupyter"
jupyter-ai                2.1.0                    pypi_0    pypi
jupyter-ai-magics         2.1.0                    pypi_0    pypi
jupyter-client            8.2.0                    pypi_0    pypi
jupyter-core              5.3.0                    pypi_0    pypi
jupyter-events            0.6.3                    pypi_0    pypi
jupyter-lsp               2.1.0                    pypi_0    pypi
jupyter-server            2.6.0                    pypi_0    pypi
jupyter-server-terminals  0.4.4                    pypi_0    pypi
jupyterlab                4.0.5                    pypi_0    pypi
jupyterlab-pygments       0.2.2                    pypi_0    pypi
jupyterlab-server         2.22.1                   pypi_0    pypi
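Since the model file is reached through a symbolic link, one quick sanity check (a hypothetical snippet, not part of jupyter-ai) is to confirm that the link resolves to a readable regular file before the provider tries to load it:

```python
from pathlib import Path

def model_file_ok(path: str) -> bool:
    """Return True if `path` (possibly a symlink) resolves to an existing regular file."""
    p = Path(path).expanduser()
    try:
        real = p.resolve(strict=True)  # follows symlinks; raises if the target is missing
    except (OSError, RuntimeError):
        return False
    return real.is_file()

# Hypothetical check against the cache location shown above
print(model_file_ok("~/.cache/gpt4all/ggml-gpt4all-l13b-snoozy.bin"))
```

If this returns False, the symlink target is the problem rather than jupyter-ai itself.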
@h3jia h3jia added the bug Something isn't working label Aug 17, 2023

welcome bot commented Aug 17, 2023

Thank you for opening your first issue in this project! Engagement like this is essential for open source projects! 🤗

If you haven't done so already, check out Jupyter's Code of Conduct. Also, please try to follow the issue template, as it helps other community members to contribute more effectively.
You can meet the other Jovyans by joining our Discourse forum. There is also an intro thread there where you can stop by and say Hi! 👋

Welcome to the Jupyter community! 🎉

@JasonWeill
Collaborator

The generic error message is itself a known issue: #238


mtekman commented Sep 18, 2023

Related: zylon-ai/private-gpt#691

I'm able to get past the error by downgrading:

pip uninstall gpt4all ## this was version 1.0.12
pip install gpt4all==0.3.4
## then restart jupyterlab

But then I encounter new issues with the error message:

LLModel.prompt_model() got an unexpected keyword argument 'max_tokens'

which sounds like jupyter-ai is targeting a newer privateGPT API that isn't present in version 0.3.4.

I will keep hunting. I'm sure there's a version of gpt4all on which this was all tested/developed and worked.

Edit: Found it, use v1 API not v0.3

Do:

pip uninstall gpt4all ## this was version 1.0.12
pip install gpt4all==1.0.0

Edit: Hmm, v1.0.0 appears not to handle /generate very well.

Testing versions:

| gpt4all | Text works? | Generate works? | Error |
| ------- | ----------- | --------------- | ----- |
| 1.0.12 | No | No | Invalid model |
| 1.0.11 | No | No | Invalid model |
| 1.0.10 | No | No | Invalid model |
| 1.0.9 | No | No | Invalid model |
| 1.0.8 | Yes | Kinda | wasn't able to index that path |
| 1.0.0 | Yes | Kinda | wasn't able to index that path |

So gpt4all==1.0.8 is the latest that works for text, and "kinda" works with /generate in the sense that it sends the request, but I think Jupyter is unable to decode the response.
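Given the table above, a guard against known-bad gpt4all versions could be added at startup. A minimal sketch (a hypothetical helper, not part of jupyter-ai), assuming plain `X.Y.Z` version strings:

```python
def version_tuple(v: str) -> tuple:
    """Parse a plain 'X.Y.Z' version string into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def gpt4all_version_ok(installed: str, latest_known_good: str = "1.0.8") -> bool:
    """True if the installed gpt4all version is at or below the last version
    observed in this thread to load local models successfully."""
    return version_tuple(installed) <= version_tuple(latest_known_good)

print(gpt4all_version_ok("1.0.8"))   # True: matches the known-good pin
print(gpt4all_version_ok("1.0.12"))  # False: fails with "Invalid model" above
```

The installed version string could come from `importlib.metadata.version("gpt4all")` on Python 3.8+.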

Edit:

A better solution that I've seen is to use a locally hosted GPU model: #389 (comment)

@Sajalj98

Is this bug resolved? Or is any other workaround available?


mtekman commented Sep 25, 2023

@Sajalj98 Not really. /generate and /learn don't seem to work with the 1.0.8 version that I tried. Another way is to use a different LLM that exposes an API similar to OpenAI's. See my #389 comment for more details.
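For the OpenAI-compatible route, local servers typically accept the standard chat-completions request body. A hypothetical sketch of building that payload (the model name here is a placeholder, and many local servers ignore or remap it):

```python
import json

def build_chat_payload(model: str, prompt: str, max_tokens: int = 256) -> str:
    """Build an OpenAI-style /v1/chat/completions JSON body for a local server."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(body)

# Placeholder model name; POST this body to the local server's chat endpoint.
payload = build_chat_payload("local-model", "hello world")
```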


Sajalj98 commented Sep 26, 2023

I tried using gpt4all 1.0.8 and 1.0.0, and I am still getting the same error ("wasn't able to index that path") in the chat interface:
[screenshot]

When using the AI magic command, the response is the following:
[screenshot]

And for another model the output is empty:
[screenshot]

#209 (comment)

4 participants