This repository has been archived by the owner on Jun 24, 2023. It is now read-only.

Getting AttributeError: 'Llama' object has no attribute 'ctx' on gpt4all-ui_api_1 #4

Closed
guysoft opened this issue Apr 23, 2023 · 5 comments

Comments

@guysoft

guysoft commented Apr 23, 2023

Followed the install instructions and placed a model, but got this:

docker logs gpt4all-ui_api_1 --follow
INFO:     Will watch for changes in these directories: ['/app']
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO:     Started reloader process [1] using StatReload
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/local/lib/python3.11/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.11/site-packages/uvicorn/_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 59, in run
    return asyncio.run(self.serve(sockets=sockets))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 66, in serve
    config.load()
  File "/usr/local/lib/python3.11/site-packages/uvicorn/config.py", line 471, in load
    self.loaded_app = import_from_string(self.app)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/app/fastapi_server.py", line 50, in <module>
    llama = llama_cpp.Llama(
            ^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/llama_cpp/llama.py", line 109, in __init__
    raise ValueError(f"Model path does not exist: {model_path}")
ValueError: Model path does not exist: /models/7B/gpt4all-lora-quantized.ggml
Exception ignored in: <function Llama.__del__ at 0x7fb1cbd7b560>
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/llama_cpp/llama.py", line 804, in __del__
    if self.ctx is not None:
       ^^^^^^^^
AttributeError: 'Llama' object has no attribute 'ctx'

Any ideas what to do?
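(Editor's note: the trailing AttributeError in the log is a symptom, not the cause. Per the traceback, `llama_cpp.Llama.__init__` raises `ValueError: Model path does not exist` before `self.ctx` is ever assigned, so when Python tears down the half-built object, `__del__` touches the missing `ctx` attribute. A minimal sketch of that pattern, with a `getattr` guard that would silence the secondary error — this is illustrative, not the real llama_cpp source:)

```python
import os

class Llama:
    """Minimal sketch of the init/teardown pattern shown in the traceback."""

    def __init__(self, model_path):
        # Raises before self.ctx is ever assigned, as in llama.py line 109...
        if not os.path.exists(model_path):
            raise ValueError(f"Model path does not exist: {model_path}")
        self.ctx = object()  # stands in for the real llama context

    def __del__(self):
        # ...so a bare `if self.ctx is not None:` here trips over the
        # missing attribute during cleanup. A getattr guard avoids that:
        if getattr(self, "ctx", None) is not None:
            pass  # free the context here

try:
    Llama("/models/7B/gpt4all-lora-quantized.ggml")
except ValueError as e:
    print(e)  # the real error: the model file is not where the server expects it
```

The fix for the user, then, is the model path, not the `AttributeError`.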

@jmtatsch

Double check your model path.
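(Editor's note: a quick way to verify is to check, from inside the container, whether the exact path from the error message exists. A minimal sketch — the path is the one from the log above, and the service name you pass to `docker compose exec` depends on your docker-compose.yml:)

```shell
# Path taken from the error message in the log above.
MODEL_PATH="/models/7B/gpt4all-lora-quantized.ggml"
if [ -f "$MODEL_PATH" ]; then
  echo "model found"
else
  echo "missing: $MODEL_PATH"
fi
```

Run this inside the api container (e.g. via `docker compose exec <service> sh`) to see what the server actually sees, and compare against `ls -l models/` on the host.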

@guysoft
Author

guysoft commented Apr 23, 2023

Anything else I should check?

/tmp/gpt4all-ui$ ls -l models/
total 4114132
-rwxrwxrwx 1 guy guy 4212864640 Apr 23 14:26 gpt4all-lora-unfiltered-quantized.new.bin

@guysoft
Author

guysoft commented Apr 23, 2023

Ok, found the issue: I had to update this line:
https://github.com/mkellerman/gpt4all-ui/blob/main/docker-compose.yml#L21

Perhaps it would be a good idea to add something like this to the README: "Set the docker-compose.yml variable to the correct path of the model you want to use"?
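(Editor's note: for readers hitting the same mismatch, the shape of the fix is to make the model path in docker-compose.yml agree with the file actually present in `models/`. A hypothetical fragment — the real file's service name, keys, and mechanism for passing the path may differ; the filename is the one from the `ls -l models/` output above:)

```yaml
# Hypothetical docker-compose.yml fragment; check the real file's keys.
services:
  api:
    volumes:
      - ./models:/models   # host models/ directory mounted into the container
    environment:
      # MODEL_PATH is illustrative; must match the file name in ./models
      - MODEL_PATH=/models/gpt4all-lora-unfiltered-quantized.new.bin
```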

@nutmilk10

@guysoft what was the path that you updated the yml file with?

gpt4all-ui-chatgpt-1 | ready - started server on 0.0.0.0:3000, url: http://localhost:3000
gpt4all-ui-api-1 | error loading model: unexpectedly reached end of file
gpt4all-ui-api-1 | llama_init_from_file: failed to load model

@mkellerman
Owner

Sorry guys, I’m not supporting this. Please check out:
https://github.com/go-skynet/LocalAI/tree/master/examples/chatbot-ui
