This repository has been archived by the owner on Jun 24, 2023. It is now read-only.
```
$ docker logs gpt4all-ui_api_1 --follow
INFO:     Will watch for changes in these directories: ['/app']
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO:     Started reloader process [1] using StatReload
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/local/lib/python3.11/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.11/site-packages/uvicorn/_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 59, in run
    return asyncio.run(self.serve(sockets=sockets))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 66, in serve
    config.load()
  File "/usr/local/lib/python3.11/site-packages/uvicorn/config.py", line 471, in load
    self.loaded_app = import_from_string(self.app)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/app/fastapi_server.py", line 50, in <module>
    llama = llama_cpp.Llama(
            ^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/llama_cpp/llama.py", line 109, in __init__
    raise ValueError(f"Model path does not exist: {model_path}")
ValueError: Model path does not exist: /models/7B/gpt4all-lora-quantized.ggml
Exception ignored in: <function Llama.__del__ at 0x7fb1cbd7b560>
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/llama_cpp/llama.py", line 804, in __del__
    if self.ctx is not None:
       ^^^^^^^^
AttributeError: 'Llama' object has no attribute 'ctx'
```
Any ideas what to do?
Perhaps it would be a good idea to add a note to that section of the README: "Set the docker-compose.yml variable to the correct path of the model you want to use"?
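For reference, a hypothetical sketch of what that mapping could look like in docker-compose.yml. The service name, volume layout, and `MODEL` variable below are illustrative guesses, not taken from this repo's actual compose file; the point is that the path inside the container must match a file that really exists:

```yaml
services:
  api:
    # Mount the host directory that actually contains the model file
    # into the path the server expects (paths are illustrative).
    volumes:
      - ./models:/models
    environment:
      # Must point at a file that exists inside the container.
      - MODEL=/models/7B/gpt4all-lora-quantized.ggml
```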
@guysoft what was the path that you updated the yml file with?
Followed the install, placed a model, and got this:

```
gpt4all-ui-chatgpt-1 | ready - started server on 0.0.0.0:3000, url: http://localhost:3000
gpt4all-ui-api-1 | error loading model: unexpectedly reached end of file
gpt4all-ui-api-1 | llama_init_from_file: failed to load model
```

Any ideas what to do?
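"unexpectedly reached end of file" usually means the model file on disk is incomplete (a truncated download) or in a format the loader cannot read. A rough sanity check before digging further, assuming the model lives at the path below; the path and the ~1 GB size threshold are guesses for a 7B quantized model, adjust them for your setup:

```shell
# Hypothetical sanity check for a downloaded ggml model file.
MODEL=./models/7B/gpt4all-lora-quantized.ggml   # adjust to your path

if [ ! -f "$MODEL" ]; then
  echo "missing: $MODEL"
else
  # stat -c works on GNU/Linux, stat -f on macOS/BSD.
  size=$(stat -c%s "$MODEL" 2>/dev/null || stat -f%z "$MODEL")
  if [ "$size" -lt 1000000000 ]; then
    echo "only $size bytes; the download may be truncated - re-download the model"
  else
    echo "size looks plausible; if loading still fails, the file may need re-converting"
  fi
fi
```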