
RuntimeError: There is no current event loop in thread 'AnyIO worker thread'. #8

fleabites opened this issue Sep 24, 2023 · 1 comment


What I've done:

  1. conda create -n gptq python=3.9 -y
  2. conda activate gptq
  3. conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
  4. git clone https://github.com/oobabooga/GPTQ-for-LLaMa.git
  5. cd GPTQ-for-LLaMa
  6. pip install -r requirements.txt
  7. python setup_cuda.py install
  8. pip install guidance
  9. pip install langchain
  10. pip install gradio
  11. python app.py

What I see:

The server starts, but when I submit a question I get the following runtime error:

```
(gptq) david@shodan:~/Documents/Programming/Personal/llm/langchain/localLLM_guidance-main$ python app.py
start to install package: redis
successfully installed package: redis
start to install package: redis_om
successfully installed package: redis_om
Loading model ...
/home/david/miniconda3/envs/gptq/lib/python3.9/site-packages/safetensors/torch.py:99: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
with safe_open(filename, framework="pt", device=device) as f:
Done.
Running on local URL: http://0.0.0.0:7860

To create a public link, set share=True in launch().
Traceback (most recent call last):
File "/home/david/miniconda3/envs/gptq/lib/python3.9/site-packages/gradio/routes.py", line 516, in predict
output = await route_utils.call_process_api(
File "/home/david/miniconda3/envs/gptq/lib/python3.9/site-packages/gradio/route_utils.py", line 219, in call_process_api
output = await app.get_blocks().process_api(
File "/home/david/miniconda3/envs/gptq/lib/python3.9/site-packages/gradio/blocks.py", line 1437, in process_api
result = await self.call_function(
File "/home/david/miniconda3/envs/gptq/lib/python3.9/site-packages/gradio/blocks.py", line 1109, in call_function
prediction = await anyio.to_thread.run_sync(
File "/home/david/miniconda3/envs/gptq/lib/python3.9/site-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/home/david/miniconda3/envs/gptq/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/home/david/miniconda3/envs/gptq/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/home/david/miniconda3/envs/gptq/lib/python3.9/site-packages/gradio/utils.py", line 650, in wrapper
response = f(*args, **kwargs)
File "/home/david/Documents/Programming/Personal/llm/langchain/localLLM_guidance-main/app.py", line 23, in greet
final_answer = custom_agent(name)
File "/home/david/Documents/Programming/Personal/llm/langchain/localLLM_guidance-main/server/agent.py", line 71, in call
prompt_start = self.guidance(prompt_start_template)
File "/home/david/miniconda3/envs/gptq/lib/python3.9/site-packages/guidance/init.py", line 22, in call
return Program(template, llm=llm, cache_seed=cache_seed, logprobs=logprobs, silent=silent, async_mode=async_mode, stream=stream, caching=caching, await_missing=await_missing, logging=logging, **kwargs)
File "/home/david/miniconda3/envs/gptq/lib/python3.9/site-packages/guidance/_program.py", line 155, in init
self._execute_complete = asyncio.Event() # fires when the program is done executing to resolve await
File "/home/david/miniconda3/envs/gptq/lib/python3.9/asyncio/locks.py", line 177, in init
self._loop = events.get_event_loop()
File "/home/david/miniconda3/envs/gptq/lib/python3.9/asyncio/events.py", line 642, in get_event_loop
raise RuntimeError('There is no current event loop in thread %r.'
RuntimeError: There is no current event loop in thread 'AnyIO worker thread'.
```
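If I'm reading the traceback right, it bottoms out in guidance's Program.__init__ calling asyncio.Event() from the AnyIO worker thread. A minimal sketch that reproduces the same error outside Gradio (assuming Python 3.9's asyncio, where Event() binds an event loop at construction via get_event_loop()):

```python
# Minimal sketch of the failure mode in the traceback above (assumes Python 3.9,
# where asyncio.Event() calls get_event_loop() at construction time).
import asyncio
import threading

def build_event():
    # In a thread that is not the main thread and has no event loop set,
    # get_event_loop() raises:
    #   RuntimeError: There is no current event loop in thread 'worker'.
    asyncio.Event()

t = threading.Thread(target=build_event, name="worker")
t.start()
t.join()
```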

Modifications to the code:

```python
os.environ["SERPER_API_KEY"] = '<my SERPER_API_KEY>'
MODEL_PATH = '/home/david/ai/text-generation-webui/models/TheBloke_Wizard-Vicuna-7B-Uncensored-GPTQ/'
CHECKPOINT_PATH = '/home/david/ai/text-generation-webui/models/TheBloke_Wizard-Vicuna-7B-Uncensored-GPTQ/model.safetensors'
```
Any clues as to why I'm getting a RuntimeError: There is no current event loop in thread 'AnyIO worker thread' error?

Many thanks,
Dave

QuangBK (Owner) commented Sep 25, 2023

Hi, I'm not sure, but it seems to be due to Guidance. The current version of Guidance is not stable (they plan a big update soon, so you may want to wait for that). In the meantime, you could try the older version: `pip install guidance==0.0.63`.
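Alternatively, if you want to stay on the newer Guidance, something that may help (untested sketch, using the greet/custom_agent names from your traceback, which may differ from your app.py) is to give the worker thread an event loop before the guidance Program is built:

```python
# Untested sketch: ensure the AnyIO worker thread has an event loop before
# guidance constructs its asyncio.Event() in Program.__init__.
# The names greet and custom_agent are taken from the traceback above and
# may not match your app.py exactly.
import asyncio

def greet(name):
    try:
        asyncio.get_event_loop()
    except RuntimeError:
        # No current event loop in this worker thread; create and register one.
        asyncio.set_event_loop(asyncio.new_event_loop())
    return custom_agent(name)
```

On Python 3.9, asyncio.Event() binds to the loop returned by get_event_loop(), so making sure one exists in the worker thread should avoid that RuntimeError.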
