Description
I tried to run the code on Google Colab, but the following error occurred during the image generation phase (input: 生成一只猫, "generate a cat").
```
Inputs: 生成一只猫 []
======>Previous memory:
human_prefix='Human' ai_prefix='AI' buffer='' output_key='output' input_key=None memory_key='chat_history'
hitory_memory:, n_tokens: 0
Entering new AgentExecutor chain...
Yes
Action: Generate Image From User Input Text
Action Input: 生成一只猫
/usr/local/lib/python3.8/site-packages/transformers/generation/utils.py:1186: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
warnings.warn(
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/gradio/routes.py", line 384, in run_predict
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.8/site-packages/gradio/blocks.py", line 1032, in process_api
result = await self.call_function(
File "/usr/local/lib/python3.8/site-packages/gradio/blocks.py", line 844, in call_function
prediction = await anyio.to_thread.run_sync(
File "/usr/local/lib/python3.8/site-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/usr/local/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
File "visual_chatgpt.py", line 908, in run_text
res = self.agent({"input": text})
File "/usr/local/lib/python3.8/site-packages/langchain/chains/base.py", line 168, in call
raise e
File "/usr/local/lib/python3.8/site-packages/langchain/chains/base.py", line 165, in call
outputs = self._call(inputs)
File "/usr/local/lib/python3.8/site-packages/langchain/agents/agent.py", line 503, in _call
next_step_output = self._take_next_step(
File "/usr/local/lib/python3.8/site-packages/langchain/agents/agent.py", line 420, in _take_next_step
observation = tool.run(
File "/usr/local/lib/python3.8/site-packages/langchain/tools/base.py", line 71, in run
raise e
File "/usr/local/lib/python3.8/site-packages/langchain/tools/base.py", line 68, in run
observation = self._run(tool_input)
File "/usr/local/lib/python3.8/site-packages/langchain/agents/tools.py", line 17, in _run
return self.func(tool_input)
File "visual_chatgpt.py", line 198, in inference
refined_text = self.text_refine_gpt2_pipe(text)[0]["generated_text"]
File "/usr/local/lib/python3.8/site-packages/transformers/pipelines/text_generation.py", line 210, in call
return super().call(text_inputs, **kwargs)
File "/usr/local/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1084, in call
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/usr/local/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1091, in run_single
model_outputs = self.forward(model_inputs, **forward_params)
File "/usr/local/lib/python3.8/site-packages/transformers/pipelines/base.py", line 992, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "/usr/local/lib/python3.8/site-packages/transformers/pipelines/text_generation.py", line 252, in _forward
generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)
File "/usr/local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/transformers/generation/utils.py", line 1242, in generate
and torch.sum(inputs_tensor[:, -1] == generation_config.pad_token_id) > 0
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
```
The Colab environment is as follows: Ubuntu 20.04, CUDA 11.6/12.0, NVIDIA A100-SXM4-40GB.
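"No kernel image is available for execution on the device" usually means the installed PyTorch wheel was not compiled with kernels for the GPU's compute capability; the A100 is sm_80, which older or mismatched wheels omit. As a first check, here is a minimal diagnostic sketch, assuming only that `torch` is importable (the cu116 index URL in the comment is one possible fix, not this repo's pinned requirement):

```python
# Compare the GPU's compute capability with the architectures this
# PyTorch build ships kernels for. If they don't match, every CUDA
# kernel launch fails with "no kernel image is available ...".
import torch

print("torch:", torch.__version__, "| built against CUDA:", torch.version.cuda)

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    gpu_arch = f"sm_{major}{minor}"          # an A100 reports sm_80
    compiled = torch.cuda.get_arch_list()    # e.g. ['sm_37', ..., 'sm_75']
    print("GPU:", torch.cuda.get_device_name(0), f"({gpu_arch})")
    print("compiled for:", compiled)
    if gpu_arch not in compiled:
        # Possible fix (assumption, adjust to your CUDA version): reinstall
        # a wheel built for this GPU, e.g.
        #   pip install torch --extra-index-url https://download.pytorch.org/whl/cu116
        print("Mismatch: this torch build has no kernels for", gpu_arch)
```

Note that `CUDA_LAUNCH_BLOCKING=1`, which the traceback suggests, only makes kernel launches synchronous so the stack trace points at the failing call; it does not address the architecture mismatch itself.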