
[Bug]: vllm 0.7.3 sometimes raises an error when serving Qwen2.5-VL-7B #19786

Open
@wqw0806

Description

Your current environment

vllm 0.7.3

🐛 Describe the bug

Model launch command: vllm serve /Qwen/Qwen2___5-VL-7B-Instruct/ --trust-remote-code --served-model-name vl_model --gpu-memory-utilization 0.9 --port 8000

Requests sometimes succeed and sometimes fail. When a request fails, the error output is:

    await app(scope, receive, sender)
File "/.local/lib/python3.10/site-packages/starlette/routing.py", line 715, in call
await self.middleware_stack(scope, receive, send)
File "/.local/lib/python3.10/site-packages/starlette/routing.py", line 735, in app
await route.handle(scope, receive, send)
File "/.local/lib/python3.10/site-packages/starlette/routing.py", line 288, in handle
await self.app(scope, receive, send)
File "/.local/lib/python3.10/site-packages/starlette/routing.py", line 76, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/.local/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/.local/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/.local/lib/python3.10/site-packages/starlette/routing.py", line 73, in app
response = await f(request)
File "/.local/lib/python3.10/site-packages/fastapi/routing.py", line 301, in app
raw_response = await run_endpoint_function(
File "/.local/lib/python3.10/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
return await dependant.call(**values)
File "/.conda/envs/weitiao/lib/python3.10/site-packages/vllm/entrypoints/utils.py", line 56, in wrapper
return handler_task.result()
File "/.conda/envs/weitiao/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 410, in create_chat_completion
generator = await handler.create_chat_completion(request, raw_request)
File "/.conda/envs/weitiao/lib/python3.10/site-packages/vllm/entrypoints/openai/serving_chat.py", line 181, in create_chat_completion
) = await self._preprocess_chat(
File "/.conda/envs/weitiao/lib/python3.10/site-packages/vllm/entrypoints/openai/serving_engine.py", line 418, in _preprocess_chat
mm_data = await mm_data_future
File "/.conda/envs/weitiao/lib/python3.10/site-packages/vllm/entrypoints/chat_utils.py", line 481, in all_mm_data
return {
File "/.conda/envs/weitiao/lib/python3.10/site-packages/vllm/entrypoints/chat_utils.py", line 482, in
modality: await asyncio.gather(*items)
File "/.conda/envs/weitiao/lib/python3.10/site-packages/vllm/multimodal/utils.py", line 202, in fetch_image_async
return await self.load_from_url_async(
File "/.conda/envs/weitiao/lib/python3.10/site-packages/vllm/multimodal/utils.py", line 127, in load_from_url_async
data = await connection.async_get_bytes(url, timeout=fetch_timeout)
File "/.conda/envs/weitiao/lib/python3.10/site-packages/vllm/connections.py", line 94, in async_get_bytes
async with await self.get_async_response(url, timeout=timeout) as r:
File "/.local/lib/python3.10/site-packages/aiohttp/client.py", line 1425, in aenter
self._resp: _RetType = await self._coro
File "/.local/lib/python3.10/site-packages/aiohttp/client.py", line 607, in _request
with timer:
File "/.local/lib/python3.10/site-packages/aiohttp/helpers.py", line 671, in exit
raise asyncio.TimeoutError from exc_val
asyncio.exceptions.TimeoutError

How can I fix this? What is causing it?
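The traceback shows where the failure happens: the timeout is raised while the server downloads the image referenced by an image_url entry in the chat request (fetch_image_async -> load_from_url_async -> async_get_bytes), before any inference runs. Intermittent failures therefore usually mean the image host is sometimes slow to respond from the server's network. Two common remedies: raise the fetch timeout by setting the VLLM_IMAGE_FETCH_TIMEOUT environment variable (in seconds) before launching the server, e.g. VLLM_IMAGE_FETCH_TIMEOUT=30 vllm serve ...; or skip the server-side download entirely by embedding the image as a base64 data URL. The sketch below illustrates the second approach; the endpoint address, file name, and prompt are placeholders, and it assumes the openai Python client talking to the server started above.

import base64

from openai import OpenAI

# Point the OpenAI-compatible client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Read and encode the image locally; a data URL travels inside the request
# body, so the server never opens an outbound HTTP connection for it.
with open("example.jpg", "rb") as f:  # hypothetical local image file
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="vl_model",  # matches --served-model-name in the serve command
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)

If the images must stay as remote URLs, raising VLLM_IMAGE_FETCH_TIMEOUT (or mirroring the images somewhere the server can reach quickly) is the more direct fix.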

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
