I'm trying to start writing `logprob` & `echo` support for chat requests. Unfortunately, when running a test like #1992 with `echo` set to `true`, the server doesn't respond. Digging further, I found that the bug starts at #2449 (sha: dd7e8f5); the previous commit, #2463 (sha: d2a6836), worked fine.
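For reference, a request along these lines reproduces the hang for me. This is a minimal sketch: it assumes a vLLM OpenAI-compatible server on localhost:8000, and the model name and prompt are placeholders; the parameters mirror the `SamplingParams` shown in the log below.

```python
# Minimal repro sketch: POST to the completions endpoint with echo + logprobs.
# Assumes a vLLM OpenAI-compatible server on localhost:8000; the model name
# and prompt are placeholders -- substitute whatever you are serving.
import requests

resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "facebook/opt-125m",  # placeholder model name
        "prompt": "What is the best food in Italy?\nAssistant:",
        "max_tokens": 20,
        "ignore_eos": True,
        "logprobs": 0,
        "echo": True,  # with "echo": false the same request completes fine
    },
    timeout=60,
)
print(resp.json())
```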
LOG:

```
vllm-openai-main | INFO 02-01 04:31:38 async_llm_engine.py:385] Received request cmpl-dc7fb40d1b534a879768966f3dc50d39: prompt: None, prefix_pos: None, sampling params: SamplingParams(n=1, best_of=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=1.0, top_p=1.0, top_k=-1, min_p=0.0, use_beam_search=False, length_penalty=1.0, early_stopping=False, stop=[], stop_token_ids=[], include_stop_str_in_output=False, ignore_eos=True, max_tokens=20, logprobs=0, prompt_logprobs=0, skip_special_tokens=True, spaces_between_special_tokens=True), prompt token ids: [2, 12375, 351, 5, 232, 651, 11, 2760, 116, 50118, 6557, 45117, 35, 50118].
vllm-openai-main | ERROR: Exception in ASGI application
vllm-openai-main | Traceback (most recent call last):
vllm-openai-main |   File "/usr/local/lib/python3.10/dist-packages/starlette/responses.py", line 259, in __call__
vllm-openai-main |     await wrap(partial(self.listen_for_disconnect, receive))
vllm-openai-main |   File "/usr/local/lib/python3.10/dist-packages/starlette/responses.py", line 255, in wrap
vllm-openai-main |     await func()
vllm-openai-main |   File "/usr/local/lib/python3.10/dist-packages/starlette/responses.py", line 232, in listen_for_disconnect
vllm-openai-main |     message = await receive()
vllm-openai-main |   File "/usr/local/lib/python3.10/dist-packages/uvicorn/protocols/http/httptools_impl.py", line 587, in receive
vllm-openai-main |     await self.message_event.wait()
vllm-openai-main |   File "/usr/lib/python3.10/asyncio/locks.py", line 214, in wait
vllm-openai-main |     await fut
vllm-openai-main | asyncio.exceptions.CancelledError: Cancelled by cancel scope 7f20005d5150
vllm-openai-main |
vllm-openai-main | During handling of the above exception, another exception occurred:
vllm-openai-main |
vllm-openai-main | Traceback (most recent call last):
vllm-openai-main |   File "/usr/local/lib/python3.10/dist-packages/uvicorn/protocols/http/httptools_impl.py", line 426, in run_asgi
vllm-openai-main |     result = await app(  # type: ignore[func-returns-value]
vllm-openai-main |   File "/usr/local/lib/python3.10/dist-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
vllm-openai-main |     return await self.app(scope, receive, send)
vllm-openai-main |   File "/usr/local/lib/python3.10/dist-packages/fastapi/applications.py", line 1054, in __call__
vllm-openai-main |     await super().__call__(scope, receive, send)
vllm-openai-main |   File "/usr/local/lib/python3.10/dist-packages/starlette/applications.py", line 116, in __call__
vllm-openai-main |     await self.middleware_stack(scope, receive, send)
vllm-openai-main |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 186, in __call__
vllm-openai-main |     raise exc
vllm-openai-main |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 164, in __call__
vllm-openai-main |     await self.app(scope, receive, _send)
vllm-openai-main |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/cors.py", line 83, in __call__
vllm-openai-main |     await self.app(scope, receive, send)
vllm-openai-main |   File "/usr/local/lib/python3.10/dist-packages/aioprometheus/asgi/middleware.py", line 184, in __call__
vllm-openai-main |     await self.asgi_callable(scope, receive, wrapped_send)
vllm-openai-main |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py", line 62, in __call__
vllm-openai-main |     await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
vllm-openai-main |   File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 55, in wrapped_app
vllm-openai-main |     raise exc
vllm-openai-main |   File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 44, in wrapped_app
vllm-openai-main |     await app(scope, receive, sender)
vllm-openai-main |   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 746, in __call__
vllm-openai-main |     await route.handle(scope, receive, send)
vllm-openai-main |   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 288, in handle
vllm-openai-main |     await self.app(scope, receive, send)
vllm-openai-main |   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 75, in app
vllm-openai-main |     await wrap_app_handling_exceptions(app, request)(scope, receive, send)
vllm-openai-main |   File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 55, in wrapped_app
vllm-openai-main |     raise exc
vllm-openai-main |   File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 44, in wrapped_app
vllm-openai-main |     await app(scope, receive, sender)
vllm-openai-main |   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 73, in app
vllm-openai-main |     await response(scope, receive, send)
vllm-openai-main |   File "/usr/local/lib/python3.10/dist-packages/starlette/responses.py", line 252, in __call__
vllm-openai-main |     async with anyio.create_task_group() as task_group:
vllm-openai-main |   File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 678, in __aexit__
vllm-openai-main |     raise BaseExceptionGroup(
vllm-openai-main | exceptiongroup.ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
vllm-openai-main | INFO 02-01 04:31:38 async_llm_engine.py:111] Finished request cmpl-dc7fb40d1b534a879768966f3dc50d39.
```
same question