
[Bug]: 4xH800 Qwen/Qwen3-Next-80B-A3B-Instruct MTP, assert error, assert (m.num_reqs * (self.num_spec + 1) <= m.num_actual_tokens #25647

Description

@david6666666

Your current environment


🐛 Describe the bug

On the main branch, following the MTP instructions in the recipes guide ([Qwen3-Next](https://docs.vllm.ai/projects/recipes/en/latest/Qwen/Qwen3-Next.html)), serving with `"num_speculative_tokens": 2` fails with the assertion below, while `"num_speculative_tokens": 3` works fine.

After checking out commit 072d7e53e534d337b41262dd44ded9b44aa699ef, `"num_speculative_tokens": 2` also works fine.

Steps to reproduce:

uv venv
source .venv/bin/activate
uv pip install vllm --extra-index-url https://wheels.vllm.ai/nightly
vllm serve Qwen/Qwen3-Next-80B-A3B-Instruct  \
--tokenizer-mode auto  --gpu-memory-utilization 0.8 \
--speculative-config '{"method": "qwen3_next_mtp", "num_speculative_tokens": 2}' \
--tensor-parallel-size 4 --no-enable-chunked-prefill 2>&1 | tee ./test_mtp.log
vllm bench serve \
  --backend vllm \
  --model Qwen/Qwen3-Next-80B-A3B-Instruct \
  --endpoint /v1/completions \
  --dataset-name random \
  --random-input 2048 \
  --random-output 1024 \
  --max-concurrency 10 \
  --num-prompt 100
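The assertion that fires (from `gdn_attn.py`, quoted in the title and in the log below) reduces to simple arithmetic. The sketch below is a hypothetical re-statement, not vLLM's actual code; the interpretation of the operands (one verified token plus `num_spec` draft tokens per decode request) is an assumption drawn from the error message:

```python
def gdn_capture_ok(num_reqs: int, num_spec: int, num_actual_tokens: int) -> bool:
    """Re-statement of the assert in build_for_cudagraph_capture:
    each decode request needs room for 1 verified token plus num_spec
    draft tokens, so the capture batch must hold num_reqs * (num_spec + 1)."""
    return num_reqs * (num_spec + 1) <= num_actual_tokens

# With num_spec=2 (the failing config), the check holds only while the
# capture size leaves at least 3 token slots per request:
print(gdn_capture_ok(num_reqs=16, num_spec=2, num_actual_tokens=48))  # True: 16*3 == 48
print(gdn_capture_ok(num_reqs=32, num_spec=2, num_actual_tokens=48))  # False: 32*3 > 48
```

This is why the assertion message points at the relation between CUDA-graph capture sizes and `max_num_seqs`: a capture size that implies more requests than the token budget covers violates the inequality.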

Error Log:

Capturing CUDA graphs (mixed prefill-decode, PIECEWISE): 100%|██████████| 67/67 [00:07<00:00,  8.91it/s]
Capturing CUDA graphs (decode, FULL):   0%|          | 0/65 [00:00<?, ?it/s]
(Worker_TP0 pid=69162) INFO 09-25 07:36:29 [custom_all_reduce.py:203] Registering 4087 cuda graph addresses
(Worker_TP3 pid=69165) INFO 09-25 07:36:29 [custom_all_reduce.py:203] Registering 4087 cuda graph addresses
(Worker_TP1 pid=69163) INFO 09-25 07:36:29 [custom_all_reduce.py:203] Registering 4087 cuda graph addresses
(Worker_TP2 pid=69164) INFO 09-25 07:36:29 [custom_all_reduce.py:203] Registering 4087 cuda graph addresses
(Worker_TP0 pid=69162) ERROR 09-25 07:36:30 [multiproc_executor.py:671] WorkerProc hit an exception.
(Worker_TP0 pid=69162) ERROR 09-25 07:36:30 [multiproc_executor.py:671] Traceback (most recent call last):
(Worker_TP0 pid=69162) ERROR 09-25 07:36:30 [multiproc_executor.py:671]   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 666, in worker_busy_loop
(Worker_TP0 pid=69162) ERROR 09-25 07:36:30 [multiproc_executor.py:671]     output = func(*args, **kwargs)
(Worker_TP0 pid=69162) ERROR 09-25 07:36:30 [multiproc_executor.py:671]              ^^^^^^^^^^^^^^^^^^^^^
(Worker_TP0 pid=69162) ERROR 09-25 07:36:30 [multiproc_executor.py:671]   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_worker.py", line 352, in compile_or_warm_up_model
(Worker_TP0 pid=69162) ERROR 09-25 07:36:30 [multiproc_executor.py:671]     cuda_graph_memory_bytes = self.model_runner.capture_model()
(Worker_TP0 pid=69162) ERROR 09-25 07:36:30 [multiproc_executor.py:671]                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP0 pid=69162) ERROR 09-25 07:36:30 [multiproc_executor.py:671]   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 3363, in capture_model
(Worker_TP0 pid=69162) ERROR 09-25 07:36:30 [multiproc_executor.py:671]     self._capture_cudagraphs(
(Worker_TP0 pid=69162) ERROR 09-25 07:36:30 [multiproc_executor.py:671]   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 3431, in _capture_cudagraphs
(Worker_TP0 pid=69162) ERROR 09-25 07:36:30 [multiproc_executor.py:671]     self._dummy_run(num_tokens,
(Worker_TP0 pid=69162) ERROR 09-25 07:36:30 [multiproc_executor.py:671]   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
(Worker_TP0 pid=69162) ERROR 09-25 07:36:30 [multiproc_executor.py:671]     return func(*args, **kwargs)
(Worker_TP0 pid=69162) ERROR 09-25 07:36:30 [multiproc_executor.py:671]            ^^^^^^^^^^^^^^^^^^^^^
(Worker_TP0 pid=69162) ERROR 09-25 07:36:30 [multiproc_executor.py:671]   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 3004, in _dummy_run
(Worker_TP0 pid=69162) ERROR 09-25 07:36:30 [multiproc_executor.py:671]     .build_for_cudagraph_capture(common_attn_metadata)
(Worker_TP0 pid=69162) ERROR 09-25 07:36:30 [multiproc_executor.py:671]      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP0 pid=69162) ERROR 09-25 07:36:30 [multiproc_executor.py:671]   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/attention/backends/gdn_attn.py", line 317, in build_for_cudagraph_capture
(Worker_TP0 pid=69162) ERROR 09-25 07:36:30 [multiproc_executor.py:671]     assert (m.num_reqs * (self.num_spec + 1) <= m.num_actual_tokens
(Worker_TP0 pid=69162) ERROR 09-25 07:36:30 [multiproc_executor.py:671]             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP0 pid=69162) ERROR 09-25 07:36:30 [multiproc_executor.py:671] AssertionError: GDN only supports decode-only full CUDAGraph capture. Make sure all cudagraph capture sizes <= max_num_seq.
(EngineCore_DP0 pid=69029) ERROR 09-25 07:36:30 [core.py:708] EngineCore failed to start.
(EngineCore_DP0 pid=69029) ERROR 09-25 07:36:30 [core.py:708] Traceback (most recent call last):
(EngineCore_DP0 pid=69029) ERROR 09-25 07:36:30 [core.py:708]   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 699, in run_engine_core
(EngineCore_DP0 pid=69029) ERROR 09-25 07:36:30 [core.py:708]     engine_core = EngineCoreProc(*args, **kwargs)
(EngineCore_DP0 pid=69029) ERROR 09-25 07:36:30 [core.py:708]                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=69029) ERROR 09-25 07:36:30 [core.py:708]   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 498, in __init__
(EngineCore_DP0 pid=69029) ERROR 09-25 07:36:30 [core.py:708]     super().__init__(vllm_config, executor_class, log_stats,
(EngineCore_DP0 pid=69029) ERROR 09-25 07:36:30 [core.py:708]   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 92, in __init__
(EngineCore_DP0 pid=69029) ERROR 09-25 07:36:30 [core.py:708]     self._initialize_kv_caches(vllm_config)
(EngineCore_DP0 pid=69029) ERROR 09-25 07:36:30 [core.py:708]   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 207, in _initialize_kv_caches
(EngineCore_DP0 pid=69029) ERROR 09-25 07:36:30 [core.py:708]     self.model_executor.initialize_from_config(kv_cache_configs)
(EngineCore_DP0 pid=69029) ERROR 09-25 07:36:30 [core.py:708]   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/executor/abstract.py", line 75, in initialize_from_config
(EngineCore_DP0 pid=69029) ERROR 09-25 07:36:30 [core.py:708]     self.collective_rpc("compile_or_warm_up_model")
(EngineCore_DP0 pid=69029) ERROR 09-25 07:36:30 [core.py:708]   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 264, in collective_rpc
(EngineCore_DP0 pid=69029) ERROR 09-25 07:36:30 [core.py:708]     result = get_response(w, dequeue_timeout,
(EngineCore_DP0 pid=69029) ERROR 09-25 07:36:30 [core.py:708]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=69029) ERROR 09-25 07:36:30 [core.py:708]   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 248, in get_response
(EngineCore_DP0 pid=69029) ERROR 09-25 07:36:30 [core.py:708]     raise RuntimeError(
(EngineCore_DP0 pid=69029) ERROR 09-25 07:36:30 [core.py:708] RuntimeError: Worker failed with error 'GDN only supports decode-only full CUDAGraph capture. Make sure all cudagraph capture sizes <= max_num_seq.', please check the stack trace above for the root cause
(EngineCore_DP0 pid=69029) ERROR 09-25 07:36:32 [multiproc_executor.py:154] Worker proc VllmWorker-0 died unexpectedly, shutting down executor.
(EngineCore_DP0 pid=69029) Process EngineCore_DP0:
(EngineCore_DP0 pid=69029) Traceback (most recent call last):
(EngineCore_DP0 pid=69029)   File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
(EngineCore_DP0 pid=69029)     self.run()
(EngineCore_DP0 pid=69029)   File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
(EngineCore_DP0 pid=69029)     self._target(*self._args, **self._kwargs)
(EngineCore_DP0 pid=69029)   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 712, in run_engine_core
(EngineCore_DP0 pid=69029)     raise e
(EngineCore_DP0 pid=69029)   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 699, in run_engine_core
(EngineCore_DP0 pid=69029)     engine_core = EngineCoreProc(*args, **kwargs)
(EngineCore_DP0 pid=69029)                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=69029)   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 498, in __init__
(EngineCore_DP0 pid=69029)     super().__init__(vllm_config, executor_class, log_stats,
(EngineCore_DP0 pid=69029)   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 92, in __init__
(EngineCore_DP0 pid=69029)     self._initialize_kv_caches(vllm_config)
(EngineCore_DP0 pid=69029)   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 207, in _initialize_kv_caches
(EngineCore_DP0 pid=69029)     self.model_executor.initialize_from_config(kv_cache_configs)
(EngineCore_DP0 pid=69029)   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/executor/abstract.py", line 75, in initialize_from_config
(EngineCore_DP0 pid=69029)     self.collective_rpc("compile_or_warm_up_model")
(EngineCore_DP0 pid=69029)   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 264, in collective_rpc
(EngineCore_DP0 pid=69029)     result = get_response(w, dequeue_timeout,
(EngineCore_DP0 pid=69029)              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=69029)   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 248, in get_response
(EngineCore_DP0 pid=69029)     raise RuntimeError(
(EngineCore_DP0 pid=69029) RuntimeError: Worker failed with error 'GDN only supports decode-only full CUDAGraph capture. Make sure all cudagraph capture sizes <= max_num_seq.', please check the stack trace above for the root cause
(APIServer pid=68859) Traceback (most recent call last):
(APIServer pid=68859)   File "/root/.cache/cwq/qwen3-next/.venv/bin/vllm", line 10, in <module>
(APIServer pid=68859)     sys.exit(main())
(APIServer pid=68859)              ^^^^^^
(APIServer pid=68859)   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/entrypoints/cli/main.py", line 54, in main
(APIServer pid=68859)     args.dispatch_function(args)
(APIServer pid=68859)   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/entrypoints/cli/serve.py", line 50, in cmd
(APIServer pid=68859)     uvloop.run(run_server(args))
(APIServer pid=68859)   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/uvloop/__init__.py", line 109, in run
(APIServer pid=68859)     return __asyncio.run(
(APIServer pid=68859)            ^^^^^^^^^^^^^^
(APIServer pid=68859)   File "/usr/lib/python3.12/asyncio/runners.py", line 194, in run
(APIServer pid=68859)     return runner.run(main)
(APIServer pid=68859)            ^^^^^^^^^^^^^^^^
(APIServer pid=68859)   File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
(APIServer pid=68859)     return self._loop.run_until_complete(task)
(APIServer pid=68859)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=68859)   File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
(APIServer pid=68859)   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/uvloop/__init__.py", line 61, in wrapper
(APIServer pid=68859)     return await main
(APIServer pid=68859)            ^^^^^^^^^^
(APIServer pid=68859)   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 1867, in run_server
(APIServer pid=68859)     await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
(APIServer pid=68859)   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 1885, in run_server_worker
(APIServer pid=68859)     async with build_async_engine_client(
(APIServer pid=68859)   File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=68859)     return await anext(self.gen)
(APIServer pid=68859)            ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=68859)   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 178, in build_async_engine_client
(APIServer pid=68859)     async with build_async_engine_client_from_engine_args(
(APIServer pid=68859)   File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=68859)     return await anext(self.gen)
(APIServer pid=68859)            ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=68859)   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 223, in build_async_engine_client_from_engine_args
(APIServer pid=68859)     async_llm = AsyncLLM.from_vllm_config(
(APIServer pid=68859)                 ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=68859)   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/utils/__init__.py", line 1570, in inner
(APIServer pid=68859)     return fn(*args, **kwargs)
(APIServer pid=68859)            ^^^^^^^^^^^^^^^^^^^
(APIServer pid=68859)   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/engine/async_llm.py", line 207, in from_vllm_config
(APIServer pid=68859)     return cls(
(APIServer pid=68859)            ^^^^
(APIServer pid=68859)   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/engine/async_llm.py", line 134, in __init__
(APIServer pid=68859)     self.engine_core = EngineCoreClient.make_async_mp_client(
(APIServer pid=68859)                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=68859)   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 102, in make_async_mp_client
(APIServer pid=68859)     return AsyncMPClient(*client_args)
(APIServer pid=68859)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=68859)   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 769, in __init__
(APIServer pid=68859)     super().__init__(
(APIServer pid=68859)   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 448, in __init__
(APIServer pid=68859)     with launch_core_engines(vllm_config, executor_class,
(APIServer pid=68859)   File "/usr/lib/python3.12/contextlib.py", line 144, in __exit__
(APIServer pid=68859)     next(self.gen)
(APIServer pid=68859)   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/engine/utils.py", line 732, in launch_core_engines
(APIServer pid=68859)     wait_for_engine_startup(
(APIServer pid=68859)   File "/root/.cache/cwq/qwen3-next/.venv/lib/python3.12/site-packages/vllm/v1/engine/utils.py", line 785, in wait_for_engine_startup
(APIServer pid=68859)     raise RuntimeError("Engine core initialization failed. "
(APIServer pid=68859) RuntimeError: Engine core initialization failed. See root cause above. Failed core proc(s): {}
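Given the assertion message ("Make sure all cudagraph capture sizes <= max_num_seq"), one possible mitigation, untested here and not a fix for the underlying capture-size computation, is to pin the CUDA-graph capture sizes at or below `--max-num-seqs` explicitly. The size list and the `--max-num-seqs` value below are illustrative assumptions:

```shell
# Untested workaround sketch: constrain CUDA-graph capture sizes so that
# every size satisfies size <= max_num_seqs, per the assertion message.
vllm serve Qwen/Qwen3-Next-80B-A3B-Instruct \
  --tokenizer-mode auto --gpu-memory-utilization 0.8 \
  --speculative-config '{"method": "qwen3_next_mtp", "num_speculative_tokens": 2}' \
  --tensor-parallel-size 4 --no-enable-chunked-prefill \
  --max-num-seqs 256 \
  --compilation-config '{"cudagraph_capture_sizes": [1, 2, 4, 8, 16, 32, 64, 128, 256]}'
```

If this avoids the crash, it would support the reading that the default capture-size list exceeds `max_num_seqs` once the `num_spec + 1` factor is applied.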

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.


Labels: bug (Something isn't working)