Enable Qwen3-Omni vision decode#3741

Open
hengtaoguo wants to merge 1 commit into main from hengtaoguo-omni-img

Conversation


@hengtaoguo hengtaoguo commented Apr 24, 2026

Description

Fixes inference for Qwen3-Omni multimodal (image) decode:

  • moe.py: Fix the RaggedDotGroupSizes representative value: commit 083293fc3 (Use Tokamax's representative group sizes. #3434) regressed it to len(inputs) (an int) where a tuple[int, ...] is required. Restored as (inputs.shape[0] // kernel.shape[0],) * kernel.shape[0]. Also includes minor formatting cleanup from the sparsity PR.
  • decode.py: Respect config.add_bos flag instead of hardcoding not has_chat_template. Add batch dimension when calling get_rope_index for input_ids/attention_mask.
  • maxengine.py: Cast next_pos to the decode state's dtype before dynamic_update_index_in_dim to avoid dtype mismatch when MRoPE position IDs are float vs int.
  • qwen3.py: Fix Qwen3OmniMoeVisionEncoder.__call__: explicitly reshape hidden_states to patch-level layout before patch_embed, then reshape output back to (batch, seq, hidden). Previously assumed a flattened input that didn't account for batch size.
  • processor_qwen3_omni.py: Reshape preprocessed image pixel values to (1, C, T*t, H*h, W*w) to match the updated vision encoder's expected input layout, carrying explicit grid-thw information.
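As a sanity check on the moe.py fix, the restored expression yields an even per-expert token split as a tuple[int, ...] rather than a bare int. A minimal sketch with hypothetical shapes (the function name and array shapes are illustrative, not MaxText's actual code):

```python
import numpy as np

def representative_group_sizes(inputs: np.ndarray, kernel: np.ndarray) -> tuple[int, ...]:
    # An even split of tokens across experts, returned as the
    # tuple[int, ...] the ragged-dot kernel expects (not a bare int).
    num_experts = kernel.shape[0]
    return (inputs.shape[0] // num_experts,) * num_experts

tokens = np.zeros((128, 64))      # hypothetical: 128 tokens, model dim 64
experts = np.zeros((8, 64, 256))  # hypothetical: 8 experts
print(representative_group_sizes(tokens, experts))  # (16, 16, 16, 16, 16, 16, 16, 16)
```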
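The decode.py batch-dimension fix amounts to promoting (seq,) inputs to (1, seq) before the call. A sketch with a hypothetical stand-in for get_rope_index (the real function computes MRoPE indices; this stub only checks the batched shape):

```python
import numpy as np

def get_rope_index(input_ids: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in: expects batched (batch, seq) inputs.
    assert input_ids.ndim == 2 and attention_mask.ndim == 2
    return np.cumsum(attention_mask, axis=-1) - 1

input_ids = np.array([101, 7592, 102])   # (seq,) — hypothetical token ids
attention_mask = np.ones_like(input_ids)

# Add the batch dimension before the call, as the decode.py fix does.
position_ids = get_rope_index(input_ids[None, :], attention_mask[None, :])
print(position_ids)  # [[0 1 2]]
```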
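The maxengine.py cast matters because lax.dynamic_update_slice (which dynamic_update_index_in_dim wraps) requires the update and operand dtypes to match, so float MRoPE positions must be cast to the decode state's integer dtype first. A sketch with hypothetical buffer shapes and values:

```python
import jax.numpy as jnp
from jax import lax

# Hypothetical decode-state buffer of cached positions (integer dtype).
cache_positions = jnp.zeros((4, 8), dtype=jnp.int32)

# MRoPE can hand back float position IDs; cast to the buffer's dtype
# before the in-place update to avoid a dtype mismatch.
next_pos = jnp.array([5.0, 6.0, 7.0, 8.0], dtype=jnp.float32)
next_pos = next_pos.astype(cache_positions.dtype)

# Write the new positions into slot 3 along the sequence axis.
updated = lax.dynamic_update_index_in_dim(cache_positions, next_pos, 3, axis=1)
print(updated.dtype, updated[0, 3])  # int32 5
```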
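The qwen3.py and processor_qwen3_omni.py changes are two sides of one layout contract: the processor emits (1, C, T*t, H*h, W*w) and the vision encoder reshapes back to patch level before patch_embed. A sketch of the processor-side reshape with hypothetical grid-thw and patch sizes (the flat input layout here is an assumption for illustration, not the actual processor code):

```python
import numpy as np

# Hypothetical grid-thw (T, H, W) and patch sizes (t, h, w); real values
# come from the image preprocessor.
C, (T, H, W), (t, h, w) = 3, (2, 4, 4), (2, 14, 14)

# Flat patch-level pixel values: one row per grid cell, one column per
# pixel inside a patch (assumed layout).
pixel_values = np.zeros((T * H * W, C * t * h * w))

# Reassemble into the vision encoder's expected dense layout, keeping
# explicit grid-thw structure: (1, C, T*t, H*h, W*w).
reshaped = (
    pixel_values.reshape(T, H, W, C, t, h, w)
    .transpose(3, 0, 4, 1, 5, 2, 6)   # -> (C, T, t, H, h, W, w)
    .reshape(1, C, T * t, H * h, W * w)
)
print(reshaped.shape)  # (1, 3, 4, 56, 56)
```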

Note: Video decoding works too, but is pending an input interface refactor. To manage the growing number of model-specific inputs (video_values, mask, grid_thw), we plan a follow-up PR to standardize the interface.

Tests

Unit tests passing:

python -m pytest tests/unit/qwen3_omni_layers_test.py -vv -s

= 28 passed, 2119 warnings in 91.25s (0:01:31) =

Decode with a test image:

python -m maxtext.inference.decode maxtext/configs/base.yml model_name=qwen3-omni-30b-a3b tokenizer_path=Qwen/Qwen3-Omni-30B-A3B-Instruct tokenizer_type=huggingface load_parameters_path=gs://hengtaoguo-maxtext-logs/checkpoints/qwen3-omni-30b-a3b/unscanned/001/0/items per_device_batch_size=1 run_name=ht_test steps=1 async_checkpointing=false scan_layers=false use_multimodal=true prompt=\'What\ can\ you\ see\?\ Answer\ in\ one\ short\ sentence.\' image_path=\'tests/assets/test_image.jpg\' max_prefill_predict_length=361 max_target_length=391 attention=\'dot_product\' hf_access_token=<hf_access_token> ici_tensor_parallelism=4 add_bos=False override_model_config=True
Input `<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>What can you see? Answer in one short sentence.<|im_end|>
<|im_start|>assistant
` -> `A city skyline with a prominent space needle, snow-capped mountains in the distance, and a clear blue sky.`

Checklist

Before submitting this PR, please make sure (put X in square brackets):

  • I have performed a self-review of my code. For an optional AI review, add the gemini-review label.
  • I have necessary comments in my code, particularly in hard-to-understand areas.
  • I have run end-to-end tests and provided workload links above if applicable.
  • I have made or will make corresponding changes to the doc if needed, including adding new documentation pages to the relevant Table of Contents (toctree directive) as explained in our documentation.


codecov Bot commented Apr 24, 2026

Codecov Report

❌ Patch coverage is 55.55556% with 4 lines in your changes missing coverage. Please review.

Files with missing lines    | Patch % | Lines
src/maxtext/models/qwen3.py | 0.00%   | 3 Missing ⚠️
src/maxtext/layers/moe.py   | 83.33%  | 0 Missing and 1 partial ⚠️


@github-actions

🤖 Hi @hengtaoguo, I've received your request, and I'm working on it now! You can track my progress in the logs for more details.

@github-actions

🤖 I'm sorry @hengtaoguo, but I was unable to process your request. Please see the logs for more details.

@hengtaoguo hengtaoguo force-pushed the hengtaoguo-omni-img branch 2 times, most recently from 95784f5 to 1890ae4 Compare April 24, 2026 22:21
code style fix
Collaborator

@aireenmei aireenmei left a comment


Does decode work with video?

@hengtaoguo
Collaborator Author

Does decode work with video?

The logic works for videos too, but direct video decode is not yet available. I plan to raise a follow-up PR adding the video inputs field to the model pipeline; it may involve 10+ lines of changes throughout different interfaces.
