
Adding chat template to vllm decode. #2978

Merged
copybara-service[bot] merged 1 commit into main from nicogrande/add-chat-template-vllm-decode
Feb 14, 2026

Conversation

@NicoGrande
Collaborator

@NicoGrande NicoGrande commented Jan 20, 2026

Description

Adds optional support for chat templates in vllm_decode.py. This brings responses from MaxText on vLLM closer to those of the native vLLM model implementation.

FIXES: b/476253050
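As a rough illustration of what the new flag enables (a hedged sketch only; `build_prompt` and its signature are assumptions, not the actual vllm_decode.py code), enabling a chat template wraps the raw prompt in the tokenizer's chat format before decoding:

```python
# Hedged sketch: how an optional chat-template step can wrap a raw prompt
# before decoding. `build_prompt` and its signature are illustrative
# assumptions, not the actual vllm_decode.py implementation.

def build_prompt(prompt, use_chat_template, tokenizer=None):
  """Return the raw prompt, or the prompt rendered through the chat template."""
  if not use_chat_template:
    return prompt
  # Hugging Face tokenizers provide apply_chat_template() for chat-tuned
  # models such as Qwen3; it adds the role markers the model was trained on.
  messages = [{"role": "user", "content": prompt}]
  return tokenizer.apply_chat_template(
      messages, tokenize=False, add_generation_prompt=True
  )
```

Matching the model's expected chat markup is what makes greedy decoding (temperature 0.0, as in the test command below) comparable against the native vLLM implementation.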

Tests

Running the command below produced the following diff against the vLLM native model:

python3 -m MaxText.vllm_decode --model_name=qwen3-8b --load_parameters_path=$CHECKPOINT_PATH --tokenizer_path=Qwen/Qwen3-8B --hf_model_name=Qwen/Qwen3-8B --max_target_length=1024  --prompt="Suggest some famous landmarks in London." --decode_sampling_temperature=0.0 --decode_sampling_nucleus_p=1.0 --decode_sampling_top_k=0 --hf_config_path=src/MaxText/integration/vllm/maxtext_vllm_adapter --use_chat_template=true

https://diff.googleplex.com/#key=j8DNmAFF934Q

Checklist

Before submitting this PR, please make sure (put X in square brackets):

  • I have performed a self-review of my code. For an optional AI review, add the gemini-review label.
  • I have necessary comments in my code, particularly in hard-to-understand areas.
  • I have run end-to-end tests and provided workload links above if applicable.
  • I have made or will make corresponding changes to the doc if needed, including adding new documentation pages to the relevant Table of Contents (toctree directive) as explained in our documentation.

@codecov

codecov Bot commented Jan 20, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.


Collaborator

@khatwanimohit khatwanimohit left a comment


LGTM

Comment thread src/MaxText/vllm_decode.py Outdated
@NicoGrande NicoGrande force-pushed the nicogrande/add-chat-template-vllm-decode branch from 5a52d69 to 883dd10 Compare January 21, 2026 21:38
@ChingTsai
Collaborator

hi @NicoGrande,
I tried to test it with this command.

python3 -m MaxText.vllm_decode --model_name=qwen3-4b --load_parameters_path=<xxx> --tokenizer_path=Qwen/Qwen3-4B --hf_model_name=Qwen/Qwen3-4B --max_target_length=1024  --prompt="Suggest some famous landmarks in London." --decode_sampling_temperature=0.0 --decode_sampling_nucleus_p=1.0 --decode_sampling_top_k=0 --hf_config_path=src/MaxText/integration/vllm/maxtext_vllm_adapter --use_chat_template=true

but I keep getting the error below.

pydantic_core._pydantic_core.ValidationError: 1 validation error for ModelConfig
  Value error, Invalid repository ID or local directory specified: 'src/MaxText/integration/vllm/maxtext_vllm_adapter'.
Please verify the following requirements:
1. Provide a valid Hugging Face repository ID.
2. Specify a local directory that contains a recognized configuration file.
   - For Hugging Face models: ensure the presence of a 'config.json'.
   - For Mistral models: ensure the presence of a 'params.json'.
 [type=value_error, input_value=ArgsKwargs((), {'model': ...rocessor_plugin': None}), input_type=ArgsKwargs]

However, I tried commenting out this line and it works, and the generation looks good to me now.
Is hf_config_path really needed, or did I miss anything?

@NicoGrande
Collaborator Author

NicoGrande commented Jan 23, 2026

> hi @NicoGrande, I tried to test it with this command.
>
> python3 -m MaxText.vllm_decode --model_name=qwen3-4b --load_parameters_path=<xxx> --tokenizer_path=Qwen/Qwen3-4B --hf_model_name=Qwen/Qwen3-4B --max_target_length=1024  --prompt="Suggest some famous landmarks in London." --decode_sampling_temperature=0.0 --decode_sampling_nucleus_p=1.0 --decode_sampling_top_k=0 --hf_config_path=src/MaxText/integration/vllm/maxtext_vllm_adapter --use_chat_template=true
>
> but I keep getting the error below.
>
> pydantic_core._pydantic_core.ValidationError: 1 validation error for ModelConfig
>   Value error, Invalid repository ID or local directory specified: 'src/MaxText/integration/vllm/maxtext_vllm_adapter'.
> Please verify the following requirements:
> 1. Provide a valid Hugging Face repository ID.
> 2. Specify a local directory that contains a recognized configuration file.
>    - For Hugging Face models: ensure the presence of a 'config.json'.
>    - For Mistral models: ensure the presence of a 'params.json'.
>  [type=value_error, input_value=ArgsKwargs((), {'model': ...rocessor_plugin': None}), input_type=ArgsKwargs]
>
> However, I tried commenting out this line and it works, and the generation looks good to me now. Is hf_config_path really needed, or did I miss anything?

Hi @ChingTsai! To use hf_config_path, you need to download the Hugging Face config.json file for the target model and save it in the hf_config_path directory. You also need to change the architectures field in config.json to MaxTextForCausalLM. With these changes, the MaxText model implementation is used for inference; if you omit hf_config_path, the vLLM native implementation is used instead.
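The config.json edit described above can be sketched as follows (a minimal illustration; the helper name and the example config contents are assumptions, and the download step from Hugging Face is omitted):

```python
import json

# Minimal sketch of the config.json edit described above: repoint the
# "architectures" field at MaxTextForCausalLM so vLLM selects the MaxText
# adapter model. The helper name and example config are illustrative.

def patch_architectures(config):
  """Return a copy of a HF model config targeting the MaxText adapter."""
  patched = dict(config)
  patched["architectures"] = ["MaxTextForCausalLM"]
  return patched

# Example with a stand-in for a downloaded Qwen3 config:
qwen_config = {"architectures": ["Qwen3ForCausalLM"], "model_type": "qwen3"}
print(json.dumps(patch_architectures(qwen_config), indent=2))
```

The patched file would then be saved as config.json inside the directory passed to --hf_config_path.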

@cychiuak

cychiuak commented Jan 27, 2026

Hi @NicoGrande ,

I just tested out your PR, and it seems to only support unscanned models: only models with scan_layers=false run successfully. With a scanned model, the generation produces unreasonable tokens. Since we use scanned models for SFT, could this PR also support scanned models?

cc: @ChingTsai

@NicoGrande
Collaborator Author

> Hi @NicoGrande,
>
> I just tested out your PR, and it seems to only support unscanned models: only models with scan_layers=false run successfully. With a scanned model, the generation produces unreasonable tokens. Since we use scanned models for SFT, could this PR also support scanned models?
>
> cc: @ChingTsai

Hi @cychiuak! Thank you for flagging this! I think we should be able to add some functionality to support this. The codepath will likely involve unrolling the scanned checkpoint and then using the unscanned version for inference. I can add this to my TODO list and update this PR with this feature request when this is ready.
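For context, "unrolling" here means splitting parameters that scanned training stacks along a leading num_layers axis into one parameter per layer. A minimal sketch, assuming a flat dict of stacked arrays (not MaxText's actual Orbax checkpoint layout):

```python
# Hedged sketch of unrolling a scanned checkpoint. Scanned training stacks
# per-layer weights along a leading num_layers axis; unscanned inference
# expects a separate parameter per layer. The flat-dict layout and key naming
# are assumptions, not MaxText's real checkpoint structure.

def unroll_scanned(params):
  """Split each stacked (num_layers, ...) parameter into per-layer entries."""
  unrolled = {}
  for name, stacked in params.items():
    # Iterating an array (or nested list) walks its leading layer axis.
    for i, layer in enumerate(stacked):
      unrolled[f"{name}_{i}"] = layer
  return unrolled
```

In real code the values would be JAX arrays, where iterating the leading axis behaves the same way.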

@ChingTsai
Collaborator

ChingTsai commented Jan 28, 2026

Thanks @NicoGrande! Supporting scanned checkpoints would significantly enhance the training user experience.
Looking forward to this feature becoming available!

@chishuen
Collaborator

> Hi @cychiuak! Thank you for flagging this! I think we should be able to add some functionality to support this. The codepath will likely involve unrolling the scanned checkpoint and then using the unscanned version for inference. I can add this to my TODO list and update this PR with this feature request when this is ready.

+1 Thank you @NicoGrande for taking this on! This feature will unblock us. Otherwise, the only viable workflow today is to convert the checkpoint from Orbax format to HF (safetensors) format and run inference there, which is a pain in hyperparameter searches. Would be great if you can help add the support. Thanks!

@NicoGrande NicoGrande force-pushed the nicogrande/add-chat-template-vllm-decode branch 2 times, most recently from 96690ab to b706213 Compare February 3, 2026 17:27
@NicoGrande NicoGrande force-pushed the nicogrande/add-chat-template-vllm-decode branch from b706213 to 0b68629 Compare February 13, 2026 18:53
@NicoGrande
Collaborator Author

I will work to get this PR merged for now and open a follow-up PR with support for scanned checkpoints @ChingTsai @cychiuak

Collaborator

@bvandermoon bvandermoon left a comment


Generally LGTM, left some questions but not all of them are related to this PR

Comment thread src/maxtext/vllm_decode.py
Comment thread src/maxtext/vllm_decode.py
Comment thread src/maxtext/vllm_decode.py
Comment thread src/maxtext/vllm_decode.py Outdated
@NicoGrande NicoGrande force-pushed the nicogrande/add-chat-template-vllm-decode branch from 0b68629 to ed447f1 Compare February 13, 2026 21:06
Collaborator

@bvandermoon bvandermoon left a comment


LGTM, were you planning to remove the enable_expert_parallel flag?

Comment thread src/maxtext/vllm_decode.py Outdated
@NicoGrande NicoGrande force-pushed the nicogrande/add-chat-template-vllm-decode branch from ed447f1 to 912d0c9 Compare February 13, 2026 21:54
@copybara-service copybara-service Bot merged commit 514d0db into main Feb 14, 2026
42 checks passed
@copybara-service copybara-service Bot deleted the nicogrande/add-chat-template-vllm-decode branch February 14, 2026 02:34