Support using the chat template from the tokenizer and the generation config from the model for the vLLM and OpenAI endpoints #3141

build (3.10)

succeeded May 21, 2024 in 28s