[Feature] OpenAI-Compatible Tools API + Streaming for Hermes & Mistral models #5649
Conversation
…= "auto"; set default `tool_choice` to "auto"
… instead of vllm/engine/arg_utils.py
…none" AND NOT "auto"
… to receive the arguments & validate them
…long with validation
…to-tool-choice and --tool-use-prompt-template are specified
Progress! As of the current commits, I can now get the Hermes 2 Pro model to generate a tool call using the following setup.
Server: python -m vllm.entrypoints.openai.api_server --model NousResearch/Hermes-2-Pro-Llama-3-8B --tool-use-prompt-template examples/tool_template_hermes_2_pro.jinja --enable-api-tools --enable-auto-tool-choice
Client: python examples/openai_chat_completion_client_with_tools.py
Result
Now, working on getting it to work for non-streaming responses - then, streaming!
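For anyone following along, a minimal sketch of what a client along the lines of examples/openai_chat_completion_client_with_tools.py might look like, using the openai Python SDK against a local vLLM server; the get_current_weather tool, the base_url, and the dummy API key are illustrative assumptions, not taken from the actual example script:

```python
# Illustrative client sketch: call a local vLLM OpenAI-compatible server with a tool definition.
# The tool schema, base_url, and api_key below are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")

tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="NousResearch/Hermes-2-Pro-Llama-3-8B",
    messages=[{"role": "user", "content": "What's the weather in Dallas?"}],
    tools=tools,
    tool_choice="auto",
)

# If the model decided to call a tool, the call appears on message.tool_calls.
print(response.choices[0].message.tool_calls)
```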
A question I asked in the Discord, with some open questions about how to handle configuration: setting up function calling for an open model requires a lot of configuration if you want to be unopinionated about the model. Here is a brief list of all the parameters that would be needed:
The question is: is it better to have all of these as separate CLI flags, or would a JSON configuration file be preferable, so that people can create (and track in version control!) configs that work for popular models?
Please see the ongoing conversation with the Hugging Face team, Nous Research & the transformers maintainer here - this will make it MUCH easier to implement OpenAI API-compatible tool calling in vLLM regardless of model prompt/tokenizer configs. HF PR for Hermes 2 Pro: https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B/discussions/13#66724ea9bd5875ad665f1416 HF PR for Mistral 7B Instruct v0.3: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3/discussions/35 Once these are merged, there will be a STANDARD way in transformers to handle templating of tool responses, just like templating chat conversations into prompts with a chat template, and hopefully to pull out tool calls from generated text.
Hey, great initiative, and nice to see the Hermes Pro model's tool calls working. There's a slight issue with this tool call -- our format requires newlines after the <tool_call> XML tags: <tool_call> Tool choice should also work, since it's basically passing the chosen tool only as part of
Thanks! Tool choice is already working via guided decoding, but I will update the PR to fix the template.
Ok the most recent commit seems to fix it:
@K-Mistele
In addition, I can't find openai_chat_completion_client_with_tools.py in the example files. Can you give me some advice?
Please see my reply here. This is a draft pull request, which means it has not been merged into vLLM's codebase, and it is not ready to be merged yet. It is still a work-in-progress. As such, none of its additions or features are available in vLLM yet. If you are interested in testing or contributing to this pull request, please see the origin fork at the
Given that guided decoding is not enabled for "auto" tool use, what is the error handling planned in case the LLM does not output valid JSON?
Great question! Unfortunately, since each model that supports tool calling uses its own format for function calls (as opposed to tool choice with guided decoding, where we're forcing the LLM to call a specific tool in a specific format at decode-time), the response format is up to the model and its trainer. At this point, we are still exploring ways to handle extraction of tool calls from disparate formats into OpenAI-compatible calls in a way that isn't opinionated (we want to support multiple formats including Mistral, Hermes 2 Pro, Firefunction, etc.). Until we have a good answer on this, we probably won't have a good answer for how to handle errors. There are a couple of possible options once we have attempted to extract the model's tool call format into OpenAI's:
In the interim, we may try to solve this problem by "ignoring" it - detect if the model is generating a tool call, but avoid the destructuring/extraction issue by returning the tool call in the model's format to the client. Not exactly sure what this would look like, but it's a possibility.
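As a rough illustration of the "fall back to plain text" idea (not the approach the PR ultimately takes), a tool-call extractor could simply attempt to parse the generated arguments as JSON and signal failure to the caller; the helper below is hypothetical:

```python
# Hypothetical helper: attempt strict JSON parsing of model-generated tool arguments,
# letting the caller fall back to returning the raw text as ordinary assistant content.
import json
from typing import Optional


def try_parse_tool_arguments(raw_arguments: str) -> Optional[dict]:
    """Return the parsed arguments dict, or None if the model did not emit valid JSON."""
    try:
        parsed = json.loads(raw_arguments)
    except json.JSONDecodeError:
        return None
    return parsed if isinstance(parsed, dict) else None
```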
Copy/pasting from Discord: I'd really like to make progress on tool use, and right now, with Hugging Face adding support in transformers for passing tools into the chat template, that solves one of the main issues. I think the blocker now is figuring out how to handle decoding "auto"-choice tool calls from each model's specific format (Mistral vs. Hermes Pro vs. Firefunction) and returning that to the client, ESPECIALLY when streaming is requested. Until there's a "canonical" way to decode model-specific tool calls into the OpenAI format, e.g. through transformers or "reverse templates" or something, it might be best to approach this like chat templates, which is to try & support it as well as we can where possible. Here's what I mean by this:
So each one needs a very different implementation. I propose creating a ToolCallParser abstract class that can be implemented for different models like Mistral and Hermes. If a user is using a tool-calling model, they can use a CLI flag to toggle which parser they want, if they want one at all. If not, a tool call would be treated like a regular chat completion, and they can handle it client-side. This way, we can ship support for tool calls incrementally in the absence of a commonly-accepted "best practice" on how to do this. People can also add support for other models that are important to them in a minimally-invasive way. Then, once a better way to implement this more broadly is available, we can deprecate this approach. I'd really appreciate feedback on this before moving forward, and would especially love to hear if @mgoin and @simon-mo would consider this an appropriate approach that would be likely to be approved.
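A minimal sketch of what such a ToolCallParser abstraction could look like; the class names, dataclass, and Hermes regex below are illustrative assumptions based on the <tool_call>-wrapped JSON format discussed above, not vLLM's actual interface:

```python
# Illustrative sketch only; class and method names are hypothetical, not vLLM's actual interface.
import json
import re
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ExtractedToolCall:
    name: str
    arguments: str  # JSON-encoded arguments string, as in the OpenAI API


class ToolCallParser(ABC):
    """Turns model-specific tool-call text into OpenAI-style tool calls."""

    @abstractmethod
    def extract_tool_calls(self, model_output: str) -> Optional[List[ExtractedToolCall]]:
        """Return the tool calls found in the output, or None if there are none."""


class Hermes2ProToolCallParser(ToolCallParser):
    """Parses the <tool_call>...</tool_call> wrapper used by Hermes 2 Pro."""

    TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

    def extract_tool_calls(self, model_output: str) -> Optional[List[ExtractedToolCall]]:
        matches = self.TOOL_CALL_RE.findall(model_output)
        if not matches:
            return None
        calls = []
        for raw in matches:
            obj = json.loads(raw)  # assumes a {"name": ..., "arguments": {...}} payload
            calls.append(
                ExtractedToolCall(name=obj["name"], arguments=json.dumps(obj.get("arguments", {})))
            )
        return calls
```

A CLI flag could then simply map a string like "hermes" or "mistral" to the corresponding parser class.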
@br3no would love to get your feedback and review for this PR!
FWIW I verified that spec decode tests pass locally 🔥
Thank you all for your hard work! Especially you @K-Mistele! Great getting this into main!!!
I was following along the whole time, thank you so much @K-Mistele ❤️
How can I use it with llama 3.1 8b model?
@meetzuber you have to write a prompt template for l3.1 if I understand the implementation correctly.
From my understanding you have to write a
I'm experimenting with a more general approach to tool parsing that generalizes over different models. Since OpenAI-API-compatible tool calls only require two string arguments (a name and the JSON arguments), a user-provided tool-calling regex can be validated to contain both named groups with something like:
(?=.*?\(\?P<name>.*?\))(?=.*?\(\?P<arguments>.*?\))
The tool-calling regex must then be provided with a CLI arg. Then, as an example, for the Llama 3.1 model with JSON-based tool calling your regex would be:
{"name": "(?P<name>.*?)", "parameters": (?P<arguments>.*?)}
which would then extract the tool call: https://regex101.com/r/RhZ4zx/1
For a different custom tool-calling format, just provide another regex:
<function=(?P<name>.*?)>(?P<arguments>.*?)<\/function>
which also extracts tool call(s): https://regex101.com/r/9pM6IL/2
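A small sketch of how the named-group regex idea could be applied; the pattern is the Llama 3.1 JSON example from the comment above, while the surrounding validation and extraction code is hypothetical:

```python
# Hypothetical extraction/validation code around the regexes from the comment above.
import json
import re

# Require that a user-supplied regex exposes both named groups, "name" and "arguments".
REQUIRED_GROUPS = re.compile(r"(?=.*?\(\?P<name>.*?\))(?=.*?\(\?P<arguments>.*?\))")

# Llama 3.1-style JSON tool call, as in https://regex101.com/r/RhZ4zx/1
tool_call_regex = r'{"name": "(?P<name>.*?)", "parameters": (?P<arguments>.*?)}'
assert REQUIRED_GROUPS.match(tool_call_regex), "regex must define 'name' and 'arguments' groups"

model_output = '{"name": "get_current_weather", "parameters": {"city": "Dallas"}}'

match = re.fullmatch(tool_call_regex, model_output.strip())
if match is not None:
    name = match.group("name")
    arguments = match.group("arguments")  # left as a JSON string, as the OpenAI API expects
    print(name, json.loads(arguments))
```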
I will be adding support in a separate PR
Unfortunately this approach does not work with streaming. Supporting tools in streaming mode was a core requirement of this PR since many applications use streaming mode by default if they are user-facing, and there is no way to “turn off streaming” if the model starts generating a tool call.
@K-Mistele Documentation would be much appreciated, so the community can be quick to add support for future models with tool support.
Only the options below are mentioned for tool/function calling, and there are only 2 options for the tool call parser: mistral and hermes.
https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html#automatic-function-calling |
That's not implemented on the main branch |
These are implemented in the v0.6.0 release.
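Since streaming tool calls are the headline feature of this PR, here is a minimal sketch of consuming them with the openai Python SDK against a server started with --enable-auto-tool-choice and --tool-call-parser hermes; the tool schema, model name, and endpoint details are illustrative assumptions:

```python
# Illustrative streaming client; tool schema, model name, and endpoint are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")

tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

stream = client.chat.completions.create(
    model="NousResearch/Hermes-2-Pro-Llama-3-8B",
    messages=[{"role": "user", "content": "What's the weather in Dallas?"}],
    tools=tools,
    tool_choice="auto",
    stream=True,
)

# Tool calls arrive incrementally: the name in an early chunk and the JSON
# arguments spread across later chunks, so accumulate them as they stream in.
name, arguments = None, ""
for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta
    if delta.tool_calls:
        call = delta.tool_calls[0]
        if call.function.name:
            name = call.function.name
        if call.function.arguments:
            arguments += call.function.arguments

print(name, arguments)
```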
@K-Mistele can you mention which is the pull request for Llama 3.1 tool support... I think most people have been waiting for it.
@K-Mistele can you mention which is the pull request for Qwen2 tool support... I think most people have been waiting for it.
There is not a pull request for it yet because it's still a work-in-progress and not ready for review. I can create a draft though.
There is not a PR for this, and at this point I hadn't planned on it. If this is something you're interested in, please feel free to create an issue for a feature request, and tag me in it. If it gets enough interest, I'll add it.
DRAFT: OpenAI Tool Use Checklist
This (Draft) PR will add support for OpenAI-style tool calling in a way that is minimally opinionated about tool use formats & prompt formatting.
The following features are expected to be supported:
- tool_choice="auto" - named tool choice is already supported via guided decoding

I'd welcome anyone who wants to contribute on this, and would be happy to add you to the Constellate AI vllm fork that this PR is based off of - please just leave a comment!

Checklist/roadmap:
- tools and tool_choice
- tool_choice="auto"
- response.tool_calls
- tool_use chat template

FIX #3237 #4656 (link existing issues this PR will resolve)
BEFORE SUBMITTING, PLEASE READ THE CHECKLIST BELOW AND FILL IN THE DESCRIPTION ABOVE

PR Checklist
Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process.

PR Title and Classification
Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following:
- [Bugfix] for bug fixes.
- [CI/Build] for build or continuous integration improvements.
- [Doc] for documentation fixes and improvements.
- [Model] for adding a new model or improving an existing model. Model name should appear in the title.
- [Frontend] For changes on the vLLM frontend (e.g., OpenAI API server, LLM class, etc.)
- [Kernel] for changes affecting CUDA kernels or other compute kernels.
- [Core] for changes in the core vLLM logic (e.g., LLMEngine, AsyncLLMEngine, Scheduler, etc.)
- [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD]).
- [Misc] for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.

Code Quality
The PR needs to meet the following code quality standards:
- Use format.sh to format your code.
- Add documentation to docs/source/ if the PR modifies the user-facing behaviors of vLLM. It helps vLLM users understand and utilize the new features or changes.

Notes for Large Changes
Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with rfc-required and might not go through the PR.

What to Expect for the Reviews
The goal of the vLLM team is to be a transparent reviewing machine. We would like to make the review process transparent and efficient and make sure no contributor feels confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process:
- The reviewer will put an action-required label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR.

Thank You
Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone!