feat(vllm): add optional --tokenizer argument for k8s deployments#841
Merged
Conversation
Add support for specifying a custom tokenizer when deploying vLLM to Kubernetes. The tokenizer argument is optional and defaults to the model's built-in tokenizer. This implementation is for `--target=k8s` only; support for `--target=gce` is TODO.

Changes:
- Add `--tokenizer` CLI argument to the `vllm up` command
- Pass the tokenizer through to the k8s deployment manifest
- Update deployment.yml to conditionally add the `--tokenizer` flag to `vllm serve`
- Box `VllmCommands` and `ImageCommands` to fix the clippy `large_enum_variant` warning
- Add comprehensive test coverage for the tokenizer functionality

Made with Bob

Signed-off-by: Nick Mitchell <nickm@us.ibm.com>
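The heart of the change — threading an optional tokenizer into the `vllm serve` command line — can be sketched in plain Rust. The function and argument names below are illustrative placeholders, not the PR's actual identifiers:

```rust
/// Build the argument vector for `vllm serve`, appending `--tokenizer`
/// only when the user supplied one. Hypothetical sketch, not the PR's
/// actual code.
fn vllm_serve_args(model: &str, tokenizer: Option<&str>) -> Vec<String> {
    let mut args = vec!["serve".to_string(), model.to_string()];
    if let Some(tok) = tokenizer {
        // The flag is omitted entirely when None, so vLLM falls back
        // to the model's built-in tokenizer.
        args.push("--tokenizer".to_string());
        args.push(tok.to_string());
    }
    args
}

fn main() {
    // Without a custom tokenizer: just `serve <model>`.
    println!("{:?}", vllm_serve_args("meta-llama/Llama-3-8b", None));
    // With one: `--tokenizer <path>` is appended.
    println!(
        "{:?}",
        vllm_serve_args("meta-llama/Llama-3-8b", Some("my-org/custom-tokenizer"))
    );
}
```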
Add support for specifying a custom tokenizer when deploying vLLM to Kubernetes.
The tokenizer argument is optional and defaults to the model's built-in tokenizer.
This implementation is for `--target=k8s` only. Support for `--target=gce` is TODO.

Changes
- Add `--tokenizer` CLI argument to the `vllm up` command
- Pass the tokenizer through to the k8s deployment manifest
- Update deployment.yml to conditionally add the `--tokenizer` flag to `vllm serve`
- Box `VllmCommands` and `ImageCommands` to fix the clippy `large_enum_variant` warning

Usage
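A sketch of how the conditional flag in deployment.yml could look, assuming Helm-style templating; the template syntax and the `.Values.tokenizer` key are assumptions, not the PR's actual manifest:

```yaml
# Hypothetical sketch of the vLLM container spec in deployment.yml.
containers:
  - name: vllm
    image: vllm/vllm-openai:latest
    args:
      - serve
      - {{ .Values.model }}
      {{- if .Values.tokenizer }}
      # Emitted only when a tokenizer was passed via `--tokenizer`.
      - --tokenizer
      - {{ .Values.tokenizer }}
      {{- end }}
```

When the tokenizer is unset, the block renders without the flag and vLLM uses the model's built-in tokenizer, matching the CLI's default behavior.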
Testing
✅ All 15 k8s tests pass
✅ Clippy checks pass with `-D warnings`
✅ Rustfmt formatting is correct
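For context on the clippy fix mentioned above: `large_enum_variant` fires when one variant is much larger than the rest, because every value of the enum must be as large as its biggest variant. Boxing the large payload shrinks the enum to roughly pointer size. A minimal illustration, with a placeholder struct standing in for the real `VllmCommands`/`ImageCommands` payloads:

```rust
use std::mem::size_of;

// Placeholder for a large subcommand options struct.
#[allow(dead_code)]
struct BigOptions {
    _buf: [u8; 1024],
}

// Without boxing, every value of this enum occupies >= 1024 bytes.
#[allow(dead_code)]
enum UnboxedCommands {
    Small,
    Big(BigOptions),
}

// Boxing the large variant keeps the enum near pointer-sized, which
// is what silences clippy's `large_enum_variant` lint.
#[allow(dead_code)]
enum BoxedCommands {
    Small,
    Big(Box<BigOptions>),
}

fn main() {
    println!("unboxed: {} bytes", size_of::<UnboxedCommands>());
    println!("boxed:   {} bytes", size_of::<BoxedCommands>());
}
```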
Made with Bob