feat: use tiktoken for tokenizing prompts #927


Open
wants to merge 2 commits into base: main

Conversation

Xunzhuo
Member

@Xunzhuo Xunzhuo commented Jun 5, 2025

feat: use tiktoken for tokenizing prompts


netlify bot commented Jun 5, 2025

Deploy Preview for gateway-api-inference-extension ready!

🔨 Latest commit: fcdb8f7
🔍 Latest deploy log: https://app.netlify.com/projects/gateway-api-inference-extension/deploys/6842aa255ad96b00080dabf3
😎 Deploy Preview: https://deploy-preview-927--gateway-api-inference-extension.netlify.app

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Jun 5, 2025
@k8s-ci-robot k8s-ci-robot requested review from danehans and robscott June 5, 2025 12:35
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: Xunzhuo
Once this PR has been reviewed and has the lgtm label, please assign jeffwan for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Jun 5, 2025
Signed-off-by: bitliu <bitliu@tencent.com>
@ahg-g
Contributor

ahg-g commented Jun 5, 2025

I think we should make tokenization a plugin in our framework
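For discussion, a rough sketch of what a pluggable tokenizer hook could look like (hypothetical names, not the framework's actual plugin API):

```go
package plugins

// Tokenizer is a hypothetical plugin interface: implementations are
// registered by name so deployments can swap tokenization strategies.
type Tokenizer interface {
	Name() string
	Tokenize(prompt string) ([]byte, error)
}

var registry = map[string]Tokenizer{}

// Register makes a tokenizer selectable by name, e.g. from config.
func Register(t Tokenizer) { registry[t.Name()] = t }

// Lookup returns the named tokenizer, or false if none is registered.
func Lookup(name string) (Tokenizer, bool) {
	t, ok := registry[name]
	return t, ok
}
```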

@Xunzhuo
Member Author

Xunzhuo commented Jun 6, 2025

@ahg-g how about adding an option in pkg/epp/scheduling/config/config.go that lets users choose which tokenizer to use via environment variables? Please see the latest updates.
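A minimal sketch of the idea, with hypothetical names (the tiktoken calls follow the pkoukk/tiktoken-go README; the actual wiring in this PR may differ):

```go
package tokenizer

import (
	"os"

	"github.com/pkoukk/tiktoken-go"
)

// Tokenizer turns a prompt into the byte sequence used for prefix hashing.
type Tokenizer interface {
	Tokenize(prompt string) ([]byte, error)
}

// CharacterTokenizer keeps today's behavior: the raw prompt bytes.
type CharacterTokenizer struct{}

func (CharacterTokenizer) Tokenize(prompt string) ([]byte, error) {
	return []byte(prompt), nil
}

// TiktokenTokenizer encodes the prompt and flattens token IDs into bytes.
type TiktokenTokenizer struct{ encoding string }

func (t TiktokenTokenizer) Tokenize(prompt string) ([]byte, error) {
	enc, err := tiktoken.GetEncoding(t.encoding) // e.g. "cl100k_base"
	if err != nil {
		return nil, err
	}
	ids := enc.Encode(prompt, nil, nil)
	out := make([]byte, 0, len(ids)*4)
	for _, id := range ids {
		out = append(out, byte(id), byte(id>>8), byte(id>>16), byte(id>>24))
	}
	return out, nil
}

// New selects an implementation from PREFIX_CACHE_TOKENIZER_TYPE,
// defaulting to character-based "tokenization".
func New() Tokenizer {
	if os.Getenv("PREFIX_CACHE_TOKENIZER_TYPE") == "tiktoken" {
		return TiktokenTokenizer{encoding: "cl100k_base"}
	}
	return CharacterTokenizer{}
}
```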

Signed-off-by: bitliu <bitliu@tencent.com>
@@ -11,6 +11,8 @@ require (
github.com/google/uuid v1.6.0
github.com/onsi/ginkgo/v2 v2.23.4
github.com/onsi/gomega v1.37.0
github.com/pkoukk/tiktoken-go v0.1.7
Contributor

@Xunzhuo it's great to see you getting involved in the project! Why use github.com/pkoukk/tiktoken-go instead of github.com/tiktoken-go/tokenizer? The latter seems to be better maintained.

Member Author

Thanks @danehans, I'm planning to get more deeply involved in the project and would love to help.

QueueThresholdCritical: envutil.GetEnvInt("QUEUE_THRESHOLD_CRITICAL", commonconfig.DefaultQueueThresholdCritical, baseLogger),
QueueingThresholdLoRA: envutil.GetEnvInt("QUEUING_THRESHOLD_LORA", defaultQueueingThresholdLoRA, baseLogger),
LoraAffinityThreshold: envutil.GetEnvFloat("LORA_AFFINITY_THRESHOLD", defaultLoraAffinityThreshold, baseLogger),
PrefixCacheTokenizerType: envutil.GetEnvString("PREFIX_CACHE_TOKENIZER_TYPE", "characters", baseLogger),
Contributor

@danehans danehans Jun 6, 2025

Is the tokenizer dependent on the prefix cache plugin being enabled?

@nirrozenbaum
Contributor

nirrozenbaum commented Jun 8, 2025

cc @liu-cong @vMaroon @kfirtoledo
Please take a look; this PR might affect the work you're doing on converging the llm-d prefix with the GIE prefix.

@@ -224,7 +226,11 @@ func (m *Plugin) getPrefixState(cycleState *types.CycleState) (*schedulingContex
// For block i, hash(i) = hash(block i content, hash(i-1)).
func hashPrompt(ctx context.Context, request *types.LLMRequest, cacheBlockSize int, maxPrefixBlocks int) []BlockHash {
loggerDebug := log.FromContext(ctx).V(logutil.DEBUG)
-prompt := []byte(request.Prompt)
+prompt, err := tokenizer.New(config.Conf.PrefixCacheTokenizerType).Tokenize(request.Prompt)
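
For reference, a minimal self-contained sketch of the chained block hashing described in the comment above (hash(i) = hash(block i content, hash(i-1))), using a hypothetical FNV-based helper rather than the plugin's actual hash function:

```go
package prefixcache

import (
	"encoding/binary"
	"hash/fnv"
)

// hashBlocks folds each block's hash into the next, so two prompts that
// share a prefix produce an identical leading run of block hashes.
func hashBlocks(prompt []byte, blockSize, maxBlocks int) []uint64 {
	h := fnv.New64a() // assumption: any stable 64-bit hash works for a sketch
	var hashes []uint64
	var prev uint64
	for start := 0; start+blockSize <= len(prompt) && len(hashes) < maxBlocks; start += blockSize {
		var prevBytes [8]byte
		binary.LittleEndian.PutUint64(prevBytes[:], prev)
		h.Reset()
		h.Write(prevBytes[:])                    // fold in hash(i-1)
		h.Write(prompt[start : start+blockSize]) // block i content
		prev = h.Sum64()
		hashes = append(hashes, prev)
	}
	return hashes
}
```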
Contributor

Why convert the prompts to tokens?
It doesn't provide any benefit in the prefix scorer and only increases the scorer's computation time. (@liu-cong)

Contributor

+1

It was intentional not to tokenize the prompt, since this is an estimated prefix cache indexer and tokenization won't necessarily increase its accuracy. There are ongoing investigations into whether to actually probe the cache indexes from the model servers and match the tokenizers, but that should be a separate discussion.

Collaborator

++

Cool to see what an implementation of a tokenizer in the EPP might look like, but I'm not sure we have a need for it quite yet.

@vMaroon
Contributor

vMaroon commented Jun 8, 2025

I don't think tokenization is a standalone feature - it should serve an end. As @kfirtoledo said, prefix-cache locality estimation in the router does not require tokenization.

My thoughts:

  1. Tokenization should ideally stay off the blocking path unless absolutely necessary - see the llm-d-inference-extension KV-cache-aware scorer, which does require tokenization but still keeps it off the blocking path while caching and managing tokenizers (a rough sketch of this pattern follows below)
  2. As far as I know, the Tiktoken tokenizer formally supports only OpenAI's closed models, e.g., for use against the OpenAI service APIs. See the list in the imported repo: https://github.com/pkoukk/tiktoken-go#available-models
    • While it may be usable for other models built on the same encodings, formal support for the broader set of models comes through HuggingFace's tokenizers
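
For illustration, one rough shape of the off-the-blocking-path pattern (hypothetical names, not the llm-d implementation): tokenize in the background and let the scorer use whatever results are already cached.

```go
package tokenizer

import "sync"

// AsyncTokenizer computes tokens in a background worker so the
// request-scoring path never blocks on tokenization.
type AsyncTokenizer struct {
	mu    sync.RWMutex
	cache map[string][]int // prompt -> token IDs, once computed
	jobs  chan string
}

func NewAsyncTokenizer(encode func(string) []int) *AsyncTokenizer {
	t := &AsyncTokenizer{cache: map[string][]int{}, jobs: make(chan string, 1024)}
	go func() {
		for prompt := range t.jobs {
			ids := encode(prompt) // slow work happens off the request path
			t.mu.Lock()
			t.cache[prompt] = ids
			t.mu.Unlock()
		}
	}()
	return t
}

// TokensIfReady returns cached tokens immediately; on a miss it
// schedules background tokenization instead of blocking.
func (t *AsyncTokenizer) TokensIfReady(prompt string) ([]int, bool) {
	t.mu.RLock()
	ids, ok := t.cache[prompt]
	t.mu.RUnlock()
	if !ok {
		select {
		case t.jobs <- prompt:
		default: // queue full: drop rather than block the request
		}
	}
	return ids, ok
}
```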

@nirrozenbaum
Contributor

nirrozenbaum commented Jun 8, 2025

Based on @kfirtoledo's and @vMaroon's comments, this change might cause performance degradation and other negative implications with no significant advantages.
/hold until the need for this change is settled.

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jun 8, 2025
@Xunzhuo
Member Author

Xunzhuo commented Jun 10, 2025

Thanks for the input; let's hold this PR for now and wait for more feedback.

@k8s-ci-robot
Contributor

@Xunzhuo: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name | Commit | Required | Rerun command
pull-gateway-api-inference-extension-test-unit-main | fcdb8f7 | true | /test pull-gateway-api-inference-extension-test-unit-main
pull-gateway-api-inference-extension-test-e2e-main | fcdb8f7 | true | /test pull-gateway-api-inference-extension-test-e2e-main

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

Labels
cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.
9 participants