
Conversation

Contributor

@ahao-anyscale ahao-anyscale commented Sep 15, 2025

Purpose

Closes #24460

Test Plan

Unit tests in vllm/tests/test_config.py

Test Result

Unit tests:

=========================================== test session starts ============================================
platform linux -- Python 3.11.11, pytest-8.4.2, pluggy-1.5.0 -- /home/ray/anaconda3/bin/python
cachedir: .pytest_cache
rootdir: /home/ray/default/workspace/fork/vllm
configfile: pyproject.toml
plugins: locust-2.40.4, asyncio-1.2.0, anyio-3.7.1
asyncio: mode=Mode.STRICT, debug=False, asyncio_default_fixture_loop_scope=None, asyncio_default_test_loop_scope=function                                                                                               
collected 3 items                                                                                          

tests/test_config.py::test_s3_url_model_tokenizer_paths[s3://air-example-data/rayllm-ossci/facebook-opt-350m/] PASSED [ 33%]                                                                                            
tests/test_config.py::test_s3_url_model_tokenizer_paths[s3://air-example-data/rayllm-ossci/meta-Llama-3.2-1B-Instruct/] PASSED [ 66%]                                                                                   
tests/test_config.py::test_s3_url_different_models_create_different_directories PASSED               [100%]

============================================ 3 passed in 29.29s ============================================

Terminal output from running vllm serve twice, first generating and then reusing the torch.compile cache (both runs resolve the S3 URL to the same local path, /tmp/bb659210, so the cache directory matches):

(base) ray@ip-10-0-84-181:~/default/workspace/fork/vllm$ vllm serve s3://air-example-data/rayllm-ossci/meta-Llama-3.2-1B-Instruct/ --load-format runai_streamer
...
(EngineCore_DP0 pid=371203) INFO 09-15 15:26:12 [core.py:75] Initializing a V1 LLM engine (v0.1.dev9482+gb8303d21c) with config: model='/tmp/bb659210', speculative_config=None, tokenizer='/tmp/bb659210', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=131072, download_dir=None, load_format=runai_streamer, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=s3://air-example-data/rayllm-ossci/meta-Llama-3.2-1B-Instruct/, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, pooler_config=None, compilation_config={"level":3,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output","vllm.mamba_mixer2","vllm.mamba_mixer","vllm.short_conv","vllm.linear_attention","vllm.plamo2_mamba_mixer","vllm.gdn_attention"],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"cudagraph_mode":1,"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"pass_config":{},"max_capture_size":512,"local_cache_dir":null}
...
(EngineCore_DP0 pid=371203) [RunAI Streamer] CPU Buffer size: 8 Bytes for files: ['s3://air-example-data/rayllm-ossci/meta-Llama-3.2-1B-Instruct/model.safetensors']
(EngineCore_DP0 pid=371203) [RunAI Streamer] CPU Buffer size: 16.4 KiB for files: ['s3://air-example-data/rayllm-ossci/meta-Llama-3.2-1B-Instruct/model.safetensors']
(EngineCore_DP0 pid=371203) [RunAI Streamer] CPU Buffer size: 2.3 GiB for files: ['s3://air-example-data/rayllm-ossci/meta-Llama-3.2-1B-Instruct/model.safetensors']
Loading safetensors using Runai Model Streamer:   0% Completed | 0/146 [00:00<?, ?it/s]
Loading safetensors using Runai Model Streamer:   1% Completed | 1/146 [00:00<00:26,  5.46it/s]
Loading safetensors using Runai Model Streamer:  23% Completed | 34/146 [00:00<00:00, 143.39it/s]
Loading safetensors using Runai Model Streamer:  36% Completed | 53/146 [00:00<00:00, 155.32it/s]
Loading safetensors using Runai Model Streamer:  60% Completed | 87/146 [00:00<00:00, 217.54it/s]
Loading safetensors using Runai Model Streamer:  81% Completed | 118/146 [00:00<00:00, 243.86it/s]
Loading safetensors using Runai Model Streamer:  99% Completed | 145/146 [00:00<00:00, 175.13it/s]
Read throughput is 2.66 GB per second 
Loading safetensors using Runai Model Streamer: 100% Completed | 146/146 [00:00<00:00, 156.78it/s]
(EngineCore_DP0 pid=371203) 
(EngineCore_DP0 pid=371203) [RunAI Streamer] Overall time to stream 2.3 GiB of all files: 1.25s, 1.8 GiB/s
(EngineCore_DP0 pid=371203) INFO 09-15 15:26:15 [gpu_model_runner.py:2408] Model loading took 2.3185 GiB and 1.706393 seconds
(EngineCore_DP0 pid=371203) INFO 09-15 15:26:18 [backends.py:539] Using cache directory: /home/ray/.cache/vllm/torch_compile_cache/5896247dff/rank_0_0/backbone for vLLM's torch.compile
(EngineCore_DP0 pid=371203) INFO 09-15 15:26:18 [backends.py:550] Dynamo bytecode transform time: 2.60 s
(EngineCore_DP0 pid=371203) INFO 09-15 15:26:20 [backends.py:194] Cache the graph for dynamic shape for later use
(EngineCore_DP0 pid=371203) INFO 09-15 15:26:29 [backends.py:215] Compiling a graph for dynamic shape takes 11.21 s
(EngineCore_DP0 pid=371203) INFO 09-15 15:26:30 [monitor.py:34] torch.compile takes 13.81 s in total
...

(base) ray@ip-10-0-84-181:~/default/workspace/fork/vllm$ vllm serve s3://air-example-data/rayllm-ossci/meta-Llama-3.2-1B-Instruct/ --load-format runai_streamer
...
(EngineCore_DP0 pid=372958) INFO 09-15 15:27:02 [core.py:75] Initializing a V1 LLM engine (v0.1.dev9482+gb8303d21c) with config: model='/tmp/bb659210', speculative_config=None, tokenizer='/tmp/bb659210', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=131072, download_dir=None, load_format=runai_streamer, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=s3://air-example-data/rayllm-ossci/meta-Llama-3.2-1B-Instruct/, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, pooler_config=None, compilation_config={"level":3,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output","vllm.mamba_mixer2","vllm.mamba_mixer","vllm.short_conv","vllm.linear_attention","vllm.plamo2_mamba_mixer","vllm.gdn_attention"],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"cudagraph_mode":1,"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"pass_config":{},"max_capture_size":512,"local_cache_dir":null}
...
(EngineCore_DP0 pid=372958) [RunAI Streamer] CPU Buffer size: 8 Bytes for files: ['s3://air-example-data/rayllm-ossci/meta-Llama-3.2-1B-Instruct/model.safetensors']
(EngineCore_DP0 pid=372958) [RunAI Streamer] CPU Buffer size: 16.4 KiB for files: ['s3://air-example-data/rayllm-ossci/meta-Llama-3.2-1B-Instruct/model.safetensors']
(EngineCore_DP0 pid=372958) [RunAI Streamer] CPU Buffer size: 2.3 GiB for files: ['s3://air-example-data/rayllm-ossci/meta-Llama-3.2-1B-Instruct/model.safetensors']
Loading safetensors using Runai Model Streamer:   0% Completed | 0/146 [00:00<?, ?it/s]
Loading safetensors using Runai Model Streamer:   1% Completed | 1/146 [00:00<00:23,  6.19it/s]
Loading safetensors using Runai Model Streamer:  23% Completed | 34/146 [00:00<00:00, 151.81it/s]
Loading safetensors using Runai Model Streamer:  43% Completed | 63/146 [00:00<00:00, 198.33it/s]
Loading safetensors using Runai Model Streamer:  67% Completed | 98/146 [00:00<00:00, 247.83it/s]
Loading safetensors using Runai Model Streamer:  95% Completed | 139/146 [00:00<00:00, 299.89it/s]
Read throughput is 3.92 GB per second 
Loading safetensors using Runai Model Streamer: 100% Completed | 146/146 [00:00<00:00, 217.79it/s]
(EngineCore_DP0 pid=372958) 
(EngineCore_DP0 pid=372958) [RunAI Streamer] Overall time to stream 2.3 GiB of all files: 0.86s, 2.7 GiB/s
(EngineCore_DP0 pid=372958) INFO 09-15 15:27:05 [gpu_model_runner.py:2408] Model loading took 2.3185 GiB and 1.345015 seconds
(EngineCore_DP0 pid=372958) INFO 09-15 15:27:08 [backends.py:539] Using cache directory: /home/ray/.cache/vllm/torch_compile_cache/5896247dff/rank_0_0/backbone for vLLM's torch.compile
(EngineCore_DP0 pid=372958) INFO 09-15 15:27:08 [backends.py:550] Dynamo bytecode transform time: 2.64 s
(EngineCore_DP0 pid=372958) INFO 09-15 15:27:09 [backends.py:161] Directly load the compiled graph(s) for dynamic shape from the cache, took 1.011 s
(EngineCore_DP0 pid=372958) INFO 09-15 15:27:09 [monitor.py:34] torch.compile takes 2.64 s in total


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing the test command.
  • The test results, such as pasting the results comparison before and after, or e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run fastcheck CI, which runs a small and essential subset of CI tests to quickly catch errors.

You can ask your reviewers to trigger select CI tests on top of fastcheck CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai.

🚀

Collaborator

@kouroshHakha kouroshHakha left a comment

LGTM, asking for some quick stuff:

Comment on lines 430 to 431
# Verify that the paths are deterministic based on the hash
import hashlib

Let's remove this test since it's implementation dependent. Checking for determinism by creating a config twice and seeing whether the resulting paths match is the better test, which you are already doing.
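
For illustration, a minimal sketch of that style of determinism check (the constructor call, argument names, and assertions here are assumptions for illustration, not the exact test in tests/test_config.py):

from vllm.config import ModelConfig

def test_s3_url_paths_are_deterministic():
    # Hypothetical sketch: build two configs from the same S3 URL and check that
    # they resolve to the same local model/tokenizer paths, without asserting
    # anything about the underlying hashing scheme.
    s3_url = "s3://air-example-data/rayllm-ossci/meta-Llama-3.2-1B-Instruct/"
    config_a = ModelConfig(model=s3_url, tokenizer=s3_url)
    config_b = ModelConfig(model=s3_url, tokenizer=s3_url)
    assert config_a.model == config_b.model
    assert config_a.tokenizer == config_b.tokenizer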

Comment on lines 497 to 498

# Verify that the directory names contain different hashes

Let's remove these specifics of the hashing pattern from the test as well.

# Only download tokenizer if needed and not already handled
if is_runai_obj_uri(tokenizer):
object_storage_tokenizer = ObjectStorageModel()
directory = hashlib.sha256(str(tokenizer).encode()).hexdigest()[:8]

Create a util function for these hashing specifics that is shared between the tokenizer and model?
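
For example, something along these lines (a sketch only; the function name and signature are assumptions, not the code that landed in the PR):

import hashlib

def object_storage_dir_name(url: str) -> str:
    # Derive a short, deterministic directory name from an object-storage URL so
    # the model and tokenizer resolve to stable local paths across runs, which in
    # turn keeps the torch.compile cache key stable.
    return hashlib.sha256(str(url).encode()).hexdigest()[:8]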

Signed-off-by: ahao-anyscale <ahao@anyscale.com>
Collaborator

@kouroshHakha kouroshHakha left a comment

LGTM.

@kouroshHakha kouroshHakha added the ready label (ONLY add when PR is ready to merge/full CI is needed) Sep 16, 2025
@ahao-anyscale ahao-anyscale marked this pull request as ready for review September 16, 2025 00:36
Signed-off-by: ahao-anyscale <ahao@anyscale.com>
…directory creation logic in tests

Signed-off-by: ahao-anyscale <ahao@anyscale.com>
Collaborator

@ruisearch42 ruisearch42 left a comment

Mostly minor issues, otherwise LGTM

Signed-off-by: ahao-anyscale <ahao@anyscale.com>
Collaborator

@ruisearch42 ruisearch42 left a comment

thanks for addressing the comments, some final nits

Signed-off-by: ahao-anyscale <ahao@anyscale.com>
Signed-off-by: ahao-anyscale <ahao@anyscale.com>

mergify bot commented Sep 17, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @ahao-anyscale.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Sep 17, 2025
@hmellor
Member

hmellor commented Sep 18, 2025

@22quinn / @ProExpertProg could you please take a look at this PR from a model loader / torch compile perspective?

Signed-off-by: ahao-anyscale <ahao@anyscale.com>
@mergify mergify bot removed the needs-rebase label Sep 18, 2025

mergify bot commented Sep 19, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @ahao-anyscale.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Sep 19, 2025
Collaborator

@ProExpertProg ProExpertProg left a comment

LGTM, don't see anything of note w.r.t. the compile cache.

Signed-off-by: ahao-anyscale <ahao@anyscale.com>
Signed-off-by: ahao-anyscale <ahao@anyscale.com>
@mergify mergify bot removed the needs-rebase label Sep 22, 2025
@mgoin mgoin merged commit c8bde93 into vllm-project:main Sep 24, 2025
41 checks passed
FeiDaLI pushed a commit to FeiDaLI/vllm that referenced this pull request Sep 25, 2025
…gether (vllm-project#24922)

Signed-off-by: ahao-anyscale <ahao@anyscale.com>
yewentao256 pushed a commit that referenced this pull request Oct 3, 2025
…gether (#24922)

Signed-off-by: ahao-anyscale <ahao@anyscale.com>
Signed-off-by: yewentao256 <zhyanwentao@126.com>

Successfully merging this pull request may close these issues.

[Bug]: ModelConfig Hashing for Torch.compile cache when using S3
7 participants