@larryliu0820 larryliu0820 commented Nov 12, 2025

This PR adds a CUDA memory tracker and integrates it into the LLM Runner's Stats. It adds the gpu_total_bytes, gpu_free_before_load_bytes, gpu_free_after_load_bytes, gpu_free_after_generate_bytes, and gpu_peak_usage_mb fields.

Example log line:

PyTorchObserver {"prompt_tokens":387,"generated_tokens":68,"model_load_start_ms":1762976881583,"model_load_end_ms":1762976883487,"inference_start_ms":1762976887396,"inference_end_ms":1762976888589,"prompt_eval_end_ms":1762976887815,"first_token_ms":1762976887815,"aggregate_sampling_time_ms":17,"gpu_total_bytes":17094475776,"gpu_free_before_load_bytes":15589179392,"gpu_free_after_load_bytes":11455692800,"gpu_free_after_generate_bytes":10530848768,"gpu_peak_usage_mb":4824,"SCALING_FACTOR_UNITS_PER_SECOND":1000}


pytorch-bot bot commented Nov 12, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15780

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 Cancelled Job, 5 Unrelated Failures

As of commit 59e6a65 with merge base 1034a0f:

CANCELLED JOB - The following job was cancelled. Please retry:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Nov 12, 2025
@larryliu0820 larryliu0820 marked this pull request as ready for review November 12, 2025 19:59
@Gasoonjia Gasoonjia left a comment


Thanks for adding this! The logic LGTM; I just have some concern about whether we should use the term "peak GPU memory".

}

if (!is_loaded()) {
stats_->model_load_start_ms = time_in_ms();
@Gasoonjia commented on this change:

Why do we want to delete the loading time recording?

@larryliu0820 (author) replied:

Moved it inside load()

gpu_free_before_load_bytes = static_cast<uint64_t>(-1);
gpu_free_after_load_bytes = static_cast<uint64_t>(-1);
gpu_free_after_generate_bytes = static_cast<uint64_t>(-1);
gpu_peak_usage_mb = -1.0;
@Gasoonjia commented on this change:

I think we should rename it to describe it better. I don't think this is actually the peak GPU usage; it is the gap in memory usage between the start and the end of the run. The two can differ if the peak occurs in the middle of execution.

Same goes for the other places that say "peak gpu memory".

@larryliu0820 (author) replied:

The meaning of this field is "among all the samples collected via log_sample(), what is the peak GPU memory usage". We could add a few more sample points, but from what I observed, memory usage is quite stable after load() finishes.

@larryliu0820 larryliu0820 added the release notes: desktop for desktop/laptop workstream label Nov 12, 2025
@larryliu0820 larryliu0820 merged commit b9751b1 into main Nov 12, 2025
167 of 179 checks passed
@larryliu0820 larryliu0820 deleted the memory_tracker branch November 12, 2025 23:49