Add a CUDA memory tracker and use it in voxtral runner #15780
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15780
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 Cancelled Job, 5 Unrelated Failures as of commit 59e6a65 with merge base 1034a0f
CANCELLED JOB - The following job was cancelled. Please retry:
BROKEN TRUNK - The following jobs failed but were present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Force-pushed from 7e59c3f to 383b91f
Gasoonjia left a comment:
Thanks for adding that! The logic LGTM; I just have some concern about whether we should use the term "peak gpu memory".
```cpp
}

if (!is_loaded()) {
  stats_->model_load_start_ms = time_in_ms();
```
why do we want to delete the loading time recording?
Moved it inside load()
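A minimal sketch of what that looks like after the move, assuming the `stats_` and `time_in_ms()` names from the snippet above and a hypothetical `Runner::load()` wrapper (not the exact code in this PR):

```cpp
// Hedged sketch: the load-time stamps are taken inside load() itself,
// so the caller no longer needs the is_loaded() guard shown above.
// `model_load_end_ms` is assumed to be the matching end-time field.
Error Runner::load() {
  stats_->model_load_start_ms = time_in_ms();
  // ... create the module and load the model weights ...
  stats_->model_load_end_ms = time_in_ms();
  return Error::Ok;
}
```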
```cpp
gpu_free_before_load_bytes = static_cast<uint64_t>(-1);
gpu_free_after_load_bytes = static_cast<uint64_t>(-1);
gpu_free_after_generate_bytes = static_cast<uint64_t>(-1);
gpu_peak_usage_mb = -1.0;
```
I think we can rename it for a better description. I don't think it is actually the peak gpu usage; it is the memory usage gap between the start and the end of the run, which would miss the case where the peak happens in the middle of execution.
Same for the other places that say "peak gpu memory".
The meaning of this field is "among all the samples that I collected using log_sample(), what is the peak gpu memory". I think we can add a few more sample points, but from what I observed, the memory usage has been quite stable after load() finishes.
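For readers following the thread, here is a minimal sketch of what such a sampling-based tracker could look like, assuming `cudaMemGetInfo` from the CUDA runtime API; the class and member names are illustrative and not necessarily those in the PR:

```cpp
#include <cuda_runtime.h>

#include <algorithm>
#include <cstdint>

// Sampling-based tracker: "peak" is the largest usage seen across explicit
// log_sample() calls, not a hardware high-water mark, so a spike between
// two samples would be missed (the reviewer's point above).
class CudaMemoryTracker {
 public:
  // Query free/total device memory and record one usage sample.
  void log_sample() {
    size_t free_bytes = 0;
    size_t total_bytes = 0;
    if (cudaMemGetInfo(&free_bytes, &total_bytes) != cudaSuccess) {
      return;  // keep previously recorded samples on failure
    }
    total_bytes_ = static_cast<uint64_t>(total_bytes);
    last_free_bytes_ = static_cast<uint64_t>(free_bytes);
    peak_used_bytes_ =
        std::max(peak_used_bytes_, total_bytes_ - last_free_bytes_);
  }

  // Peak usage across samples in MB, or -1.0 if nothing was sampled yet.
  double peak_usage_mb() const {
    return total_bytes_ == 0
        ? -1.0
        : static_cast<double>(peak_used_bytes_) / (1024.0 * 1024.0);
  }

  uint64_t last_free_bytes() const { return last_free_bytes_; }
  uint64_t total_bytes() const { return total_bytes_; }

 private:
  uint64_t peak_used_bytes_ = 0;
  uint64_t last_free_bytes_ = 0;
  uint64_t total_bytes_ = 0;
};
```

Sampling only at the load()/generate() boundaries is cheap, but any allocation spike between two log_sample() calls will not show up in the reported peak.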
This PR adds a CUDA memory tracker and integrates it into `Stats` of the LLM Runner. We added `gpu_total_bytes`, `gpu_free_before_load_bytes`, `gpu_free_after_load_bytes`, `gpu_free_after_generate_bytes`, and `gpu_peak_usage_mb` information. See logging:
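For illustration, the new fields could sit on `Stats` roughly as below, using the sentinel initialization shown in the reset snippet earlier in the thread; the exact layout and logging format in the PR may differ:

```cpp
#include <cstdint>

// Sketch of the GPU-memory fields added to the LLM Runner Stats;
// the sentinel values mean "not recorded yet".
struct Stats {
  // ... existing timing fields such as model_load_start_ms ...
  uint64_t gpu_total_bytes = static_cast<uint64_t>(-1);
  uint64_t gpu_free_before_load_bytes = static_cast<uint64_t>(-1);
  uint64_t gpu_free_after_load_bytes = static_cast<uint64_t>(-1);
  uint64_t gpu_free_after_generate_bytes = static_cast<uint64_t>(-1);
  double gpu_peak_usage_mb = -1.0;
};
```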