[Data] Add Runtime Metrics String #43790
Conversation
Signed-off-by: Matthew Owen <mowen@anyscale.com>
LGTM
Re the example:
Runtime Metrics:
* ReadParquet->SplitBlocks(24): 1.42s (80.860%)
* Map(f): 290.89ms (16.539%)
* Sort: 0us (0.000%)
* Filter(g): 39.68ms (2.256%)
* Scheduling: 166.04ms (9.440%)
* Total: 1.76s (100%)
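Each line's percentage is presumably that operator's wall time over the total wall time; since operators in a streaming executor can overlap, the per-operator percentages may sum past 100% (as they do above). Purely as illustration, a minimal sketch of how such a breakdown string could be formatted — `format_runtime_metrics` and its unit thresholds are hypothetical, not Ray's actual implementation:

```python
# Hypothetical sketch of formatting per-operator runtime metrics.
# Not Ray's actual code; the helper name and unit cutoffs are illustrative.

def format_runtime_metrics(op_times_s, total_s):
    """op_times_s: {operator_name: wall_time_in_seconds}."""
    lines = ["Runtime Metrics:"]
    for name, t in op_times_s.items():
        # Render seconds in a human-friendly unit.
        if t >= 1:
            shown = f"{t:.2f}s"
        elif t >= 1e-3:
            shown = f"{t * 1e3:.2f}ms"
        else:
            shown = f"{t * 1e6:.0f}us"
        lines.append(f"* {name}: {shown} ({t / total_s:.3%})")
    lines.append(f"* Total: {total_s:.2f}s (100%)")
    return "\n".join(lines)

metrics = {
    "ReadParquet->SplitBlocks(24)": 1.42,
    "Map(f)": 0.29089,
    "Sort": 0.0,
    "Filter(g)": 0.03968,
    "Scheduling": 0.16604,
}
print(format_runtime_metrics(metrics, total_s=1.76))
```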
Any reason why Sort took 0us in the metrics here?
Hmm, I think it might be a quirk of Sort having two sub-operators. From a little pdb investigation, it seems to involve the SortMap and SortReduce sub-operators.
#43790 unearthed a bug in the calculation of time_total_s for sub-operators; this fixes that bug by moving the time calculation to the top level of the from_block_metadata function. Signed-off-by: Matthew Owen <mowen@anyscale.com>
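One plausible reading of this fix, sketched below with hypothetical code (the BlockMetadata fields and helper here are illustrative, not Ray's actual from_block_metadata): if wall time is derived separately per sub-operator from only the blocks attributed to it, a parent operator like Sort, whose blocks all belong to its SortMap/SortReduce sub-operators, ends up reporting 0; computing once at the top level over all block metadata avoids this.

```python
# Hypothetical illustration of the sub-operator timing bug; not Ray's code.
from dataclasses import dataclass

@dataclass
class BlockMetadata:
    start_time_s: float
    end_time_s: float

def wall_time_from_blocks(blocks):
    """Wall time as the span from earliest start to latest end."""
    if not blocks:
        return 0.0
    return max(b.end_time_s for b in blocks) - min(b.start_time_s for b in blocks)

# Sort is split into two sub-operators; every block is attributed to one
# of them, so no blocks belong to the parent "Sort" entry itself.
sub_op_blocks = {
    "SortMap": [BlockMetadata(0.0, 0.8)],
    "SortReduce": [BlockMetadata(0.8, 1.5)],
    "Sort": [],  # parent has no blocks of its own
}

# Buggy: compute time per (sub-)operator, then read the parent's entry.
buggy_sort_time = wall_time_from_blocks(sub_op_blocks["Sort"])   # 0.0

# Fixed: compute once at the top level, over all blocks in the subtree.
all_blocks = [b for blocks in sub_op_blocks.values() for b in blocks]
fixed_sort_time = wall_time_from_blocks(all_blocks)              # 1.5
```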
Why are these changes needed?
This adds an additional set of runtime metrics, printed to help identify bottlenecks in Ray Data code. For example, for the following code, the following runtime metrics would be printed:
Unrelated to the main changes, I noticed that I was using the wrong computation for the total_wall_time used in the dataset throughput, so I fixed that. That unearthed a bug in DatasetStatsSummary.get_total_wall_time, so I fixed that as well.
Related issue number
Closes #42804
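As a guess at the kind of distinction involved in the total_wall_time fix described above (illustrative intervals and names only, not the actual bug): summing per-operator wall times overstates end-to-end wall time whenever operators overlap, which would skew a rows-per-second throughput figure.

```python
# Hypothetical sketch; intervals and numbers are illustrative, not Ray's.
op_intervals = {            # (start_s, end_s) per operator
    "ReadParquet": (0.0, 1.42),
    "Map(f)": (0.2, 0.5),
    "Filter(g)": (0.5, 0.54),
}

# Summing per-operator durations double-counts overlapping work.
sum_of_op_times = sum(end - start for start, end in op_intervals.values())

# End-to-end wall time: latest end minus earliest start.
end_to_end = (max(e for _, e in op_intervals.values())
              - min(s for s, _ in op_intervals.values()))

rows = 10_000
throughput_overcounted = rows / sum_of_op_times  # uses inflated denominator
throughput_wall_clock = rows / end_to_end        # rows per actual wall second
```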
Checks
* I've signed off every commit (git commit -s) in this PR.
* I've run scripts/format.sh to lint the changes in this PR.
* If I've added a method in Tune, I've added it in doc/source/tune/api/ under the corresponding .rst file.