Conversation

@pgmpablo157321
Contributor

Fixes #977

  • Count the number of overlatency queries for the Server scenario: in this case samples per query is always 1, so sample_latencies are used as the query latencies (see the sketch after this list)

  • Log the number of queries for all scenarios and the number of overlatency queries for the Server scenario
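
The counting logic amounts to comparing each per-query latency against the scenario's target latency bound. Below is a minimal C++ sketch of that idea; the function name and parameters are illustrative placeholders, not LoadGen's actual identifiers.

// Illustrative sketch only, not the actual LoadGen implementation.
// In the Server scenario each query holds a single sample, so the
// per-sample latencies can stand in for per-query latencies.
#include <cstddef>
#include <cstdint>
#include <vector>

size_t CountOverlatencyQueries(const std::vector<int64_t>& sample_latencies_ns,
                               int64_t target_latency_ns) {
  size_t overlatency_count = 0;
  for (int64_t latency_ns : sample_latencies_ns) {
    // A query is overlatency if it finished later than the target allows.
    if (latency_ns > target_latency_ns) {
      ++overlatency_count;
    }
  }
  return overlatency_count;
}

The total number of queries is then simply the size of the latency vector, so both values can be written to the summary log together.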

@github-actions
Contributor

github-actions bot commented Jan 11, 2022

MLCommons CLA bot All contributors have signed the MLCommons CLA ✍️ ✅

@rnaidu02 rnaidu02 requested a review from psyhtest January 11, 2022 16:47
@nvpohanh
Contributor

Otherwise, looks good to me. thanks!

@pgmpablo157321 pgmpablo157321 force-pushed the log_queries branch 2 times, most recently from ce65c57 to 374308d on January 13, 2022 20:54
// Count the number of overlatency queries for the Server or SingleStream
// scenarios. Since in this scenarios the number of samples per query is 1,
// sample_latencies are used. Later this will be calculated for the
// MultiStream scenario, but for this case it is necesary to use
Contributor


necesary -> necessary

@nvpohanh
Contributor

LGTM now! @psyhtest @rnaidu02 FYI

@rnaidu02
Contributor

@pgmpablo157321 Your branch is out of date with main. Can you sync your branch?

@rnaidu02 rnaidu02 merged commit dba14f8 into mlcommons:master Jan 25, 2022
@github-actions github-actions bot locked and limited conversation to collaborators Jan 25, 2022


Development

Successfully merging this pull request may close these issues.

Number of Queries and Number of Overlatency Queries

3 participants