

(Exact) time budget(s) for search use-case? #35

Closed
adamsolomou opened this issue Dec 8, 2017 · 1 comment

adamsolomou commented Dec 8, 2017

Similar to #27, will the search use-case be tested with a single time budget or over a range of increasing time budgets?

From readme.md:

the amount of time the program is allowed to run before the user terminates it (this will be of the order of 10s of seconds).

Is it possible to provide the (exact) time budget(s) that we should optimise for? I'm mainly asking because a modification might cause the program to print its results after 15 seconds, but if the program is terminated at 12 seconds then nothing will have been printed to stdout.


m8pple commented Dec 8, 2017

I'm reluctant to give an exact time budget as it encourages people to over-optimise,
which is not the point. Where possible I prefer cw6 to reflect genuine practice, so
I try to avoid introducing things which bring it back to being just an assessment.

The idea is that this is targeting a rate-based metric, which requires a particular mind-set
and approach from a parallelism point of view. (The three use-cases broadly follow
the latency, throughput, and latency-distortion metrics identified in the lectures.)
If the exact time budget is given, then it is no longer a throughput-oriented
metric; it is instead latency-oriented.

The intent of saying "10s of seconds" was to indicate that there will be enough time to
overcome startup costs, while discouraging people from thinking in terms of a single batch: the key
to good throughput is to output progressively, rather than outputting everything
at the end.
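
To make the "output progressively" point concrete, here is a minimal sketch (not the actual cw6 code; `is_solution` is a hypothetical placeholder for whatever the search use-case checks) where each result is printed and flushed as soon as it is found, so anything produced before the process is terminated still counts:

```c++
#include <cstdio>
#include <cstdint>

// Hypothetical predicate standing in for the real search condition.
bool is_solution(uint64_t candidate)
{
    return (candidate % 1000003) == 0; // placeholder test only
}

int main()
{
    for (uint64_t candidate = 1; ; ++candidate) {
        if (is_solution(candidate)) {
            // Print and flush immediately rather than buffering until the end,
            // so results produced before the time budget expires are not lost.
            std::fprintf(stdout, "%llu\n", (unsigned long long)candidate);
            std::fflush(stdout);
        }
    }
    // Never reached: the user terminates the process when the budget expires.
}
```

With that structure, being terminated at 12 seconds simply means fewer results on stdout, rather than none.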

I'll refine it to be "between 10 and 100 seconds" as that keeps the spirit of optimising
for throughput rather than latency, but at least puts a bound on it.

m8pple closed this as completed in c9a23f1 on Dec 8, 2017