An exciting and daring idea about the trade-off between query latency and accuracy. #1465
Labels: type/question (question about the product), wontfix (will not be worked on in the near term)
Our query-latency requirements are strict, but at the same time we can tolerate lower accuracy from missing some records. In most cases we would discard part of the result set anyway if it is larger than expected.
To avoid uncontrollably large intermediate output, why not assign a time limit to each step in the job DAG / non-parallelizable execution plan? If the maximum execution time is reached, simply move on to the next step and drop the remaining replies from peers. Since TCP delivers data strictly in order (one slow or lost segment stalls everything behind it), making fuller use of UDP may be a better fit.
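
As a concrete illustration, here is a minimal Go sketch of that per-step deadline, assuming a step fans out a request and collects peer replies over a channel. `gatherWithDeadline`, `Reply`, and the timings are illustrative, not taken from the project's codebase:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// Reply is a placeholder for the partial result one peer sends back.
type Reply struct {
	Peer string
	Rows int
}

// gatherWithDeadline collects peer replies until maxExecTime elapses, then
// returns whatever has arrived and abandons the rest, so one slow peer
// cannot stall the whole step.
func gatherWithDeadline(ctx context.Context, replies <-chan Reply, expected int, maxExecTime time.Duration) []Reply {
	ctx, cancel := context.WithTimeout(ctx, maxExecTime)
	defer cancel()

	got := make([]Reply, 0, expected)
	for len(got) < expected {
		select {
		case r := <-replies:
			got = append(got, r)
		case <-ctx.Done():
			return got // deadline hit: proceed to the next step with a partial result
		}
	}
	return got
}

func main() {
	replies := make(chan Reply)
	// Simulate three peers; the 200ms one misses the 50ms budget and is dropped.
	delays := []time.Duration{10 * time.Millisecond, 20 * time.Millisecond, 200 * time.Millisecond}
	for i, d := range delays {
		go func(i int, d time.Duration) {
			time.Sleep(d)
			replies <- Reply{Peer: fmt.Sprintf("peer-%d", i), Rows: 100}
		}(i, d)
	}

	got := gatherWithDeadline(context.Background(), replies, 3, 50*time.Millisecond)
	fmt.Printf("kept %d of 3 replies within the deadline\n", len(got))
}
```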
I wonder if the problem could be solved by providing max_exec_time/discard_rest_after_sometime semantics in the DSL.
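
If the DSL grew such semantics, one guess at the caller-facing shape might look like the sketch below; this assumes nothing about the project's actual API, and `QueryOptions`, `MaxExecTime`, `AllowPartial`, and `Result` are all made-up names:

```go
package main

import (
	"fmt"
	"time"
)

// QueryOptions sketches how max_exec_time / discard_rest_after_sometime
// semantics might surface to callers. Every name here is hypothetical.
type QueryOptions struct {
	MaxExecTime  time.Duration // time budget per DAG step; 0 means unlimited
	AllowPartial bool          // at the deadline, return what arrived instead of erroring
}

// Result flags truncation explicitly, so a caller that opted into partial
// results can tell a complete answer from one that dropped peer replies.
type Result struct {
	Rows    []string
	Partial bool
}

func main() {
	opts := QueryOptions{MaxExecTime: 50 * time.Millisecond, AllowPartial: true}
	res := Result{Rows: []string{"r1", "r2"}, Partial: true}
	fmt.Printf("opts=%+v got %d rows (partial=%v)\n", opts, len(res.Rows), res.Partial)
}
```

Flagging truncated results explicitly would keep the latency/accuracy trade-off visible to the caller rather than silently dropping records.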