Currently each input is fuzzed until it causes a crash or returns a single interesting mutation, before moving on to the next input. That input is then only fuzzed again once the input queue cycles. This means an input that produces interesting outputs is fuzzed just as often as inputs that produce nothing interesting.
Ideally, fuzzing shouldn't stop when the first interesting mutation is found, but only after some computed number of cycles, where that number is picked by a heuristic that favors inputs that produce interesting output, achieve higher coverage than other inputs, are smaller, execute faster, etc.
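A minimal sketch of what such a heuristic could look like, assuming per-input metadata fields (`produced_interesting`, `new_edges`, `size_bytes`, `exec_us`) that are illustrative, not part of any existing code:

```python
def fuzz_cycles(meta: dict, base: int = 256, cap: int = 4096) -> int:
    """Compute how many mutation cycles an input deserves, scaling a
    base budget by factors that favor interesting, high-coverage,
    small, fast inputs. All thresholds here are placeholder guesses."""
    score = 1.0
    if meta["produced_interesting"]:
        score *= 4.0                        # proven producers get more time
    score *= 1.0 + meta["new_edges"] / 64.0  # coverage bonus
    if meta["size_bytes"] > 1024:
        score *= 0.5                        # penalize large inputs
    if meta["exec_us"] > 1000:
        score *= 0.5                        # penalize slow inputs
    return min(int(base * score), cap)      # never exceed the cap
```

The exact weights would need tuning against real corpora; the point is only that the cycle count becomes a function of input quality rather than a constant.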
This would probably be implemented at the worker level, with results either streamed back to the coordinator or returned in one big batch. The former is probably better: it requires holding less memory, and it lets other workers start working on a returned input if everything else has already been consumed.
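The streaming variant could be sketched like this, using a thread and a queue as a stand-in for the real worker/coordinator channel; `mutate` and `is_interesting` are placeholder stubs, not actual fuzzer functions:

```python
import queue
import threading

# Placeholder stubs; a real fuzzer would substitute its mutator and
# coverage feedback here.
def mutate(data: bytes, round_no: int) -> bytes:
    return data + bytes([round_no & 0xFF])

def is_interesting(data: bytes) -> bool:
    return data[-1] % 3 == 0  # stand-in for a "new coverage" signal

def worker(input_bytes: bytes, cycles: int, results: queue.Queue) -> None:
    """Fuzz one input for `cycles` rounds, streaming each interesting
    mutation back to the coordinator as soon as it is found, rather
    than batching everything until the input's budget is exhausted."""
    for i in range(cycles):
        mutated = mutate(input_bytes, i)
        if is_interesting(mutated):
            results.put(mutated)  # coordinator (or an idle worker) can requeue it now
    results.put(None)             # sentinel: this input's budget is spent

results: queue.Queue = queue.Queue()
t = threading.Thread(target=worker, args=(b"seed", 8, results))
t.start()
t.join()

streamed = []
while True:
    item = results.get()
    if item is None:
        break
    streamed.append(item)
```

Because each interesting mutation hits the queue immediately, the coordinator's memory high-water mark stays at roughly one result rather than a whole batch, and starved workers can pick up fresh inputs mid-cycle.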