A generator that constructs and outputs instances dynamically for a given instance size may not be optimized to run to completion in time if the given instance size is too large. In an iterated battle, it is assumed that this does not automatically disqualify the generator as it may well be randomized and produce output on the next higher increment.
I would like to keep giving the generator additional chances at higher instance sizes, but I would advocate cutting it off after a given number of repeated timeout failures.
Without this cutoff, battles currently run very slowly towards the iteration cap: the generator is granted the maximum allowed running time at each step, even when one can rather safely assume that no sane output will be produced for any subsequent instance size. My suggestion is to introduce a new configuration option that sets the number of tolerated consecutive generator failures before the solving team is awarded its current iteration cap value. It should, however, still remain possible to accept an arbitrary number of failures.
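To illustrate, the proposed behavior could look roughly like the sketch below. This is not the framework's actual API; the option name `max_generator_errors` and the linear size increment are illustrative assumptions (real iterated battles may step sizes differently), and `run_step` stands in for whatever executes one generator/solver round.

```python
def run_iterated_battle(run_step, iteration_cap, max_generator_errors=None):
    """Advance instance sizes until the cap is reached or the generator
    times out too many times in a row.

    run_step(size) -> True if the generator produced a valid instance in
    time at this size, False on a timeout/failure.  Returns the size
    credited to the solving team.
    """
    size = 1
    consecutive_failures = 0
    reached = 0
    while size <= iteration_cap:
        if run_step(size):
            consecutive_failures = 0  # any success resets the streak
            reached = size
        else:
            consecutive_failures += 1
            # None preserves the current behavior: tolerate an arbitrary
            # number of failures and keep incrementing toward the cap.
            if (max_generator_errors is not None
                    and consecutive_failures >= max_generator_errors):
                # Repeated generator timeouts: award the solving team
                # its current iteration cap value and stop early.
                return iteration_cap
        size += 1
    return reached
```

With the cutoff set, a generator that stalls at some size no longer burns the full per-step time budget for every remaining increment; with it unset (`None`), behavior is unchanged.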