One of the important techniques for testing determinism is to run two instances of the same simulation on different devices and continuously verify that they remain synchronized, even with network delay affecting the fiat events issued by each device. This requires a full TimeSteward implementation, plus some additional features we haven't settled on yet.
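The cross-device check could work roughly like this: each device periodically computes a checksum of its simulation state and exchanges the digest with its peer, and any mismatch means the instances have diverged. This is only a sketch under assumptions of mine, not TimeSteward's actual API; the state representation is illustrative, and `DefaultHasher::new()` is used because it has fixed keys and therefore produces the same digest on every machine.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Compute a digest of any hashable state. DefaultHasher::new() uses fixed
// keys (unlike HashMap's RandomState), so the result is stable across
// processes and devices.
fn state_checksum<S: Hash>(state: &S) -> u64 {
    let mut hasher = DefaultHasher::new();
    state.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // Hypothetical state: (time, value) pairs; stands in for real entity data.
    let device_a = vec![(0u64, 10i64), (5, -3)];
    let device_b = device_a.clone();
    // In sync: identical state implies identical checksum.
    assert_eq!(state_checksum(&device_a), state_checksum(&device_b));
    // A single divergent entry is detected by the digest comparison.
    let mut diverged = device_b.clone();
    diverged[1].1 = -4;
    assert_ne!(state_checksum(&device_a), state_checksum(&diverged));
    println!("checksums ok");
}
```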
simply_synchronized is now working and successfully caught me using HashSet inappropriately. There is much more work to be done in this area, including making the panic messages more detailed and understandable, but I consider this sufficient for MVP.
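For context on why HashSet is a determinism hazard: std's `HashSet` is seeded with random keys per process, so its iteration order can differ between two runs or two devices, while `BTreeSet` always iterates in sorted order. A small illustration (the entity ids are made up):

```rust
use std::collections::{BTreeSet, HashSet};

// Arbitrary, process-dependent order: two devices may disagree.
fn first_entity_hash(ids: &HashSet<u64>) -> Option<u64> {
    ids.iter().next().copied()
}

// Sorted order: always the smallest id, on every device.
fn first_entity_btree(ids: &BTreeSet<u64>) -> Option<u64> {
    ids.iter().next().copied()
}

fn main() {
    let ids = [42u64, 7, 19];
    let btree: BTreeSet<u64> = ids.iter().copied().collect();
    assert_eq!(first_entity_btree(&btree), Some(7));
    let hash: HashSet<u64> = ids.iter().copied().collect();
    // May be any of Some(42), Some(7), Some(19) depending on the random seed:
    let _ = first_entity_hash(&hash);
    println!("deterministic first id: {:?}", first_entity_btree(&btree));
}
```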
With the API changes, the old tests are no longer applicable, and many new tests need to be written.
My idea is to make an auditing TimeSteward – a separate TimeSteward implementor that performs costly operations to detect as many incorrect behaviors as possible. I'm thinking it would also be able to take a replay – a record of fiat events plus a record of the order in which all events were run – and rerun the simulation in that order, even if the original simulation was run by a real-time TimeSteward instead of the auditing one.
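A replay like the one described might look roughly like this: the recorded fiat events plus the event execution order, with an audit that re-executes and reports the first point of divergence. The names and types here are my own illustrative assumptions, not TimeSteward's actual API.

```rust
// Hypothetical replay record: fiat events as issued by callers, plus the
// ids of all events in the order they originally ran.
#[allow(dead_code)]
#[derive(Debug, Clone, PartialEq, Eq)]
struct Replay {
    fiat_events: Vec<(u64, String)>, // (time, payload)
    execution_order: Vec<u64>,       // event ids, in original run order
}

// Compare a re-run against the recorded order; report the index where they
// first differ, if any.
fn audit_order(recorded: &[u64], rerun: &[u64]) -> Result<(), usize> {
    for (i, (a, b)) in recorded.iter().zip(rerun.iter()).enumerate() {
        if a != b {
            return Err(i);
        }
    }
    if recorded.len() != rerun.len() {
        // One run had extra or missing events past the shared prefix.
        return Err(recorded.len().min(rerun.len()));
    }
    Ok(())
}

fn main() {
    let recorded = [3u64, 1, 4, 1, 5];
    assert_eq!(audit_order(&recorded, &[3, 1, 4, 1, 5]), Ok(()));
    assert_eq!(audit_order(&recorded, &[3, 1, 9, 1, 5]), Err(2));
    println!("audit checks pass");
}
```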
There are various ways that TimeSteward callers can misbehave, which we should find ways to audit for.