Run multiple pre-defined benchmarks #947
Conversation
Transactions Costs
Sizes and execution budgets for Hydra protocol transactions. Note that unlisted parameters are currently using …

- Script summary
- Cost of Init Transaction
- Cost of Commit Transaction (uses ada-only outputs for better comparability)
- Cost of CollectCom Transaction
- Cost of Close Transaction
- Cost of Contest Transaction
- Cost of Abort Transaction (some variation because of the random mixture of still-initial and already-committed outputs)
- Cost of FanOut Transaction (spends the head output and burns head tokens; uses an ada-only UTxO for better comparability)
And use those to generate the Summary
We only run one client per node anyway, so the cluster size is always exactly the number of client datasets.
... in the hope that it will allow the OS to clean up resources and prevent spurious errors across different scenarios.
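As a rough illustration of the last two notes, here is a minimal Haskell sketch of the per-scenario lifecycle; Dataset, Cluster, startCluster, and runScenario are hypothetical stand-ins, not the actual hydra-cluster API:

```haskell
import Control.Exception (bracket)

-- Hypothetical stand-ins for the benchmark types; the real hydra-cluster
-- definitions differ.
data ClientDataset = ClientDataset

newtype Dataset = Dataset {clientDatasets :: [ClientDataset]}

newtype Cluster = Cluster Int

startCluster :: Int -> IO Cluster
startCluster n = pure (Cluster n)

stopCluster :: Cluster -> IO ()
stopCluster _ = pure () -- would terminate the node processes here

runScenario :: Cluster -> Dataset -> IO ()
runScenario _ _ = pure () -- would submit the dataset's transactions here

-- One client per node, so the cluster size is exactly the number of client
-- datasets; each scenario gets its own cluster, started and stopped around
-- it, so the OS can reclaim resources between scenarios.
runAll :: [Dataset] -> IO ()
runAll = mapM_ $ \dataset ->
  bracket
    (startCluster (length (clientDatasets dataset)))
    stopCluster
    (`runScenario` dataset)
```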
Those transactions were put back into the submission, which, I suspect, caused the process to hang because the consuming side had already stopped by then.
Given the way we generate our datasets, we should never observe invalid txs, because (1) the dataset is a linear sequence of txs, each consuming the previous one's output, and (2) we always wait for a tx to be confirmed before submitting the next one. However, it seems that this can happen under some circumstances, so we would like to know when it does. Therefore, we fail the test with a specific message to give more context about the failure and help investigate when it happens.
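A minimal sketch of that submit-and-confirm loop and its loud failure; Tx, submit, and waitForConfirmation are hypothetical placeholders rather than the benchmark's real client calls:

```haskell
import Control.Monad (forM_)

-- Placeholder types and client calls; the real benchmark talks to the
-- hydra client API instead.
newtype Tx = Tx {txId :: String}

data TxResult = Confirmed | Invalid String

submit :: Tx -> IO ()
submit _ = pure ()

waitForConfirmation :: Tx -> IO TxResult
waitForConfirmation _ = pure Confirmed

-- The dataset is linear and every tx is confirmed before the next one is
-- submitted, so an invalid tx should be impossible; if one shows up anyway,
-- fail with a message that points at the offending tx.
runDataset :: [Tx] -> IO ()
runDataset txs = forM_ txs $ \tx -> do
  submit tx
  result <- waitForConfirmation tx
  case result of
    Confirmed -> pure ()
    Invalid reason ->
      error $
        "unexpected invalid tx " <> txId tx
          <> " despite linear dataset and per-tx confirmation: "
          <> reason
```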
BAM!
This PR introduces the ability for the ETE benchmarks to run multiple pre-defined datasets and group their results into a single output. The idea is that this will allow checking in well-known datasets, representative of behaviour we want to optimise for or track, and comparing the evolution of their performance over time.
The comparison part is still pending.
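For illustration, a minimal sketch of the run-and-group flow described above, assuming hypothetical Dataset and Summary types rather than the PR's actual definitions:

```haskell
import Data.List (intercalate)

-- Illustrative types only; the PR's actual Dataset and Summary differ.
newtype Dataset = Dataset {datasetName :: String}

data Summary = Summary {summaryOf :: String, txsPerSecond :: Double}

-- Run one ETE benchmark; here it just returns a dummy summary.
benchDataset :: Dataset -> IO Summary
benchDataset d = pure (Summary (datasetName d) 0)

-- Run every pre-defined dataset and group the per-dataset summaries into a
-- single report, so well-known workloads can be tracked over time.
runBenchmarks :: [Dataset] -> IO String
runBenchmarks datasets = do
  summaries <- mapM benchDataset datasets
  pure $
    intercalate
      "\n"
      [summaryOf s <> ": " <> show (txsPerSecond s) <> " tx/s" | s <- summaries]
```

Producing one grouped report per run is what would make checked-in datasets comparable across runs once the pending comparison part lands.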