Add graviton tests for OpUp #607
Conversation
Great addition! I put forward some optional suggestions, mostly around asserting more properties for increased test granularity.
```python
result = _dryrun(maximize_budget, sp, accounts)
assert result.passed()
```
Should some additional properties on `result` be asserted as well? E.g., consider `cost` and `budget_consumed`.
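A minimal sketch of what such assertions might look like, assuming the inspector exposes `cost()`, `budget_consumed()`, and `budget_added()` as methods alongside `passed()`; the invariant relating them mirrors algod's dry-run fields and is an assumption, not something stated in this PR:

```python
# Sketch only: assumes cost == budget_consumed - budget_added,
# per algod's dry-run response fields.
assert result.budget_consumed() > 0
assert result.cost() == result.budget_consumed() - result.budget_added()
```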
Another possible assertion, though probably outside the scope of this PR's goals: one could assert on the actual semantics of the program. In particular, in one case I observed the opcode global `OpcodeBudget` evaluating to 72,016, and one could assert this (if one wanted) via something like:
```python
# another import at the top:
from graviton.blackbox import DryRunEncoder
...
assert DryRunEncoder.hex(72_016) == result.last_log()
# or alternatively:
assert 72_016 == result.stack_top()
```
However, this particular field's value is an artifact of dry-run hijacking `OpcodeBudget` for its own purposes, and I'm not convinced it's worth asserting on.
hehe... I should have read your comment below before conducting my own search...
@tzaffi I ended up reverting the above assertion addition. There's some interaction between local (passes) vs CI (fails) execution that isn't adding up.
I'm annoyed that I don't understand the difference, but I'm out of time to investigate here. If we'd like to retain the additional assertion, I can document a story.
I'm investigating just a wee bit
```python
sp.flat_fee = True
sp.fee = algosdk.constants.min_txn_fee
result = _dryrun(maximize_budget, DryRunExecutor.SUGGESTED_PARAMS, [])
assert not result.passed()
```
Should more assertions be added in this code path? When I ran through a debugger I observed an overflowing value of `budget_consumed`, but it may still be worthwhile to make some kind of assertion on it, and on `result.budget_added() == 0` as well.
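For concreteness, a hedged sketch of the suggested failure-path assertions (method names assumed to match those used elsewhere in this thread):

```python
# Failure path: the dry run should not pass, and since the pooled fee
# was insufficient, no extra opcode budget should have been added.
assert not result.passed()
assert result.budget_added() == 0
```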
@tzaffi Let me know if you'd prefer adding an assertion after reading this note:
- From my perspective, it's sufficient as is. Given the semantics of Algorand's blockchain, I feel a failed transaction makes clear that no effects were realized.
- That said, I don't feel too strongly, and if you prefer, I'll add `assert result.budget_added() == 0`.
Co-authored-by: Zeph Grunschlag <tzaffi@users.noreply.github.com>
Co-authored-by: Zeph Grunschlag <tzaffi@users.noreply.github.com>
This reverts commit ffd9078.
Following up from our conversation, I timeboxed an investigation into why the tests fail in parallel. I employed the following strategies to try to break any contention over resources:
- Provide a very large funding amount to the app account, instead of the precisely measured amount currently being provided
- Use a random sender address instead of the currently used `ZERO_ADDRESS`
- Use a random application id instead of the hardcoded `DryRunExecutor.EXISTING_APP_CALL == 42`

None of these approaches impacted the results: the tests continued to pass when run serially, yet fail when run in parallel.
I suspect that the dry-run process in go-algorand is itself unsafe to run in parallel. I.e., when several dry-run requests are served at the same time, the simultaneous goroutines serving them could interfere with each other. However, I didn't take any investigative actions on this hunch beyond the sketch below.
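If one wanted to poke at this hunch, a minimal sketch (reusing the test module's `_dryrun` helper and `maximize_budget` program quoted above, both assumed to be in scope) could fire the same dry run concurrently and compare the outcomes against a serial baseline:

```python
from concurrent.futures import ThreadPoolExecutor

def probe_parallel_dryruns(sp, accounts, n=8):
    # If the dry-run handler races, some concurrent requests may fail or
    # report inconsistent budgets even though each serial run passes.
    with ThreadPoolExecutor(max_workers=n) as pool:
        futures = [
            pool.submit(_dryrun, maximize_budget, sp, accounts)
            for _ in range(n)
        ]
    return [f.result().passed() for f in futures]
```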
Also, oddly enough, I was not able to recreate the single-case failures that I had observed last week.
Approving, as the tests are a great improvement in confidence level even in their current state.
@tzaffi Thanks for the investigation, and for the flexibility to approve with a less-than-desirable test implementation.
Adds graviton-based tests for OpUp to defend against AVM execution regressions.
Prior to this PR, pyteal contained no tests proving that OpUp evaluation increases the opcode budget as designed. The PR leverages https://github.com/algorand/graviton/releases/tag/v0.5.0 to assert on `budget_added`.
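For readers unfamiliar with the flow, the added test is roughly shaped like the snippets quoted in the review above; the following sketch reassembles them and is not the verbatim test code (`_dryrun`, `maximize_budget`, `sp`, and `accounts` are the names used in this thread):

```python
def test_opup_increases_budget():
    # Dry-run the maximize_budget program and verify that OpUp pooled
    # additional opcode budget beyond the base 700 per app call.
    result = _dryrun(maximize_budget, sp, accounts)
    assert result.passed()
    assert result.budget_added() > 0
```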