BLS and testing #1074
Comments
I don't mind waiting a couple of weeks for state tests that don't require test-specific options to ignore a failed signature check. For signature-verification errors, we can use either a field (`type`?), the file name, or comments to signal what should be tested. A field is probably better, because you can do something like this:

```python
def my_test_case(test_cases):
    ...
    try:
        runStateTransition(test_cases['test_empty_block_transition'])
    except BLSVerificationError:
        if test_cases['test_empty_block_transition'].failure_type == "BLS_signature":
            pass
        else:
            raise  # re-raise if a BLS error is not expected
```
I agree with the third option.
What work is required here?
I'd bet on the 3rd option, though I liked the first one at the beginning, as it sounds like "we don't test this thing again and again"; but in fact real BLS verification exercises not only BLS itself but also the logic of the methods preparing its inputs, and it's not a big time overhead when running the whole suite.
A few doubts here:
Plan, after discussion with @djrtwo IRL:
What we get:
I agree with this; I'd like to see all tests with valid BLS sigs/keys. Clients can choose which tests run with "fake crypto". We're presently failing the SSZ tests because they're using fundamentally invalid signatures. Our alternative is to switch the tests to our fake crypto library, but then we're no longer testing production SSZ implementations. We had the same problem with the …
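As a rough illustration of the "fake crypto" toggle mentioned above — all names here (`bls_verify`, `FAKE_CRYPTO`, the stub behavior) are invented for the sketch, not taken from the spec repo — a test runner could swap the real pairing check for a cheap structural stub:

```python
# Hypothetical sketch of a "fake crypto" switch: swap the real pairing-based
# verification for a cheap stub, so the same tests run fast while still
# exercising the logic that prepares BLS inputs.

FAKE_CRYPTO = True  # flip to False to run real BLS verification


def real_bls_verify(pubkey: bytes, message: bytes, signature: bytes) -> bool:
    # Placeholder: a client would call into a real BLS12-381 library here.
    raise NotImplementedError("wire up a real BLS12-381 implementation")


def fake_bls_verify(pubkey: bytes, message: bytes, signature: bytes) -> bool:
    # Accept anything structurally plausible: 48-byte pubkey, 96-byte
    # signature. Catches input-wiring bugs without the pairing cost.
    return len(pubkey) == 48 and len(signature) == 96


def bls_verify(pubkey: bytes, message: bytes, signature: bytes) -> bool:
    verify = fake_bls_verify if FAKE_CRYPTO else real_bls_verify
    return verify(pubkey, message, signature)
```

The point of the stub checking lengths rather than returning a bare `True` is that tests still fail if the code under test passes malformed keys or signatures.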
Decided I wanted to get this out to explain the current state of testing, and collect feedback (implementers please comment) on what you need from testing, and your feelings about BLS usage in tests.
BLS and testing
The two pain-points to get a pretty (and large) set of test-vectors out for clients are:
And a side-issue, but easily resolved: efficient creation of a genesis state, which is slow when BLS functionality is implemented in the test-code (creation of signed deposits, and verification). The solution would be either to cache it, or to create it directly without going through the spec functions (the current temporary solution on the experiment branch).
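The "cache it" option could be as simple as memoizing the builder. A minimal sketch with invented names — `cached_genesis_state` and the placeholder deposit construction stand in for whatever the spec test code actually does:

```python
from functools import lru_cache

# Sketch of caching expensive genesis-state creation. Building and verifying
# signed deposits is the slow part; doing it once per validator-count and
# reusing the result makes repeated test setup cheap.


@lru_cache(maxsize=None)
def cached_genesis_state(num_validators: int) -> tuple:
    # Placeholder for the real deposit/genesis construction. Returned as a
    # tuple so it is immutable; callers must copy before mutating.
    deposits = tuple(("deposit", i) for i in range(num_validators))
    return ("genesis_state", deposits)


# First call pays the construction cost; later calls are free.
state_a = cached_genesis_state(8)
state_b = cached_genesis_state(8)
assert state_a is state_b  # same cached object
```

Returning an immutable value matters here: with `lru_cache`, every caller shares one object, so a test that mutated a cached mutable state would corrupt every later test.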
Status

Talking about the status of the spectest-deco PR 1052 here, based on the `v06x` branch where we are developing the 0.6 improvements (to be merged back into `dev` later).

The testing pipeline currently looks like: the pytests double as test-vector generators; passing `generator_mode=true` to each of them makes them output a test-vector.

Pytests status:
- `tests/` moved to `eth2spec/test`, i.e. part of the package
- pytest cases are decorated with `@spec_test` or similar (see PR 1052)
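For illustration, a test that doubles as a vector generator via a `generator_mode` flag might look roughly like this; the `vector_test` decorator and the pre/post dict shape are invented for the sketch, not the actual PR 1052 API:

```python
from functools import wraps

# Sketch of a dual-purpose spec test: run as a plain assertion under pytest,
# or, with generator_mode=True, collected into a serializable test-vector.
# Decorator name and vector shape are illustrative only.


def vector_test(fn):
    @wraps(fn)
    def wrapper(*args, generator_mode=False, **kwargs):
        pre, post = fn(*args, **kwargs)
        if generator_mode:
            # Generator run: emit a test-vector instead of asserting in-process.
            return {"pre": pre, "post": post}
        # Normal pytest run: assert the transition produced a post-state.
        assert post is not None
        return None
    return wrapper


@vector_test
def test_empty_block_transition():
    pre = {"slot": 0}
    post = {"slot": 1}  # stand-in for actually running the state transition
    return pre, post


test_empty_block_transition()  # plain test run: just asserts
vector = test_empty_block_transition(generator_mode=True)  # vector for output
```

The appeal of this shape is that one body of test logic produces both the in-process check and the exported test-vector, so the two can never drift apart.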
Test-generation status:

- (on the `v06x` branch) the `operations` test-gen uses the test-package's ability to output test-vectors for each test-case
- `sanity` tests are updated and can be cleanly used for test-generation, but they require more work to define the format of the test-vectors, as there is more variety.
- `epoch` processing tests are also updated and usable, though not as complete as block-processing; lower priority.

Possible ways forward:
- no `assert verify_...`, just `verify_...`, and make it raise a special `BLSVerificationError` (or something like that)

A work-in-progress introduction of actual full BLS usage in the pytests is started on the `tests-with-sigs` branch.

Suggestions welcome.
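As a sketch of the raise-instead-of-assert idea from the list above — `verify_block_signature`, `run_case`, and their signatures are invented for illustration:

```python
# Sketch: verify_... raises a dedicated exception instead of being wrapped in
# an assert, so a test harness can decide whether a signature failure was
# expected for a given case. All names here are illustrative.


class BLSVerificationError(Exception):
    """Raised when a BLS signature check fails."""


def verify_block_signature(signature_valid: bool) -> None:
    # Stand-in for a real verify_...: raise rather than `assert`.
    if not signature_valid:
        raise BLSVerificationError("block signature invalid")


def run_case(signature_valid: bool, expect_bls_failure: bool) -> str:
    try:
        verify_block_signature(signature_valid)
    except BLSVerificationError:
        if expect_bls_failure:
            return "expected-failure"
        raise  # re-raise: a BLS error was not expected here
    return "ok"


assert run_case(True, False) == "ok"
assert run_case(False, True) == "expected-failure"
```

A dedicated exception type is what makes the per-case `failure_type` pattern from the first comment workable: the harness can catch exactly the BLS failure and re-raise anything else.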