
BLS and testing #1074

Closed
protolambda opened this issue May 11, 2019 · 6 comments

@protolambda
Collaborator

Decided I wanted to get this out to explain the current state of testing, and to collect feedback (implementers, please comment) on what you need from testing and how you feel about BLS usage in tests.

BLS and testing

The two pain-points in getting a good (and large) set of test-vectors out to clients are:

  • BLS Signature creation
  • BLS Signature verification

And a side-issue, but easily resolved:
efficient creation of a genesis state, once BLS functionality is implemented in test code (creation of signed deposits, and verification).
The solution would be to either cache the state, or create it directly without going through the spec functions (the current temporary solution on the experiment branch).
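
A minimal sketch of the caching idea (create_genesis_state_with_deposits is a hypothetical helper standing in for the expensive spec-based setup, not an actual spec function):

import copy

_genesis_cache = {}

def get_cached_genesis_state(num_validators):
    # build the signed-deposit genesis state once per validator count...
    if num_validators not in _genesis_cache:
        _genesis_cache[num_validators] = create_genesis_state_with_deposits(num_validators)
    # ...and hand out a deep copy so individual tests cannot mutate the cached state
    return copy.deepcopy(_genesis_cache[num_validators])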

Status

This is the status of the spectest-deco PR 1052, based on the v06x branch where we are developing the 0.6 improvements (to be merged back into dev later).

The testing pipeline currently looks like:

  • py-spec, which calls a BLS stub
  • test-helpers, which do not create self-signed objects with valid signatures
  • py-test code, unified with test-vector-creation (see PR 1052)
  • py-test runner to run spec-tests, purely for assertions
  • test-generator running the spec-tests, passing generator_mode=true to each of them, making them output a test-vector.
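
As a rough illustration of the unified pytest/generator idea (the decorator and names are illustrative, not the actual PR 1052 API), a single test body can both run its assertions and, in generator mode, hand back the data that becomes a test-vector:

import functools

def spec_test(fn):
    @functools.wraps(fn)
    def entry(*args, generator_mode=False, **kwargs):
        vector_parts = fn(*args, **kwargs)   # the test body runs its own asserts
        if generator_mode:
            return vector_parts              # a test-generator collects this into a YAML test-vector
    return entry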

Pytests status:

  • move from tests/ to eth2spec/test, i.e. part of package
    • removed use of pytest
    • annotated with @spec_test or similar (see PR 1052)
  • as part of the test-generation effort (yay for shared effort):
  • stuck on BLS stub (no sig creation/verification)

Test-generation status:

  • BLS, SSZ-generic, SSZ-static, shuffling test generators still all in place and up to date (v06x branch)
  • operations test-gen uses the test-package's ability to output test-vectors for each test-case
    • but no valid signatures
    • there is no definition of how a test-consumer should handle this signature problem
      • there are no signature-related testcases
      • turning BLS off would effectively let you check conformance, but it's hacky, and not remotely a good practice to have even an option for...
    • it's approximately 140 MB worth (iirc) of YAML-encoded state-transitions, covering many edge-cases. Worth getting into the hands of implementers quickly.
  • sanity tests are updated and can be cleanly used for test-generation, but require more work to define the format of the test-vectors, as there is more variety.
  • epoch-processing tests are also updated and can be used, but are not as complete as block-processing; lower priority.

Possible ways forward:

  • Simple but hacky: "turn BLS off for testing"
  • No "BLS off", BLS ON on client side, but only partially on spec side. Rely on signature verification not being hit before anything else during testing
    • valid test cases generated with valid signatures
    • invalid test cases marked: does it error because of BLS? And runners should check the reason for aborting processing: if it doesn't match, the test should fail. Now these pytests don't need full BLS update work, and can be released somewhat quicker
  • "BLS on", more work (~1 week)
    • slower on test-generation, but we get the best kind of test-vectors: correct, BLS verification ON.
    • blocker: what if a test case fails because of a signature error (test setup not creating the sig correctly), instead of a real assertion case. Spec will look correct, passes tests, but things are not right. We need to mark Sig-verification errors distinctly, so we can catch these problems when we turn BLS on in the pyspec. How: instead of assert verify_..., just verify_..., and make it raise a special BLSVerificationError (or something like that)
    • We likely still want to mark tests as "signature related" or not, so implementers can catch it easily if their code is not aborting properly before signature verification, to assure invalid inputs are not costly.
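
A hedged sketch of the error-distinction idea from the third option (helper and exception names are illustrative; it assumes the spec's boolean bls_verify is in scope):

class BLSVerificationError(Exception):
    # dedicated error, so a failing signature is never confused with a failing assert
    pass

def check_signature(pubkey, message_hash, signature, domain):
    # instead of `assert bls_verify(...)`, raise the dedicated error,
    # so test-vectors and runners can record/check the failure reason
    if not bls_verify(pubkey, message_hash, signature, domain):
        raise BLSVerificationError("signature does not verify")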

A work-in-progress introduction of actual full BLS usage in the pytests is started here: tests-with-sigs branch

Suggestions welcome.

@mratsim
Contributor

mratsim commented May 11, 2019

I don't mind waiting a couple of weeks for state tests that don't require test-specific options to ignore failed signature checks.

For sig-verification errors, we can use a field (type?), the file name, or comments to signal what should be tested; a field is probably better because you can do something like this:

def my_test_case(test_cases):
    ...
    try:
        runStateTransition(test_cases['test_empty_block_transition'])
    except BLSVerificationError:
        if test_cases['test_empty_block_transition'].failure_type == "BLS_signature":
            pass
        else:
            raise  # re-raise if a BLS error is not expected

@djrtwo
Contributor

djrtwo commented May 12, 2019

I agree with the third option.

  • @mratsim, the type is certainly more powerful. Seems like a good idea.
  • on the tests-with-sigs branch, I'm inclined to have a stub for writing and verifying signatures still present in most cases when generator_mode=False, to keep CI moving along.
  • assert verify_..., just verify_..., and make it raise a special BLSVerificationError (or something like that)

    • change to validate_… if the function is going to throw the error itself.
  • to be clear, the two things that need to happen are the following. am I missing anything?
    1. modify test case format to be able to express that it is a BLS error
    2. modify test creation to ensure usage of proper signing

sanity tests are updated and can be cleanly used for test-generation, but require more work to define the format of the test-vectors, as there is more variety.

What work is required here?

@zilm13
Contributor

zilm13 commented May 13, 2019

I'd vote for the 3rd option, though I liked the first one at the beginning, as it sounds like "we don't test this thing again and again". But in fact real BLS verifies not only BLS itself but also the logic of the methods preparing its inputs, and it's not a big overhead in the total test running time.

@protolambda
Collaborator Author

@djrtwo

A few doubts here:

  • don't want generator_mode to do more than it says, let's keep BLS on/off separate.

  • don't like renaming/removing verify. It's core to the BLS spec, and I want it to show in the spec. Maybe we can convert the returned bool to an error with some helper? E.g. enforce(verify_sig(...)); a sketch of this follows at the end of this comment.

  • What needs changing:
    i. yes, but maybe more general. Like error_type, and then assert, bls, and possibly more in the future (or more precise assertion reasons). Clients can then map their own errors/reporting to these error-types, to verify that it's exactly correct, and not just a happy coincidence.
    ii. Modify test creation for proper signing: yes. I'm hesitant to implement a "BLS off" thing for multiple reasons:

    • if it's bugged, tests can be wrong
    • it opens poor design practice (BLS should always be on in production, code to turn it off should not be accessible in any way in a client, security first)
    • it's very hard to implement on/off cleanly. Configuration is hacky enough, but that doesn't affect anything per-test-case. And I do want invalid-sig tests, and a few tests where signatures are used for side-effects.

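
A possible shape of the enforce(...) helper mentioned above (purely illustrative; it keeps the spec's boolean verify_* functions untouched and only converts a False result into a distinct error):

class BLSVerificationError(Exception):
    pass

def enforce(ok: bool, error=BLSVerificationError):
    # turn the boolean result of an unchanged verify_* call into a raised error
    if not ok:
        raise error("verification failed")

# hypothetical usage, keeping bls_verify as-is:
# enforce(bls_verify(pubkey, message_hash, signature, domain))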

@protolambda
Collaborator Author

Plan, after discussion with @djrtwo IRL:

  • provide test vectors with BLS ON during generation, i.e. proper signing
  • mark tests that are dependent on BLS being ON.
    • clients can decide to disable BLS for tests that don't depend on it, for speed. But they don't have to. (or can support both)
  • mark the error type in tests, for more assurance that the code is doing things right, i.e. not the wrong error causing an "abort processing on this input" test to pass.
  • decorate pytests with a BLS-switch (implemented here just now: bls-test-deco branch); a sketch of such a decorator follows below this list
    • this lets you pass bls_active=False/True to the test, similar to generator_mode
    • generation of tests will pass bls_active=True to all tests.
    • pytests will run by default with bls_active=False
    • some pytests are decorated with @always_bls, to force BLS to be active regardless of what is passed as argument. For the BLS-dependent tests.
    • we may run some pytests with bls_active=True on some occasions/triggers (TBD), just to check that signing works as intended.
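
An illustrative sketch of what such BLS-switch decorators could look like (bls_switch, set_bls_active and the wiring are assumptions, not the actual bls-test-deco implementation):

def bls_switch(fn):
    # pass bls_active through to a (hypothetical) toggle before running the test
    def entry(*args, bls_active=False, **kwargs):
        set_bls_active(bls_active)   # hypothetical helper flipping between real and stubbed BLS
        return fn(*args, **kwargs)
    return entry

def always_bls(fn):
    # force BLS on for signature-dependent tests, regardless of what the runner passed
    def entry(*args, **kwargs):
        kwargs['bls_active'] = True
        return fn(*args, **kwargs)
    return entry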

What we get:

  • clients get the flexibility to choose fast CI runs (if they are limited by BLS performance), yet also have proper BLS vectors to test with
  • CI in specs repo is fast, as it stubs BLS where possible
  • We can have non-stub BLS pytests
  • We improve the test format with error-types. (again, optional for clients, but helps ensure their logic is correct, i.e. less blackbox)

@paulhauner
Contributor

paulhauner commented May 14, 2019

provide test vectors with BLS ON during generation, i.e. proper signing; mark tests that are dependent on BLS being ON.

I agree with this, I'd like to see all tests with valid BLS sigs/keys. Clients can choose which tests run with "fake crypto".

We're presently failing the SSZ tests because they're using fundamentally invalid signatures. Our alternative is to switch the tests to our fake crypto library, but then we're no longer testing production SSZ implementations.

We had the same problem with the v0.5.1 state tests, and it forced us to run those tests with fake crypto. Our experience was that this either made our build process more complex/error-prone or weakened our production code.
