
Feat/testing language #19

Merged · 12 commits · Oct 11, 2017

Conversation

@naterush (Collaborator) commented Oct 10, 2017

  • Add a basic testing language that supports control of block generation, message sending, safety checks, forkchoice checks, etc.
  • Temporarily add an update function to settings (until Create ValidatorSet class #15) so we can change settings more easily while running tests.
  • Add an explanation of testing to the README.

@naterush (Collaborator, Author)

Would like to change reporting (displaying the view graphs) to be off by default, with some command-line flag to turn it on.

Not sure how to handle this when running unittest discover, though (but I haven't tried anything yet either :) ). Let me know if you have any thoughts; otherwise, I'm going to mess around a bit when I get the chance.

@naterush naterush requested a review from djrtwo October 10, 2017 00:21
@naterush (Collaborator, Author)

Also, worth noting that the tests are wildly incomplete; more tests should be added ASAP so we can better protect ourselves from any big changes (e.g. ValidatorSet).



```
def update(val_weights):
    global NUM_VALIDATORS
```
Collaborator:

shouldn't have to redeclare these global vars

Collaborator (Author):

Think we need to do this to be able to modify existing global variables. See here: https://stackoverflow.com/questions/423379/using-global-variables-in-a-function-other-than-the-one-that-created-them

@djrtwo (Collaborator) commented Oct 10, 2017

Can use https://stackoverflow.com/questions/11380413/python-unittest-passing-arguments/20702984#20702984 for passing in args.

Could use this and configure TestLangCBC to report/not-report depending on something being passed in or not.

@naterush (Collaborator, Author) commented Oct 10, 2017

The issue with this is that the code under if __name__ == "__main__": only runs when the test is invoked directly with python -m test.test_SOMETHING.

When it's run from the root directory (which I think we need so the tests can find their imports) with python -m unittest test.test_SOMETHING, that block doesn't run, so we don't get command-line args (at least not easily).
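One workaround that sidesteps __main__ entirely is to read an environment variable at import time; it behaves the same under python -m unittest discover. The flag name TEST_LANG_REPORT here is hypothetical, just to sketch the idea:

```python
import os
import unittest

# Hypothetical flag name; evaluated at import time, so it works even when
# the __main__ block never runs (as under `python -m unittest discover`).
REPORT = os.environ.get("TEST_LANG_REPORT", "0") == "1"

class TestWithOptionalReporting(unittest.TestCase):
    def test_something(self):
        if REPORT:
            pass  # e.g. display the view graphs here
        self.assertTrue(True)
```

Invocation would then look like TEST_LANG_REPORT=1 python -m unittest discover, with reporting off by default.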

@djrtwo (Collaborator) commented Oct 10, 2017

I see. I was a bit skeptical...


```
    self.assertNotEqual(block_0, block_1)

def test_non_equality_of_copies_of_non_genesis(self):
```
Collaborator:

@naterush This function name is confusing. Maybe test_non_equality_of_test_lang_blocks.

The point of this function is to make sure that separate blocks created by the test_lang are in fact different blocks, right?

@djrtwo (Collaborator) Oct 10, 2017:

maybe, test_unique_block_creation_in_test_lang


```
    self.assertFalse(block_0.is_in_blockchain(block_1))

def test_in_blockchain(self):
```
Collaborator:

test_is_in_blockchain__test_lang

```
block_0 = Block(None, Justification(), 0)
block_1 = Block(None, Justification(), 1)

self.assertFalse(block_0.is_in_blockchain(block_1))
```
Collaborator:

Might call this test_is_in_blockchain__separate_genesis

```
block_0 = Block(None, Justification(), 0)
block_1 = Block(None, Justification(), 1)

self.assertFalse(block_0.is_in_blockchain(block_1))
```
Collaborator:

Run the assertion the other way too: block_1.is_in_blockchain(block_0).
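The both-directions check can be illustrated with a minimal stand-in for Block; the parent-walk semantics of is_in_blockchain are assumed here, and the real class also takes a Justification and sequence number:

```python
class Block:
    """Minimal stand-in: each block knows only its parent (None for genesis)."""
    def __init__(self, parent=None):
        self.parent = parent

    def is_in_blockchain(self, block):
        # True if self appears on the chain ending at `block`
        # (assumed semantics for this sketch).
        while block is not None:
            if block is self:
                return True
            block = block.parent
        return False

genesis_a = Block()
genesis_b = Block()
child_a = Block(parent=genesis_a)

# Two separate geneses are on neither chain, checked in both directions:
assert not genesis_a.is_in_blockchain(genesis_b)
assert not genesis_b.is_in_blockchain(genesis_a)
assert genesis_a.is_in_blockchain(child_a)
```

Checking only one direction would pass even if containment were accidentally symmetric or inverted, which is why the reviewer asks for both.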

"""
test_string = "B1-A S0-A B0-B S1-B S2-A B2-C S1-C H1-C"
testLang = TestLangCBC(test_string, [5, 6, 5])
with self.assertRaises(AssertionError):
Collaborator:

This construct hides which assertion actually fails in the testLang. When I read this test, I'm not actually sure which component of the test_string is supposed to fail, and you as the tester can't be sure where it fails either.

Might be worth having particular Error types that correspond to failing assertions in testLang. For example, if check_head_equals_block fails, you can throw a ForkChoiceError. Then here you can check that a ForkChoiceError was raised, rather than say a SafetyError.

Open to other ways of solving this. Let me know your thoughts. Goal would be to have more transparency as to what exactly you are testing.
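The error-type idea sketched above might look like the following; the class names and the check_head_equals_block helper are illustrative, not the project's actual API:

```python
import unittest

# Hypothetical error hierarchy for the testing language (names illustrative):
class TestLangError(Exception):
    """Base class for errors raised inside the testing language."""

class ForkChoiceError(TestLangError):
    """Raised when a head check does not match the forkchoice output."""

class SafetyError(TestLangError):
    """Raised when a safety check fails."""

def check_head_equals_block(head, block):
    # Stand-in for the test lang's head check.
    if head != block:
        raise ForkChoiceError("head %r != expected block %r" % (head, block))

class TestErrorTypes(unittest.TestCase):
    def test_forkchoice_failure_is_distinguishable(self):
        # The test can now assert on the *specific* failure,
        # rather than a bare AssertionError of unknown origin.
        with self.assertRaises(ForkChoiceError):
            check_head_equals_block("block A", "block C")
```

With distinct types, assertRaises(ForkChoiceError) can no longer be satisfied by an unrelated safety failure, which is exactly the transparency being asked for.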

@naterush (Collaborator, Author) Oct 10, 2017:

Yeah, this is a good point; some reporting would definitely be good here, as we don't even know what is causing this test to fail.

Once we move to pytest, we could also likely use the assert statements and 'sys' library to parse the error message that comes with it (see here) - but this is pretty messy. Adding user-defined errors seems like the best way of accomplishing this (and they would only need to exist in the testing world).

Collaborator (Author):

@djrtwo Ah, so I misinterpreted a bit what errors we would need here. In this case, this test throws an AssertionError because of a safety check in the forkchoice itself.

I was thinking that we would add errors to the testing language (which I think is a great idea and what you were suggesting), but do you think we should also add these to the forkchoice as well? Not sure what the best practice is here :~)

Collaborator:

Throwing particular errors (ones native to python or our own) allows for us or any developer using this stuff to more easily handle the errors as they please. Assertions should be used for sanity checks and debugging.

This particular example lies somewhere in between. Because we are testing for this and because there are a ton of other assertions currently throughout the codebase, I think we should raise a more specific error.
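A sketch of what that swap might look like inside the forkchoice; the function name and the mapping argument are assumptions for illustration, not the project's actual code:

```python
class ForkChoiceError(Exception):
    """Raised when the fork choice is invoked in an invalid state."""

def best_child(children_with_scores):
    """Pick the highest-scored child block.

    children_with_scores: hypothetical mapping of child block -> score.
    """
    if not children_with_scores:
        # was: assert children_with_scores
        # A specific error lets callers (and tests) handle this case directly.
        raise ForkChoiceError("fork choice called with no scored children")
    return max(children_with_scores, key=children_with_scores.get)
```

Callers that can recover catch ForkChoiceError explicitly, while genuine invariants elsewhere can stay as assertions for debugging.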

```
@@ -0,0 +1,58 @@
from testing_language import TestLangCBC
```
Collaborator:

Most of this suite of tests will be one function in py.test with the use of parametrize. woooo
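A rough sketch of the parametrize collapse; the test strings and the placeholder body are illustrative (the real version would construct TestLangCBC and run it):

```python
import pytest

# Hypothetical cases; each tuple would drive one TestLangCBC run.
CASES = [
    ("B0-A S1-A H1-A", [5, 6]),
    ("B1-A S0-A H0-A", [5, 6]),
    ("B1-A S0-A B0-B S1-B H1-B", [5, 6]),
]

@pytest.mark.parametrize("test_string,weights", CASES)
def test_test_lang(test_string, weights):
    # Placeholder body; really: TestLangCBC(test_string, weights) + run it.
    assert len(test_string.split()) >= 3
    assert all(w > 0 for w in weights)
```

pytest then reports each tuple as its own test case, so one function replaces the whole family of near-identical methods.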

@djrtwo (Collaborator) left a comment:

  • I went through and modded things to generally conform to PEP 8.
  • I also changed multi-line test_string declarations to use parentheses. I think this is nice and readable.
  • A couple of naming questions.
  • A longer comment about errors raised inside of testLang and how best to distinguish between them.

Excellent work. This is a fantastic addition and will make dev way better.

@djrtwo djrtwo mentioned this pull request Oct 10, 2017
## Run Tests
To run all unit tests:
```
python -m unittest discover
```
Collaborator:

this hangs indefinitely w/ no output, and makes a graph with no values.

@djrtwo (Collaborator) Oct 10, 2017:

If you repeatedly kill the graphs as they come up, it moves through the tests. Going to disable plotting for testing by default before merge. Will add an option to enable plotting as a param when we migrate to py.test.

@naterush Disable plotting before merge
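A minimal sketch of the default-off display flag; this stub only shows the flag's shape, not the real class, whose constructor also parses the test string:

```python
class TestLangCBC:
    """Sketch only: shows a `display` flag that defaults to off,
    so running the suite never blocks on plot windows."""
    def __init__(self, test_string, weights, display=False):
        self.test_string = test_string
        self.weights = weights
        self.display = display

    def report(self):
        # Only show view graphs when explicitly requested.
        if not self.display:
            return "reporting skipped"
        return "view graph displayed"
```

With this default, python -m unittest discover runs headlessly, and plotting becomes an explicit opt-in.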

@djrtwo djrtwo merged commit 6c38fef into blockchain Oct 11, 2017
@djrtwo djrtwo deleted the feat/testing_language branch October 11, 2017 11:49