Feat/testing language #19
Conversation
Would like to change reporting (displaying the view graphs) to off by default, but should be able to turn it on from some command line flag. Not sure how to handle this w/ running the tests.
Also, worth noting that the tests are wildly incomplete; more tests should be added ASAP so we can better protect ourselves from any big changes (e.g. ValidatorSet).
```
def update(val_weights):
    global NUM_VALIDATORS
```
shouldn't have to redeclare these global vars
Think we need to do this to be able to modify existing global variables. See here: https://stackoverflow.com/questions/423379/using-global-variables-in-a-function-other-than-the-one-that-created-them
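For reference, a minimal standalone sketch of the behavior that answer describes (names here are illustrative, not the project's actual code): assigning to a module-level name inside a function creates a new local unless the function declares it `global`.

```python
NUM_VALIDATORS = 5  # module-level "global"

def update_without_global():
    # this binds a new local NUM_VALIDATORS; the module-level one is untouched
    NUM_VALIDATORS = 10

def update_with_global():
    # `global` lets the function rebind the module-level name
    global NUM_VALIDATORS
    NUM_VALIDATORS = 10

update_without_global()
assert NUM_VALIDATORS == 5   # unchanged

update_with_global()
assert NUM_VALIDATORS == 10  # rebound
```

Note that merely *reading* a global inside a function needs no declaration; `global` is only required to rebind the name.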
Can use https://stackoverflow.com/questions/11380413/python-unittest-passing-arguments/20702984#20702984 for passing in args. Could use this and configure TestLangCBC to report/not-report depending on something being passed in or not.
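Roughly, the pattern from that link: strip the custom option out of sys.argv before unittest parses the remaining arguments. The --report flag name below is just an illustration, not an existing option.

```python
import sys
import unittest

# Hypothetical flag: pull --report out of sys.argv before unittest sees it,
# so it doesn't confuse unittest's own argument parsing.
REPORT = "--report" in sys.argv
if REPORT:
    sys.argv.remove("--report")

class TestWithOptionalReporting(unittest.TestCase):
    def test_runs_either_way(self):
        # plotting / view-graph reporting would be gated on REPORT here
        self.assertIn(REPORT, (True, False))

if __name__ == "__main__":
    unittest.main(exit=False)
```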
The issue w/ this is that when it's run from the root directory (which I think we have to do for the tests to find their imports) ...
I see. I was a bit skeptical...
test/test_block.py
Outdated
```
self.assertNotEqual(block_0, block_1)

def test_non_equality_of_copies_of_non_genesis(self):
```
@naterush This function name is confusing. Maybe test_non_equality_of_test_lang_blocks.
The point of this function is to make sure that separate blocks created by the test_lang are in fact different blocks, right?
maybe test_unique_block_creation_in_test_lang
test/test_block.py
Outdated
```
self.assertFalse(block_0.is_in_blockchain(block_1))

def test_in_blockchain(self):
```
test_is_in_blockchain__test_lang
```
block_0 = Block(None, Justification(), 0)
block_1 = Block(None, Justification(), 1)

self.assertFalse(block_0.is_in_blockchain(block_1))
```
Might call this test_is_in_blockchain__separate_genesis
```
block_0 = Block(None, Justification(), 0)
block_1 = Block(None, Justification(), 1)

self.assertFalse(block_0.is_in_blockchain(block_1))
```
Run the assertion the other way too: block_1.is_in_blockchain(block_0)
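Sketched with a throwaway stand-in for Block (not the project's real class), and assuming is_in_blockchain walks the ancestor chain, the two-way check would look like:

```python
class Block:
    """Toy stand-in for the project's Block; only enough to show the check."""
    def __init__(self, estimate, justification, sender):
        self.estimate = estimate  # parent block; None for a genesis block
        self.justification = justification
        self.sender = sender

    def is_in_blockchain(self, block):
        # assumed semantics: walk this block's ancestor chain looking for `block`
        current = self
        while current is not None:
            if current is block:
                return True
            current = current.estimate
        return False

block_0 = Block(None, None, 0)
block_1 = Block(None, None, 1)

# two separate genesis blocks: the check should fail in both directions
assert not block_0.is_in_blockchain(block_1)
assert not block_1.is_in_blockchain(block_0)
```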
""" | ||
test_string = "B1-A S0-A B0-B S1-B S2-A B2-C S1-C H1-C" | ||
testLang = TestLangCBC(test_string, [5, 6, 5]) | ||
with self.assertRaises(AssertionError): |
This construct hides which assertion actually fails in the testLang. When I read this test, I'm not actually sure which component of the test_string is supposed to fail, and you as the tester can't be sure where it fails either.
Might be worth having particular Error types that correspond to failing assertions in testLang. For example, if check_head_equals_block fails, you can throw a ForkChoiceError. Then here you can check that a ForkChoiceError was raised, rather than, say, a SafetyError.
Open to other ways of solving this. Let me know your thoughts. Goal would be to have more transparency as to what exactly you are testing.
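A rough sketch of what that could look like (the class hierarchy and the check_head_equals_block helper below are suggestions, not existing code):

```python
import unittest

class TestLangError(Exception):
    """Base class for errors raised by the testing language (hypothetical)."""

class ForkChoiceError(TestLangError):
    """The fork choice produced a head other than the expected block."""

class SafetyError(TestLangError):
    """A safety check failed."""

def check_head_equals_block(head, block):
    # hypothetical testLang check that raises a specific error type
    # instead of a bare AssertionError
    if head != block:
        raise ForkChoiceError("head %r != expected block %r" % (head, block))

# a test can then assert on the precise failure mode:
class TestForkChoice(unittest.TestCase):
    def test_wrong_head(self):
        with self.assertRaises(ForkChoiceError):
            check_head_equals_block("block-A", "block-B")
```

With distinct error types, assertRaises pins down exactly which check failed instead of matching any AssertionError anywhere in the run.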
Yeah, this is a good point - some reporting would def be good here, as we don't even know what is causing this test to fail.
Once we move to pytest, we could also likely use the assert statements and 'sys' library to parse the error message that comes with it (see here) - but this is pretty messy. Adding user-defined errors seems like the best way of accomplishing this (and they would only need to exist in the testing world).
@djrtwo Ah, so I misinterpreted a bit what errors we would need here. In this case, this test throws an assertion error b/c of some safety check in the forkchoice itself.
I was thinking that we would add errors to the testing language (which I think is a great idea and what you were suggesting), but do you think we should also add these to the forkchoice as well? Not sure what the best practice is here :~)
Throwing particular errors (ones native to Python or our own) allows us or any developer using this stuff to more easily handle the errors as they please. Assertions should be used for sanity checks and debugging.
This particular example lies somewhere in between. Because we are testing for this and because there are a ton of other assertions currently throughout the codebase, I think we should raise a more specific error.
test/test_utils.py
Outdated
```
@@ -0,0 +1,58 @@
from testing_language import TestLangCBC
```
Most of this suite of tests will be one function in py.test with the use of parametrize. woooo
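Something like the following (the test strings and weights are made up, and TestLangCBC is stubbed out; this only shows the parametrize pattern):

```python
import pytest

@pytest.mark.parametrize("test_string, weights", [
    ("B0-A S1-A H1-A", [10, 9]),
    ("B1-A S0-A H0-A", [9, 10]),
    ("B0-A S1-A B1-B S0-B H0-B", [10, 9]),
])
def test_test_lang_strings(test_string, weights):
    # TestLangCBC(test_string, weights) would be driven here; this sketch
    # only checks that the parametrized inputs are well-formed
    assert all(token.count("-") == 1 for token in test_string.split())
    assert all(w > 0 for w in weights)
```

Each tuple in the list becomes its own test case in the report, so a failure points at the exact input rather than at one monolithic test function.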
- I went through and modded things to generally conform to pep8.
- I also changed multi-line test_string declarations to use parentheses. I think this is nice and readable.
- a couple of naming questions
- longer comment about errors raised inside of testLang and how best to distinguish between them
Excellent work. This is a fantastic addition and will make dev way better.
## Run Tests
To run all unit tests:
```
python -m unittest discover
```
this hangs indefinitely w/ no output, and makes a graph with no values.
If you repeatedly kill the graphs as they come up, it moves through the tests. Going to disable plotting for testing by default before merge. Will add an option to run plotting as a param when we migrate to py.test.
@naterush Disable plotting before merge
Add the update function to settings (until Create ValidatorSet class #15), so we can change things more easily while running tests.