Refactor tests #400
Conversation
Force-push to re-add some erroneously-dropped comments.
pinheadmz left a comment
Fuck yeah, bravo, many thanks, this is great.
Some suggestions below.
test/scenarios_test.py
Outdated
def run_and_check_scenario_from_file(self, scenario_file):
    self.log.info(f"Running scenario from file: {scenario_file}")
    self.warcli(f"scenarios run-file {scenario_file} --allnodes --interval=1")
    self.wait_for_predicate(lambda: self.check_blocks(32))
This isn't great because there might still be a race from the last test not stopping in time. Maybe you can start the updated tx_scenario here instead, and wait_for_predicate can check for getchaintxstats["tx_count"] > getblockcount (or ensure there are 50 more txs than blocks, i.e. coinbase txs). Although I guess if you do that you'll need to keep miner_std running as well so the txs confirm and get counted.
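A rough sketch of that predicate, assuming warcli returns RPC results as JSON-parsable strings (the helper name is illustrative; note Bitcoin Core reports the field as "txcount"):

import json

def check_txs_outpace_blocks(self):
    chain_stats = json.loads(self.warcli("rpc 0 getchaintxstats"))
    block_count = int(self.warcli("rpc 0 getblockcount"))
    # more txs than blocks implies non-coinbase txs are confirming
    return chain_stats["txcount"] > block_count

self.wait_for_predicate(lambda: self.check_txs_outpace_blocks())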
Nice catch, tried to fixup with 61e3331#diff-3c6951d855400160cce6184d9af9c0e5a663af6ae1f55a232fd53c53d6971188R51
mmmm ok, I'd still prefer loading a different scenario file entirely, maybe even a fragment we can keep in test/data or test/framework_tests
def test_scenarios(self):
    self.check_available_scenarios()
    self.run_and_check_scenario("miner_std")
    self.run_and_check_scenario_from_file("src/scenarios/miner_std.py")
heh, okay. OR you could just put these lines in run_test?
Might add more files in the future though? (probably not, lol!)
Gonna leave it though, I think.
test/test_base.py
Outdated
fh.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
Something is a bit funky here: I'm seeing double timestamps in some but not all log messages:
2024-07-08 15:18:43,320 - TestFramework - DEBUG - Node 2 has 1 settled invoices
2024-07-08 15:18:48,326 - TestFramework - DEBUG - Executing warcli command: lncli 2 listinvoices
2024-07-08 15:18:48,711 - TestFramework - DEBUG - 2024-07-08T19:18:48+0000 - DEBUG - {'jsonrpc': '2.0', 'method': 'tank_lncli', 'params': {'network': 'warnet-test-wspv4wt2', 'node': 2, 'command': ['listinvoices']}, 'id': 1}
2024-07-08 15:18:48,714 - TestFramework - DEBUG - 2024-07-08T19:18:48+0000 - DEBUG - Running lncli cmd=lncli --network=regtest listinvoices on tank.index=2
2024-07-08 15:18:48,714 - TestFramework - DEBUG - 2024-07-08T19:18:48+0000 - DEBUG - Running exec_cmd=['/bin/sh', '-c', 'lncli --network=regtest listinvoices'] on tank_index=2
Yeah, I need to update this to use our logging.config too: #384 (comment)
for service in ["bitcoin", "lightning", "circuitbreaker"]:
    status = tank.get(f"{service}_status")
    if status:
        stats[status] = stats.get(status, 0) + 1
self.log.info(f"Waiting for all tanks to reach '{target}': {stats}")
Something else funky here: total shouldn't be the smallest number!
Examples from local tests:

Waiting for all tanks to reach 'stopped': {'total': 4, 'stopped': 3, 'pending': 4, 'running': 2}
Waiting for all tanks to reach 'stopped': {'total': 4, 'stopped': 9}

This test still stopped eventually because of

ERROR - Error bringing network down: Timed out waiting for predicate Truth
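One plausible explanation (a guess; the actual fix landed in e1373a9, not shown here): each tank contributes up to three service statuses to the loop above, so summed statuses can exceed the tank total. A sketch that counts each tank once instead, with tanks and target as in the surrounding code:

stats = {"total": len(tanks)}
for tank in tanks:
    statuses = [
        tank.get(f"{service}_status")
        for service in ["bitcoin", "lightning", "circuitbreaker"]
    ]
    statuses = [s for s in statuses if s]  # drop services this tank doesn't run
    # count the tank only when every service it runs has reached the target
    if statuses and all(s == target for s in statuses):
        stats[target] = stats.get(target, 0) + 1
self.log.info(f"Waiting for all tanks to reach '{target}': {stats}")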
Try the ln_test because it has 4 tanks but a bunch of other pods for ln and cb.
Fixed in e1373a9
willcl-ark left a comment
Thanks @pinheadmz
Taken almost everything in 5e361f9f60580c9e12046670e9843ed44d556ce2
  self.server_thread = threading.Thread(
-     target=self.output_reader, args=(self.server.stdout, self.log.debug)
+     target=self.output_reader, args=(self.server.stdout, print)
Why bring print back from earlier in the PR? ca382c7#diff-1a293b58d544ac5f9d384afd0652dc7e9e17ae7564df252dd5a8fc39543ee0e6L99
In #384, the route I took was to create an instance method that takes one argument and calls the logger: https://github.com/mplsgrant/warnet/blob/3e2d110d3bfbf73518a0af492832c57ae04372bd/test/test_base.py#L48

def passthrough_logger(self, message):
    self.logger.info(f"{self.pod_name} - {message}")

But maybe we can just leave this as is and change it in #384.
> Why bring print back from earlier in the PR? ca382c7#diff-1a293b58d544ac5f9d384afd0652dc7e9e17ae7564df252dd5a8fc39543ee0e6L99

The reason is that this is already being logged from outside (well, here in test_base), and we are using output_reader to capture all output from stdout and log it. So if we log, it prepends all the log info (time data etc.). We then capture this entire line via stdout and prepend a second layer of log info in output_reader, so we end up with double log details.

> But maybe we can just leave this as is and change it in #384.

Hopefully this :P
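To illustrate the doubling, here is a hypothetical shape of output_reader (inferred from how it's passed to threading.Thread above, not the actual implementation):

def output_reader(pipe, log_fn):
    for line in iter(pipe.readline, b""):
        # each line already carries the server's own timestamp/level prefix;
        # routing it through self.log.debug prepends a second prefix,
        # while plain print() emits the line untouched
        log_fn(line.decode().rstrip())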
The passthrough logger does feel quite nice though... it would probably be nicer to get an actual (sub-)logger sharing the same logfile/stdout stream, though.
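A minimal sketch of that sub-logger idea (names illustrative): child loggers propagate records to their parent's handlers by default, so a "TestFramework.server" logger shares the same logfile/stdout stream and each line is formatted only once.

import logging

parent = logging.getLogger("TestFramework")            # owns the file/stream handlers
server_log = logging.getLogger("TestFramework.server") # child with no handlers of its own
server_log.debug("raw server output line")             # emitted once, via parent's handlers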
Force-pushed from a1ad3ea to 5700227
self.warcli(f"scenarios run-file {scenario_file} --allnodes --interval=1")
start = int(self.warcli("rpc 0 getblockcount"))
self.wait_for_predicate(lambda: self.check_blocks(2, start=start))
Should there also be a stop_scenario() here?
Yes, thanks. I think it does get cleaned up ok, but better to have it in :)
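A sketch of the suggested change, assuming the framework exposes a stop_scenario() helper (as the question implies):

self.warcli(f"scenarios run-file {scenario_file} --allnodes --interval=1")
start = int(self.warcli("rpc 0 getblockcount"))
self.wait_for_predicate(lambda: self.check_blocks(2, start=start))
self.stop_scenario()  # explicit teardown rather than relying on implicit cleanup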
A problem I had with the …
They are marked as …

Not entirely sure why this changed, but some changes in a previous commit caused the (correct) issue that these services were unavailable to surface as a proper error, causing the graph test to fail.
This is currently a little confusing in the code, in that it's a global enum, and it has caused me some mild debugging issues.

Oh, maybe I didn't understand what "not active" meant.

LGTM. Ship it!
def setup(self):
    signal.signal(signal.SIGTERM, self.handle_sigterm)
    # Must setuup warnet first to avoid double formatting
setuuuuuuup!
| "simple": { | ||
| "format": " %(asctime)s - %(levelname)s - %(message)s", | ||
| "datefmt": "%Y-%m-%dT%H:%M:%S%z" | ||
| "format": "%(asctime)s │ %(levelname)-7s │ %(name)-17s │ %(message)s", |
Would prefer to leave out the │ to save a bit more space, and limit %(name) as well; I bet we can cap it at 8 chars and come up with short little nicknames for each module.
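For what it's worth, %-style logging formats can cap the width directly: "%(name)-8.8s" pads short names to 8 chars and truncates longer ones. A minimal sketch (this format string is a suggestion, not what the PR adopts):

import logging

logging.basicConfig(
    format="%(asctime)s %(levelname)-7s %(name)-8.8s %(message)s",
    datefmt="%Y-%m-%dT%H:%M:%S%z",
    level=logging.DEBUG,
)
logging.getLogger("TestFramework").debug("name renders as 'TestFram'")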

This has been bothering me for a long while... Fix it by refactoring all the tests (but not WarnetTestFramework, yet) to give them a more homogeneous structure.
There are a few "tidy-ups" in here too, sorry, but it's mainly just breaking things down into a function-per-test, much like the Bitcoin Core functional test suite.
Makes it easier to add new tests, and to re-use parts of other tests.
Also improves test logging using the logging module where possible.