Liu 186 post changes #170
Conversation
The unit tests for the dlg daemon check at various times that the underlying nm/dim/mm processes the daemon starts or stops have actually been started or stopped, by checking whether the manager's REST port is open. They did this using our utils.portIsOpen/portIsClosed functions together with a timeout. How these functions work is a bit special: they don't give up on their mission until the condition has been met or the timeout has expired, and therefore they should be used with caution. For example, portIsOpen will return False only after repeatedly trying to connect to the given host/port until the timeout expires, and True as soon as a connection succeeds within that timeout. As such, it is better suited for checking that a service has come online, but not really for checking that a service is down, as the full timeout would have to elapse before making that assessment.

This was the situation some unit tests were in: they waited for a long time to check a condition that could be checked much faster by turning the check around. In the case above, instead of waiting for portIsOpen to return False, a faster alternative is to wait for portIsClosed to return True.

This commit changes these checks in a couple of places, bringing the total runtime of the test_daemon unit tests down by about 30 seconds (from ~140 to ~110 seconds on my laptop).

Signed-off-by: Rodrigo Tobar <rtobar@icrar.org>
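To make the distinction concrete, here is a minimal sketch of how such polling helpers behave and why inverting the check avoids waiting out the full timeout; the function names, signatures, host and port below are illustrative, not the actual utils implementation:

```python
import socket
import time

def port_is_open(host, port, timeout=30, interval=0.1):
    """Retry connecting until an attempt succeeds (True) or the timeout expires (False)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False

def port_is_closed(host, port, timeout=30, interval=0.1):
    """Retry connecting until an attempt fails (True) or the timeout expires (False)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                time.sleep(interval)  # still open, keep polling
        except OSError:
            return True
    return False

# Checking that a just-stopped daemon is really down (host/port are illustrative):
# slow -- waits the full 30 seconds before returning False
assert not port_is_open('localhost', 8888, timeout=30)
# fast -- returns True as soon as the first connection attempt is refused
assert port_is_closed('localhost', 8888, timeout=30)
```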
The previous version of these functions worked, but I found their design a bit over-complicated. I believe this also led to incorrect usage, with some calls doing a "double wait_until" invocation, where an outer wait_until call in the test code used as its condition a function that itself called wait_until. This commit re-designs the wait_until function and its callers.

The first difference in this new version is that the timeout and interval values always use their defaults (although they can still be overridden on each invocation), as in the previous version all invocations passed the same values explicitly. Secondly, instead of passing an update function and a set of arguments to call it with, we simply pass the update function, which at the call site is bound to its local arguments (using a lambda in our case, but a functools.partial would work as well). These two changes simplify the calling convention a bit.

Finally, I've replaced the "double wait_until" instances with what I believe is a correct equivalent. While I'm not particularly sure I've done the right thing, I still see the same test code coverage, and all tests are still passing. This leads me to believe I did the right thing, but I will need confirmation. As a bonus, this change reduced the test walltime from ~110 seconds to under ~60.

Signed-off-by: Rodrigo Tobar <rtobar@icrar.org>
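For reference, here is a minimal sketch of what the re-designed calling convention could look like; the default values, return semantics and helper names are assumptions for illustration, not necessarily the actual code:

```python
import functools
import time

def wait_until(condition, timeout=5, interval=0.1):
    """Poll `condition` (a zero-argument callable) until it returns a truthy
    value or `timeout` seconds elapse; return the last value obtained."""
    deadline = time.time() + timeout
    result = condition()
    while not result and time.time() < deadline:
        time.sleep(interval)
        result = condition()
    return result

# Callers now bind their own arguments instead of forwarding them through wait_until:
start = time.time()
assert wait_until(lambda: time.time() - start > 0.5, timeout=2)

# functools.partial works too when a suitable function already exists:
def is_past(t):
    return time.time() > t
assert wait_until(functools.partial(is_past, start + 0.5), timeout=2)

# The "double wait_until" anti-pattern removed by this commit looked roughly like:
#   wait_until(lambda: wait_until(inner_condition))
# i.e. an outer timeout polling a condition that itself blocks on its own timeout.
```

Binding the arguments at the call site keeps wait_until's signature minimal and lets any zero-argument callable serve as a condition.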
Looks good to me, I've checked that the updates still test the correct conditions.
Thanks @pritchardn! While doing this I realised that the coverage information currently includes the tests' own coverage, which is probably a bit misleading (it artificially inflates the reported coverage). I'll deal with that later though.

On a separate note: I saw you posted some fixes to merklelib to get it working on 3.10. Do you know if the author is planning to release a new version to PyPI any time soon? Otherwise, until then, our only option to get those fixes is to install the code directly from GitHub.
These changes speed up the daemon tests and simplify the test code, as I mentioned I'd try to do in #156 (review). See the individual commit messages for more details.
In particular, I'd like to make sure the tests are still testing the right thing after the last commit. As I mention in its message, coverage stays the same and the tests all still pass, so while I'm 50% sure I did the right thing, I'd still like a second pair of eyes to corroborate it.