DM-41182: Add basic functionality test for PropertyMapTractAnalysisTask #157
Conversation
Force-pushed from 4712cbb to cb4e25b
Image comparison is tricky, but you can't use a hash and expect it to match across different OSes and versions of everything. See my comment below.
tests/test_propertyMapPlot.py (outdated)
"90c68d7c6a70536485cea093b9401038", | ||
"04385cbc6ead899d210fcaa628f88e45", | ||
"b66ca03562bfcfcd758461c5ccb73837", | ||
"98f145f450001642b2f1538c629f683f", |
I'd be very shocked if this works; and if it works today, it certainly won't tomorrow. Matplotlib images differ subtly between macOS and Linux, and across OS and matplotlib versions. You can either use something like test_scatterPlot.py, or check sample images into the tests and use the matplotlib image-comparison testing routines, which allow some amount of variance between the plotted image and the reference image.
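The image-comparison route the reviewer mentions can be sketched with matplotlib's own testing helper. This is a minimal sketch, not the PR's actual test: the figures, file names, and tolerance here are placeholders; a real test would compare against a reference image checked into the repository.

```python
import tempfile
from pathlib import Path

import matplotlib
matplotlib.use("Agg")  # headless backend, suitable for CI
import matplotlib.pyplot as plt
from matplotlib.testing.compare import compare_images

# Render two figures to PNG files. In a real test, "expected.png"
# would be a reference image committed to the test data.
tmpdir = Path(tempfile.mkdtemp())
for name in ("expected.png", "actual.png"):
    fig, ax = plt.subplots()
    ax.plot([0, 1, 2], [0, 1, 4])
    fig.savefig(tmpdir / name)
    plt.close(fig)

# compare_images returns None when the RMS difference is within `tol`,
# otherwise a string describing the mismatch. The tolerance absorbs
# small cross-platform rendering differences.
result = compare_images(str(tmpdir / "expected.png"),
                        str(tmpdir / "actual.png"), tol=10)
```

The `tol` argument is what makes this robust across OS and matplotlib versions, in contrast to an exact hash.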
I eliminated the hash verification and introduced the validateFigureStructure() and validateRGBFractions() methods to accommodate a degree of variation between the rendered image and the reference image.
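The idea behind a fraction-based check can be sketched as follows. Note this is an assumed implementation for illustration only: the PR's actual validateRGBFractions() may differ; the function body, parameters, and example arrays here are not taken from the PR.

```python
import numpy as np

def validateRGBFractions(image, expected, rtol=0.1):
    """Check each RGB channel's share of total intensity against an
    expected fraction, within a tolerance.

    Assumptions for this sketch: `image` is an (H, W, 3) float array
    and `expected` maps "r"/"g"/"b" to expected fractions.
    """
    totals = image.reshape(-1, 3).sum(axis=0)
    fractions = totals / totals.sum()
    errors = []
    for i, channel in enumerate("rgb"):
        if not np.abs(fractions[i] - expected[channel]) <= rtol:
            errors.append(
                f"{channel} fraction {fractions[i]:.3f} not within "
                f"{rtol} of expected {expected[channel]:.3f}"
            )
    return errors  # an empty list means the check passed

# Example: a synthetic image that is half red, half green.
img = np.zeros((4, 4, 3))
img[:2, :, 0] = 1.0  # top half red
img[2:, :, 1] = 1.0  # bottom half green
```

Because it compares aggregate channel statistics rather than exact pixels, this kind of check tolerates the cross-platform rendering differences that defeat hash comparison.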
tests/test_propertyMapPlot.py (outdated)
        self.atool.finalize()

    def tearDown(self):
        removeTestTempDir(self.testDir)
I think this needs to go at the end of the teardown.
Done! I didn't notice a difference, though.
Force-pushed from 854e489 to cd102bd
This looks fine, though a bit non-standard in the tests.
        # Check if the actual fractions meet the expected fractions within the
        # given tolerance.
        errors = []
        if not np.abs(rActualFraction - rFraction) <= rtol:
This is fine, but we usually use self.assertFloatsAlmostEqual, which allows an rtol and/or an atol. I suppose the advantage here is that you get information on all the channels rather than just the first failure, but I wanted to point out the standard tools.
Thanks for the heads-up. Correct, I went this route because seeing info on all errors is quite handy for quick debugging, especially when multiple criteria are being checked in one go.
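The accumulate-then-assert pattern being discussed can be sketched in plain numpy (self.assertFloatsAlmostEqual comes from lsst.utils.tests and stops at the first failure; the channel values below are made up for illustration):

```python
import numpy as np

# Compare every channel's measured fraction against its expectation,
# collecting all failures instead of stopping at the first one.
expected = {"r": 0.30, "g": 0.50, "b": 0.20}
actual = {"r": 0.31, "g": 0.48, "b": 0.21}
rtol = 0.05

errors = []
for channel, want in expected.items():
    got = actual[channel]
    if not np.abs(got - want) <= rtol:
        errors.append(f"{channel}: |{got} - {want}| > {rtol}")

# A single assertion at the end surfaces the full list of mismatches,
# so one test run reports every out-of-tolerance channel at once.
assert not errors, "; ".join(errors)
```

The trade-off is exactly the one noted above: the standard helper gives a cleaner one-liner per comparison, while this pattern gives a complete failure report per run.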
Force-pushed from cd102bd to cbcf1d3
While I was at it, I fixed typos, tweaked the plot action to work with the test, and made other necessary adjustments throughout.