Serialize comparison of multiple baseline images. #7674

Merged
Merged 1 commit into matplotlib:master from QuLogic:serialize-baseline-compare on Dec 23, 2016

Conversation

4 participants
Member

QuLogic commented Dec 23, 2016

This prevents the results from stomping on one another, though it doesn't fix the case of multiple tests using the same baseline image.

See #7673 for an explanation. Since there are tests that share copies of the same baseline image (not a huge number, but enough, apparently), this seems like a reasonable workaround.

@QuLogic QuLogic Serialize comparison of multiple baseline images.
This prevents the results from stomping on one another, though it
doesn't fix the case of multiple tests using the same baseline image.

See #7673 for an explanation.
dfe0f63

QuLogic added the Testing label Dec 23, 2016

QuLogic added this to the 2.1 (next point release) milestone Dec 23, 2016

Member

QuLogic commented Dec 23, 2016

And it died on something else; not sure if that's proof it's working or just bad luck.

Owner

tacaswell commented Dec 23, 2016

=================================== FAILURES ===================================
____________________ test_mixedsubplots[0-mixedsubplot-svg] ____________________
[gw0] linux -- Python 3.5.2 /home/travis/build/matplotlib/matplotlib/venv/bin/python
expected = '/home/travis/build/matplotlib/matplotlib/result_images/test_mplot3d/mixedsubplot-expected.svg'
actual = '/home/travis/build/matplotlib/matplotlib/result_images/test_mplot3d/mixedsubplot.svg'
tol = 0
    def raise_on_image_difference(expected, actual, tol):
        __tracebackhide__ = True
    
        err = compare_images(expected, actual, tol, in_decorator=True)
    
        if not os.path.exists(expected):
            raise ImageComparisonFailure('image does not exist: %s' % expected)
    
        if err:
            raise ImageComparisonFailure(
                'images not close: %(actual)s vs. %(expected)s '
>               '(RMS %(rms).3f)' % err)
E           matplotlib.testing.exceptions.ImageComparisonFailure: images not close: /home/travis/build/matplotlib/matplotlib/result_images/test_mplot3d/mixedsubplot_svg.png vs. /home/travis/build/matplotlib/matplotlib/result_images/test_mplot3d/mixedsubplot-expected_svg.png (RMS 0.104)
lib/matplotlib/testing/decorators.py:216: ImageComparisonFailure
== 1 failed, 6221 passed, 9 skipped, 27 xfailed, 3 xpassed in 566.35 seconds ===

That looks like bad luck; restarted.

Member

Kojoley commented Dec 23, 2016

Yeah, this test fails for all builds from time to time.

codecov-io commented Dec 23, 2016 edited

Current coverage is 62.06% (diff: 100%)

Merging #7674 into master will decrease coverage by 4.51%

@@             master      #7674   diff @@
==========================================
  Files           109        174     +65   
  Lines         46633      56007   +9374   
  Methods           0          0           
  Messages          0          0           
  Branches          0          0           
==========================================
+ Hits          31049      34763   +3714   
- Misses        15584      21244   +5660   
  Partials          0          0           

Powered by Codecov. Last update c4caab8...dfe0f63

@tacaswell tacaswell merged commit c21d189 into matplotlib:master Dec 23, 2016

4 of 5 checks passed

coverage/coveralls Coverage decreased (-4.5%) to 62.069%
Details
codecov/patch Coverage not affected when comparing c4caab8...dfe0f63
Details
codecov/project Absolute coverage decreased by -4.51% but relative coverage increased by +33.41% compared to c4caab8
Details
continuous-integration/appveyor/pr AppVeyor build succeeded
Details
continuous-integration/travis-ci/pr The Travis CI build passed
Details

QuLogic deleted the QuLogic:serialize-baseline-compare branch Dec 24, 2016
