
test_backend_pgf: TypeError #1843

Merged

Conversation

mgiuca-google
Contributor

The tol parameter was missing in the call to compare_images().

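For context, a minimal sketch of the kind of call being fixed, assuming the compare_images helper from matplotlib.testing.compare; its tol argument (an RMS difference threshold) has no default, so omitting it raises a TypeError. The file names below are placeholders, not the actual test's paths.

from matplotlib.testing.compare import compare_images

# Broken form: tol omitted, which raises a TypeError because tol is required.
# err = compare_images('baseline/pgf_xelatex.pdf', 'result/pgf_xelatex.pdf')

# Fixed form: pass the tolerance explicitly.
err = compare_images('baseline/pgf_xelatex.pdf',
                     'result/pgf_xelatex.pdf',
                     tol=10)
if err is not None:
    print(err)  # a non-None return describes how the images differed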
@mgiuca-google
Contributor Author

This is a fix for Issue #1842.

The problem was that I deleted the tol parameter from all the calls to decorators.image_comparison (since it defaults to 10 there, and we wanted to minimize the calls that did not use the default). However, I also removed it from this call to compare_images, which does not have a default tol.

In the spirit of the original patch, I have set tol to 10 (the default in image_comparison) rather than 50, which it was originally set to. Apparently neither my local build nor Travis is running this test, so could somebody please figure out how to run it and make sure a tolerance of 10 is not too strict?
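To illustrate the distinction (a sketch, assuming the signatures as described here: the image_comparison decorator carries a default tol of 10, while the plain compare_images function does not; the baseline name and figure contents are placeholders):

from matplotlib.testing.decorators import image_comparison
from matplotlib.testing.compare import compare_images

# Decorator form: tol may be omitted because it defaults to 10.
@image_comparison(baseline_images=['pgf_example'], extensions=['pdf'])
def test_example():
    pass  # draw the figure to be compared against the baseline

# Plain function form: tol has no default, so it must be passed explicitly.
err = compare_images('expected.pdf', 'actual.pdf', tol=10)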

@mdboom
Member

mdboom commented Mar 22, 2013

This works for me on my local machine.

@ajdawson
Contributor

I'm getting two failed tests with this: matplotlib.tests.test_backend_pgf.test_rcupdate and matplotlib.tests.test_backend_pgf.test_pdflatex. In both cases the failed diffs indicate that the differences are solely in text. I'm not sure if this means the tolerance should be changed, or if this is some peculiarity of my setup.

[Attached image: pgf_pdflatex_pdf failed-diff]

@ajdawson
Contributor

I fiddled with the tolerance parameter and found that for my system a tolerance of 14 was enough to remove both failures, in case that helps to put my previous comment into context...

@mgiuca

mgiuca commented Mar 23, 2013

Thanks for testing again, Andrew.

We've* noticed that the text rendering can be a bit off on different systems, which is why there is a tolerance at all. So I think we agreed to set the tolerances to 10 by default, and bump them up to whatever is necessary whenever someone has a "correct" render that fails the test (as opposed to something genuinely wrong). Good thing you attached the image; it looks like you have hit exactly this case, so I am bumping the tolerance up to 14. Can you test it again and make sure it passes?

*I say "we" without implying that I'm a maintainer of this library. I just contributed a rewrite of the comparison code, and these were my findings at the time.
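Concretely, the bump amounts to something like the following inside the pgf tests (a sketch; the paths below are placeholders for whatever the test harness actually compares):

from matplotlib.testing.compare import compare_images

# Placeholder paths; the test compares a freshly rendered PDF to a baseline.
expected = 'baseline_images/test_backend_pgf/pgf_pdflatex.pdf'
actual = 'result_images/test_backend_pgf/pgf_pdflatex.pdf'

# Bumped from tol=10 to tol=14 so slight cross-system differences in text
# rendering no longer fail the comparison.
err = compare_images(expected, actual, tol=14)
assert err is None, err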

@ajdawson
Contributor

This successfully removes the test failures for me.

@dmcdougall
Member

$ python tests.py matplotlib.tests.test_backend_pgf
...
----------------------------------------------------------------------
Ran 3 tests in 5.570s

OK

dmcdougall added a commit that referenced this pull request Mar 23, 2013
@dmcdougall merged commit 74918a4 into matplotlib:master Mar 23, 2013