Fix failing tests on maintenance branch #779

Merged
merged 5 commits

6 participants

Michael Droettboom John Hunter Thomas Robitaille Benjamin Root Paul Ivanov Jens Hedegaard Nielsen
Michael Droettboom
Owner

Fix failing tests -- many of them were broken due to changes in the snapping algorithm that made the axes off by one pixel. Others were broken due to changes in text alignment. Also fixes a bug in the SVG backend's Gouraud shading.

John Hunter
Owner

I just accidentally ran the tests on a remote box w/ no X11 connection and got a ton of failures, because the test import triggered my backend import (which was GTKAgg). The traceback is below. The question is: would it do any harm to call:

use('agg') 

as the first line of matplotlib.test, e.g. before importing nose here: https://github.com/matplotlib/matplotlib/blob/v1.1.x/lib/matplotlib/__init__.py#L995

Michael, if you think this is a good idea, maybe we should just tack it on to this PR.
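
For illustration, the guard amounts to something like the following (a sketch of the idea, not the exact patch):

import matplotlib
matplotlib.use('agg')            # select a backend that needs no display
import matplotlib.pyplot as plt  # now safe even without an X11 connection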

here is the traceback:
johnh@lettuce:mpl_test> python -c 'import matplotlib as m; m.test()'
/usr/lib64/python2.7/site-packages/gtk-2.0/gtk/__init__.py:57: GtkWarning: could not open display
warnings.warn(str(e), _gtk.Warning)
/export/home/johnh/devlinux/lib64/python2.7/site-packages/matplotlib/backends/backend_gtk.py:49: GtkWarning: IA__gdk_cursor_new_for_display: assertion `GDK_IS_DISPLAY (display)' failed

ERROR: Failure: RuntimeError (could not create GdkCursor object)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/loader.py", line 379, in loadTestsFromName
    module = resolve_name(addr.module)
  File "/usr/lib/python2.7/site-packages/nose/util.py", line 321, in resolve_name
    module = __import__('.'.join(parts_copy))
  File "/export/home/johnh/devlinux/lib64/python2.7/site-packages/matplotlib/tests/test_dates.py", line 3, in <module>
    from matplotlib.testing.decorators import image_comparison, knownfailureif, cleanup
  File "/export/home/johnh/devlinux/lib64/python2.7/site-packages/matplotlib/testing/decorators.py", line 8, in <module>
    from matplotlib import pyplot as plt
  File "/export/home/johnh/devlinux/lib64/python2.7/site-packages/matplotlib/pyplot.py", line 95, in <module>
    new_figure_manager, draw_if_interactive, _show = pylab_setup()
  File "/export/home/johnh/devlinux/lib64/python2.7/site-packages/matplotlib/backends/__init__.py", line 25, in pylab_setup
    globals(),locals(),[backend_name])
  File "/export/home/johnh/devlinux/lib64/python2.7/site-packages/matplotlib/backends/backend_gtkagg.py", line 10, in <module>
    from matplotlib.backends.backend_gtk import gtk, FigureManagerGTK, FigureCanvasGTK,\
  File "/export/home/johnh/devlinux/lib64/python2.7/site-packages/matplotlib/backends/backend_gtk.py", line 49, in <module>
    cursors.MOVE          : gdk.Cursor(gdk.FLEUR),
RuntimeError: could not create GdkCursor object

Michael Droettboom
Owner

@jdh2358: Sure -- that works for me. Added to this PR.

Thomas Robitaille

I also had issues where the tests would try and open many interactive windows, so I'd be in favor of explicitly setting the backend for the tests to 'Agg'.

John Hunter
Owner

I'm still getting 50 test failures and I haven't checked all of them but the ones I've spot checked appear to be the dreaded font differences. This can be ignored for now, but we need to put our heads together and come up with a system where we are not overriding each other's baseline images because of local configuration differences. One approach would be to get some racked hosting and give all developers access to a common build environment for testing. Another would be to standardize on a freetype version and config.

Michael Droettboom
Owner

@jdh2358: I'd like to experiment with this font issue a little bit. Can you send me one image with lots of mismatched text? I'm going to see if I can somehow reproduce your output by playing with Freetype's various flags.

Thomas Robitaille

@mdboom - I'm getting lots of test failures because PIL isn't installed, but I wasn't getting this with the master branch; I think the PIL dependence was removed in the meantime. I'll create a new environment with PIL to test this pull request.

Thomas Robitaille

@jdh2358 - regarding font differences, I like the option to explicitly state that the tests should pass with a specific version of freetype. If you know which versions of freetype cause issues, one could even mark the relevant tests as known fails (for those 'bad' freetype versions). This requires knowing the freetype version in the tests though.
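
To sketch the idea (the freetype version hook and the exact decorator usage here are assumptions, not existing code):

from matplotlib import ft2font
from matplotlib.testing.decorators import knownfailureif

# hypothetical: mark as known-fail when the runtime freetype is not the
# version the baseline images were generated with
BAD_FREETYPE = getattr(ft2font, '__freetype_version__', None) != '2.4.6'

@knownfailureif(BAD_FREETYPE, 'baseline images assume freetype 2.4.6')
def test_example():
    pass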

Michael Droettboom
Owner

@astrofrog: Yes, PIL is required for 1.1.x, but not for master.

The problem with relying on a particular version of freetype is that it affects virtually every test -- we'll have a basically useless test suite if that's the case. And I don't want to force all developers to build their own freetype if I can avoid it. I'm still looking into ways that the freetype renderer may be forced to behave a certain way.

Thomas Robitaille

I've tested the branch in this PR with various Python and Numpy versions, and it does fix some of the test_axes tests:

matplotlib.tests.test_axes.test_marker_edges.test
matplotlib.tests.test_axes.test_markevery.test
matplotlib.tests.test_axes.test_polar_annotations.test
matplotlib.tests.test_axes.test_polar_rmin.test
matplotlib.tests.test_axes.test_polar_theta_position.test
matplotlib.tests.test_axes.test_polar_wrap.test
matplotlib.tests.test_axes.test_symlog.test

There are still some tests failing, but looking at the diff images, this seems to be due to fonts (the residuals for the other test_axes tests now don't have a frame).

The log for the build/install/tests is here: http://db.tt/iNBIScaA

The new test images, including residuals, are here: http://db.tt/tvbk02A0

So the bottom line is that this is an improvement (67 instead of 77 failures), and I think most of the remaining issues are font-based (but if you look at the residuals, you'll see some exceptions).

I agree that it would be better to figure out how to force freetype to render a certain way, rather than requiring a specific version.

Michael Droettboom
Owner

I'm not finding anything useful to fix the freetype version difference problem. There are some other more blunt options, like converting text to rectangles when in test mode or something, but that brings its own set of problems.

I'd still lean toward merging this before the release, despite its problems, with a note that the baseline images were generated with freetype 13.1.7 and any other versions may cause problems with the tests.

John Hunter
Owner

Yes, I'm in favor of merging too rather than worrying about this long-standing issue right now. This is something we can work on ahead of the next release. @astrofrog, could you tell us which test failures you are seeing that don't appear font related?

Thomas Robitaille

I agree that this should be merged without waiting for a font-related fix. If we know which freetype version should work, we can at least set up the continuous integration with that version. I'll try and do that on my side.

Tests that look like they are not just font related:

matplotlib.tests.test_axes.test_canonical.test (frame issue)
matplotlib.tests.test_text.test_font_styles.test (frame issue)
matplotlib.tests.test_axes.test_basic_annotate.test (frame issue)
matplotlib.tests.test_axes.test_shaped_data.test (frame and markers issues)
matplotlib.tests.test_axes.test_single_point.test (frame and markers issues)

The images are all in the tar file I linked to previously.

John Hunter
Owner

I can confirm that canonical, shaped_data, font_styles and single_point have changed (I am not seeing any images for test_basic_annotate at all). They did not trigger an error on my system because the diff was too small. The difference looks like it comes from #695, because there is a 1 pixel shift in the entire axes bounding box and tick markers. My suggestion is to add the new baseline images for these tests after you have had a chance to confirm that the new ones look correct to you, Michael.

Michael Droettboom
Owner

The version of freetype I'm using to generate the images is 13.1.7. The version John is using (which has different results) is 12.2.6. Perhaps anything in the 13 series is close enough... I'm not sure how to determine that except by experimentation.

This is messy. In this pull request, I only updated the images that were failing for me -- which were primarily related to the frame snapping offset differences. There are some bona fide frame snapping offset differences still in there, but they don't cause the threshold to be tripped unless, as on @astrofrog's machine, a different version of freetype is being used. I'm thinking the best thing to do may be to update all the pngs to my output set (I've already manually verified that everything is close enough to correct). It would then be nice to lower the threshold to be more sensitive to changes once we can assume that freetype differences don't need to be accounted for. Better yet, we could drop the threshold entirely and expect a perfect match...

John Hunter
Owner

Agreed on pushing all of your result images as the new baseline, since you have manually verified them and the diffs we are seeing are either font related or pixel-shifting changes where the new result images are the correct ones.

Thomas Robitaille

I had freetype 14.1.8. I'm just setting up my Jenkins environment with 13.1.7 so I can get rid of the font errors and hopefully be able to focus on the real failures. I should be able to report back in a couple of hours.

By the way, out of interest, what are all the known failures? Are they all font-related?

Thomas Robitaille

Side note: did you see that Github allows you to show the actual difference image between the old and new images in your commit? (The default is just side by side, but difference is an option.) Very cool!

Benjamin Root
Collaborator

I performed the tests on Linux with freetype 13.1.7 (before commit 152fb09), and I had two known fails and one other failure. The failure was with test_axes.test_pcolormesh with the svg output.

Thomas Robitaille

Is there any way that the libpng version could affect things? Which version of libpng are you using? I tried running the tests after building matplotlib with freetype 13.1.7 (instead of 14.1.8), and I'm still getting the same label-related failures. It almost looks more like an anti-aliasing difference than a difference in the actual font/characters. In most cases the characters seem to be the same, but the differences are in the faint anti-aliased part of the font, so the difference images show a 'fuzz' around each character.

Benjamin Root
Collaborator

Looks like the baseline image for pcolormesh_svg.png is missing the gourond shading example (it is blank); meanwhile, the generated gourond shading for svg does not look as smooth as it does for the png output.

Benjamin Root
Collaborator

obviously, I can't spell.... gouraud shading

Thomas Robitaille

By the way, is it normal there are so many known failures on MacOS X?

FAILED (KNOWNFAIL=526, failures=67)

(this is with freetype 13.1.7)

John Hunter
Owner

@astrofrog, you have so many known fails because you don't have the requirements to test one or more of the PDF, PS or SVG backends. The test infrastructure will check for a dependency for a given image type, and if it is not there, register it as a knownfail.

http://matplotlib.sourceforge.net/devel/coding_guide.html#running-the-tests

Thomas Robitaille

@jdh2358 - ah, that makes sense, thanks! I'm running these in a clean environment, so I'll try and add the requirements for the other tests.

Regarding the font issue, I'm fairly convinced it's an anti-aliasing issue (with the fonts) - is there any way that freetype on Mac might include a different anti-aliasing algorithm than on Linux?

Michael Droettboom
Owner

@WeatherGod: I neglected to include the fixed pcolormesh.svg baseline image. It's normal that the gouraud shading in SVG is not as smooth as in the other backends -- it has to be "faked" since it isn't native to the format.

As for differences in freetype, it may be that it is configured differently. This is the ftconfig.h for my machine (it is the prebuilt package for Fedora 16):

http://db.tt/I5hfjXke

Seeing how that compares with your ftconfig.h may offer some clues.

See also this:

http://www.freetype.org/freetype2/docs/ft2faq.html#builds-differences

Thomas Robitaille

My ftconfig.h file is pretty long... http://db.tt/9OPuGJeH

Though according to the second link you sent - maybe we should compare ftoption.h? Mine is: http://db.tt/6Jo7a8wH

Michael Droettboom
Owner

Ah, indeed. The correct file to compare is ftoption.h. Mine is here:

http://db.tt/pZgy6q52

Michael Droettboom
Owner

Hmmm... and they're identical. Perhaps there is some sort of gamma correction or colorspace conversion going on in the PNG reading/writing (it should otherwise be lossless). As an experiment -- can you run the simple_plot.py example and save it in rgba format (this is just a raw dump of the image buffer)? I've put mine up here:

http://db.tt/t8ma7qVt

If there are no differences here but there are when saved as png, we can rule out freetype and point the finger at png (though it seems like a long shot).

Thomas Robitaille

Here is mine: http://db.tt/BTRSNLJN

However, I wasn't sure how to generate it - is the following correct?

import matplotlib
matplotlib.use('Agg')

from pylab import *

t = arange(0.0, 2.0, 0.01)
s = sin(2*pi*t)
plot(t, s, linewidth=1.0)

xlabel('time (s)')
ylabel('voltage (mV)')
title('About as simple as it gets, folks')
grid(True)
show()
savefig('simple_plot.rgba')
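
(For reference, a slightly more direct route to the raw buffer would be something like the sketch below, using the Agg canvas's print_rgba; note that show() is a no-op under Agg, so it shouldn't matter either way:)

import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import numpy as np

t = np.arange(0.0, 2.0, 0.01)
fig, ax = plt.subplots()
ax.plot(t, np.sin(2 * np.pi * t), linewidth=1.0)
fig.canvas.draw()                          # render into the Agg buffer
fig.canvas.print_rgba('simple_plot.rgba')  # raw dump of that buffer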
Thomas Robitaille

After installing inkscape and ghostscript into the testing environment, I now get:

FAILED (KNOWNFAIL=2, errors=2, failures=69)

For the record, the two errors are inkscape-related: http://pastebin.com/yHTszS8e

Paul Ivanov
Collaborator

I'm jumping in late here, but just a thought: maybe we should ship some libre font with matplotlib, and only use that font for the baseline images; that way there's a better chance that font issues won't plague us in the future. Or is the problem deeper than that, and it's actually the case that the same font renders differently under different versions of freetype?

Thomas Robitaille

I've done some more investigating, and I now don't believe it's an anti-aliasing issue (though of course I might be wrong). I also don't think it's a font issue. I've created a test script and have uploaded some example results: https://github.com/astrofrog/mpl_font_testing

You'll see that in this example, the top of the letter is set at the same position, but the vertical size is slightly different between Linux and Mac. Am I missing a parameter in my script to force these to be the same?

I decorated all functions/methods in font_manager.py to follow the execution, and it seems that the same methods/functions are called on Linux and Mac, and in the same order (with this example).

I hope this helps figure out the issue!
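
The probe script boils down to something like this (a sketch; the real script lives in the mpl_font_testing repo and its exact parameters differ):

import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt

# render a single large letter so per-glyph metric differences are visible
fig = plt.figure(figsize=(4, 4), dpi=100)
fig.text(0.1, 0.1, 'a', size=200, family='Bitstream Vera Sans')
fig.savefig('single_letter.png')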

Thomas Robitaille

I don't know if this is a red herring, but if I add:

std::cout << "image width = " << image_width << "\n";
std::cout << "image height = " << image_height << "\n";
std::cout << "char width = " << char_width << "\n";
std::cout << "char height = " << char_height << "\n";

in FT2Image::draw_bitmap in ft2font.cpp, I get different results on different platforms for the single letter example I posted in my previous comment.

On Mac:

image width = 258
image height = 322
char width = 257
char height = 320
image width = 258
image height = 322
char width = 257
char height = 320

On Linux:

image width = 258
image height = 321
char width = 257
char height = 319
image width = 258
image height = 321
char width = 257
char height = 319

But I don't know if these differences are enough to explain the shift in the different images.

Thomas Robitaille

Just an update - it seems the vertical shift was an issue specific to Vera.ttf (the default font). With another font, I don't see the issue (and the widths and heights from ft2font.cpp agree). There must be a rounding error somewhere for Vera.ttf. I've updated the results here: https://github.com/astrofrog/mpl_font_testing

Thomas Robitaille

And finally (apologies for all the comments) I've added diffs in this folder: https://github.com/astrofrog/mpl_font_testing/tree/master/linux_13.1.7

For Vera.ttf, there is a shift that is causing a high RMS (0.000761577310586)

For cmr10.ttf, there is no shift, and the residual seems to be due to anti-aliasing, and the RMS is lower (0.000244948974278)

Thomas Robitaille

I promised no more comments, but this is important - if I now reduce the character size to a normal size, and add more characters, I also see similar effects, i.e. Vera.ttf seems to have positional differences due to some kind of rounding (which are so bad you can see them even without the diff file), while the agreement is much better for cmr10.ttf.

I've added the new examples here: https://github.com/astrofrog/mpl_font_testing - including the diffs.

So I think there are two conclusions from my 'investigation':

  • For Vera.ttf, it seems there are some platform-dependent rounding errors that cause high RMS values in the image comparisons. As far as I know, this is currently the default font.

  • If using e.g. cmr10.ttf (or maybe some other fonts, but using that example since it's already bundled with matplotlib), it seems that there are no positional differences, just anti-aliasing differences.

So the bottom line is that until we understand the issue with Vera.ttf, it might be sensible to switch the tests to one of the other bundled fonts that doesn't show positional issues, and explicitly specify that font for the tests (so that matplotlibrc settings don't affect things).

@mdboom, @jdh2358 - what do you think?

John Hunter
Owner

@astrofrog , no need to worry about the deluge of comments. They are very welcome. It would be a major victory if we can solve this font issue, because then our tests will pass most of the time for most of our developers on most of our platforms, and we can ratchet the tolerance way down to catch more issues like 1 pixel rounding shifts. So please keep experimenting and reporting.

As for the Vera vs cmr switch, I am not ready to throw my hands up. Now that we know there is a platform dependent height/baseline issue, I'd like to know if we can solve it. It might be worth posting on a freetype mailing list to see if you can get any hints there (no pun intended).

You mention "explicitly specify that font for the tests (so that matplotlibrc settings don't affect things)", so I wanted to make sure you are aware of the test setup method, which does set the default font.family and turns off hinting to reduce platform-specific variation:

https://github.com/matplotlib/matplotlib/blob/master/lib/matplotlib/tests/__init__.py
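
In outline, that setup does something like the following (a sketch; the linked file is authoritative):

import matplotlib
matplotlib.use('Agg')  # avoid loading a GUI backend during tests

def setup():
    # pin the font and disable hinting so freetype output is as
    # reproducible as possible across platforms
    matplotlib.rcParams['font.family'] = 'Bitstream Vera Sans'
    matplotlib.rcParams['text.hinting'] = False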

@mdboom, I just noticed the test setup does call use('agg') in addition to setting the rc params, so we are duplicating some effort in calling it before the nose imports. It turns out it is important to call it before the nose import, because that import triggers a pyplot import, which triggers a GUI load. Do you know where matplotlib.test.setup is being called -- is this automatic in nose? If so, is there a "teardown" method where we could restore the backend and rc settings?

John Hunter
Owner

@ivanov, we do ship lots of our own fonts. The canonical one is Bitstream's Vera.ttf, and we do set it to be our default font for the tests. What we are seeing is cross-version differences in aliasing between freetype releases, and cross-platform variation in baseline or height even with the same version of freetype. We're trying to figure out a way to force freetype to behave consistently, e.g. by turning off anti-aliasing, hinting, or anything else we can think of, to generate test images consistently across platforms. Since we have a freetype extension module, we have full access to their C API, but this has not been enough to get the damned thing to behave so far.

Thomas Robitaille

@jdh2358 - regarding posting to a mailing list, is there a way we can create a simple pure C++ program that reproduces this issue? I worry that if I post something related to our current findings, they might just say that it's a matplotlib issue. Unfortunately, I'm not proficient in C++ and can't write this myself, so if someone else here can create a test program, that would be great!

Thomas Robitaille

Regarding my previous comment, I noticed that one place where there is a difference is the call to FT_Glyph_Get_CBox in ft2font.cpp. If we could implement a call to that for a single character in the font dictionary and show that it varies across platforms, that would make it easier to show the freetype mailing list.
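
One low-effort way to poke at that call site from Python is through matplotlib's own wrapper, since ft2font fills each glyph's bbox attribute from FT_Glyph_Get_CBox (a sketch; argument details assumed):

from matplotlib import ft2font
from matplotlib.font_manager import findfont

font = ft2font.FT2Font(findfont('Bitstream Vera Sans'))
font.set_size(50, 100)           # point size, dpi
glyph = font.load_char(ord('a'))
print(glyph.bbox)                # compare these numbers across platforms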

Michael Droettboom
Owner

@jdh2358: tests/__init__.py:setup() gets called by nose when it first imports the tests -- one of the first things that happens when nose.run() is called in matplotlib.test(). Yes, we should probably add a teardown -- again, I didn't conceive of the use case where someone would run the tests as part of a larger interactive session.

@astrofrog: I'll try to write a standalone C app today and will link to it from here.

Michael Droettboom
Owner

Ok -- I have a standalone C program to test the FT_Glyph_Get_CBox difference you were seeing. For each ASCII character, it prints out the results in a number of different hinting modes, but the only one that should matter is "NO_HINTING", which is used during the matplotlib tests.

https://github.com/mdboom/freetype_test

Unfortunately, I am not seeing any differences. Note there's something confusing about freetype version numbers -- freetype-config --version returns a different version string than what the library returns internally. I'm showing both here (the library / freetype-config).

I tested on Fedora 16 using the included freetype package (2.4.6 / 13.1.7) and Mac OS-X Lion using the freetype included with Apple's X11 (2.4.4 / 12.2.6). The logs are here, but they are identical (except for the version number):

http://db.tt/989Y03Yi
http://db.tt/oe3KtOu0

@astrofrog: I can try to compile the same version of freetype on both these machines, but given that the different versions already produce the same results, I'm not sure that would be fruitful. I'm curious whether you can reproduce this. There must be something else that the matplotlib code is doing that isn't reproduced here -- there are a few different calls to FT_Glyph_Get_CBox in ft2font.cpp, so it may be that we're talking about different call sites.

Thomas Robitaille

@mdboom - thanks! I think I can reproduce the issues, but not for the image size/dpi you have - could you make it easier to specify the image size and dpi via parameters so I can test this more easily?

Michael Droettboom
Owner

Just for completeness: changing the "mode" in FT_Glyph_Get_CBox from ft_glyph_bbox_pixels to ft_glyph_bbox_subpixels doesn't seem to matter.

Michael Droettboom
Owner

Ok -- I've added ptsize and dpi commandline parameters (see the README). Doesn't seem to make a difference for me, but YMMV.

Thomas Robitaille

@mdboom - thanks for writing the program and for adding the parameters!

Ok, here's some settings where I'm seeing differences:

Command:

./freetype_test Vera.ttf 50 10

Results:

Linux: http://db.tt/9hV7YtOd
Mac: http://db.tt/PjwG6Fpz

Command:

./freetype_test Vera.ttf 20 100

Results:

Linux: http://db.tt/wPwj72vB
Mac: http://db.tt/W1P6HqJs

I can produce other output on request. At first glance, it does seem that this affects only the FORCE_AUTOHINT tests. I'll look into it some more.

Thomas Robitaille

By the way, my experimentation was not done with autohint off (at least, I didn't turn it off), so maybe the differences we are seeing with freetype_test explain the effects I was seeing, but not the differences in tests? I'm going to see if turning off hinting resolves the issues I was seeing.

Michael Droettboom
Owner

Ah, of course. And I should have read your test code more carefully. The tests in astrofrog/mpl_font_testing don't turn off hinting, so it's not surprising that you're getting different results. If you set rcParams['text.hinting'] = False do you get matching results? I'm going to try that here.

Thomas Robitaille

If I turn off hinting (new script + results at https://github.com/astrofrog/mpl_font_testing), there is no longer any real difference between the two fonts in terms of RMS, though the two platforms still give different results. It's still intriguing that once hinting is turned on, the results differ across platforms for the same font (and not for both fonts we test). So that seems worth raising with the freetype people, but as you said, it's not going to improve the tests.

It seems that the problem is in the aliasing of particular curve segments (see e.g. failed-diff-cmr10_nohint.png). Is there a way that you could now modify the test script (or create another one) to output a bitmap stream/string for a given character? If we then see a difference in that, we have isolated the root of the problem for the tests.

Thomas Robitaille

Looking more at the difference images, it really does seem like it is only particular segments (always curves, but not all curves) that have anti-aliasing differences. I wonder whether the same numerical issue that is affecting the bounding boxes is also independently affecting the anti-aliasing algorithm.

Michael Droettboom
Owner

Note that the bounding boxes are only different when hinting is on -- I think the C test in its current form confirms that. That's not at all surprising, given that the hinting algorithm is entirely different on Linux and Mac OS-X due to patent reasons. (Apple licenses the Microsoft TrueType hinting patents, so I assume their X11 distro turns that feature on). The patents expired in 2010, so free software should be using them as well now, but it seems not everything has caught up.

I think the numerical issues in the curve fitting are perhaps separate, but most likely what's causing the tests to mismatch. I'll update the C program to output bitmaps -- I agree that that's the level of difference we need to look at.

Thomas Robitaille

@mdboom - thanks, and sorry about the red herring with hinting, I didn't even know hinting existed before today ;-)

Michael Droettboom
Owner

@astrofrog: Ok -- I have some code that writes out the bitmap to PGM files (PGM is a dead-easy format to write). Indeed, between my two versions, there is a difference along the curves for the following:

./freetype_test Vera.ttf 97 20 500 8 output.pgm

as well as with the stretching that matplotlib does turned off (which doesn't seem to make any difference):

./freetype_test Vera.ttf 97 20 500 1 output.pgm

I'm curious to confirm whether there are differences even between the same nominal version of freetype (which you have, and I currently don't). If there are, then I think we're ready to go to the freetype mailing list with this.

Thomas Robitaille

@mdboom - thanks! I do see differences across (even close) versions, e.g. between 2.4.6 and 2.4.9.

However, I had an issue where even if I compiled against 2.4.9, it would pick up 2.4.6 at runtime, because it is dynamically linked (I noticed that the version numbers output by your code didn't match what I thought I had built against). If I ensure that the versions truly are the same at runtime (using DYLD_LIBRARY_PATH on mac and LD_LIBRARY_PATH on linux), then the images are the same regardless of platform. Going back to my simple matplotlib tests, I can now get the tests to agree if both machines are using 2.4.9.

I'm now trying to run both tests with 13.1.7, but I'm running into issues - my computer has a MacPorts install of freetype 2.4.9. However, I'm setting PKG_CONFIG_PATH and DYLD_LIBRARY_PATH to point to a 2.4.6 (13.1.7) install, so when I build matplotlib, I get:

REQUIRED DEPENDENCIES
             numpy: 1.6.1
         freetype2: 13.1.7

However, if I look at the built ft2font.so, it points to the MacPorts version (2.4.9)!

/opt/local/lib/libfreetype.6.dylib (compatibility version 15.0.0, current version 15.1.0)

This is why when I was running Jenkins, it was still picking up the more recent version of freetype even though everything else indicated that it should be using 13.1.7. My guess is that /opt/local/lib is already in the list of libraries to search, so gcc picks it up from there before even trying the library specified by -L. Confusing!

So the bottom line is that this does not appear to be a freetype bug as such, though maybe we should still go to the freetype list to find out if there is a way to make freetype behave the same across versions (and show them your script to explain the issue)?

Now on how to proceed on the matplotlib front:

  • Is there a way to guarantee that matplotlib actually uses the version listed in the dependencies at the start of the build?

  • Is there a way to print out the current freetype version when running the tests to avoid any confusion?

Michael Droettboom
Owner

If you want ft2font.so to pick up libfreetype from a non-default location, you'll need to have the custom DYLD_LIBRARY_PATH or LD_LIBRARY_PATH exported at dynamic link (i.e. run) time, not just at compile time. Is that the case here?

And, yes, it would be handy to have a function in ft2font.so we can call to get the loaded freetype version. I've added that to the PR.
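
Usage would then be as simple as (exact attribute name assumed):

from matplotlib import ft2font
print('freetype linked at runtime:', ft2font.__freetype_version__)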

Thomas Robitaille

I was having issues that required setting DYLD_LIBRARY_PATH, but I'm now encountering a more annoying issue (described above): even at build time, ft2font is linked with the wrong library, which means that even with DYLD_LIBRARY_PATH set correctly, I run into problems. To demonstrate, I have two freetype libraries, one in /opt/local/lib (14.1.8 / 2.4.9) and one in /Users/tom/usr/lib (13.1.7 / 2.4.6). I have:

air:matplotlib-1.1.0 tom$ echo $PKG_CONFIG_PATH 
/Users/tom/usr/lib/pkgconfig/
air:matplotlib-1.1.0 tom$ echo $DYLD_LIBRARY_PATH 
/Users/tom/usr/lib
air:matplotlib-1.1.0 tom$ echo $LD_LIBRARY_PATH 
/Users/tom/usr/lib

I then build matplotlib:

air:matplotlib-1.1.0 tom$ python setup.py build
basedirlist is: []
============================================================================
BUILDING MATPLOTLIB
            matplotlib: 1.1.0
                python: 2.7.2 (default, Jan 31 2012, 22:38:06)  [GCC 4.2.1
                        (Based on Apple Inc. build 5658) (LLVM build
                        2335.9)]
              platform: darwin

REQUIRED DEPENDENCIES
                 numpy: 1.6.1
             freetype2: 13.1.7

...
building 'matplotlib.ft2font' extension
creating build/temp.macosx-10.6-x86_64-2.7
creating build/temp.macosx-10.6-x86_64-2.7/src
creating build/temp.macosx-10.6-x86_64-2.7/CXX
/Developer/usr/bin/llvm-gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -pipe -O2 -fwrapv -DNDEBUG -g -fwrapv -O3 -Wall -DPY_ARRAY_UNIQUE_SYMBOL=MPL_ARRAY_API -DPYCXX_ISO_CPP_LIB=1 -I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include -I/Users/tom/usr/include/freetype2 -I/Users/tom/usr/include -I. -I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c src/ft2font.cpp -o build/temp.macosx-10.6-x86_64-2.7/src/ft2font.o
...

Note that in the build summary at the start, the version of freetype is the one I want (13.1.7). Now, check this out:

air:matplotlib-1.1.0 tom$ otool -L build/lib.macosx-10.6-x86_64-2.7/matplotlib/ft2font.so 
build/lib.macosx-10.6-x86_64-2.7/matplotlib/ft2font.so:
    /opt/local/lib/libfreetype.6.dylib (compatibility version 15.0.0, current version 15.1.0)

(it is linked to the wrong dynamic library). Now here's what I get when I try and run a matplotlib script:

air:mpl_font_testing tom$ python simpler_plot.py 
Traceback (most recent call last):
  File "simpler_plot.py", line 6, in <module>
    import matplotlib.pyplot as plt
...
  File "/Users/tom/Library/Python/2.7/lib/python/site-packages/matplotlib/font_manager.py", line 52, in <module>
    from matplotlib import ft2font
ImportError: dlopen(/Users/tom/Library/Python/2.7/lib/python/site-packages/matplotlib/ft2font.so, 2): Library not loaded: /opt/local/lib/libfreetype.6.dylib
  Referenced from: /Users/tom/Library/Python/2.7/lib/python/site-packages/matplotlib/ft2font.so
  Reason: Incompatible library version: ft2font.so requires version 15.0.0 or later, but libfreetype.6.dylib provides version 14.0.0

So it's picking up the right one at runtime, but because it actually linked against the wrong one, chaos ensues. Am I doing something wrong here?

Michael Droettboom
Owner

Can you also provide the compiler output when ft2font.so is linked (not just when ft2font.o is compiled)? That's probably where the error is happening -- perhaps in the ordering of arguments.

The matplotlib build uses only the result of pkg-config freetype2 --libs to determine the location of the freetype library. *_LIBRARY_PATH isn't used by the setup.py script (and distutils), but LDFLAGS would be, if you had one.

Thomas Robitaille

Here's the full log for the ft2font extension - there are no warnings, but the issue is that it includes both /opt/local/lib and $HOME/usr/lib in the list of library paths. Since python is installed under /opt/local, I guess that's why it includes /opt/local/lib in the linking, but that prevents me from linking to any other freetype version. If all else fails, I'll have to avoid the /opt/local/bin python, build a special version specifically for Jenkins, and remove /opt/local/bin from the $PATH, but that is not ideal.

building 'matplotlib.ft2font' extension
creating build/temp.macosx-10.6-x86_64-2.7
creating build/temp.macosx-10.6-x86_64-2.7/src
creating build/temp.macosx-10.6-x86_64-2.7/CXX
/Developer/usr/bin/llvm-gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -pipe -O2 -fwrapv -DNDEBUG -g -fwrapv -O3 -Wall -DPY_ARRAY_UNIQUE_SYMBOL=MPL_ARRAY_API -DPYCXX_ISO_CPP_LIB=1 -I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include -I/Users/tom/usr/include/freetype2 -I/Users/tom/usr/include -I. -I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c src/ft2font.cpp -o build/temp.macosx-10.6-x86_64-2.7/src/ft2font.o
/Developer/usr/bin/llvm-gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -pipe -O2 -fwrapv -DNDEBUG -g -fwrapv -O3 -Wall -DPY_ARRAY_UNIQUE_SYMBOL=MPL_ARRAY_API -DPYCXX_ISO_CPP_LIB=1 -I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include -I/Users/tom/usr/include/freetype2 -I/Users/tom/usr/include -I. -I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c src/mplutils.cpp -o build/temp.macosx-10.6-x86_64-2.7/src/mplutils.o
/Developer/usr/bin/llvm-gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -pipe -O2 -fwrapv -DNDEBUG -g -fwrapv -O3 -Wall -DPY_ARRAY_UNIQUE_SYMBOL=MPL_ARRAY_API -DPYCXX_ISO_CPP_LIB=1 -I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include -I/Users/tom/usr/include/freetype2 -I/Users/tom/usr/include -I. -I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c CXX/cxx_extensions.cxx -o build/temp.macosx-10.6-x86_64-2.7/CXX/cxx_extensions.o
/Developer/usr/bin/llvm-gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -pipe -O2 -fwrapv -DNDEBUG -g -fwrapv -O3 -Wall -DPY_ARRAY_UNIQUE_SYMBOL=MPL_ARRAY_API -DPYCXX_ISO_CPP_LIB=1 -I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include -I/Users/tom/usr/include/freetype2 -I/Users/tom/usr/include -I. -I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c CXX/cxxsupport.cxx -o build/temp.macosx-10.6-x86_64-2.7/CXX/cxxsupport.o
/Developer/usr/bin/llvm-gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -pipe -O2 -fwrapv -DNDEBUG -g -fwrapv -O3 -Wall -DPY_ARRAY_UNIQUE_SYMBOL=MPL_ARRAY_API -DPYCXX_ISO_CPP_LIB=1 -I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include -I/Users/tom/usr/include/freetype2 -I/Users/tom/usr/include -I. -I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c CXX/IndirectPythonInterface.cxx -o build/temp.macosx-10.6-x86_64-2.7/CXX/IndirectPythonInterface.o
/Developer/usr/bin/llvm-gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -pipe -O2 -fwrapv -DNDEBUG -g -fwrapv -O3 -Wall -DPY_ARRAY_UNIQUE_SYMBOL=MPL_ARRAY_API -DPYCXX_ISO_CPP_LIB=1 -I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include -I/Users/tom/usr/include/freetype2 -I/Users/tom/usr/include -I. -I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c CXX/cxxextensions.c -o build/temp.macosx-10.6-x86_64-2.7/CXX/cxxextensions.o
/Developer/usr/bin/llvm-g++-4.2 -bundle -undefined dynamic_lookup -isysroot / -L/opt/local/lib build/temp.macosx-10.6-x86_64-2.7/src/ft2font.o build/temp.macosx-10.6-x86_64-2.7/src/mplutils.o build/temp.macosx-10.6-x86_64-2.7/CXX/cxx_extensions.o build/temp.macosx-10.6-x86_64-2.7/CXX/cxxsupport.o build/temp.macosx-10.6-x86_64-2.7/CXX/IndirectPythonInterface.o build/temp.macosx-10.6-x86_64-2.7/CXX/cxxextensions.o -L/Users/tom/usr/lib -lfreetype -lz -lstdc++ -lm -o build/lib.macosx-10.6-x86_64-2.7/matplotlib/ft2font.so
Thomas Robitaille

Maybe this is an issue that is too specific to my case though - I found a solution, which is to edit freetype-config in $HOME/usr/bin and change:

Libs: -L${libdir} -lfreetype
Libs.private: -lz -lbz2 

to

Libs: -L${libdir} ${libdir}/libfreetype.dylib
Libs.private: -lz -lbz2 

This guarantees that I will link against the right dynamic library.

Thomas Robitaille

Going back to your test program, I can now confirm that it gives the same results for 2.4.6 between linux and mac, and for 2.4.9 between linux and mac (but different between 2.4.6 and 2.4.9).

Now for something interesting - if I replace (in your code) FT_RENDER_MODE_NORMAL by FT_RENDER_MODE_MONO (which essentially disables anti-aliasing), then the results are the same for 2.4.6 and 2.4.9 (though they are scrambled because it's a 1-bit image instead of 8-bit, so other things need to be updated). But if there was a way to map this option to a (hypothetical) font.antialias option in the rc parameters, one could disable text anti-aliasing for tests, which would make them more robust to the freetype version...
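
The same rendering mode should also be reachable from the Python side via ft2font's load flags, so a quick check could look like this (a sketch -- the font path is an example, and whether this alone flips the final rasterization is untested):

from matplotlib import ft2font

font = ft2font.FT2Font('Vera.ttf')
font.set_size(20, 100)
# LOAD_TARGET_MONO selects freetype's monochrome (non-antialiased) target
glyph = font.load_char(ord('a'), flags=ft2font.LOAD_TARGET_MONO)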

Benjamin Root
Collaborator

Just a thought... If we are going to have special controls on how the tests are run, then we need to make sure that any updates to the baseline images and/or any images added from new tests are generated under the same conditions. Don't know if that should just be a documentation thing or if a special script should be made that would run the requested test under the right conditions.

Michael Droettboom
Owner

@astrofrog: Good idea about using non-antialiased mode. I'll go ahead and implement that and see how we do.

@WeatherGod: We already have a number of settings and context that we require for running tests that are picked up automagically by nose in lib/matplotlib/tests/__init__.py:setup(). The only way that wouldn't get picked up is if the tests were run outside of nose, which isn't really a use case we can support for a number of other reasons.

Michael Droettboom
Owner

There is now a new rcParam, text.antialiased, that turns antialiasing on and off in the Agg backend. The test setup() will set this to False. If @astrofrog's theory is correct, we should see fewer text differences between different versions of freetype. I've tested it on Linux freetype 2.4.6 vs. Mac OS-X freetype 2.4.4, and it certainly seems to work.
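
In practice, the setup does the equivalent of:

import matplotlib
matplotlib.rcParams['text.antialiased'] = False  # disable text anti-aliasing in Agg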

DO NOT MERGE until I've had a chance to squash all of these commits together. This branch now has three versions of all the baseline images (at about 20MB a set) and I don't want to increase the size of the git repo unnecessarily.

Thomas Robitaille

@mdboom - thanks! I will test this today with different freetype versions on MacOS X and linux.

On a side note, I wonder whether we could keep a few tests (or even one) that do require anti-aliasing, and simply skip them if the version of freetype is wrong. This would let us make sure that the new anti-aliasing option for text has no impact on plots where anti-aliasing is enabled, without leading to failures on systems where the version is wrong (just a known failure).

Thomas Robitaille

@mdboom - this is minor, but could you possibly add an option to turn anti-aliasing on/off in your freetype_test code? It could be interesting to set up a script to run it against all stable releases of freetype to see whether the output is indeed constant without anti-aliasing.

Thomas Robitaille

One final question - after this pull request, is there any reason not to lower the tolerance for tests? In fact, is there any reason why we wouldn't expect the RMS to be 0? Are there other platform/version-dependent issues?

Thomas Robitaille

I've tested this branch on MacOS 10.7 with freetype 2.4.9 (14.1.8), and all tests pass except three (the font_styles ones):

======================================================================
FAIL: matplotlib.tests.test_text.test_font_styles.test
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/Shared/Jenkins/Home/virtualenvs.nose.pil/python2.7-numpy1.6.1/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/Users/Shared/Jenkins/Home/virtualenvs.nose.pil/python2.7-numpy1.6.1/lib/python2.7/site-packages/matplotlib/testing/decorators.py", line 35, in failer
    result = f(*args, **kwargs)
  File "/Users/Shared/Jenkins/Home/virtualenvs.nose.pil/python2.7-numpy1.6.1/lib/python2.7/site-packages/matplotlib/testing/decorators.py", line 126, in do_test
    '(RMS %(rms).3f)'%err)
ImageComparisonFailure: images not close: /Users/Shared/Jenkins/Home/jobs/matplotlib-mdboom-tests-osx-10.7-multiconfig/workspace/NV/1.6.1/PV/2.7/result_images/test_text/font_styles.png vs. /Users/Shared/Jenkins/Home/jobs/matplotlib-mdboom-tests-osx-10.7-multiconfig/workspace/NV/1.6.1/PV/2.7/result_images/test_text/expected-font_styles.png (RMS 47.138)

======================================================================
FAIL: matplotlib.tests.test_text.test_font_styles.test
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/Shared/Jenkins/Home/virtualenvs.nose.pil/python2.7-numpy1.6.1/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/Users/Shared/Jenkins/Home/virtualenvs.nose.pil/python2.7-numpy1.6.1/lib/python2.7/site-packages/matplotlib/testing/decorators.py", line 35, in failer
    result = f(*args, **kwargs)
  File "/Users/Shared/Jenkins/Home/virtualenvs.nose.pil/python2.7-numpy1.6.1/lib/python2.7/site-packages/matplotlib/testing/decorators.py", line 126, in do_test
    '(RMS %(rms).3f)'%err)
ImageComparisonFailure: images not close: /Users/Shared/Jenkins/Home/jobs/matplotlib-mdboom-tests-osx-10.7-multiconfig/workspace/NV/1.6.1/PV/2.7/result_images/test_text/font_styles_pdf.png vs. /Users/Shared/Jenkins/Home/jobs/matplotlib-mdboom-tests-osx-10.7-multiconfig/workspace/NV/1.6.1/PV/2.7/result_images/test_text/expected-font_styles_pdf.png (RMS 23.409)

======================================================================
FAIL: matplotlib.tests.test_text.test_font_styles.test
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/Shared/Jenkins/Home/virtualenvs.nose.pil/python2.7-numpy1.6.1/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/Users/Shared/Jenkins/Home/virtualenvs.nose.pil/python2.7-numpy1.6.1/lib/python2.7/site-packages/matplotlib/testing/decorators.py", line 35, in failer
    result = f(*args, **kwargs)
  File "/Users/Shared/Jenkins/Home/virtualenvs.nose.pil/python2.7-numpy1.6.1/lib/python2.7/site-packages/matplotlib/testing/decorators.py", line 126, in do_test
    '(RMS %(rms).3f)'%err)
ImageComparisonFailure: images not close: /Users/Shared/Jenkins/Home/jobs/matplotlib-mdboom-tests-osx-10.7-multiconfig/workspace/NV/1.6.1/PV/2.7/result_images/test_text/font_styles_svg.png vs. /Users/Shared/Jenkins/Home/jobs/matplotlib-mdboom-tests-osx-10.7-multiconfig/workspace/NV/1.6.1/PV/2.7/result_images/test_text/expected-font_styles_svg.png (RMS 39.686)

The images are here: http://db.tt/7jOnhnwN

It looks as though somehow the font styles don't work on my system when anti-aliasing is disabled?

Thomas Robitaille

For info, I am also getting two inkscape related errors, but these seem unrelated to this branch, so I opened a new issue at #787.

Thomas Robitaille

Sorry again for the deluge of comments, but I've tested this branch on a linux machine with a pretty old version of freetype (9.20.3), and there I get 9 failures. Log is here: http://pastebin.com/NNiQ9ZQX - images are here: http://db.tt/AwCCCIC5

Strangely, most of these issues look like hinting issues - maybe the way you disable hinting didn't work with that old a version of freetype?

Let me know if there are any tests I could run that would be helpful!

Michael Droettboom
Owner

@WeatherGod: yes, new baseline images need to be generated through the testing system -- the first time a test is run without the baseline images, the test still runs and generates output, and this output can be copied into the source tree as new baseline images. I've added a paragraph to the developer docs describing how to generate and add new baseline images to the system.
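
In other words, updating a baseline is roughly (paths here are illustrative):

import shutil

# copy the freshly generated output over the stale baseline
shutil.copy('result_images/test_text/antialiased.png',
            'lib/matplotlib/tests/baseline_images/test_text/antialiased.png')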

@astrofrog: I've added a test that includes antialiasing, which will be a KnownFailure if the freetype version doesn't match the one I have (2.4.6). I suppose we'll have to update it periodically as new freetype versions enter the mainstream.

@astrofrog: I will add an option to freetype_test for mono font rendering -- that's a lower priority than this work getting the tests in line for the release candidate, though.

@astrofrog: The font_styles test failures seem unrelated, since the styles are wrong in SVG and PDF as well (which would be unaffected by the antialiased flag). I'll create a new issue for that.

Michael Droettboom
Owner

@astrofrog: I'm not finding any API changes between how hinting is selected across those versions of freetype. I think this may be related to the changes in rasterizing we were seeing elsewhere -- the rounding to full pixels hides it somewhat, but not completely. Once I have freetype_test working with mono rendering we can confirm the same differences happen there, and it will be easier to tweak freetype API calls if there are any to tweak... (not that I've found any).

Thomas Robitaille

Just tested on a completely different linux machine, with an even older version of freetype (9.10.3) and I'm seeing similar failures to the linux box with freetype 9.20.3:

Log: http://pastebin.com/QkdKjeL7
Images: http://db.tt/SBwYIIVr

Note that this includes two font_style failures (and I tried removing the font cache).

Anyway, sorry for the information overload! Hopefully we can diagnose these remaining issues with your test program. This is still major progress given that we're seeing relatively few failures over a range of freetype versions.

Thomas Robitaille

I think there's a mixup of test image files - if you look at the last batch I sent, font_styles.png is actually the anti-aliasing test image!

Michael Droettboom
Owner

@astrofrog: I'm not sure I follow the last comment (admittedly, there's a lot going on).

http://db.tt/SBwYIIVr

has no font_styles.png at all.

http://db.tt/7jOnhnwN

has a font_styles.png, but it is neither antialiased nor styled (all the text is in the same font).

Thomas Robitaille

Please ignore the last comment regarding the file mixup - I'm not quite sure what happened there.

Thomas Robitaille

Actually I stand by my last comment - there is a mixup of test images, but for some reason there is an issue with the Dropbox file. I'll email you the results.

Benjamin Root
Collaborator

@mdboom, thanks! That addresses my major concern.

@mdboom, @astrofrog, you have both done tremendous work with this PR and deserve a million and one thanks! I will be signing off now for the next couple of weeks.

Have a happy bugfix release! Cheers!

Thomas Robitaille

@mdboom - I sent you the 9.10.3 and 9.20.3 results by email, let me know if you didn't get them. For 9.10.3, the font_styles.png file is definitely the wrong one (it comes from the antialiasing test).

Jens Hedegaard Nielsen

I see the problem with the font_style images actually being the anti-aliasing image as well. Adding a cleanup decorator to the anti-aliasing test seems to fix the problem, i.e.

from matplotlib.testing.decorators import image_comparison, knownfailureif, cleanup

...

@cleanup 
@image_comparison(baseline_images=['antialiased'], extensions=['png'])
...
Michael Droettboom
Owner

@jenshnielsen: Thanks for finding that. There seems to be some weird interaction between @knownfailureif and @image_comparison that prevents the cleanup from working (normally @image_comparison and @cleanup would be redundant).

@astrofrog: Since I can't actually reproduce what you and @jenshnielsen are seeing, can you confirm that this works now? (Ignoring the lack of styling that happens in #788).

Thomas Robitaille

This does get rid of the font_style failures on the linux boxes, and also solves the lack of failure for the antialiasing test.

Jens Hedegaard Nielsen

Yes, I see. I don't think adding @cleanup is the way to go: it skips the anti-aliased test for some reason (with a silent pass). It seems that part of the problem is that the KnownFailureDidNotFailTest(msg) error is raised in the anti-aliasing test, and this makes the test error out without cleaning up after itself. I haven't looked at the decorator in great detail, but I guess this is raised when the test passes even though the failure condition is true, i.e. the freetype version is wrong. By commenting out that line I can remove this issue, but then the knownfailure has no effect and the test always fails if I manually change the baseline image to something wrong.

For reference, I am trying this on an Ubuntu 11.10 box with freetype 2.4.4 and nose 1.0.0. I will try later on an Ubuntu 12.04 beta box.

John Hunter
Owner

Damn, @mdboom, you're committing faster than I can test and report. Gotta start over!

Thomas Robitaille

@mdboom - regarding the old freetype versions, could you add output of the bounding boxes in the no-hinting case, so we can see if there was some change in the bbox algorithm over time? Maybe you could write that to a file (instead of to stdout), and then we can easily check with a cksum whether all the bounding boxes are the same. I'm going to upload a repo that makes testing against different freetype versions easy.

Thomas Robitaille

@mdboom - actually, no rush, I'm already seeing differences in the mono output, so that's enough to work with for now.

John Hunter
Owner

I am seeing the following on a clean build and install, after flushing the font and tex caches, running on mdboom@b5f802a (opensuse, freetype 12.2.6):

======================================================================
FAIL: matplotlib.tests.test_mathtext.mathfont_cm_09_test.test
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/export/home/johnh/devlinux/lib64/python2.7/site-packages/matplotlib/testing/decorators.py", line 35, in failer
    result = f(*args, **kwargs)
  File "/export/home/johnh/devlinux/lib64/python2.7/site-packages/matplotlib/testing/decorators.py", line 126, in do_test
    '(RMS %(rms).3f)'%err)
ImageComparisonFailure: images not close: /export/home/johnh/tmp/mpl_test/result_images/test_mathtext/mathfont_cm_09.png vs. /export/home/johnh/tmp/mpl_test/result_images/test_mathtext/expected-mathfont_cm_09.png (RMS 61.425)

======================================================================
FAIL: matplotlib.tests.test_mathtext.mathfont_stix_14_test.test
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/export/home/johnh/devlinux/lib64/python2.7/site-packages/matplotlib/testing/decorators.py", line 35, in failer
    result = f(*args, **kwargs)
  File "/export/home/johnh/devlinux/lib64/python2.7/site-packages/matplotlib/testing/decorators.py", line 126, in do_test
    '(RMS %(rms).3f)'%err)
ImageComparisonFailure: images not close: /export/home/johnh/tmp/mpl_test/result_images/test_mathtext/mathfont_stix_14.png vs. /export/home/johnh/tmp/mpl_test/result_images/test_mathtext/expected-mathfont_stix_14.png (RMS 3378.111)

======================================================================
FAIL: matplotlib.tests.test_mathtext.mathfont_stix_17_test.test
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/export/home/johnh/devlinux/lib64/python2.7/site-packages/matplotlib/testing/decorators.py", line 35, in failer
    result = f(*args, **kwargs)
  File "/export/home/johnh/devlinux/lib64/python2.7/site-packages/matplotlib/testing/decorators.py", line 126, in do_test
    '(RMS %(rms).3f)'%err)
ImageComparisonFailure: images not close: /export/home/johnh/tmp/mpl_test/result_images/test_mathtext/mathfont_stix_17.png vs. /export/home/johnh/tmp/mpl_test/result_images/test_mathtext/expected-mathfont_stix_17.png (RMS 60.877)

======================================================================
FAIL: matplotlib.tests.test_mathtext.mathfont_stix_56_test.test
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/export/home/johnh/devlinux/lib64/python2.7/site-packages/matplotlib/testing/decorators.py", line 35, in failer
    result = f(*args, **kwargs)
  File "/export/home/johnh/devlinux/lib64/python2.7/site-packages/matplotlib/testing/decorators.py", line 126, in do_test
    '(RMS %(rms).3f)'%err)
ImageComparisonFailure: images not close: /export/home/johnh/tmp/mpl_test/result_images/test_mathtext/mathfont_stix_56.png vs. /export/home/johnh/tmp/mpl_test/result_images/test_mathtext/expected-mathfont_stix_56.png (RMS 3385.501)

======================================================================
FAIL: matplotlib.tests.test_mathtext.mathfont_stixsans_19_test.test
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/export/home/johnh/devlinux/lib64/python2.7/site-packages/matplotlib/testing/decorators.py", line 35, in failer
    result = f(*args, **kwargs)
  File "/export/home/johnh/devlinux/lib64/python2.7/site-packages/matplotlib/testing/decorators.py", line 126, in do_test
    '(RMS %(rms).3f)'%err)
ImageComparisonFailure: images not close: /export/home/johnh/tmp/mpl_test/result_images/test_mathtext/mathfont_stixsans_19.png vs. /export/home/johnh/tmp/mpl_test/result_images/test_mathtext/expected-mathfont_stixsans_19.png (RMS 3405.471)

======================================================================
FAIL: matplotlib.tests.test_mathtext.mathfont_stixsans_56_test.test
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/export/home/johnh/devlinux/lib64/python2.7/site-packages/matplotlib/testing/decorators.py", line 35, in failer
    result = f(*args, **kwargs)
  File "/export/home/johnh/devlinux/lib64/python2.7/site-packages/matplotlib/testing/decorators.py", line 126, in do_test
    '(RMS %(rms).3f)'%err)
ImageComparisonFailure: images not close: /export/home/johnh/tmp/mpl_test/result_images/test_mathtext/mathfont_stixsans_56.png vs. /export/home/johnh/tmp/mpl_test/result_images/test_mathtext/expected-mathfont_stixsans_56.png (RMS 3385.501)


Ran 994 tests in 307.321s

Michael Droettboom
Owner

@jenshnielsen, @astrofrog: I have a fix in that I think resolves the test_antialiasing test nonsense. Can you please report back whether it works for you?

@jdh2358: Sorry, I'll slow down ;) Can you put the broken images up somewhere? I'd like to see in what way the tests are failing.

Thomas Robitaille

@jdh2358 @mdboom - those are the same as the ones I've been seeing on my linux boxes

Thomas Robitaille

Ok, this might help shed some light on freetype version dependencies:

https://github.com/astrofrog/freetype_version_testing

(you need to pull in the submodule after cloning)

You can try it out by running:

python install_all.py
python test_all.py

(don't worry, it will only install within the directory)

Then, you can look at the output files in output/. I've included a list of MD5 hashes on my mac for the mono and grey images. You can see that for the mono ones, there was one change at 2.4.5:

https://github.com/astrofrog/freetype_version_testing/blob/master/output/mono_osx_10.7

For the grey ones, there were more variations recently, so monochrome does seem a safer bet. Feel free to run it on your systems to see if you reach the same conclusion (a change at 2.4.5). I think 2.4.5 is more recent than @jdh2358's and my versions, but older than @mdboom's, hence the issues.
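(For reference, a minimal sketch of how such per-version hashes can be produced with Python's hashlib; the output/<platform>/ layout is an assumption based on the repository above:)

import glob
import hashlib
import os

# Hash every rendered PNG so outputs can be compared across freetype builds.
for path in sorted(glob.glob(os.path.join('output', '*', '*.png'))):
    with open(path, 'rb') as f:
        digest = hashlib.md5(f.read()).hexdigest()
    print('%s  %s' % (digest, path))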

Jens Hedegaard Nielsen

I see the same tests fail as @jdh2358 on Ubuntu 11.10 with freetype 2.4.4. They all
pass on Ubuntu 12.04 with freetype 2.4.8.
In addition I see some test failures in test_delaunay using Ubuntu 11.10 but not 12.04; these are also font related.
(Is there any reason why test_delaunay and test_legend are not added to the default tests in default_test_modules in lib/matplotlib/__init__.py?)
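(For illustration, a sketch of what adding them might look like; the surrounding entries are abbreviated:)

# lib/matplotlib/__init__.py -- sketch, existing entries abbreviated
default_test_modules = [
    'matplotlib.tests.test_axes',
    # ...
    'matplotlib.tests.test_delaunay',  # proposed addition
    'matplotlib.tests.test_legend',    # proposed addition
]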

Thomas Robitaille

Ok, I see the same results for the freetype_version_testing tests on a linux machine (i.e. a change in monochrome output at 2.4.5). Makes sense, then, that @jenshnielsen sees failures with 2.4.4 but not with 2.4.8.

Michael Droettboom
Owner

@astrofrog: That's great! I'm building the hundreds of freetype installs now...

In the meantime, I'm working on getting @knownfailureif working with @image_comparison so that we can mark this handful of mathtext tests as knownfail. Assuming #788 also gets resolved, I think we're in pretty good shape.

Jens Hedegaard Nielsen

It seems to work better. However, if the test passes even though the freetype version is different
(i.e. the known-failure condition is true), an error is raised.

Thomas Robitaille

I found the change in freetype that caused this:

2011-01-15  Werner Lemberg  <wl@gnu.org>

    [raster] Make bbox handling the same as with Microsoft's rasterizer.

    Right before B/W rasterizing, the bbox gets simply rounded to
    integers.  This fixes, for example, glyph `three' in font `Helvetica
    Neue LT Com 65 Medium' at 11ppem.

    Thanks to Greg Hitchcock who explained this behaviour.

    * src/raster/ftrend1.c (ft_raster1_render): Implement it.
Thomas Robitaille

And here is the code that was changed (in src/raster/ftrend1.c in freetype-2.4.5)

cbox.xMin = FT_PIX_FLOOR( cbox.xMin );
cbox.yMin = FT_PIX_FLOOR( cbox.yMin );
cbox.xMax = FT_PIX_CEIL( cbox.xMax );
cbox.yMax = FT_PIX_CEIL( cbox.yMax );

to:

/* undocumented but confirmed: bbox values get rounded */
#if 1
cbox.xMin = FT_PIX_ROUND( cbox.xMin );
cbox.yMin = FT_PIX_ROUND( cbox.yMin );
cbox.xMax = FT_PIX_ROUND( cbox.xMax );
cbox.yMax = FT_PIX_ROUND( cbox.yMax );
#else
cbox.xMin = FT_PIX_FLOOR( cbox.xMin );
cbox.yMin = FT_PIX_FLOOR( cbox.yMin );
cbox.xMax = FT_PIX_CEIL( cbox.xMax );
cbox.yMax = FT_PIX_CEIL( cbox.yMax );
#endif

(just for info). Now the question is whether we can figure out a workaround, as it looks tricky...
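(To see why this shifts output by a pixel: FreeType coordinates are 26.6 fixed point, 64 units per pixel, and the FT_PIX_* macros, assuming their standard definitions in freetype's internal ftobjs.h, behave like this Python sketch:)

# 26.6 fixed point: 64 units == 1 pixel. Python mirrors of the FT_PIX_*
# macros (definitions assumed from freetype's ftobjs.h).
def FT_PIX_FLOOR(x): return x & ~63
def FT_PIX_CEIL(x):  return FT_PIX_FLOOR(x + 63)
def FT_PIX_ROUND(x): return FT_PIX_FLOOR(x + 32)

x_min = 100  # 1.5625 pixels
print(FT_PIX_FLOOR(x_min) // 64)  # 1 -- pre-2.4.5 behaviour for xMin
print(FT_PIX_ROUND(x_min) // 64)  # 2 -- 2.4.5+: the bbox edge moves a pixel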

Jens Hedegaard Nielsen

Just to clarify, I see all tests passing with 2.4.8.

Thomas Robitaille

Just thought I'd mention an idea I had for dealing with the different freetype versions if we can't figure out another solution:

Since the changes are often very small, why not save the reference images for a specific freetype version, say 2.4.6 (13.1.7), as we have now, and also save difference images, which take almost no space (~1K each), for freetype versions where the results differ. If we stick with non-antialiased tests, we would basically just store one set of difference images for pre-2.4.5 versions, which takes very little space. Then, we can have a decorator that uses the diffs to produce a new reference image, if needed, before the comparison takes place. Note that this could also be used with the antialiased tests, but would require more diffs, as there were more versions with changes.

This would basically make the tests pass with all freetype versions!
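(A minimal sketch of that decorator's core step, assuming numpy/PIL and a hypothetical on-disk layout where diffs are stored as signed .npy arrays:)

import numpy as np
from PIL import Image

def rebuild_baseline(baseline_png, diff_npy, out_png):
    # Apply a stored signed difference to the canonical baseline to
    # recover the expected image for another freetype version.
    # (Sketch only -- the int16 diff encoding is an assumption.)
    base = np.asarray(Image.open(baseline_png).convert('L'), dtype=np.int16)
    diff = np.load(diff_npy)  # hypothetical: int16 array, same shape
    rebuilt = np.clip(base + diff, 0, 255).astype(np.uint8)
    Image.fromarray(rebuilt).save(out_png)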

Michael Droettboom
Owner

Now it's possible to specify a range of freetype versions that we expect the image comparator to work on. If a test fails outside of that range, it's a knownfail; if it passes anyway outside of that range, it passes. I've marked all of the mathtext tests with the range of freetype versions that matches mine (2.4.5 to 2.4.9) and the antialiasing test with the narrower 2.4.5 to 2.4.6. This should hopefully get all tests passing everywhere. Fingers crossed!
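(For illustration, marking a test presumably looks something like this sketch; the freetype_version keyword name is assumed from this PR's changes to decorators.py:)

import matplotlib.pyplot as plt
from matplotlib.testing.decorators import image_comparison

# Knownfail outside the freetype range that matches the checked-in baselines.
@image_comparison(baseline_images=['mathfont_stix_17'],
                  freetype_version=('2.4.5', '2.4.9'))
def test_mathfont_stix_17():
    fig = plt.figure(figsize=(5, 1))
    fig.text(0.5, 0.5, r'$\alpha + \beta$', fontsize=20)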

Michael Droettboom
Owner

@astrofrog: That's a good idea (including diffs against previous freetype versions). Maybe we should open a new issue for that, as it's a significant chunk of infrastructure that we should perhaps wrap into the other improvements I'd like to make to the testing framework (see #778). I think in the meantime it's fine to have a few known fails.

Michael Droettboom (mdboom) committed: Fix failing tests. The reasons for the failures were in the following categories:

  - changes in the snapping algorithm that made the axes off by one pixel.
  - changes in text alignment.

All the text in the baseline images is non-antialiased.  This seems to be more robust to differences across freetype versions.  This involved the addition of a new rcParam 'text.antialiased', used only by the Agg backend.

Additionally, some freetype-related differences still cause failures.  For those, we mark the tests as known fail if the user has a version of freetype outside the versions that we expect to work.  These versions were determined using T. Robitaille's freetype differences testing tool here:

   https://github.com/astrofrog/freetype_version_testing

Fixes a bug in the SVG backend's Gouraud shading.

Force the use of the Agg backend for testing, and restore the old backend afterward.
3b0c9a7
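(For context, a minimal sketch of rendering non-antialiased baseline text via the new rcParam; the name 'text.antialiased' is assumed from the commit above, typo corrected:)

import matplotlib
matplotlib.use('Agg')  # the rcParam is honored only by the Agg backend

import matplotlib.pyplot as plt

matplotlib.rcParams['text.antialiased'] = False  # monochrome glyph rendering
fig = plt.figure()
fig.text(0.1, 0.5, 'baseline text rendered without antialiasing')
fig.savefig('baseline.png')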
John Hunter
Owner

It looks like we have a start/end range for freetype version. In general wouldn't we want the end range to be None and to assume it works for future versions? Otherwise as new freetypes come out, we are going to start accumulating more knownfails. Am I seeing this right?

John Hunter
Owner

Holy smokes, we have a winner. Heroic work @mdboom and @astrofrog:

python -c 'import matplotlib as m; m.test()'
----------------------------------------------------------------------
Ran 994 tests in 302.548s

OK (KNOWNFAIL=8)
Michael Droettboom
Owner

If those future versions (e.g. 2.4.10) don't change anything, the test will run, the comparison will show no changes, and the result will be a pass. If the hypothetical 2.4.10 changes something, the test will return known fail. I think that's how we want things to work: we can't look into the future, and as long as the future doesn't change, we won't accumulate known failures.

Given @astrofrog's work to see when things have changed, we can expect 2.4.5 - 2.4.9 to work interchangeably in non-antialiased mode. In antialiased mode, there are more frequent changes; since I've been generating baseline images with 2.4.6, only 2.4.5 and 2.4.6 will work.
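(Restating that policy as a sketch -- a hypothetical helper, not the actual decorators.py code:)

from distutils.version import LooseVersion

def classify(images_match, ft_version, lo='2.4.5', hi='2.4.9'):
    # Pass whenever the images match; only fail hard inside the known range.
    in_range = (LooseVersion(lo) <= LooseVersion(ft_version)
                <= LooseVersion(hi))
    if images_match:
        return 'PASS'  # e.g. a future 2.4.10 that changes nothing
    return 'FAIL' if in_range else 'KNOWNFAIL'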

Michael Droettboom
Owner

BTW -- assuming @astrofrog or @jenshnielsen don't find anything new with this, I consider this ready to merge: I've gone ahead and squashed all the commits together so the main repo won't grow insanely in size.

Jens Hedegaard Nielsen

That works excellently. In addition, I see the following tests failing with freetype 2.4.4 but not with 2.4.8. (They are all font related, mainly the rendering of 'P' and '1'.)

As I pointed out before, these tests are skipped in the default test suite.

======================================================================
FAIL: matplotlib.tests.test_delaunay.test_cliff.test
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/pymodules/python2.7/nose/case.py", line 187, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/matplotlib/testing/decorators.py", line 36, in failer
    result = f(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/matplotlib/testing/decorators.py", line 140, in do_test
    '(RMS %(rms).3f)'%err)
ImageComparisonFailure: images not close: /tmp/result_images/test_delaunay/cliff-lin-con.png vs. /tmp/result_images/test_delaunay/expected-cliff-lin-con.png (RMS 857.983)

======================================================================
FAIL: matplotlib.tests.test_delaunay.test_cosine_peak.test
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/pymodules/python2.7/nose/case.py", line 187, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/matplotlib/testing/decorators.py", line 36, in failer
    result = f(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/matplotlib/testing/decorators.py", line 140, in do_test
    '(RMS %(rms).3f)'%err)
ImageComparisonFailure: images not close: /tmp/result_images/test_delaunay/cosine_peak-nn-img.png vs. /tmp/result_images/test_delaunay/expected-cosine_peak-nn-img.png (RMS 15490.165)

======================================================================
FAIL: matplotlib.tests.test_delaunay.test_cosine_peak.test
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/pymodules/python2.7/nose/case.py", line 187, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/matplotlib/testing/decorators.py", line 36, in failer
    result = f(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/matplotlib/testing/decorators.py", line 140, in do_test
    '(RMS %(rms).3f)'%err)
ImageComparisonFailure: images not close: /tmp/result_images/test_delaunay/cosine_peak-nn-con.png vs. /tmp/result_images/test_delaunay/expected-cosine_peak-nn-con.png (RMS 677.838)

======================================================================
FAIL: matplotlib.tests.test_delaunay.test_cosine_peak.test
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/pymodules/python2.7/nose/case.py", line 187, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/matplotlib/testing/decorators.py", line 36, in failer
    result = f(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/matplotlib/testing/decorators.py", line 140, in do_test
    '(RMS %(rms).3f)'%err)
ImageComparisonFailure: images not close: /tmp/result_images/test_delaunay/cosine_peak-lin-con.png vs. /tmp/result_images/test_delaunay/expected-cosine_peak-lin-con.png (RMS 781.796)

======================================================================
FAIL: matplotlib.tests.test_delaunay.test_gentle.test
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/pymodules/python2.7/nose/case.py", line 187, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/matplotlib/testing/decorators.py", line 36, in failer
    result = f(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/matplotlib/testing/decorators.py", line 140, in do_test
    '(RMS %(rms).3f)'%err)
ImageComparisonFailure: images not close: /tmp/result_images/test_delaunay/gentle-nn-con.png vs. /tmp/result_images/test_delaunay/expected-gentle-nn-con.png (RMS 372.553)

----------------------------------------------------------------------
Thomas Robitaille

@mdboom - I agree that the diff method should be done separately from this PR. If we did this for all comparison images (not just the ones currently failing), one could consider lowering the RMS failure threshold, allowing us to pick out more subtle, non-font-related issues!

Just running the tests one last time to test this PR.

Jens Hedegaard Nielsen

I'm not sure that the mathtext differences with freetype 2.4.4 can simply be regarded as unimportant. Consider stixsans_56: in the result I get using freetype 2.4.4, the 'i' is missing among the lower-case letters. http://ubuntuone.com/44BTlRSBwjN68k0PBhdiuR
It is there in the baseline images and when using 2.4.8.

I.e. I am not sure it is a good idea just to generate new images for 2.4.4 and lower and do a diff.

Thomas Robitaille

Just a quick note - I'm still seeing the issue @jdh2358 saw originally when running the tests remotely:

RuntimeError: could not create GdkCursor object

I have to manually add a call to set the backend in tests.py.
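(The usual workaround, for anyone hitting this in the meantime, is to force the backend before pyplot is first imported:)

import matplotlib
matplotlib.use('Agg')  # must run before the first import of matplotlib.pyplot
matplotlib.test()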

Thomas Robitaille

I'm still getting 4 failures with 9.20.3:

======================================================================
FAIL: matplotlib.tests.test_axes.test_arc_ellipse.test
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/robitaille/usr/lib64/python2.6/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/robitaille/usr/lib64/python2.6/site-packages/matplotlib/testing/decorators.py", line 36, in failer
    result = f(*args, **kwargs)
  File "/home/robitaille/usr/lib64/python2.6/site-packages/matplotlib/testing/decorators.py", line 140, in do_test
    '(RMS %(rms).3f)'%err)
ImageComparisonFailure: images not close: /home/robitaille/matplotlib/result_images/test_axes/arc_ellipse.png vs. /home/robitaille/matplotlib/result_images/test_axes/expected-arc_ellipse.png (RMS 243.382)

======================================================================
FAIL: matplotlib.tests.test_axes.test_markevery_line.test
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/robitaille/usr/lib64/python2.6/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/robitaille/usr/lib64/python2.6/site-packages/matplotlib/testing/decorators.py", line 36, in failer
    result = f(*args, **kwargs)
  File "/home/robitaille/usr/lib64/python2.6/site-packages/matplotlib/testing/decorators.py", line 140, in do_test
    '(RMS %(rms).3f)'%err)
ImageComparisonFailure: images not close: /home/robitaille/matplotlib/result_images/test_axes/markevery_line.png vs. /home/robitaille/matplotlib/result_images/test_axes/expected-markevery_line.png (RMS 23410.476)

======================================================================
FAIL: matplotlib.tests.test_axes.test_polar_units.test
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/robitaille/usr/lib64/python2.6/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/robitaille/usr/lib64/python2.6/site-packages/matplotlib/testing/decorators.py", line 36, in failer
    result = f(*args, **kwargs)
  File "/home/robitaille/usr/lib64/python2.6/site-packages/matplotlib/testing/decorators.py", line 140, in do_test
    '(RMS %(rms).3f)'%err)
ImageComparisonFailure: images not close: /home/robitaille/matplotlib/result_images/test_axes/polar_units_2.png vs. /home/robitaille/matplotlib/result_images/test_axes/expected-polar_units_2.png (RMS 33.764)

======================================================================
FAIL: matplotlib.tests.test_tightlayout.test_tight_layout4.test
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/robitaille/usr/lib64/python2.6/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/robitaille/usr/lib64/python2.6/site-packages/matplotlib/testing/decorators.py", line 36, in failer
    result = f(*args, **kwargs)
  File "/home/robitaille/usr/lib64/python2.6/site-packages/matplotlib/testing/decorators.py", line 140, in do_test
    '(RMS %(rms).3f)'%err)
ImageComparisonFailure: images not close: /home/robitaille/matplotlib/result_images/test_tightlayout/tight_layout4.png vs. /home/robitaille/matplotlib/result_images/test_tightlayout/expected-tight_layout4.png (RMS 10.100)

----------------------------------------------------------------------

(these are not new, they were there before)

All of these are font-related, so they can probably be marked as known fails. Some of them concern only specific letters/characters, so I think it will be worth investigating this a bit more in the future with my testing script, to see if there were other changes in freetype that matter.

Thomas Robitaille

And I see the exact same four failures with 9.10.3 on linux

Jens Hedegaard Nielsen

Apart from the 4 delaunay tests that fail on Ubuntu 11.10, I have all tests passing on both Ubuntu 11.10 and 12.04.

Thomas Robitaille

Ok, so I did some further tests with freetype_test, and it seems some other characters do change in the monochrome output at other versions of freetype. For example, the character 'y' (which I think is causing some of the above failures) changes at 2.3.0, 2.3.9, 2.4.4, and 2.4.5. This is why we are still not getting consistent results across all versions.

Thomas Robitaille

Anyway, it seems like the current pull request already provides a significant improvement over before, so maybe we should just merge it (after marking the above tests as known fails) for this release candidate, and then do a more exhaustive test of freetype, varying the characters, to see where all the transitions are and whether we can fix any of them?

John Hunter
Owner

This is the last issue to close before I burn the 1.1.1 release candidate and put a tarball up for building and testing, so Michael, fire at will when you are ready to merge this sucker. If you want to wait another day and try stamping out the remaining brush fires, I have no problem delaying.

Michael Droettboom
Owner

I can hopefully polish this a little more tonight... save the bigger ideas for another time.

Thomas Robitaille

@mdboom - should we start a new issue to keep track of more findings? I've done some more research into which freetype versions are causing issues, but I don't want to continue flooding this PR with information at this stage.

Thomas Robitaille

I won't go into much detail here, but basically 2.4.5 is the freetype version where the most characters changed. Other versions where things change are 2.3.0, 2.3.10, 2.4.0, 2.4.4, and 2.4.5. But the bottom line of my testing is that things are stable from 2.4.5 onwards (for monochromatic font rendering), which is the most important for now.

Once there is a new issue to keep track of future progress on this topic, I will post more detailed findings, and we can then see if any of the changes can be dealt with in the calls to the freetype library.

Thomas Robitaille (astrofrog) referenced this pull request: Make tests faster #778 (merged)

Thomas Robitaille

Final comment from me: on MacOS 10.7, all the tests pass (with freetype 2.4.9), apart from the failures in #788, but I think those must be Jenkins-specific.

Michael Droettboom
Owner

@jenshnielsen: That missing 'i' is troubling, but I wonder whether it's due to a mono-rendering rounding error or to the character not being read at all. Does the 'i' appear in the corresponding SVG and/or PDF in your output?

@jenshnielsen: How are the delaunay tests failing? Font-only or something else?

@astrofrog: I've marked those 4 failing tests as needing freetype versions 2.4.5 through 2.4.9. That should resolve this for now and (barring @jenshnielsen's issues) should wrap up this PR.

Thomas Robitaille

All right, now everything seems fine!

Linux with freetype 2.3.9:

OK (KNOWNFAIL=11)

Linux with freetype 2.2.1:

FAILED (KNOWNFAIL=273, errors=1)

but the one error is just a ghostscript error to do with pcolormesh, which might simply be due to the gs version.

MacOS 10.7 with freetype 2.4.9:

FAILED (KNOWNFAIL=2, errors=2)

but these are just the inkscape errors in #787 (I think that I managed to fix the font_style errors from #788 - more later in that issue)

So this PR looks good to me!

Michael Droettboom
Owner

I missed earlier that @jenshnielsen pointed out that test_delaunay and test_legend are not part of the standard set of tests. I usually test directly with nose at the command line (for various reasons), so I didn't notice. How do those test sets fare font-wise? @astrofrog: Would you mind testing them with one of your ancient freetypes?

Michael Droettboom
Owner

I'm going to mark the 4 delaunay tests @jenshnielsen reported as known fail for old freetypes.

Thomas Robitaille

@mdboom - how do I activate these tests?

Michael Droettboom
Owner

@astrofrog: They should be activated by commit f22547c

Michael Droettboom
Owner

The problem running tests remotely should now also be fixed.

Thomas Robitaille

Linux with freetype 2.2.1:

FAILED (KNOWNFAIL=280, errors=1)

(no inkscape, and 1 error due to a ghostscript error, which does seem like a genuine error, but I don't think it's related to this pull request, so will open a new issue for that if it persists)

Linux with freetype 2.3.9:

OK (KNOWNFAIL=16)

MacOS 10.7:

FAILED (KNOWNFAIL=2, errors=3)

(errors are inkscape as usual)

Thomas Robitaille

Looks ready to merge!

Michael Droettboom (mdboom) merged commit 5846304.
Thomas Robitaille

(forgot to mention - the tests do work remotely now!)

Paul Ivanov
Collaborator

I just want to reiterate: EPIC EFFORT @mdboom & @astrofrog!!! job well done, I can't believe all these tests are passing now!

Thomas Robitaille

For info, I've opened a new ticket (#792) to keep track of progress with the freetype issue moving forward (though no immediate action is required - just thought I'd let you know in case you want to sign up for notifications on that issue).

Michael Droettboom (mdboom) deleted the branch.
Commits on Mar 22, 2012
  1. Michael Droettboom

     Fix failing tests (full commit message quoted above)

     mdboom authored
  2. Michael Droettboom
  3. Michael Droettboom
  4. Michael Droettboom
  5. Michael Droettboom

    Make tests work remotely

    mdboom authored