
JFYI: matplotlib image differ tests #1

jankatins opened this issue Apr 2, 2016 · 4 comments

@jankatins jankatins commented Apr 2, 2016

This is mainly JFYI because it came up on Twitter: matplotlib has a similar system in place for unit testing its images. It is also used in downstream packages like seaborn. The system is based on comparing raster images: it compares the rasterized output of the svg, tiff and ps backends to a baseline png which is included in the repo. Rasterization is done with Ghostscript. I suspect that the rasterize step is there because SVGs can produce the same visual result but have different internal representations (e.g. when plotting a point and a line, AFAIK the XML can contain point -> line or line -> point).
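That ordering problem can be illustrated with a small sketch (hypothetical minimal SVG documents, not matplotlib output): a textual diff flags the two files as different even though they draw the same picture, while a structural comparison, like the rasterize-and-compare step, does not.

```python
# Two SVG documents that render identically: the same circle and line,
# just emitted in a different element order (hypothetical example).
svg_a = """<svg xmlns="http://www.w3.org/2000/svg">
<circle cx="10" cy="10" r="2"/>
<line x1="0" y1="0" x2="20" y2="20"/>
</svg>"""

svg_b = """<svg xmlns="http://www.w3.org/2000/svg">
<line x1="0" y1="0" x2="20" y2="20"/>
<circle cx="10" cy="10" r="2"/>
</svg>"""

# A byte-for-byte comparison flags them as different...
assert svg_a != svg_b

# ...even though the set of drawn elements is identical, which is why
# comparing rasterized output is more robust than diffing the XML text.
import xml.etree.ElementTree as ET

def element_set(svg_text):
    root = ET.fromstring(svg_text)
    return {(child.tag, tuple(sorted(child.attrib.items()))) for child in root}

assert element_set(svg_a) == element_set(svg_b)
```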

The workflow is:

  • write a testcase with a name in a testfile
  • run once -> it fails due to missing baseline images and produces a png image "result_images/testfile/name.png"
  • compare that image with your expected image
  • if it looks fine: copy the output to the baseline directory
  • run again -> the baseline image is found and the plot is compared by drawing it on three backends, saving the results (png+ps+svg), rasterizing svg+ps and comparing the rasterized images to the baseline image.
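The comparison at the heart of this workflow can be sketched in pure Python. This is loosely modeled on the contract of matplotlib's `compare_images` (which AFAIK works on image files and returns `None` on success), but operates on flat pixel sequences for illustration; the RMS metric is an assumption about the implementation, not a copy of it.

```python
import math

def compare_images(expected, actual, tol=0.0):
    """Return None if the images match within tol, else the RMS error.

    `expected` and `actual` are flat sequences of pixel values (0-255).
    A real implementation would load the rasterized files first; this
    sketch only shows the tolerance-based comparison step.
    """
    if len(expected) != len(actual):
        raise ValueError("image sizes differ")
    rms = math.sqrt(
        sum((e - a) ** 2 for e, a in zip(expected, actual)) / len(expected)
    )
    return None if rms <= tol else rms
```

With `tol=0` any pixel difference fails the test, which is the strict regime the matplotlib developers aimed for; a non-zero `tol` absorbs platform rendering differences at the cost of potentially missing small shifts.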

From my experience with this:

  • The tests should try very hard to make the available installed fonts the same on all test systems (e.g. Bitstream Vera or similar, which can be expected to be available on dev machines and on travis/...; remove any fallbacks in the config; matplotlib actually ships a font embedded in the package to have a reliable default)
  • The outputs are not always completely the same across systems (e.g. different antialiasing strategies on Linux/Windows) -> matplotlib has a tolerance parameter for the comparison; it recently tried very hard to drive all tolerances to zero and was almost successful (but this regressed again when automatic Windows tests were introduced).
  • mpl usually removes any text from a plot before it is drawn (via a parameter to the comparison function), so different text rendering of axis labels on different systems is not a source of failures...
  • If the tolerance is not zero, it's probably best to build plots which look ugly, e.g. by increasing the size of printed dots, because small dots can end up in totally different positions than expected without this being registered, due to the tolerance...
  • To reproduce errors on travis/appveyor it's nice if the code spits out a directory containing the images (+ baseline + diff + an html file with side-by-side placements of the images for visual inspection), so this can be uploaded (travis) or saved as an artifact (appveyor)
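The last point, a side-by-side HTML report, is easy to generate with the standard library alone. This is a hypothetical helper (the function name and layout are my own invention, not matplotlib's report format):

```python
from pathlib import Path

def write_failure_report(out_dir, cases):
    """Write an index.html placing baseline, result and diff side by side.

    `cases` is a list of (test_name, baseline_path, result_path, diff_path)
    tuples; the paths are used verbatim as <img> sources, so relative paths
    should be relative to `out_dir`. Returns the path of the written file.
    """
    rows = []
    for name, baseline, result, diff in cases:
        rows.append(
            f"<tr><td>{name}</td>"
            f'<td><img src="{baseline}"></td>'
            f'<td><img src="{result}"></td>'
            f'<td><img src="{diff}"></td></tr>'
        )
    html = (
        "<html><body><table border='1'>"
        "<tr><th>test</th><th>baseline</th><th>result</th><th>diff</th></tr>"
        + "".join(rows)
        + "</table></body></html>"
    )
    out = Path(out_dir) / "index.html"
    out.write_text(html)
    return out
```

Uploading the whole directory as a CI artifact then lets you inspect failures visually without reproducing the build locally.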

A test looks like this:

import matplotlib.pyplot as plt
from matplotlib.testing.decorators import image_comparison

@image_comparison(baseline_images=['log_scales'], remove_text=True)
def test_log_scales():
    # Draw an empty subplot with log/symlog axes; the decorator saves the
    # figure in the configured formats and compares against the baseline.
    ax = plt.subplot(122, yscale='log', xscale='symlog')


-> this tests all three image formats (no extensions=['png'] argument), has a tolerance of 0 (no tol=x) and removes the text. baseline_images is a list because you can have multiple plots in one test (which is IMO not a nice feature...).

The main part is here: (mpl is licensed under BSD)

CC: @hrbrmstr because twitter... :-)


@hrbrmstr hrbrmstr commented Apr 3, 2016

Nice. Man, I wish there were some other way on both Python and R to avoid the legacy Linux font libs (i.e. a nice, modern, cross-platform font lib that supports OTF would be epic).


@lionel- lionel- commented Apr 3, 2016

Thanks for your insights Jan.

My main goal with the initial release of vdiffr is to offer a convenient UI for writing visual tests with testthat and managing failed cases with a workflow based on a Shiny app.


I chose to compare SVG files mainly for convenience. As good as svglite is, it does not offer a completely accurate rendition of R plots. But in most cases, complete accuracy is not necessary for the purpose of testing regressions. I wrote vdiffr with ggplot2 extensions in mind, which are more oriented towards data exploration than creating graphics for publication. The advantage of SVG is that I don't have to deal with tolerance.

It's certainly possible to add backends though. I like how you apply different testing strategies in one go.


@jimhester jimhester commented Oct 3, 2016

@janschulz Winston's vtest uses an ImageMagick compare of raster images with a tolerance threshold, which seems to be more what you had in mind. See its usage in ggplot2.


@clauswilke clauswilke commented Aug 19, 2018

This is an old issue, but since it's still open I'll add my two cents: I have found the comparison of SVGs extremely valuable. The one thing I can do with SVGs that I can't do with raster images is diff the new image against the old one and hunt down exactly what has changed. I do this regularly, in particular when I don't see a difference visually but vdiffr tells me the images aren't the same. It helps me understand why vdiffr thinks the images are different and what change in the code caused those differences. With raster images, you're mostly flying blind.

Example: this is a case where the visual tests failed because changes in the calculation of axis tick locations resulted in slightly different locations for the ticks and labels.
