After customizing a number of parameters in my matplotlibrc file, I was able to generate figures as desired with pyplot.show(). However, using pyplot.savefig() with the same parameters resulted in images with thicker lines and bigger fonts, even when the dpi was the same for the default figure and the saved image.
This issue was described by another user here: http://stackoverflow.com/questions/7906365/matplotlib-savefig-plots-different-from-show/9814313
I was able to find a solution for myself as described here: http://stackoverflow.com/questions/7906365/matplotlib-savefig-plots-different-from-show/9814313#9814313
But, it would be really nice if a fix were built in for all the rendering backends, or, alternately, there was at least an easy way to scale paths and fonts so that savefig() and show() resulted in similar images.
I can't reproduce the problem if the dpi's are the same -- I get identical images. Try this:
from matplotlib.pyplot import *
If you see differences, can you please post both the test.png and a screenshot of the window?
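The comparison being suggested can be sketched as follows. This is a hypothetical reconstruction, not the exact snippet from the original comment; the `test.png` filename is taken from the request above, and forcing the Agg backend is an assumption made so the script runs headlessly.

```python
# Hypothetical repro: draw a figure, save it, and compare the saved png
# against what show() displays, keeping the dpi identical in both paths.
import matplotlib
matplotlib.use("Agg")  # assumption: force a known backend for the saved image
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3])
fig.savefig("test.png", dpi=fig.dpi)  # save at the same dpi the figure uses
```

With the dpi matched like this, the saved file and the on-screen window should be pixel-for-pixel identical on the Agg backends.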
Which backend are you using? I suspect it's one of the non-Agg ones; otherwise the changes you propose would affect both the screen and file output.
Interesting. That does produce the same image for me. So, I was able to track down the source of the problem, which seems to be that figure.dpi is set to a non-default value in my matplotlibrc file.
So, if I create a matplotlibrc file from the one posted here: http://matplotlib.sourceforge.net/users/customizing.html
And at line 280 set figure.dpi = 160, then the resultant savefig() and show() outputs are different. You can see the difference here: http://i.imgur.com/Ap8NX.png , with the saved image on top of the show() image.
That's promising, because switching that setting back is likely an easier fix than the one I implemented. Is that desired behavior, though? Why would that happen?
To either @karmel or @mdboom : I think it would be helpful at this point to clarify what each of you thinks the action for this issue now is.
@karmel is pointing out a genuine bug that I can reproduce with the macosx backend, but not with the *agg backends, which is why saving to a png works.
import matplotlib
matplotlib.rcParams['figure.dpi'] = 240
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, figsize=(3, 2))  # figure 2 inches high
ax.plot([1, 2, 3], lw=72)  # 72-point line, i.e. 1 inch wide
This should draw a diagonal line of width half the height of the whole figure. With agg it does; with macosx it does not.
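To spell out the arithmetic behind the half-height claim (pure unit conversion, no matplotlib needed):

```python
# A 72-point line on a 2-inch-high figure. Points convert to pixels as
# pixels = points * dpi / 72, so the ratio is independent of dpi.
dpi = 240
figure_height_in = 2                       # figsize=(3, 2) -> 2 inches high
line_width_px = 72 * dpi / 72              # 72 pt = 1 inch = 240 px at 240 dpi
figure_height_px = figure_height_in * dpi  # 2 in * 240 dpi = 480 px
print(line_width_px / figure_height_px)    # -> 0.5, half the figure height
```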
Can anyone on a mac look into why this is happening?
I wonder if this is related to another bug I encountered with a friend who uses a Mac. In that case, the graphics context was handled differently in the macosx and the agg backends. Essentially, the color of the hatches was being handled properly in one, but not the other (I forget which).
Out of interest, are you sure lw=72 is correct, and not 72.27? I've seen the number 72.27 appearing in the PGF backend...
Hmm, so it appears that there are 72 postscript points to an inch, but 72.27 LaTeX points to an inch. I'm on a Mac, I can take a quick look at this.
So, here's the interesting thing. When using savefig, the resulting figure's line is rather large, much bigger than one inch. However, using show, the line appears to be about an inch thick, which is what it should be. So I conclude that the problem is not with show but with savefig. Unless I'm missing something?
There also appear to be lots of variables called dpi. There's figure.dpi, figure.canvas.dpi, renderer.dpi, and when pushing the save button on the interactive toolbar, a 100 dpi figure is saved as a result of some dpi swapping. I am utterly confused as to why there isn't one single dpi value.
@dmcdougall: Careful: which backend are you using? And are you measuring the thickness of the line (perpendicular, of course; I should have just made the line horizontal) as a fraction of the height of the figure? It should be 1/2 the height of the figure, since I specified the figure as 2 inches. When I use an agg backend, this is what I see; when I use macosx, it is not, and it is the screen line that is thinner than 1/2 the figure height.
Regarding dpi: yes, it is a confusing mess, and there is more than one way in which dpi operates, depending on backend.
@efiring I copied and pasted your code, so I'm using the macosx backend. I was measuring the diagonal thickness of the line, since that's the width of the line. Your comment leads me to believe that the diagonal thickness of the line should actually be two inches. Is that correct? I'll check against Agg.
@efiring Using savefig, I get the same figure using either the agg or macosx backends. Unsurprising, since the macosx backend switches the backend to agg to save the figure anyway. Here is the output using savefig from the agg backend:
And here is the output using savefig from the macosx backend:
I even ran diff between them. The diff is empty.
@dmcdougall, the confusion here is that one must compare what the macosx backend puts on the screen to what is written to the file. It is not using agg when rendering to the screen, but yes, it is using agg when it writes a png, regardless of whether it is done via savefig or via the file save button. The images you show here are correct (line thickness is half the figure height), but I think that if you estimate the line thickness as shown on your screen you will find it is considerably thinner than half the screen height.
@efiring Ah, I understand now. Thanks for the explanation.
That's the Objective-C part where the real work is done.
I'm making progress on this.
@efiring Thanks. I edited the comment once I found it :)
I've found the problem. The OS X backend just ignores the linewidth parameter; it's never passed to the Core Graphics context.
Here's a screenshot for proof:
I can now sleep! The next step is to fix it, which I need to figure out. But for now it's bedtime. Thanks for the help @efiring!
Thanks for everyone's hard work on this!
@dmcdougall, that screenshot above: is that what you get with the unmodified macosx backend? It is not the same as what I am seeing; the diagonal line in your version is the correct thickness, the same as in the agg images, whereas I see a line about 1/3 as thick, as a fraction of the figure height. What is the same in your screenshot and on my screen is that the ticks and the surrounding box are rendered differently than in the agg output.
A little more experimentation indicates that the macosx on-screen linewidth is always based on some fixed dpi; it is simply not seeing the figure.dpi setting. For example, if I use rcParams["figure.dpi"]=72, I get a small plot with everything scaled down except the width of the blue line, and the widths of the axis box and ticks.
@efiring No, it's what I get after fixing the problem.
This doesn't seem the right solution to this problem.
gc.get_linewidth returns the line width previously set by gc.set_linewidth.
But gc.set_linewidth already makes the call to CGContextSetLineWidth, so we should not be needing another call to CGContextSetLineWidth in GraphicsContext_draw_path.
I think the issue lies in the definition of gc.set_linewidth:
Is the line width passed to gc.set_linewidth the line width in pixels? Or the line width in points?
And the same question for gc.get_linewidth: Is the line width returned by gc.get_linewidth the line width in pixels or in points?
I guess it's easiest if get/set_linewidth use units of pixels, and do the conversion between points and pixels uniformly for different backends if possible.
But if gc.set_linewidth must use units of points, then the conversion between points and pixels has to be made within the set_linewidth function of GraphicsContextMac. Which is a bit annoying, since the points_to_pixels function is in the renderer and not in the graphics context.
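The conversion under discussion is the standard points-to-pixels formula the Agg renderer applies internally. Here is a sketch of how a graphics context that knows its dpi could do the same; the `dpi` attribute on the context is an assumption (matching the proposal to pass dpi to the constructor), and `GraphicsContextSketch` is a hypothetical stand-in, not the real GraphicsContextMac:

```python
def points_to_pixels(points, dpi):
    """Convert a length in (PostScript) points to device pixels.

    72 points per inch, dpi pixels per inch, so:
    pixels = points * dpi / 72.
    """
    return points * dpi / 72.0

# Hypothetical set_linewidth that takes points, as the agg backend does,
# and converts before handing the value to the low-level drawing API.
class GraphicsContextSketch:
    def __init__(self, dpi):
        self.dpi = dpi  # assumption: dpi handed to the context at construction
        self._linewidth_px = None

    def set_linewidth(self, w_points):
        # Convert here, so all callers keep passing points uniformly.
        self._linewidth_px = points_to_pixels(w_points, self.dpi)
```

With the dpi stored on the context, the conversion no longer needs the renderer's points_to_pixels, which addresses the awkwardness noted above.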
@mdehoon Thanks for taking the time to look at the pull request. I appreciate your feedback. I agree with most of what you say. In fact, in light of your comments the solution is much simpler and cleaner, but requires passing the dpi to the graphics context.
For comparison, let's take the agg backend. The dpi is passed in through the constructor and all points are converted to pixels in the C code. The dpi (as far as I am aware) is not currently passed to the Objective-C code for the osx backend. Python passes the linewidths as points to the agg backend, and the conversion is done in C. I think, for the sake of attempting to keep everything uniform, we should do the same here.
Since there are also problems with the mac backend not saving bitmaps at the correct dpi (issue ), I think it's best to change the GraphicsContextMac constructor to mimic that of the agg backend. That is, passing the width, height and dpi all to the C code. We could kill two birds with one stone if we do this properly.
I'll look into it more today. Thanks again for your comments.
Edit: The other mac dpi issue I forgot to mention. It is #113.
On further inspection of the dpi issue, it has transpired that this is not an issue with the osx backend in particular; see #1223.
I think this can be closed now. See #1209.