# matplotlib/matplotlib

# Plot limit with transform #731

Merged: 11 commits, +745 −239

### 3 participants

Collaborator

This pull request represents a significant chunk of work to address a simple bug:

``````
import matplotlib.pyplot as plt
import matplotlib.transforms as mtrans

ax = plt.axes()
off_trans = mtrans.Affine2D().translate(10, 10)

plt.plot(range(11), transform=off_trans + ax.transData)

print(ax.dataLim)
``````

The result should be `[10, 10, 20, 20]`, but the offset transform has not been taken into account.
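For reference, the expected limits can be verified by hand. The following sketch (plain numpy, independent of matplotlib) applies the same (10, 10) translation to the plotted points and computes their bounding box:

```python
import numpy as np

# plt.plot(range(11)) plots the points (0, 0) .. (10, 10).
pts = np.column_stack([np.arange(11), np.arange(11)])

# Affine2D().translate(10, 10) simply adds the offset to every point.
shifted = pts + np.array([10.0, 10.0])

# The data limits should therefore be the bounding box of the shifted points.
x0, y0 = shifted.min(axis=0)
x1, y1 = shifted.max(axis=0)
bbox = [float(v) for v in (x0, y0, x1, y1)]
print(bbox)  # → [10.0, 10.0, 20.0, 20.0]
```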

Since a path transformation can be costly, it made sense to reuse the Line's cached `TransformedPath`.
This threw up another, quite confusing bug:

``````
import matplotlib.projections.polar as polar
import matplotlib.transforms as mtrans
import matplotlib.path as mpath
import numpy as np

full = mtrans.Affine2D().translate(1, 0) + polar.PolarAxes.PolarTransform()

verts = np.array([[0, 0], [5, 5], [2, 0]])
p = mpath.Path(verts)

tpath = mtrans.TransformedPath(p, full)
partial_p, aff = tpath.get_transformed_path_and_affine()

print(full.transform_path_affine(full.transform_path_non_affine(p)))
print(full.get_affine().transform_path_affine(full.transform_path_non_affine(p)))

``````

The numbers themselves aren't important; suffice it to say that the former is correct.

Additionally, the need for non-affine Transform subclasses to implement `transform_non_affine` and copy that definition into `transform` is confusing/obfuscating, e.g.:

``````
class PolarTransform(Transform):
    def transform(self, tr):
        # ...
        # do some stuff
        # ...
        return xy
    transform.__doc__ = Transform.transform.__doc__

    transform_non_affine = transform
    transform_non_affine.__doc__ = Transform.transform_non_affine.__doc__
``````

This latter complaint is the result of an optimisation that will see little benefit (transform stacks are typically mostly Affine, and the non-affine part is easily cached).

Therefore this pull request represents a simplification (at the cost of a couple more function calls) of the current Transform framework. Whilst it is my opinion that the Transform class hierarchy remains non-optimally representative of the problem space, I have tried to be pragmatic in my changes with regard to both backwards compatibility and the size of the review.

This pull request is independent of the invalidation mechanism upgrade being discussed in #723, and a merge between the two should be straightforward.

The tests run exactly the same as they did before commencing this work (they weren't passing on my machine in the first place, but the RMS values have not changed at all). The run time has increased by 5 seconds to 458 seconds (~1% slower), but this includes the new tests added as a result of this pull.

Note this change subtly affects the way one should implement a Transform. If you are implementing a non-affine transformation, you should
override `transform_non_affine` rather than overriding `transform` and copying it into `transform_non_affine` too, e.g.:

``````
class PolarTransform(Transform):
    def transform_non_affine(self, tr):
        # ...
        # do some stuff
        # ...
        return xy
    transform_non_affine.__doc__ = Transform.transform_non_affine.__doc__
``````

The existing documentation already describes the behaviour implemented here, hence there are few documentation changes included in this pull request.

Owner

Hey Phil,

I just did a quick read through of this PR and it will take me much more time to have something intelligent to say. While I wrote the original transformations infrastructure, @mdboom did a thorough revamp of it for mpl 0.98 and he is the resident expert. So hopefully he will chime in here shortly. From my read, it is obvious that this is very careful, well thought out and well tested work, so congrats on successfully diving into one of the hairiest parts of the codebase.

I don't use any exotic non-affine transformations in my own work, just the plain vanilla logarithmic scales, so I am not intimately familiar with many of the issues arising there. I do use the blended transformations quite a bit, and this leads me to my question. In the original bug you discuss:

``````
off_trans = mtrans.Affine2D().translate(10, 10)
plt.plot(range(11), transform=off_trans + ax.transData)
print(ax.dataLim)
``````

which produces an incorrect dataLim, you write "The result should be [10, 10, 20, 20]". It is not obvious to me that mpl is producing the incorrect result here. I'm not really disagreeing with you, mainly looking to be educated.

In the simple case of a "blended transformation" produced by axvline:

``````
In [46]: ax = plt.gca()

In [47]: line, = ax.plot([0, 1], [.5, .7])

In [48]: ax.dataLim
Out[48]:
Bbox(array([[ 0. ,  0.5],
[ 1. ,  0.7]]))

In [49]: linev = ax.axvline(0.5)

In [50]: ax.dataLim
Out[50]:
Bbox(array([[ 0. ,  0.5],
[ 1. ,  0.7]]))

In [51]: linev.get_ydata()
Out[51]: [0, 1]
``````

In this case, even though the axvline has y coords in [0, 1], this does not affect the axes datalim y limits, because using the blended transform (x is data, y is axes) we do not consider the y coordinate to be in the data system at all. The line will span the y space regardless of the x data limits or view limits. Obviously when presented with a generic transformation, there is no way for mpl to infer this, so we give it a hint with the x_isdata and y_isdata attributes, which are then respected by update_from_path. I see you support this behavior in your comment about backwards compatibility in `_update_line_limits`.

Now this breaks down if I create the blended transformation on my own and do not set these x_isdata/y_isdata hints:

``````
In [53]: ax = plt.gca()

In [54]: trans = transforms.blended_transform_factory(
             ax.transData, ax.transAxes)

In [55]: line, = ax.plot([1,2,3], transform=trans)

In [56]: ax.dataLim
Out[56]:
Bbox(array([[ 0.,  1.],
[ 2.,  3.]]))
``````

In the original (pre-Michael refactor) transformation scheme, the dataLim gave the bounding box of the untransformed data, the viewLim gave the bounding box of the view port, and the transformation took you from data limits to figure pixels. The reason the y-data in the blended axes lines like axvline were not considered was because these were already considered "pre-transformed" if you will, and hence not part of the data limits. I.e., only the raw, untransformed data were included in the datalim.

This early scheme obviously wasn't robust enough to handle all the wonderful transformations we can now support following Michael's refactor, but he left those attributes (dataLim, viewLim) in for backwards compatibility, and I think some of the weirdness we are now seeing is a result of a more thorough treatment of transformations trying to maintain as much backwards compatibility as possible.

I mention all of this just for historical context so you might understand why some of the weird things you are seeing are in there. I am not advocating slavish backwards compatibility, because I'd rather have a correct and robust system going forward, and you've done an admirable job in your patch supporting these idiosyncrasies already. What I'm trying to understand is why the dataLim in the case of the initial bug you proposed should utilize the transform at all, when the dataLim in the other cases we have considered (log scales, polar transforms, funky blended transforms from axvline etc.) all ignore the transformation.

I haven't yet gotten to your second "quite confusing bug" :-)

referenced this pull request
Merged

### Caching the results of Transform.transform_path for non-affine transforms #723

Owner

I have to agree with John that a lot of work has clearly been put into this pull request on some of the trickiest parts of the code, without much guidance from the original author (I've been busy on many non-matplotlib things lately). I hope we can (over time) simplify rather than further complicate the transforms system -- in the long term even by breaking backward compatibility if there are good benefits to doing so.

Let me address each bug independently.

1) I'm not sure it's actually a bug. I don't think it's ever been the case that the auto view limit algorithm takes into account the transformation of the data. The transform member of the artist is intended to be a way to get from data coordinates to the page, not a way to pre-transform the data coordinates into effectively other data coordinates. To put this another way, the "actual" data coordinates should have an impact on the ticking and the view box; "transformed" data coordinates should not, and are not intended to. That's what the scale and projection infrastructure is for. The transform member is used, for example, by the "Shadow" patch to slightly offset the drawing of the data without affecting the data itself. (And I think it's the fault of the documentation that this isn't very clear -- the first commit here says it's fixing it to work as documented, but I'm not sure what documentation you're referring to -- we should clarify anything that is misleading.) Consider the case of a log transform -- if that was done by prepending a log transform to the line's transform member, there would be no way of communicating to the axes ticker that ticks should be handled differently. I think what is more appropriate, given the current imperfect framework, is to define a new scale or projection along these lines.

And, as defining a scale is somewhat heavyweight, providing a new interface to handle the simple case where one just wants to provide a 1D transformation function would be a nice new feature, IMHO. It is also possible to transform the data before passing it to matplotlib. But of course, understanding your use case here better may help us arrive at an even better solution.
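For illustration, pre-transforming the data before handing it to matplotlib might look like this (a minimal sketch; `my_scale` is a hypothetical stand-in for whatever 1D transformation one would otherwise prepend to the artist's transform):

```python
import numpy as np

def my_scale(x):
    # Hypothetical 1D data transformation (here log10, as in a log scale).
    return np.log10(x)

x = np.array([1.0, 10.0, 100.0])
x_scaled = my_scale(x)

# One would then plot the pre-transformed values directly, e.g.
#   plt.plot(x_scaled, y)
# so that the data limits, view limits and ticking all see the
# transformed coordinates.
print(x_scaled)  # prints [0. 1. 2.]
```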

As for 2): I agree it's a bug. Is there any way you could independently pull out the solution for just that part? Also note you say the first result is correct -- I tend to agree, but with this pull request the former returns the same erroneous results as the latter.

3) Changing how transforms are defined: the current approach leaves some flexibility in that one can define three implementations: affine, non-affine and combined. "Combined" in most cases will be equivalent to affine + non-affine, but there may be cases where it is more efficient (particularly in code not written using numpy) to do both in a single swoop, and I wouldn't want to lose that flexibility. As it stands now, if a `Transform` subclass defines `transform` and not `transform_non_affine`, `transform_non_affine` implicitly delegates to `transform` (see `transforms.py:1098`). So the micro-optimization where transforms add `transform_non_affine = transform` is not strictly necessary, but it doesn't change the behavior. So, unless I'm missing something, I don't think your change is necessary (even if only for reducing keystrokes). One might argue that we should have `transform` delegate to `transform_non_affine` instead of the reverse, but that seems like change for change's sake. The reason it is the way it is now is that I assumed most people writing custom transformations would be writing non-affine ones, and this allows them to be ignorant of the whole affine/non-affine distinction and just write a method called `transform`.
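As a toy illustration of the delegation Michael describes (these are simplified stand-in classes, not matplotlib's actual implementation):

```python
class BaseTransform:
    """Simplified stand-in for matplotlib's Transform base class."""

    def transform(self, values):
        raise NotImplementedError

    def transform_non_affine(self, values):
        # Base-class fallback: delegate to the combined transform, so a
        # subclass that only defines transform() still works when the
        # framework asks for the non-affine part.
        return self.transform(values)


class Squares(BaseTransform):
    # A purely non-affine "transform" that only overrides transform();
    # transform_non_affine is inherited and delegates to it.
    def transform(self, values):
        return [v * v for v in values]


t = Squares()
print(t.transform_non_affine([1, 2, 3]))  # prints [1, 4, 9]
```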

Thanks for all your work on this -- it's great to have someone picking it apart at this level of detail. I hope I don't come across as discouraging pull requests -- in fact I'd love to pass this baton along, and I think there's a lot of food for thought here. As for next steps, I think it might be most helpful to have an independent pull request just for #2, and to continue discussing ways of supporting the use case suggested by #1 (perhaps on a new issue).

Collaborator

Mike, John,

Firstly, thank you very much for all of your time looking at this non-trivial pull request; your feedback is massively valuable and I really appreciate it. I should add that normally when I say the word bug, it tends to mean "it is not behaving as I would expect"; I guess that is not its strict definition in the dictionary :-) .

I hope I don't come across as discouraging pull requests

Not at all. The beauty of github is that we can have in-depth discussions about how code should look and behave and I only see benefit from your input.

John, your example is a good one. In my opinion the current behaviour is undesirable:

``````
>>> import matplotlib.pyplot as plt
>>> ax = plt.gca()
>>> line, = ax.plot([0.2, 0.3], [0.6, 0.7])
>>> ax.dataLim
Bbox(array([[ 0.2,  0.6],
[ 0.3,  0.7]]))
>>> line_ax, = ax.plot([0, 1], [0.1, 0.9], transform=ax.transAxes)
>>> ax.dataLim
Bbox(array([[ 0. ,  0.1],
[ 1. ,  0.9]]))
``````

It is my opinion that the `dataLim` should be unaffected by anything which does not involve the `transData`. If a transform only involves a part (be it x or y components) then only that part should affect the `dataLim`. This information is derivable (`trans.contains_branch(self.transData)` in this pull request) and doesn't need to be limited to the easily controlled cases such as `axvline`. I haven't yet overridden the behaviour of `contains_branch` for blended transforms, partially as it seems the result needs to be a 2-tuple rather than a single boolean, but certainly the capability should be fairly straightforward.
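The `contains_branch` idea can be sketched with a toy composite-transform tree (illustrative names only, not matplotlib's real classes):

```python
class Leaf:
    """A terminal transform node."""
    def contains_branch(self, other):
        return self is other


class Composite(Leaf):
    """A transform built as a + b, mirroring mpl's transform stacks."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def contains_branch(self, other):
        # The composite contains `other` if it *is* other, or if other
        # appears anywhere in either sub-tree.
        return (self is other
                or self.a.contains_branch(other)
                or self.b.contains_branch(other))


trans_data = Leaf()
offset = Leaf()
unrelated = Leaf()

# Analogous to `off_trans + ax.transData` from the original bug report.
stack = Composite(offset, trans_data)

print(stack.contains_branch(trans_data))  # prints True
print(stack.contains_branch(unrelated))   # prints False
```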

The transform member of the artist is intended to be a way to get from data coordinates to the page, not to be a way to pre-transform the data coordinates into effectively other data coordinates.

I guess this is the fundamental shift that this pull request is trying to achieve. The way the transform framework has been implemented means that there is great flexibility when it comes to handling data in different coordinate systems on the same plot (or projection in your terms) without having to "re-project" in advance (i.e. I can work in each datum's "native" coordinate system). Thanks to this I am able to plot polar data with a theta origin at `pi/4` from north on the same plot as data with a different theta origin. Similarly, I am able to work with geospatial data from multiple map projections and put them all on to an arbitrary map projection - without having to re-project the data first:

``````
ax = plt.axes(projection='map_projection1')
plt.plot(xdata_in_proj2_coords, ydata_in_proj2_coords,
         transform=map_projection2_to_1_transform + ax.transData)
``````

This code pretty much just works with v1.1, except for the data limit calculation which currently assumes that `xdata_in_proj2_coords` is in projection1's coordinate system.

I don't want to make this post too long, so I will leave it there for now with the hope that this has been sufficient to explain why I made this pull request and that it will help inform our discussion further.

All the best,

Collaborator

Woops, I added (and have subsequently removed) a commit which I didn't intend to include.

Collaborator

@mdboom: I am still keen to get this functionality in before the 20th. It will need a little work to address some of your concerns, and I hope to avoid the need for `x_isdata` and `y_isdata`.

Collaborator

@mdboom: I have rebased and removed the use of `x_isdata` in favour of deriving this information from the transforms. This was surprisingly easy with the changes being proposed here, and for me is a good sign that the changes are valuable.

The things which I think need discussion (please add more if you have anything) are:

• a use case for "pre-transforming" one's data and allowing the dataLim and viewLim to reflect those values
• undoing the "Changing how transforms are defined" (3)
• extracting the fix to the `transform non-affine + transform affine` to a separate request

I suggest we have these discussions in this PR, but try to keep the posts short-ish. If it gets a bit noisy, we can always use the devel mailing list.

Collaborator
##### A use case for "pre-transforming" one's data

The example I gave previously is my primary use case for making it possible to plot data which is not necessarily in the `ax.transData` coordinate system. I would like to be able to define matplotlib axes subclasses which represent map projections (e.g. Hammer) and be able to add lines, polygons and images from other cartographic projections (e.g. PlateCarree). I do not want to limit the user as to which cartographic projection they use for their data. To take a tangible case, suppose a user has an image in Mercator that they want to put next to a line in PlateCarree, onto a Robinson map. The syntax that I would like to achieve is:

``````
plate_carree = MyProjectionLibrary.PlateCarree()
robinson = MyProjectionLibrary.Robinson()
mercator = MyProjectionLibrary.Mercator()

ax = plt.axes(projection=robinson)
plt.plot(lons, lats, transform=plate_carree, zorder=100)
ax.imshow(img, transform=mercator)
``````

The beauty of this interface is that it is so familiar to an mpl user that they can just pick it up and run. Apart from the need for me to expose a `_as_mpl_transform` API which would provide parity with the `_as_mpl_projection` interface, this all just works (my axes subclass does the image transformations), except for the fact that the dataLim has been incorrectly set to be in `plate_carree` coordinates rather than `robinson` ones.

Owner

Ok -- @pelson: does that mean you're working on a fix?

I think, if we can, it would be nice to include this before the freeze to get it lots of random user testing before the release. This is one of the more "open heart surgery" PR's in the pipe.

Collaborator

Does that mean you're working on a fix?

No, but I will do so in the next 3 hours or so.

This is one of the more "open heart surgery" PR's in the pipe.

Ha. I see what you mean. I agree that, because the unit tests don't have full coverage, the only way we can have confidence in the code is to put it out in the wild.

lib/matplotlib/patches.py
```diff
((7 lines not shown))
         return artist.Artist.get_transform(self)

     def get_patch_transform(self):
+        """
+        Return the :class:`~matplotlib.transforms.Transform` ...
```

Collaborator pelson added a note (Aug 14, 2012): @mdboom: Would you know what these (`get_patch_transform` and `get_data_transform`) are for? I would like to get a one-liner for their purpose.

Collaborator pelson added a note (Aug 20, 2012): Needs resolving before merging.

Owner mdboom added a note (Aug 20, 2012): `data_transform` maps the data coordinates to physical coordinates. `patch_transform` maps the native coordinates of the patch to physical coordinates. For example, to draw a circle with a radius of 2, the original circle goes from (-1, -1) to (1, 1) (i.e. radius == 1), and the patch transform would scale it up to 2.
Collaborator

Ok. I think this is in a good state now (changelog and tests polished a little). `travis-ci` seems to be having a little bit of a problem atm, which I don't think is related, so I haven't actually been able to test this on python3 just yet.

referenced this pull request
Closed

### Bug in Axes.relim when the first line is y_isdata=False and possible fix #854

doc/api/api_changes.rst
```diff
((33 lines not shown))
+    >>> print(ax.viewLim)
+    Bbox('array([[ 0.,  0.],\n       [ 90.,  90.]])')
+
+* One can now easily get a transform which goes from one transform's coordinate system
+  to another, in an optimized way, using the new subtract method on a transform. For instance,
+  to go from data coordinates to axes coordinates::
+
+    >>> import matplotlib.pyplot as plt
+    >>> ax = plt.axes()
+    >>> data2ax = ax.transData - ax.transAxes
+    >>> print(ax.transData.depth, ax.transAxes.depth)
+    3, 1
+    >>> print(data2ax.depth)
+    2
+
+  for versions before 2.0 this could only be achieved in a sub-optimal way, using
```

Collaborator pelson added a note (Aug 20, 2012): version number is wrong. Should be 1.2.
lib/matplotlib/transforms.py
```diff
((19 lines not shown))
     has_inverse = False
+    """True if this transform as a corresponding inverse transform."""
```

Collaborator pelson added a note (Aug 20, 2012): as -> has
lib/matplotlib/transforms.py
```diff
((6 lines not shown))
         self._a = a
         self._b = b
         self.set_children(a, b)
         self._mtx = None

+    if DEBUG:
+        def __str__(self):
+            return '(%s, %s)' % (self._a, self._b)
+
+    @property
+    def depth(self):
+        return self._a.depth + self._b.depth
+
+    def _iter_break_from_left_to_right(self):
+        for lh_compliment, rh_compliment in self._a._iter_break_from_left_to_right():
+            yield lh_compliment, rh_compliment + self._b
+        for lh_compliment, rh_compliment in self._b._iter_break_from_left_to_right():
+            yield self._a + lh_compliment, rh_compliment
+
+
```

Collaborator pelson added a note (Aug 20, 2012): One too many newlines.
Collaborator

@mdboom: I don't know when you last read this, so I am holding back from rebasing as the only way you can have confidence in the rebase is to re-read the whole lot.

Merging by hand at this point is probably a better bet. Are you happy to do this, or would you like me to do it?

Owner

Why don't you address the small documentation changes and do a rebase -- it's probably easiest to comment on the rebase here as a pull request than in a manual merge.

Collaborator

Ok. Will do shortly. Just working on pickle PR.

1. Phil Elson: `Substantial change to transform to make it work as documented. The upshot is that the dataLim determination is now working.` (8bbe2e5)
2. Phil Elson: `Several bugs fixed, particularly with Polar & Geo.` (2f2ff13)
3. Phil Elson: `All tests work as expected.` (2286565)
4. Phil Elson: `Made small, pre-pull changes.` (3ccff6f)
5. Phil Elson: `Finall improvements to plot extent calculation.` (dfbe69c)
6. Phil Elson: `Changes as a result of reading diffs prior to pull request.` (83ed6c8)
7. pelson: `Removed the use of x_isdata and y_isdata and tidied up transfoms a little.` (b96e139)
8. pelson: `Avoided the use of the hashable property of a transform.` (45f234f)
9. pelson: `Added appropriate change logs for new transform mechanims, fixed a bug (and updated appropriate test).` (8dbe074)
10. pelson: `Small doc changes re transform with plot limits.` (930b117)
Collaborator

Ok. The conflicts were to do with the test_transform.py and api_changes.rst and were pretty straight forward. This is now good to go as far as I can see.

Owner

Thanks for the rebase -- I'll have another look at this today, but it may not be until later in the day.

lib/matplotlib/lines.py
```diff
@@ -446,6 +446,10 @@ def recache(self, always=False):
         self._invalidy = False

     def _transform_path(self, subslice=None):
+        """
+        Puts a TransformedPath instance at self._transformed_path,
+        all invalidation of the transform is then handled by the TransformedPath instance.
```

Owner mdboom added a note (Aug 21, 2012): Too long line.
Owner

Ok -- other than my nit about line length and filling in the docstrings for `get_data_transform` and `get_patch_transform`, I think this is good to go.

1. pelson: `Doc improvements and line wrapping.` (4b0fbb5)
Collaborator

Thanks Mike. All done.

merged commit `98f6eb2` into matplotlib:master
referenced this pull request
Closed

### Workaround needed to make example in `Transformations Tutorial` work with log axis #3809

Commits on Aug 20, 2012
1. Phil Elson authored, pelson committed: `…shot is that the dataLim determination is now working.`
2. Phil Elson authored, pelson committed
3. Phil Elson authored, pelson committed
4. Phil Elson authored, pelson committed
5. Phil Elson authored, pelson committed
6. Phil Elson authored, pelson committed
7. pelson authored: `…ttle.`
8. pelson authored
9. pelson authored: `…g (and updated appropriate test).`
10. pelson authored

Commits on Aug 21, 2012
1. pelson authored
doc/api/api_changes.rst (48 changed lines)

```diff
@@ -72,6 +72,54 @@ Changes in 1.2.x
   original keyword arguments will override any value provided by
   *capthick*.

+* Transform subclassing behaviour is now subtly changed. If your transform
+  implements a non-affine transformation, then it should override the
+  ``transform_non_affine`` method, rather than the generic ``transform`` method.
+  Previously transforms would define ``transform`` and then copy the
+  method into ``transform_non_affine``:
+
+    class MyTransform(mtrans.Transform):
+        def transform(self, xy):
+            ...
+        transform_non_affine = transform
+
+  This approach will no longer function correctly and should be changed to:
+
+    class MyTransform(mtrans.Transform):
+        def transform_non_affine(self, xy):
+            ...
+
+* Artists no longer have ``x_isdata`` or ``y_isdata`` attributes; instead
+  any artist's transform can be interrogated with
+  ``artist_instance.get_transform().contains_branch(ax.transData)``
+
+* Lines added to an axes now take into account their transform when updating the
+  data and view limits. This means transforms can now be used as a pre-transform.
+  For instance:
+
+    >>> import matplotlib.pyplot as plt
+    >>> import matplotlib.transforms as mtrans
+    >>> ax = plt.axes()
+    >>> ax.plot(range(10), transform=mtrans.Affine2D().scale(10) + ax.transData)
+    >>> print(ax.viewLim)
+    Bbox('array([[  0.,   0.],\n       [ 90.,  90.]])')
+
+* One can now easily get a transform which goes from one transform's coordinate system
+  to another, in an optimized way, using the new subtract method on a transform. For instance,
+  to go from data coordinates to axes coordinates::
+
+    >>> import matplotlib.pyplot as plt
+    >>> ax = plt.axes()
+    >>> data2ax = ax.transData - ax.transAxes
+    >>> print(ax.transData.depth, ax.transAxes.depth)
+    3, 1
+    >>> print(data2ax.depth)
+    2
+
+  for versions before 1.2 this could only be achieved in a sub-optimal way, using
+  ``ax.transData + ax.transAxes.inverted()`` (depth is a new concept, but had it existed
+  it would return 4 for this example).

 Changes in 1.1.x
 ================
```
lib/matplotlib/artist.py (2 changed lines)

```diff
@@ -101,8 +101,6 @@ def __init__(self):
         self._remove_method = None
         self._url = None
         self._gid = None
-        self.x_isdata = True  # False to avoid updating Axes.dataLim with x
-        self.y_isdata = True  # with y
         self._snap = None

     def remove(self):
```
lib/matplotlib/axes.py (69 changed lines)

```diff
@@ -1461,17 +1461,52 @@ def add_line(self, line):
         self._update_line_limits(line)

         if not line.get_label():
-            line.set_label('_line%d'%len(self.lines))
+            line.set_label('_line%d' % len(self.lines))
         self.lines.append(line)
         line._remove_method = lambda h: self.lines.remove(h)
         return line

     def _update_line_limits(self, line):
-        p = line.get_path()
-        if p.vertices.size > 0:
-            self.dataLim.update_from_path(p, self.ignore_existing_data_limits,
-                                          updatex=line.x_isdata,
-                                          updatey=line.y_isdata)
+        """Figures out the data limit of the given line, updating self.dataLim."""
+        path = line.get_path()
+        if path.vertices.size == 0:
+            return
+
+        line_trans = line.get_transform()
+
+        if line_trans == self.transData:
+            data_path = path
+
+        elif any(line_trans.contains_branch_seperately(self.transData)):
+            # identify the transform to go from line's coordinates
+            # to data coordinates
+            trans_to_data = line_trans - self.transData
+
+            # if transData is affine we can use the cached non-affine component
+            # of line's path. (since the non-affine part of line_trans is
+            # entirely encapsulated in trans_to_data).
+            if self.transData.is_affine:
+                line_trans_path = line._get_transformed_path()
+                na_path, _ = line_trans_path.get_transformed_path_and_affine()
+                data_path = trans_to_data.transform_path_affine(na_path)
+            else:
+                data_path = trans_to_data.transform_path(path)
+        else:
+            # for backwards compatibility we update the dataLim with the
+            # coordinate range of the given path, even though the coordinate
+            # systems are completely different. This may occur in situations
+            # such as when ax.transAxes is passed through for absolute
+            # positioning.
+            data_path = path
+
+        if data_path.vertices.size > 0:
+            updatex, updatey = line_trans.contains_branch_seperately(
+                self.transData
+            )
+            self.dataLim.update_from_path(data_path,
+                                          self.ignore_existing_data_limits,
+                                          updatex=updatex,
+                                          updatey=updatey)
         self.ignore_existing_data_limits = False

     def add_patch(self, p):
@@ -1507,11 +1542,14 @@ def _update_patch_limits(self, patch):
         if vertices.size > 0:
             xys = patch.get_patch_transform().transform(vertices)
             if patch.get_data_transform() != self.transData:
-                transform = (patch.get_data_transform() +
-                             self.transData.inverted())
-                xys = transform.transform(xys)
-            self.update_datalim(xys, updatex=patch.x_isdata,
-                                updatey=patch.y_isdata)
+                patch_to_data = (patch.get_data_transform() -
+                                 self.transData)
+                xys = patch_to_data.transform(xys)
+
+            updatex, updatey = patch.get_transform().\
+                contains_branch_seperately(self.transData)
+            self.update_datalim(xys, updatex=updatex,
+                                updatey=updatey)

     def add_table(self, tab):
@@ -1599,13 +1637,13 @@ def _process_unit_info(self, xdata=None, ydata=None, kwargs=None):
         if xdata is not None:
             # we only need to update if there is nothing set yet.
             if not self.xaxis.have_units():
-               self.xaxis.update_units(xdata)
+                self.xaxis.update_units(xdata)
             #print '\tset from xdata', self.xaxis.units

         if ydata is not None:
             # we only need to update if there is nothing set yet.
             if not self.yaxis.have_units():
-               self.yaxis.update_units(ydata)
+                self.yaxis.update_units(ydata)
             #print '\tset from ydata', self.yaxis.units

         # process kwargs 2nd since these will override default units
@@ -3424,7 +3462,6 @@ def axhline(self, y=0, xmin=0, xmax=1, **kwargs):
         trans = mtransforms.blended_transform_factory(
             self.transAxes, self.transData)
         l = mlines.Line2D([xmin,xmax], [y,y], transform=trans, **kwargs)
-        l.x_isdata = False
         self.add_line(l)
         self.autoscale_view(scalex=False, scaley=scaley)
         return l
@@ -3489,7 +3526,6 @@ def axvline(self, x=0, ymin=0, ymax=1, **kwargs):
         trans = mtransforms.blended_transform_factory(
             self.transData, self.transAxes)
         l = mlines.Line2D([x,x], [ymin,ymax] , transform=trans, **kwargs)
-        l.y_isdata = False
         self.add_line(l)
         self.autoscale_view(scalex=scalex, scaley=False)
         return l
@@ -3546,7 +3582,6 @@ def axhspan(self, ymin, ymax, xmin=0, xmax=1, **kwargs):
         verts = (xmin, ymin), (xmin, ymax), (xmax, ymax), (xmax, ymin)
         p = mpatches.Polygon(verts, **kwargs)
         p.set_transform(trans)
-        p.x_isdata = False
         self.add_patch(p)
         self.autoscale_view(scalex=False)
         return p
@@ -3603,7 +3638,6 @@ def axvspan(self, xmin, xmax, ymin=0, ymax=1, **kwargs):
         verts = [(xmin, ymin), (xmin, ymax), (xmax, ymax), (xmax, ymin)]
         p = mpatches.Polygon(verts, **kwargs)
         p.set_transform(trans)
-        p.y_isdata = False
         self.add_patch(p)
         self.autoscale_view(scaley=False)
         return p
@@ -3909,7 +3943,6 @@ def plot(self, *args, **kwargs):
             self.add_line(line)
             lines.append(line)
-
         self.autoscale_view(scalex=scalex, scaley=scaley)
         return lines
```
lib/matplotlib/lines.py (31 changed lines)

```diff
@@ -6,6 +6,8 @@
 # TODO: expose cap and join style attrs
 from __future__ import division, print_function

+import warnings
+
 import numpy as np
 from numpy import ma
 from matplotlib import verbose
@@ -249,17 +251,15 @@ def contains(self, mouseevent):
         if len(self._xy)==0: return False,{}

         # Convert points to pixels
-        if self._transformed_path is None:
-            self._transform_path()
-        path, affine = self._transformed_path.get_transformed_path_and_affine()
+        path, affine = self._get_transformed_path().get_transformed_path_and_affine()
         path = affine.transform_path(path)
         xy = path.vertices
         xt = xy[:, 0]
         yt = xy[:, 1]

         # Convert pick radius from points to pixels
-        if self.figure == None:
-            warning.warn('no figure set when check if mouse is on line')
+        if self.figure is None:
+            warnings.warn('no figure set when check if mouse is on line')
             pixels = self.pickradius
         else:
             pixels = self.figure.dpi/72. * self.pickradius
@@ -446,6 +446,11 @@ def recache(self, always=False):
         self._invalidy = False

     def _transform_path(self, subslice=None):
+        """
+        Puts a TransformedPath instance at self._transformed_path,
+        all invalidation of the transform is then handled by the
+        TransformedPath instance.
+        """
         # Masked arrays are now handled by the Path class itself
         if subslice is not None:
             _path = Path(self._xy[subslice,:])
@@ -453,6 +458,14 @@ def _transform_path(self, subslice=None):
             _path = self._path
         self._transformed_path = TransformedPath(_path, self.get_transform())

+    def _get_transformed_path(self):
+        """
+        Return the :class:`~matplotlib.transforms.TransformedPath` instance
+        of this line.
+        """
+        if self._transformed_path is None:
+            self._transform_path()
+        return self._transformed_path

     def set_transform(self, t):
         """
@@ -482,8 +495,8 @@ def draw(self, renderer):
             subslice = slice(max(i0-1, 0), i1+1)
             self.ind_offset = subslice.start
             self._transform_path(subslice)
-        if self._transformed_path is None:
-            self._transform_path()
+
+        transformed_path = self._get_transformed_path()

         if not self.get_visible(): return
@@ -507,7 +520,7 @@ def draw(self, renderer):
             funcname = self._lineStyles.get(self._linestyle, '_draw_nothing')
             if funcname != '_draw_nothing':
-                tpath, affine = self._transformed_path.get_transformed_path_and_affine()
+                tpath, affine = transformed_path.get_transformed_path_and_affine()
                 if len(tpath.vertices):
                     self._lineFunc = getattr(self, funcname)
                     funcname = self.drawStyles.get(self._drawstyle, '_draw_lines')
@@ -528,7 +541,7 @@ def draw(self, renderer):
             gc.set_linewidth(self._markeredgewidth)
             gc.set_alpha(self._alpha)
             marker = self._marker
-            tpath, affine = self._transformed_path.get_transformed_points_and_affine()
+            tpath, affine = transformed_path.get_transformed_points_and_affine()
             if len(tpath.vertices):
                 # subsample the markers if markevery is not None
                 markevery = self.get_markevery()
```
12 lib/matplotlib/patches.py
``````diff
@@ -167,9 +167,21 @@ def get_transform(self):
         return self.get_patch_transform() + artist.Artist.get_transform(self)

     def get_data_transform(self):
+        """
+        Return the :class:`~matplotlib.transforms.Transform` instance which
+        maps data coordinates to physical coordinates.
+        """
         return artist.Artist.get_transform(self)

     def get_patch_transform(self):
+        """
+        Return the :class:`~matplotlib.transforms.Transform` instance which
+        takes patch coordinates to data coordinates.
+
+        For example, one may define a patch of a circle which represents a
+        radius of 5 by providing coordinates for a unit circle, and a
+        transform which scales the coordinates (the patch coordinate) by 5.
+        """
         return transforms.IdentityTransform()

     def get_antialiased(self):
``````
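The new `get_patch_transform` docstring gives the example of a radius-5 circle stored as a unit circle plus a scaling transform. A self-contained sketch of that factoring, using plain functions rather than matplotlib's `Transform` API:

```python
import math

def scale(s):
    """Affine scale: takes patch coordinates to data coordinates."""
    return lambda pts: [(s * x, s * y) for x, y in pts]

# patch coordinates: a unit circle, sampled coarsely
unit_circle = [(math.cos(t * math.pi / 8), math.sin(t * math.pi / 8))
               for t in range(16)]

patch_transform = scale(5)          # the "patch transform" of the docstring
data_coords = patch_transform(unit_circle)

# every transformed vertex now lies on a circle of radius 5
assert all(abs(math.hypot(x, y) - 5) < 1e-9 for x, y in data_coords)
```

Keeping the patch geometry in its own coordinate system means resizing the circle is a change to the transform, not to the stored vertices.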
56 lib/matplotlib/projections/geo.py
``````diff
@@ -263,7 +263,7 @@ def __init__(self, resolution):
         Transform.__init__(self)
         self._resolution = resolution

-    def transform(self, ll):
+    def transform_non_affine(self, ll):
         longitude = ll[:, 0:1]
         latitude = ll[:, 1:2]
@@ -282,18 +282,12 @@ def transform(self, ll):
         x = (cos_latitude * ma.sin(half_long)) / sinc_alpha
         y = (ma.sin(latitude) / sinc_alpha)
         return np.concatenate((x.filled(0), y.filled(0)), 1)
-    transform.__doc__ = Transform.transform.__doc__
-
-    transform_non_affine = transform
     transform_non_affine.__doc__ = Transform.transform_non_affine.__doc__

-    def transform_path(self, path):
+    def transform_path_non_affine(self, path):
         vertices = path.vertices
         ipath = path.interpolated(self._resolution)
         return Path(self.transform(ipath.vertices), ipath.codes)
-    transform_path.__doc__ = Transform.transform_path.__doc__
-
-    transform_path_non_affine = transform_path
     transform_path_non_affine.__doc__ = Transform.transform_path_non_affine.__doc__

     def inverted(self):
@@ -309,10 +303,10 @@ def __init__(self, resolution):
         Transform.__init__(self)
         self._resolution = resolution

-    def transform(self, xy):
+    def transform_non_affine(self, xy):
         # MGDTODO: Math is hard ;(
         return xy
-    transform.__doc__ = Transform.transform.__doc__
+    transform_non_affine.__doc__ = Transform.transform_non_affine.__doc__

     def inverted(self):
         return AitoffAxes.AitoffTransform(self._resolution)
@@ -348,7 +342,7 @@ def __init__(self, resolution):
         Transform.__init__(self)
         self._resolution = resolution

-    def transform(self, ll):
+    def transform_non_affine(self, ll):
         longitude = ll[:, 0:1]
         latitude = ll[:, 1:2]
@@ -361,18 +355,12 @@ def transform(self, ll):
         x = (2.0 * sqrt2) * (cos_latitude * np.sin(half_long)) / alpha
         y = (sqrt2 * np.sin(latitude)) / alpha
         return np.concatenate((x, y), 1)
-    transform.__doc__ = Transform.transform.__doc__
-
-    transform_non_affine = transform
     transform_non_affine.__doc__ = Transform.transform_non_affine.__doc__

-    def transform_path(self, path):
+    def transform_path_non_affine(self, path):
         vertices = path.vertices
         ipath = path.interpolated(self._resolution)
         return Path(self.transform(ipath.vertices), ipath.codes)
-    transform_path.__doc__ = Transform.transform_path.__doc__
-
-    transform_path_non_affine = transform_path
     transform_path_non_affine.__doc__ = Transform.transform_path_non_affine.__doc__

     def inverted(self):
@@ -388,7 +376,7 @@ def __init__(self, resolution):
         Transform.__init__(self)
         self._resolution = resolution

-    def transform(self, xy):
+    def transform_non_affine(self, xy):
         x = xy[:, 0:1]
         y = xy[:, 1:2]
@@ -398,7 +386,7 @@ def transform(self, xy):
         longitude = 2 * np.arctan((z*x) / (2.0 * (2.0*z*z - 1.0)))
         latitude = np.arcsin(y*z)
         return np.concatenate((longitude, latitude), 1)
-    transform.__doc__ = Transform.transform.__doc__
+    transform_non_affine.__doc__ = Transform.transform_non_affine.__doc__

     def inverted(self):
         return HammerAxes.HammerTransform(self._resolution)
@@ -434,7 +422,7 @@ def __init__(self, resolution):
         Transform.__init__(self)
         self._resolution = resolution

-    def transform(self, ll):
+    def transform_non_affine(self, ll):
         def d(theta):
             delta = -(theta + np.sin(theta) - pi_sin_l) / (1 + np.cos(theta))
             return delta, np.abs(delta) > 0.001
@@ -466,18 +454,12 @@ def d(theta):
         xy[:,1] = np.sqrt(2.0) * np.sin(aux)

         return xy
-    transform.__doc__ = Transform.transform.__doc__
-
-    transform_non_affine = transform
     transform_non_affine.__doc__ = Transform.transform_non_affine.__doc__

-    def transform_path(self, path):
+    def transform_path_non_affine(self, path):
         vertices = path.vertices
         ipath = path.interpolated(self._resolution)
         return Path(self.transform(ipath.vertices), ipath.codes)
-    transform_path.__doc__ = Transform.transform_path.__doc__
-
-    transform_path_non_affine = transform_path
     transform_path_non_affine.__doc__ = Transform.transform_path_non_affine.__doc__

     def inverted(self):
@@ -493,10 +475,10 @@ def __init__(self, resolution):
         Transform.__init__(self)
         self._resolution = resolution

-    def transform(self, xy):
+    def transform_non_affine(self, xy):
         # MGDTODO: Math is hard ;(
         return xy
-    transform.__doc__ = Transform.transform.__doc__
+    transform_non_affine.__doc__ = Transform.transform_non_affine.__doc__

     def inverted(self):
         return MollweideAxes.MollweideTransform(self._resolution)
@@ -534,7 +516,7 @@ def __init__(self, center_longitude, center_latitude, resolution):
         self._center_longitude = center_longitude
         self._center_latitude = center_latitude

-    def transform(self, ll):
+    def transform_non_affine(self, ll):
         longitude = ll[:, 0:1]
         latitude = ll[:, 1:2]
         clong = self._center_longitude
@@ -555,18 +537,12 @@ def transform(self, ll):
                  np.sin(clat)*cos_lat*cos_diff_long)

         return np.concatenate((x, y), 1)
-    transform.__doc__ = Transform.transform.__doc__
-
-    transform_non_affine = transform
     transform_non_affine.__doc__ = Transform.transform_non_affine.__doc__

-    def transform_path(self, path):
+    def transform_path_non_affine(self, path):
         vertices = path.vertices
         ipath = path.interpolated(self._resolution)
         return Path(self.transform(ipath.vertices), ipath.codes)
-    transform_path.__doc__ = Transform.transform_path.__doc__
-
-    transform_path_non_affine = transform_path
     transform_path_non_affine.__doc__ = Transform.transform_path_non_affine.__doc__

     def inverted(self):
@@ -587,7 +563,7 @@ def __init__(self, center_longitude, center_latitude, resolution):
         self._center_longitude = center_longitude
         self._center_latitude = center_latitude

-    def transform(self, xy):
+    def transform_non_affine(self, xy):
         x = xy[:, 0:1]
         y = xy[:, 1:2]
         clong = self._center_longitude
@@ -604,7 +580,7 @@ def transform(self, xy):
                 (x*sin_c) / (p*np.cos(clat)*cos_c - y*np.sin(clat)*sin_c))
         return np.concatenate((long, lat), 1)
-    transform.__doc__ = Transform.transform.__doc__
+    transform_non_affine.__doc__ = Transform.transform_non_affine.__doc__

     def inverted(self):
         return LambertAxes.LambertTransform(
``````
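The pattern repeated through these geo.py hunks is the simplification promised in the summary: a non-affine transform now overrides only `transform_non_affine` (and `transform_path_non_affine`), and the base class composes the affine and non-affine parts itself. A toy sketch of that contract with stand-in classes, not matplotlib's real `Transform`:

```python
class Transform:
    def transform(self, values):
        # generic pipeline: non-affine part first, then the affine remainder
        return self.transform_affine(self.transform_non_affine(values))

    def transform_affine(self, values):
        return values  # identity unless overridden

    def transform_non_affine(self, values):
        return values  # identity unless overridden

class ToyProjection(Transform):
    # only the non-affine part needs writing, e.g. a squaring "projection"
    def transform_non_affine(self, values):
        return [(x * x, y * y) for x, y in values]

proj = ToyProjection()
assert proj.transform([(2, 3)]) == [(4, 9)]
```

This is why the repeated `transform_non_affine = transform` aliasing and duplicated `__doc__` assignments can be deleted wholesale.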
14 lib/matplotlib/projections/polar.py
``````diff
@@ -42,7 +42,7 @@ def __init__(self, axis=None, use_rmin=True):
         self._axis = axis
         self._use_rmin = use_rmin

-    def transform(self, tr):
+    def transform_non_affine(self, tr):
         xy = np.empty(tr.shape, np.float_)
         if self._axis is not None:
             if self._use_rmin:
@@ -74,20 +74,14 @@ def transform(self, tr):
         y[:] = r * np.sin(t)

         return xy
-    transform.__doc__ = Transform.transform.__doc__
-
-    transform_non_affine = transform
     transform_non_affine.__doc__ = Transform.transform_non_affine.__doc__

-    def transform_path(self, path):
+    def transform_path_non_affine(self, path):
         vertices = path.vertices
         if len(vertices) == 2 and vertices[0, 0] == vertices[1, 0]:
             return Path(self.transform(vertices), path.codes)
         ipath = path.interpolated(path._interpolation_steps)
         return Path(self.transform(ipath.vertices), ipath.codes)
-    transform_path.__doc__ = Transform.transform_path.__doc__
-
-    transform_path_non_affine = transform_path
     transform_path_non_affine.__doc__ = Transform.transform_path_non_affine.__doc__

     def inverted(self):
@@ -138,7 +132,7 @@ def __init__(self, axis=None, use_rmin=True):
         self._axis = axis
         self._use_rmin = use_rmin

-    def transform(self, xy):
+    def transform_non_affine(self, xy):
         if self._axis is not None:
             if self._use_rmin:
                 rmin = self._axis.viewLim.ymin
@@ -163,7 +157,7 @@ def transform(self, xy):
             r += rmin

         return np.concatenate((theta, r), 1)
-    transform.__doc__ = Transform.transform.__doc__
+    transform_non_affine.__doc__ = Transform.transform_non_affine.__doc__

     def inverted(self):
         return PolarAxes.PolarTransform(self._axis, self._use_rmin)
``````
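The `PolarTransform` above maps `(theta, r)` pairs to Cartesian `(x, y)`, and `InvertedPolarTransform` maps back. A plain-Python sketch of the round trip; matplotlib additionally handles `rmin` and masked values, which are omitted here:

```python
import math

def polar_to_cartesian(points):
    """(theta, r) -> (x, y), as in PolarTransform.transform_non_affine."""
    return [(r * math.cos(t), r * math.sin(t)) for t, r in points]

def cartesian_to_polar(points):
    """(x, y) -> (theta, r), the inverse mapping."""
    return [(math.atan2(y, x), math.hypot(x, y)) for x, y in points]

pts = [(0.0, 1.0), (math.pi / 2, 2.0)]
back = cartesian_to_polar(polar_to_cartesian(pts))
assert all(abs(a - b) < 1e-9
           for p, q in zip(pts, back) for a, b in zip(p, q))
```

Because this mapping is genuinely non-linear, it lives entirely in the `transform_non_affine` half of the pipeline and cannot be folded into a cached affine matrix.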
265 lib/matplotlib/tests/test_transforms.py
``````diff
@@ -1,5 +1,8 @@
 from __future__ import print_function
-from nose.tools import assert_equal
+import unittest
+
+from nose.tools import assert_equal, assert_raises
+import numpy.testing as np_test
 from numpy.testing import assert_almost_equal
 from matplotlib.transforms import Affine2D, BlendedGenericTransform
 from matplotlib.path import Path
@@ -9,6 +12,8 @@
 import matplotlib.transforms as mtrans
 import matplotlib.pyplot as plt
+import matplotlib.path as mpath
+import matplotlib.patches as mpatches

@@ -106,37 +111,37 @@ def test_pre_transform_plotting():

 def test_Affine2D_from_values():
-    points = [ [0,0],
+    points = np.array([ [0,0],
                [10,20],
                [-1,0],
-               ]
+               ])

-    t = Affine2D.from_values(1,0,0,0,0,0)
+    t = mtrans.Affine2D.from_values(1,0,0,0,0,0)
     actual = t.transform(points)
     expected = np.array( [[0,0],[10,0],[-1,0]] )
     assert_almost_equal(actual,expected)

-    t = Affine2D.from_values(0,2,0,0,0,0)
+    t = mtrans.Affine2D.from_values(0,2,0,0,0,0)
     actual = t.transform(points)
     expected = np.array( [[0,0],[0,20],[0,-2]] )
     assert_almost_equal(actual,expected)

-    t = Affine2D.from_values(0,0,3,0,0,0)
+    t = mtrans.Affine2D.from_values(0,0,3,0,0,0)
     actual = t.transform(points)
     expected = np.array( [[0,0],[60,0],[0,0]] )
     assert_almost_equal(actual,expected)

-    t = Affine2D.from_values(0,0,0,4,0,0)
+    t = mtrans.Affine2D.from_values(0,0,0,4,0,0)
     actual = t.transform(points)
     expected = np.array( [[0,0],[0,80],[0,0]] )
     assert_almost_equal(actual,expected)

-    t = Affine2D.from_values(0,0,0,0,5,0)
+    t = mtrans.Affine2D.from_values(0,0,0,0,5,0)
     actual = t.transform(points)
     expected = np.array( [[5,0],[5,0],[5,0]] )
     assert_almost_equal(actual,expected)

-    t = Affine2D.from_values(0,0,0,0,0,6)
+    t = mtrans.Affine2D.from_values(0,0,0,0,0,6)
     actual = t.transform(points)
     expected = np.array( [[0,6],[0,6],[0,6]] )
     assert_almost_equal(actual,expected)

@@ -165,6 +170,246 @@ def test_clipping_of_log():
     assert np.allclose(tpoints[-1], tpoints[0])


+class NonAffineForTest(mtrans.Transform):
+    """
+    A class which looks like a non affine transform, but does whatever
+    the given transform does (even if it is affine). This is very useful
+    for testing NonAffine behaviour with a simple Affine transform.
+
+    """
+    is_affine = False
+    output_dims = 2
+    input_dims = 2
+
+    def __init__(self, real_trans, *args, **kwargs):
+        self.real_trans = real_trans
+        r = mtrans.Transform.__init__(self, *args, **kwargs)
+
+    def transform_non_affine(self, values):
+        return self.real_trans.transform(values)
+
+    def transform_path_non_affine(self, path):
+        return self.real_trans.transform_path(path)
+
+
+class BasicTransformTests(unittest.TestCase):
+    def setUp(self):
+
+        self.ta1 = mtrans.Affine2D(shorthand_name='ta1').rotate(np.pi / 2)
+        self.ta2 = mtrans.Affine2D(shorthand_name='ta2').translate(10, 0)
+        self.ta3 = mtrans.Affine2D(shorthand_name='ta3').scale(1, 2)
+
+        self.tn1 = NonAffineForTest(mtrans.Affine2D().translate(1, 2), shorthand_name='tn1')
+        self.tn2 = NonAffineForTest(mtrans.Affine2D().translate(1, 2), shorthand_name='tn2')
+        self.tn3 = NonAffineForTest(mtrans.Affine2D().translate(1, 2), shorthand_name='tn3')
+
+        # creates a transform stack which looks like ((A, (N, A)), A)
+        self.stack1 = (self.ta1 + (self.tn1 + self.ta2)) + self.ta3
+        # creates a transform stack which looks like (((A, N), A), A)
+        self.stack2 = self.ta1 + self.tn1 + self.ta2 + self.ta3
+        # creates a transform stack which is a subset of stack2
+        self.stack2_subset = self.tn1 + self.ta2 + self.ta3
+
+        # when in debug, the transform stacks can produce dot images:
+#        self.stack1.write_graphviz(file('stack1.dot', 'w'))
+#        self.stack2.write_graphviz(file('stack2.dot', 'w'))
+#        self.stack2_subset.write_graphviz(file('stack2_subset.dot', 'w'))
+
+    def test_transform_depth(self):
+        assert_equal(self.stack1.depth, 4)
+        assert_equal(self.stack2.depth, 4)
+        assert_equal(self.stack2_subset.depth, 3)
+
+    def test_left_to_right_iteration(self):
+        stack3 = (self.ta1 + (self.tn1 + (self.ta2 + self.tn2))) + self.ta3
+#        stack3.write_graphviz(file('stack3.dot', 'w'))
+
+        target_transforms = [stack3,
+                             (self.tn1 + (self.ta2 + self.tn2)) + self.ta3,
+                             (self.ta2 + self.tn2) + self.ta3,
+                             self.tn2 + self.ta3,
+                             self.ta3,
+                             ]
+        r = [rh for _, rh in stack3._iter_break_from_left_to_right()]
+        self.assertEqual(len(r), len(target_transforms))
+
+        for target_stack, stack in zip(target_transforms, r):
+            self.assertEqual(target_stack, stack)
+
+    def test_transform_shortcuts(self):
+        self.assertEqual(self.stack1 - self.stack2_subset, self.ta1)
+        self.assertEqual(self.stack2 - self.stack2_subset, self.ta1)
+
+        assert_equal((self.stack2_subset - self.stack2),
+                     self.ta1.inverted(),
+                     )
+        assert_equal((self.stack2_subset - self.stack2).depth, 1)
+
+        assert_raises(ValueError, self.stack1.__sub__, self.stack2)
+
+        aff1 = self.ta1 + (self.ta2 + self.ta3)
+        aff2 = self.ta2 + self.ta3
+
+        self.assertEqual(aff1 - aff2, self.ta1)
+        self.assertEqual(aff1 - self.ta2, aff1 + self.ta2.inverted())
+
+        self.assertEqual(self.stack1 - self.ta3, self.ta1 + (self.tn1 + self.ta2))
+        self.assertEqual(self.stack2 - self.ta3, self.ta1 + self.tn1 + self.ta2)
+
+        self.assertEqual((self.ta2 + self.ta3) - self.ta3 + self.ta3, self.ta2 + self.ta3)
+
+    def test_contains_branch(self):
+        r1 = (self.ta2 + self.ta1)
+        r2 = (self.ta2 + self.ta1)
+        self.assertEqual(r1, r2)
+        self.assertNotEqual(r1, self.ta1)
+        self.assertTrue(r1.contains_branch(r2))
+        self.assertTrue(r1.contains_branch(self.ta1))
+        self.assertFalse(r1.contains_branch(self.ta2))
+        self.assertFalse(r1.contains_branch((self.ta2 + self.ta2)))
+
+        self.assertEqual(r1, r2)
+
+        self.assertTrue(self.stack1.contains_branch(self.ta3))
+        self.assertTrue(self.stack2.contains_branch(self.ta3))
+
+        self.assertTrue(self.stack1.contains_branch(self.stack2_subset))
+        self.assertTrue(self.stack2.contains_branch(self.stack2_subset))
+
+        self.assertFalse(self.stack2_subset.contains_branch(self.stack1))
+        self.assertFalse(self.stack2_subset.contains_branch(self.stack2))
+
+        self.assertTrue(self.stack1.contains_branch((self.ta2 + self.ta3)))
+        self.assertTrue(self.stack2.contains_branch((self.ta2 + self.ta3)))
+
+        self.assertFalse(self.stack1.contains_branch((self.tn1 + self.ta2)))
+
+    def test_affine_simplification(self):
+        # tests that a transform stack only calls as much as is absolutely
+        # necessary "non-affine", allowing the best possible optimization with
+        # complex transformation stacks.
+        points = np.array([[0, 0], [10, 20], [np.nan, 1], [-1, 0]], dtype=np.float64)
+        na_pts = self.stack1.transform_non_affine(points)
+        all_pts = self.stack1.transform(points)
+
+        na_expected = np.array([[1., 2.], [-19., 12.],
+                                [np.nan, np.nan], [1., 1.]], dtype=np.float64)
+        all_expected = np.array([[11., 4.], [-9., 24.],
+                                 [np.nan, np.nan], [11., 2.]], dtype=np.float64)
+
+        # check we have the expected results from doing the non-affine part only
+        np_test.assert_array_almost_equal(na_pts, na_expected)
+        # check we have the expected results from a full transformation
+        np_test.assert_array_almost_equal(all_pts, all_expected)
+        # check we have the expected results from doing the transformation in two steps
+        np_test.assert_array_almost_equal(self.stack1.transform_affine(na_pts), all_expected)
+        # check that getting the affine transformation first, then fully transforming using that
+        # yields the same result as before.
+        np_test.assert_array_almost_equal(self.stack1.get_affine().transform(na_pts), all_expected)
+
+        # check that the affine parts of stack1 & stack2 are equivalent (i.e. the optimization
+        # is working)
+        expected_result = (self.ta2 + self.ta3).get_matrix()
+        result = self.stack1.get_affine().get_matrix()
+        np_test.assert_array_equal(expected_result, result)
+
+        result = self.stack2.get_affine().get_matrix()
+        np_test.assert_array_equal(expected_result, result)
+
+
+class TestTransformPlotInterface(unittest.TestCase):
+    def tearDown(self):
+        plt.close()
+
+    def test_line_extent_axes_coords(self):
+        # a simple line in axes coordinates
+        ax = plt.axes()
+        ax.plot([0.1, 1.2, 0.8], [0.9, 0.5, 0.8], transform=ax.transAxes)
+        np.testing.assert_array_equal(ax.dataLim.get_points(), np.array([[0, 0], [1, 1]]))
+
+    def test_line_extent_data_coords(self):
+        # a simple line in data coordinates
+        ax = plt.axes()
+        ax.plot([0.1, 1.2, 0.8], [0.9, 0.5, 0.8], transform=ax.transData)
+        np.testing.assert_array_equal(ax.dataLim.get_points(), np.array([[ 0.1,  0.5], [ 1.2,  0.9]]))
+
+    def test_line_extent_compound_coords1(self):
+        # a simple line in data coordinates in the y component, and in axes coordinates in the x
+        ax = plt.axes()
+        trans = mtrans.blended_transform_factory(ax.transAxes, ax.transData)
+        ax.plot([0.1, 1.2, 0.8], [35, -5, 18], transform=trans)
+        np.testing.assert_array_equal(ax.dataLim.get_points(), np.array([[ 0., -5.], [ 1., 35.]]))
+        plt.close()
+
+    def test_line_extent_predata_transform_coords(self):
+        # a simple line in (offset + data) coordinates
+        ax = plt.axes()
+        trans = mtrans.Affine2D().scale(10) + ax.transData
+        ax.plot([0.1, 1.2, 0.8], [35, -5, 18], transform=trans)
+        np.testing.assert_array_equal(ax.dataLim.get_points(), np.array([[1., -50.], [12., 350.]]))
+        plt.close()
+
+    def test_line_extent_compound_coords2(self):
+        # a simple line in (offset + data) coordinates in the y component,
+        # and in axes coordinates in the x
+        ax = plt.axes()
+        trans = mtrans.blended_transform_factory(ax.transAxes, mtrans.Affine2D().scale(10) + ax.transData)
+        ax.plot([0.1, 1.2, 0.8], [35, -5, 18], transform=trans)
+        np.testing.assert_array_equal(ax.dataLim.get_points(), np.array([[ 0., -50.], [ 1., 350.]]))
+        plt.close()
+
+    def test_line_extents_affine(self):
+        ax = plt.axes()
+        offset = mtrans.Affine2D().translate(10, 10)
+        plt.plot(range(10), transform=offset + ax.transData)
+        expeted_data_lim = np.array([[0., 0.], [9., 9.]]) + 10
+        np.testing.assert_array_almost_equal(ax.dataLim.get_points(),
+                                             expeted_data_lim)
+
+    def test_line_extents_non_affine(self):
+        ax = plt.axes()
+        offset = mtrans.Affine2D().translate(10, 10)
+        na_offset = NonAffineForTest(mtrans.Affine2D().translate(10, 10))
+        plt.plot(range(10), transform=offset + na_offset + ax.transData)
+        expeted_data_lim = np.array([[0., 0.], [9., 9.]]) + 20
+        np.testing.assert_array_almost_equal(ax.dataLim.get_points(),
+                                             expeted_data_lim)
+
+    def test_pathc_extents_non_affine(self):
+        ax = plt.axes()
+        offset = mtrans.Affine2D().translate(10, 10)
+        na_offset = NonAffineForTest(mtrans.Affine2D().translate(10, 10))
+        pth = mpath.Path(np.array([[0, 0], [0, 10], [10, 10], [10, 0]]))
+        patch = mpatches.PathPatch(pth, transform=offset + na_offset + ax.transData)
+        ax.add_patch(patch)
+        expeted_data_lim = np.array([[0., 0.], [10., 10.]]) + 20
+        np.testing.assert_array_almost_equal(ax.dataLim.get_points(),
+                                             expeted_data_lim)
+
+    def test_pathc_extents_affine(self):
+        ax = plt.axes()
+        offset = mtrans.Affine2D().translate(10, 10)
+        pth = mpath.Path(np.array([[0, 0], [0, 10], [10, 10], [10, 0]]))
+        patch = mpatches.PathPatch(pth, transform=offset + ax.transData)
+        ax.add_patch(patch)
+        expeted_data_lim = np.array([[0., 0.], [10., 10.]]) + 10
+        np.testing.assert_array_almost_equal(ax.dataLim.get_points(),
+                                             expeted_data_lim)
+
+    def test_line_extents_for_non_affine_transData(self):
+        ax = plt.axes(projection='polar')
+        # add 10 to the radius of the data
+        offset = mtrans.Affine2D().translate(0, 10)
+
+        plt.plot(range(10), transform=offset + ax.transData)
+        # the data lim of a polar plot is stored in coordinates
+        # before a transData transformation, hence the data limits
+        # are not what is being shown on the actual plot.
+        expeted_data_lim = np.array([[0., 0.], [9., 9.]]) + [0, 10]
+        np.testing.assert_array_almost_equal(ax.dataLim.get_points(),
+                                             expeted_data_lim)
+
+
 if __name__=='__main__':
     import nose
-    nose.runmodule(argv=['-s','--with-doctest'], exit=False)
+    nose.runmodule(argv=['-s','--with-doctest'], exit=False)
``````
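`test_affine_simplification` above rests on the identity that a full transform equals its affine tail applied to the output of its non-affine head: `full(x) == affine(non_affine(x))`. This is what lets a stack cache the expensive non-affine result and redo only the cheap affine part. A toy demonstration with plain functions standing in for the two halves:

```python
def non_affine(points):
    return [(x * x, y) for x, y in points]          # stand-in non-affine part

def affine(points):
    return [(2 * x + 1, y + 3) for x, y in points]  # stand-in affine part

def full(points):
    """The composed transform: non-affine first, then affine."""
    return affine(non_affine(points))

pts = [(1.0, 2.0), (3.0, 4.0)]

# equivalent two-step route: cache the non-affine result once,
# then apply (a possibly updated) affine part later
cached = non_affine(pts)
assert affine(cached) == full(pts)
```

Since transform stacks are typically mostly affine, as the summary notes, caching at this seam recovers essentially all of the performance that the old duplicated `transform`/`transform_non_affine` definitions were trying to buy.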
487 lib/matplotlib/transforms.py