
Fix #5524. Use finfo.max instead of np.inf #5534

Merged
merged 2 commits into matplotlib:master on Nov 22, 2015

Conversation

mdboom (Member) commented on Nov 20, 2015

Cc: @efiring

Based on the fix suggested in #5524.

mdboom added this to the next major release (2.0) milestone on Nov 20, 2015
efiring (Member) commented on Nov 20, 2015

I was thinking about the finfo values, but couldn't remember the name finfo. I'm a little uneasy about it, though. These values will be run through the norm, after which they will be checked against the 0-1 range. The norming operation on these max values can generate overflow, yielding inf, leaving us right back where we started. Although it is not as pretty, I think the practical solution may be to use a large value that still leaves scope for being normed without overflow. If we used 1e200, for example, how often do you think that would cause trouble for a user?
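For reference, a minimal sketch of the overflow efiring describes, assuming a plain linear norm, (x - vmin) / (vmax - vmin), with illustrative vmin/vmax (not matplotlib's actual Normalize code):

```python
import numpy as np

fmax = np.finfo(float).max
vmin, vmax = 0.0, 0.5  # any span smaller than 1 triggers the overflow

with np.errstate(over="ignore"):
    normed = (fmax - vmin) / (vmax - vmin)  # quotient exceeds fmax

print(normed)  # inf -- right back to the value we were trying to avoid
```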

mdboom (Member, author) commented on Nov 20, 2015

Stepping back, the problem seems to have been introduced when non-finite values were masked. Would it work if we masked NaNs only (and left inf and -inf alone)?
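A small sketch of the two masking behaviors under discussion, with illustrative data:

```python
import numpy as np

z = np.array([1.0, np.nan, np.inf, -np.inf, 2.0])

nan_only = np.ma.masked_where(np.isnan(z), z)  # mask NaN, keep +/-inf
invalid = np.ma.masked_invalid(z)              # mask NaN and +/-inf

print(nan_only)  # [1.0 -- inf -inf 2.0]
print(invalid)   # [1.0 -- -- -- 2.0]
```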

WeatherGod (Member) commented on Nov 20, 2015

Why not just use vmax + eps and vmin - eps? That makes for a very small chance of overflow.
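A quick sketch of the eps idea, with one caveat: a raw machine epsilon only moves values of magnitude around 1, so a scale-aware step such as np.nextafter would be needed (illustrative vmax):

```python
import numpy as np

eps = np.finfo(float).eps
vmax = 1000.0

print(vmax + eps == vmax)          # True: eps vanishes at this scale
print(np.nextafter(vmax, np.inf))  # smallest float strictly above vmax
```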


mdboom (Member, author) commented on Nov 20, 2015

@WeatherGod: That does indeed fix the test. @efiring: Any downsides to that you can think of?

efiring (Member) commented on Nov 20, 2015

I thought of that also, but I wasn't sure whether vmin/vmax would always be known and invariant at that point in the execution.

efiring (Member) commented on Nov 20, 2015

Also, when you norm vmax + eps, is it guaranteed that the result will be greater than 1? I doubt it.
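It is not guaranteed. A minimal counterexample, assuming a plain linear norm and illustrative vmin/vmax:

```python
import numpy as np

vmin, vmax = -1.0, 1.0
padded = np.nextafter(vmax, np.inf)       # strictly greater than vmax

normed = (padded - vmin) / (vmax - vmin)  # what a linear norm computes
print(padded > vmax)  # True
print(normed > 1.0)   # False: (2 + eps) rounds back to 2.0, so normed == 1.0
```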

@@ -1231,10 +1231,11 @@ def _process_levels(self):
         self.layers = 0.5 * (self._levels[:-1] + self._levels[1:])
         # ...except that extended layers must be outside the
         # normed range:
+        finfo = np.finfo(float)
A reviewer (Member) commented on the finfo line:

iirc, this can be a major time sink

mdboom (Member, author) replied:

Yeah. In this case, I don't think it's in a hot path, though.
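For context, a hedged reconstruction of what this hunk is building toward, assuming (per the PR title) that the lines following finfo = np.finfo(float) clamp the extended outer levels to finfo.min/finfo.max instead of +/-np.inf (illustrative level values):

```python
import numpy as np

# Before: extended outer levels were +/-inf, so the midpoint
# ("layers") arithmetic produced inf.
_levels = np.array([-np.inf, 0.0, 1.0, 2.0, np.inf])  # extend='both'
print(0.5 * (_levels[:-1] + _levels[1:]))  # [-inf  0.5  1.5  inf]

# After: clamp to the largest finite floats instead.
finfo = np.finfo(float)
_levels[0], _levels[-1] = finfo.min, finfo.max
print(0.5 * (_levels[:-1] + _levels[1:]))  # finite, if huge, outer layers
```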

efiring (Member) commented on Nov 20, 2015

@mdboom, regarding your earlier question: I think that masking only the NaN values and leaving inf alone would work in this case, but it would not be the right thing for pcolormesh in general, because autoscaling of the data range would still give useless results. I think using masked_invalid is correct, and that something like my original suggestion, with genuine numbers, is the way to handle this. It might not look pure, but it is practical.
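A sketch of the autoscaling point, with illustrative data: leaving inf unmasked means the data range itself becomes infinite.

```python
import numpy as np

data = np.array([1.0, np.nan, np.inf, 2.0])

nan_only = np.ma.masked_where(np.isnan(data), data)
print(nan_only.max())                    # inf -- useless as a vmax

print(np.ma.masked_invalid(data).max())  # 2.0 -- a usable data range
```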

tacaswell (Member) commented:

@efiring: It looks like this self.vmin is unrelated to the self.vmin in the Normalize classes, as you noted in 9cba6fb, so in this case we do know vmin/vmax, as they are extracted from the levels.

I think it is better to use the eps approach than a fixed large number, to ensure that self.layers is always strictly increasing; see the sketch below.
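A quick check of that property, with an assumed layers_for helper and illustrative levels:

```python
import numpy as np

def layers_for(levels, lo, hi):
    # midpoints of consecutive levels, padded with extended outer values
    padded = np.concatenate([[lo], levels, [hi]])
    return 0.5 * (padded[:-1] + padded[1:])

levels = np.array([0.0, 1.0, 2.0, 3.0])
eps_pad = layers_for(levels, np.nextafter(0.0, -np.inf),
                     np.nextafter(3.0, np.inf))
big_pad = layers_for(levels, -1e200, 1e200)

# Both stay strictly increasing here; a fixed pad can only fail when
# the levels themselves approach its magnitude.
print(np.all(np.diff(eps_pad) > 0), np.all(np.diff(big_pad) > 0))
```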

tacaswell added a commit that referenced this pull request on Nov 22, 2015
tacaswell merged commit 37182ee into matplotlib:master on Nov 22, 2015
tacaswell added a commit that referenced this pull request on Nov 22, 2015
tacaswell (Member) commented:

Backported as d1f525c.

QuLogic modified the milestones: Critical bugfix release (1.5.1), next major release (2.0) on Nov 22, 2015
efiring (Member) commented on Nov 22, 2015

Confusing indeed. Can we just delete self.vmin and self.vmax here?
In any case, even with local variables for vmin and vmax, this still would not be correct. The extend values need to be outside the normed range, and this changeset only puts them marginally outside the data range. @tacaswell, please revert this.

tacaswell (Member) commented:

Reverted on master as f90367a and on v1.5.x as 605718b.

Sorry, I misunderstood. I am not sure how to guarantee that the values normalize to outside the bounds for a general norm. Maybe we should just use the top and bottom values in self._levels, since, if extend is true, they are already padded out.

efiring (Member) commented on Nov 22, 2015

I think the problem is that their padding is enough to bracket the "outside" data values, but not necessarily enough to ensure that the normed top and bottom levels are outside the 0-1 range. I stand by my earlier recommendation to use magic numbers that work (e.g. anything in the 1e100 to 1e300 range; let's split the difference and use 1e150) instead of magic numbers (the finfo values) that don't.
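A sketch of why a mid-range magic number threads the needle, assuming a plain linear norm and illustrative vmin/vmax:

```python
import numpy as np

vmin, vmax = 0.0, 3.0
big = 1e150  # far above any plausible level, far below overflow

layer_top = 0.5 * (vmax + big)               # finite midpoint
normed = (layer_top - vmin) / (vmax - vmin)  # finite, and well above 1

print(np.isfinite(layer_top))  # True
print(normed > 1.0)            # True: safely outside the normed range
```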
