
Contours colored wrong when data has a wide range #851

Closed
painter1 opened this issue Oct 28, 2014 · 13 comments

painter1 (Contributor) commented Oct 28, 2014

To reproduce this bug you will need a file I have named tmp_AODVIS.nc. Please let me know how to supply it.
This file has one variable on a 180x360 lat-lon grid. The variable's values are mostly in the correct range, between 0 and 1. But about 1/8 of the values are very large, around 1.0e35 (they differ from the fill value and from each other, and they are not masked). This situation arises from an error, but that makes it all the more important that we plot it correctly - so as to reveal the error. (BTW, I didn't do it!)

In fact, the following (reasonable) Python code produces a plot of a uniform color (orange in my case), which implies that all values are between 1.0 and 1.5. As the code below will demonstrate, 88% of the data is actually less than 1.0.

import vcs
import cdms2  # needed for cdms2.open below
v=vcs.init()
f=cdms2.open('tmp_AODVIS.nc')
AOD=f('rv_AODVIS_ANN_ft0_ne30')
gm=v.createisofill()
PROJECTION = v.createprojection()
PROJECTION.type=-1
gm.projection=PROJECTION
levels=vcs.mkscale(0,1.0e10)
levels = [-1,0,0.5,1.0,1.5,2.0,100.0,1.0e36]
gm.levels = levels
v.plot( AOD, gm )

You see a uniform orange, meaning that all data is between 1.0 and 1.5.

Is it? No, as demonstrated by:

(AOD<1.0).sum()
56861
(AOD>=1.0).sum()
7939
AOD.shape
(180, 360)
180*360
64800
56861+7939
64800
56861./64800
0.8774845679012345

doutriaux1 (Contributor) commented Oct 31, 2014

@dlonie I'm printing the banded-contour values and the colors I get.
(Colors are: vcs index, VTK's r,g,b, then 255-scaled r,g,b.)

Contour: 0 : -1
Contour: 1 : 0
Contour: 2 : 0.5
Contour: 3 : 1.0
Contour: 4 : 1.5
Contour: 5 : 2.0
Contour: 6 : 100.0
Contour: 7 : 1e+36

lut: 0 16 0.0 0.33 1.0 0.0 84.15 255.0
lut: 1 53 0.0 1.0 0.55 0.0 255.0 140.25
lut: 2 90 0.57 1.0 0.0 145.35 255.0 0.0
lut: 3 128 1.0 0.51 0.0 255.0 130.05 0.0
lut: 4 165 1.0 0.05 0.0 255.0 12.75 0.0
lut: 5 202 0.39 0.0 0.0 99.45 0.0 0.0
lut: 6 239 0.73 0.0 0.74 186.15 0.0 188.7

Orange seems to be color index "3"; any idea what is wrong?
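As an aside, the printed color indices look evenly spaced across the 16–239 portion of the vcs colormap. A small sketch that reproduces that spacing (an inference from the printed values above, not the actual vcs code):

```python
def band_color_indices(nbands, lo=16, hi=239):
    """Spread nbands colormap indices evenly from lo to hi inclusive.
    lo/hi are inferred from the lut printout above, not from vcs source."""
    step = (hi - lo) / float(nbands - 1)
    return [int(round(lo + i * step)) for i in range(nbands)]

# 7 bands for the 8 contour levels in this issue
print(band_color_indices(7))  # -> [16, 53, 90, 128, 165, 202, 239]
```

That matches the second column of the lut printout, which suggests the LUT itself is built correctly and the mismatch happens later, when levels and LUT entries go out of sync.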

Thanks,

Pushed a branch with some bug fixes. To run the following code, the branch is: issue_851_colors

Jeff's sample code:

import vcs,os,cdms2,MV2
v=vcs.init()
f=cdms2.open(os.path.join(os.path.dirname(__file__),'tmp_AODVIS.nc'))
AOD=f('rv_AODVIS_ANN_ft0_ne30')
#AOD=MV2.masked_greater(AOD,1.e20)
gm=v.createisofill()
PROJECTION = v.createprojection()
PROJECTION.type=-1
gm.projection=PROJECTION
levels=vcs.mkscale(0,1.0e10)
levels = [-1,0,0.5,1.0,1.5,2.0,100.0,1.0e36]
gm.levels = levels
gm.list()
v.plot( AOD, gm )
v.png("850")
raw_input("Press enter")

Data can be found at:
https://github.com/doutriaux1/VTKLearning/blob/master/SandBox/tmp_AODVIS.nc

doutriaux1 (Contributor) commented Oct 31, 2014

Also, a picture of Jeff's description:
[screenshot: 850.png]

doutriaux1 (Contributor) commented Oct 31, 2014

If we mask the bad data, we get something reasonable.
[screenshot: 850.png]

doutriaux1 (Contributor) commented Oct 31, 2014

@dlonie The VTK filters get the same values for indices, colors, and levels in both cases; only the data differs.

@doutriaux1 doutriaux1 added VTK and removed VCS labels Oct 31, 2014
@doutriaux1 doutriaux1 added this to the 2.1 milestone Oct 31, 2014
doutriaux1 (Contributor) commented Oct 31, 2014

Because of this, I think it is a VTK issue; reassigning to Kitware until they confirm or rule this out.

allisonvacanti (Contributor) commented Nov 3, 2014

I seem to be missing something. Using that data file and script (I changed the variable name to "rv_AODVIS_ANN_ft0_ne30" since that's what UVCDAT reports in the file), I get:

[screenshot: contour-colors.png]

painter1 (Contributor, Author) commented Nov 3, 2014

I've seen that picture after clipping out the bad (>1.e30) data. 
You have more contours than Charles' example with masked-out data,
and it's upside-down, but it's basically the same picture.  What
you're missing is the bad data!
- Jeff


doutriaux1 (Contributor) commented Nov 3, 2014

@dlonie please check the line that masks out the "bad" values. In your picture it seems like they have been masked out, and hence it works.

#AOD=MV2.masked_greater(AOD,1.e20)

Make sure it is commented out.

Also, which branch are you using?

allisonvacanti (Contributor) commented Nov 6, 2014

Well, this was fun to track down >.<

The issue is that vtkBandedPolyDataContourFilter will, by default, condense the contour levels if consecutive levels are close. I imagine this is meant to clean up problems caused by contouring data that has a small range with a large number of contour levels.

How it works: there is a configurable ClipTolerance that is multiplied by the scalar range to produce an InternalClipTolerance. If two consecutive levels are less than the InternalClipTolerance apart, they are merged.

Trouble is, without masking the data, the scalar range is ~1e36, and InternalClipTolerance ends up being ~1e24, so several levels end up getting removed, leading to the entire dataset being lumped into the same contour band. This also puts the vcs-generated LUT out of sync with the actual contour levels, hence the color doesn't make sense when comparing the legend to the plot.
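A plain-Python sketch of the merging behavior described above (the tolerance values here are illustrative; the real logic lives inside vtkBandedPolyDataContourFilter):

```python
def merge_levels(levels, scalar_range, clip_tolerance):
    """Drop any level closer than clip_tolerance * scalar_range
    to the previously kept level. Sketch of the described behavior,
    not the actual VTK implementation."""
    internal_tol = clip_tolerance * (scalar_range[1] - scalar_range[0])
    kept = [levels[0]]
    for lv in levels[1:]:
        if lv - kept[-1] >= internal_tol:
            kept.append(lv)
    return kept

levels = [-1, 0, 0.5, 1.0, 1.5, 2.0, 100.0, 1.0e36]

# Masked data: range ~(0, 2), tiny internal tolerance -> all levels survive
print(merge_levels(levels, (0.0, 2.0), 1e-12))

# Unmasked data: range ~1e36, internal tolerance ~1e24 -> every
# intermediate level is swallowed and one giant band remains
print(merge_levels(levels, (0.0, 1.0e36), 1e-12))  # -> [-1, 1e+36]
```

With the intermediate levels gone, the filter emits far fewer bands than vcs built LUT entries for, which is exactly the legend/plot mismatch seen above.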

tl;dr This patch fixes the issue (line numbers may be off, since I rebased your branch onto master):

@@ -857,10 +857,11 @@ class VTKVCSBackend(object):
               lut = vtk.vtkLookupTable()
               cot = vtk.vtkBandedPolyDataContourFilter()
               cot.ClippingOn()
               cot.SetInputData(sFilter.GetOutput())
               cot.SetNumberOfContours(len(l))
+              cot.SetClipTolerance(0.)
               for j,v in enumerate(l):
                 print "Contour:",j,":",v
                 cot.SetValue(j,v)
               #cot.SetScalarModeToIndex()
               cot.Update()

Now we get the expected result:

[screenshot: 850.png]

doutriaux1 (Contributor) commented Nov 6, 2014

Wow! Nice detective job here! Do you want to push a branch? Or are you happy with me doing it and merging?

aashish24 (Contributor) commented Nov 7, 2014

Nice job @dlonie. Should we always set the tolerance to 0? Just wondering about the numerical stability of the algorithm (I haven't looked into the code much). I am wondering if setting it to 0 may cause some sort of issue in certain cases.

allisonvacanti (Contributor) commented Nov 7, 2014

I'll push a branch in a moment. I wasn't sure if you wanted to lump this in with your other branch, either way works for me.

Setting it to 0 should always be fine for this use case, since vcs is generating the contour levels itself. The tolerance seems to be there for when the levels are automatically generated.

doutriaux1 (Contributor) commented Nov 7, 2014

Good point. We could turn it off (set it to 0) only if the delta is greater than the code's default of 1.e20.
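One way to sketch that conditional (the function name and default_tol value are hypothetical, not the actual VCS backend code or VTK's real default):

```python
def clip_tolerance_for_range(scalar_min, scalar_max,
                             default_tol=1.0e-7, threshold=1.0e20):
    """Return the ClipTolerance to hand to
    vtkBandedPolyDataContourFilter: disable level merging (0.0)
    only when the data's scalar range is suspiciously wide."""
    if (scalar_max - scalar_min) > threshold:
        return 0.0
    return default_tol

# usage sketch inside the backend, before cot.Update():
#   cot.SetClipTolerance(clip_tolerance_for_range(dmin, dmax))
```

This keeps the filter's merging behavior for ordinary data while guarding against the 1e36-range case from this issue.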
