3D cone traces #2641

Merged · 32 commits · May 18, 2018
Conversation

@etpinard (Contributor) commented May 16, 2018

Cones are coming to plotly.js

[screencast: peek 2018-05-16 13-58]

Things we need to decide on

  • How to name this trace? I chose cone, like a cone trace, similar to surface. But since a cone trace often has many cones, maybe cones would be better. cone3d, similar to mesh3d, could work, but I think the 3d is not necessary; quiver is more popular in 2d. Decision: we're going with cone (a minimal example is sketched after this list).
  • How to name the meshgrid attribute? I chose vx, vy and vz (v for vector), but maybe meshgrid.x, meshgrid.y and meshgrid.z would be best. Note that these attributes are optional. Decision: going with cones.(x|y|z) in cb7ef43, which will allow us to configure non-cartesian-product meshgrids if we want down the road. Removed from this 1st implementation -> 5a42de0
  • Should the colorbar show normalized vector norm (i.e. 0 to 1) values or absolute vector norms? Maybe we need an attribute to enable both behaviors? Decision: going with absolute vector norms in a2db567
  • The color values (i.e. intensity) for the cones represent the vector norm which is positive definite. Maybe we should restrict cmin and cmax to be positive also? Moreover, maybe we should make Viridis the default colorscale. Decision: Let's not restrict cmin, cmax in case we add an intensity field to color the cones by something other than their vector norm down the road. No go for Viridis default until v2.
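
To make the naming and attribute decisions above concrete, here is a minimal sketch of what a cone trace ends up looking like (the data values are purely illustrative; a similar snippet appears further down in the autorange discussion):

// Minimal cone trace: x/y/z are the vector positions, u/v/w the vector
// components, and the colorbar shows absolute vector norms by default.
Plotly.newPlot(gd, [{
    type: 'cone',
    x: [1, 2, 3],
    y: [1, 2, 3],
    z: [1, 2, 3],
    u: [1, 0, 0],
    v: [0, 3, 0],
    w: [0, 0, 2]
}]);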

Things we need to fix/add

  • Computed trace bounds are off. They only consider the cones' tips, ignoring their lengths and directions. This can make autoranged axes look weird:

[image]

Update: an ok-but-not-great solution was done in db8222b and improved multiple times in subsequent commits.

  • Need to plug in size settings (attributes sizeref and sizemode) - done in 863b0da
  • Hover doesn't work - done in 71e46af
  • Image generation fails in imagetest (works fine in orca) - Update: imperfect solution in aaf7249 where the gl3d_cone-* mocks are tested using orca during ./tasks/noci_test.sh.

Things that work ok, but could improve

  • We currently call cone2mesh twice per .plot call, which can be slow (~300ms for 20k cones): once in calc to get the color (i.e. intensity) values and again in convert to compute the correctly scaled coordinates (note that the scene's scale isn't available during calc). It would be nice to split gl-cone3d's createConePlot into two subroutines. Update: not an issue any more after a2db567

Things that could wait future iterations

  • make x, y and z optional, generate linspaces when omitted
  • plug in gl-mesh3d's lighting settings - done in 3dd6cf5
  • providing an underdetermined meshgrid can lead to exceptions in gl-cone3d's createConePlot - not an issue until we figure out how to implement our version of meshgrid.

@etpinard (Contributor Author):

Forgot to cc @alexcjohnson, @jackparmer and @kig, who could maybe help ✅ a few checkboxes.

@alexcjohnson (Collaborator):

Cool!

How to name this trace? I chose cone, like a cone trace, similar to surface. But since a cone trace often has many cones, maybe cones would be better. cone3d, similar to mesh3d, could work, but I think the 3d is not necessary; quiver is more popular in 2d.

Agreed - a cone is a 3D shape, it would have no meaning in 2D, so just cone - 3d is unnecessary. We don't pluralize any of our other types (parcoords looks plural, but it's "coordinates" that's plural, not the lines connecting these coordinates, which are really the objects on the plot)

How to name the meshgrid attribute? I chose vx, vy and vz (v for vector), but maybe meshgrid.x, meshgrid.y and meshgrid.z would be best. Note that these attributes are optional.

I think we should have the required data be the vector field (x, y, z) -> (u, v, w); then we probably need a few ways to specify whether and how to interpolate cones onto that field (I think this is what you're already doing, but in case it's not clear: all of these arrays are assumed to be 1D - 3D arrays lead to too many ambiguities as well as extra headaches for our back end):

  • If you add nothing else, draw cones at exactly the specified positions (x, y, z)
  • You can provide 3 more data arrays, then the cones are interpolated onto these positions. Perhaps like MATLAB we call these (cx, cy, cz)? or cones: {x, y, z}?
  • It would also be nice to be able to specify just a grid for each dimension; what should this look like? I don't think it can just be x0, dx etc because we need both bounds; x0, dx, x1 would do this but we have no precedent for that. For histograms and contours we have {start, end, size}, so how about cones: {xgrid: {start, end, size}, ygrid: {start, end, size}, zgrid: {start, end, size}}? start and end could be optional, then we would just bound the grid to the data.
  • In principle it seems like you could mix and match these per dimension... like you provide cones: {x, y, zgrid} and the result is cones explicitly positioned in x and y, then repeated for every value implied by zgrid. But that could get confusing: I could imagine people wanting to think of cones.x, cones.y, and cones.z each as an independent array, ie cone positions are the cartesian product of the three rather than element-by-element combinations. Too many possibilities - if you want to start with just the first option that's fine, we can add the others later if there's demand. (The main options are sketched below.)
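
For illustration, here is how the positioning options above might look as trace JSON. These are sketches of the proposal only, not the API that ships in this PR (the cones.* interpolation attributes were dropped from the first implementation), and the attribute names are just the ones floated in this comment:

// Assume x, y, z, u, v, w (and cx, cy, cz) are 1D arrays of equal length.

// 1. Cones drawn exactly at the specified positions (x, y, z):
var explicitTrace = {type: 'cone', x: x, y: y, z: z, u: u, v: v, w: w};

// 2. Cones interpolated onto separately supplied positions:
var interpolatedTrace = {
    type: 'cone', x: x, y: y, z: z, u: u, v: v, w: w,
    cones: {x: cx, y: cy, z: cz}
};

// 3. Cones interpolated onto a regular grid per dimension:
var griddedTrace = {
    type: 'cone', x: x, y: y, z: z, u: u, v: v, w: w,
    cones: {
        xgrid: {start: 0, end: 4, size: 1},
        ygrid: {start: 0, end: 4, size: 1},
        zgrid: {start: 0, end: 4, size: 1}
    }
};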

Should the colorbar show normalized vector norm (i.e. 0 to 1) values or absolute vector norms? Maybe we need an attribute to enable both behaviors?

Absolute vector norm seems far more useful. Leave this attribute out and let people normalize on their own, until someone complains. The hover data is always going to show the absolute vector anyway, right? Would be weird if this disagreed with the colorscale.

The color values (i.e. intensity) for the cones represent the vector norm which is positive definite. Maybe we should restrict cmin and cmax to be positive also? Moreover, maybe we should make Viridis the default colorscale.

Much as I like Viridis, let's not diverge from the default colorscales we have for all other traces until we can change them as well.

Is there a use case to supply color as an independent data array? Perhaps someone wants to color just by w (vertical flux) or ∇∙v or ∇×v or who knows what... If we allow that, cmin and cmax could be anything. Anyway I don't see a need to explicitly force them to be positive even when color is the norm, they'd be positive by default and only go negative if the user explicitly chooses that.

@alexcjohnson (Collaborator):

Computed trace bounds are off. They only consider the cones' tip, ignoring their lengths and directions.

Are the cones positioned with their tips at the data positions? That seems odd, I'd have thought the data position would go at its center of mass, ie 3/4 of the way from the tip to the base.

@etpinard (Contributor Author):

Are the cones positioned with their tips at the data positions?

Yes. That's how it works currently.

@alexcjohnson (Collaborator):

Huh, MATLAB places the cones with their tails at the data positions. That also seems wrong to me...

'If *scaled*, `sizeref` scales such that the reference cone size',
'for the maximum vector magnitude is 1.',
'If *absolute*, `sizeref` scales such that the reference cone size',
'for vector magnitude 1 is one grid unit.'
Collaborator:

What do you mean by a grid unit here? Is it one unit of the (x, y, z) positions or the spacing between adjacent cones?

Almost always the vectors have different units from the position space they're displayed in - so I'd think if you specify nothing at all about the cone sizing, they should end up scaled so the largest one is about as big as the smallest spacing between adjacent cones.

There's definitely a use case for 'absolute' - the user has pre-calculated everything and they want each cone to go from one precise (x, y, z) to another precise (x, y, z). I'm not sure there's a use case though for scaling to one unit in position, I'd think 'scaled' should refer to the minimum adjacent cone distance.
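
For reference, a hedged sketch of how the two sizing modes would be used from the plotting side (values illustrative; the exact semantics of 'scaled' are what's being debated here):

// 'scaled': cone sizes are relative to the field, with sizeref acting as a
// dimensionless multiplier of the reference size.
Plotly.restyle(gd, {sizemode: 'scaled', sizeref: 0.5});

// 'absolute': the user has pre-computed vector lengths in position-space
// units and wants them drawn as-is (times sizeref).
Plotly.restyle(gd, {sizemode: 'absolute', sizeref: 1});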

@kig (Contributor) commented May 17, 2018:

I'll have a new look at the cone sizing parameters. Pre-meshgrid they were based on the position-space units and scaled so that two cones with maximum norm, pointing in the same direction, would have their tip and base touch. Like <|<|. That would be with coneSize 2. With coneSize 1, you'd get no overlapping cones even when they're |><|.

The absolute sizing ditched the automatic cone scaling and used the cone vector norm to figure out the size of the cone model, so that a norm of 1 with absolute size factor 1 results in a cone that's one unit tall in the position unit space. Absolute size factor 2 => 2 unit tall cone, etc.

With the meshgrid sampling (and the realization that the cone positions may not be on a nice regular grid), I changed the automatic cone sizing to scale based on the smallest distance between two adjacent cone positions (adjacent in the sense that they've got successive indices in the positions array). This is not the right thing to do, though; computing the smallest distance between any two cone positions would be, but that's O(n^2) with a naive algorithm.

Another idea floated on the sizing was to have pixel-based sizing as well, which would probably work like "invert the projection, view and model transform matrices for the cone model, then scale the cone model according to viewport size", but would result in ortho-camera cones, which might be confusing. (And it sounds like a bit of a pain to implement.)
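
For concreteness, a sketch of the successive-index spacing heuristic described above (not the actual gl-cone3d code):

// Smallest distance between cones that are adjacent in the positions array.
// O(n), but it can overestimate the true minimum spacing when the closest
// cones do not sit at successive indices, as noted above.
function minSuccessiveDistance(x, y, z) {
    var best = Infinity;
    for(var i = 1; i < x.length; i++) {
        var dx = x[i] - x[i - 1];
        var dy = y[i] - y[i - 1];
        var dz = z[i] - z[i - 1];
        var d = Math.sqrt(dx * dx + dy * dy + dz * dz);
        if(d > 0 && d < best) best = d;
    }
    return best;
}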

Collaborator:

@kig that all sounds great. I guess we only need to worry about the smallest-distance calculation when using the (x, y, z) data to position the cones; with the cartesian product of x/y/z interpolation grids each dimension should be pretty compact, even if the values are out of order. But I guarantee at some point we'll see a data set where the closest points are not successive. I don't know if there's a JS implementation already available, but it's possible to do it in O(n log(n)^2): http://www.cs.ucsb.edu/~suri/cs235/ClosestPair.pdf

Let's not worry about pixel-based sizing, at least not for now. I guess I can see using this if we also do automatic grid spacing, ie when you zoom in and the cone spacing gets larger, at some point we decrease the grid size so we draw more cones between the existing ones. That's definitely not a v1 feature 😅

@alexcjohnson (Collaborator):

We currently call cone2mesh twice per .plot call, which can be slow (~300ms for 20k cones): once in calc to get the color (i.e. intensity) values and another in convert to compute the correctly scaled coordinates

If it's just for color (which, per #2641 (comment), might not even come from the cone lengths), can't we just find the bounds of the data provided, as a simple loop over the cone data? That should be a heck of a lot faster (quick test: ~5ms for 20k data points) and, I might argue, more faithful to the data than basing the color bounds on the interpolated data used in the actual cones.
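
A sketch of the simple loop being suggested here (not the exact code that landed):

// One pass over the raw vector components gives the norm bounds used for
// cmin/cmax without building the cone mesh at all.
function vectorNormBounds(u, v, w) {
    var normMin = Infinity;
    var normMax = -Infinity;
    for(var i = 0; i < u.length; i++) {
        var norm = Math.sqrt(u[i] * u[i] + v[i] * v[i] + w[i] * w[i]);
        if(norm < normMin) normMin = norm;
        if(norm > normMax) normMax = norm;
    }
    return [normMin, normMax];
}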

... using an invisible gl-scatter3d trace
var meshData = convert(scene, data);
var mesh = createConeMesh(gl, meshData);

var pts = createScatterPlot({
Contributor Author:

a little bit hacky and not very efficient, but this works:

[screencast: peek 2018-05-16 17-24]

Collaborator:

Nice, that was fast! Is it possible to get (u, v, w) in the hovertext as well?

Contributor Author:

It's possible!

[screencast: peek 2018-05-16 23-31]

but maybe someone would argue that 6 fields in one hover label is too much by default. Maybe we could just show the vector norm value? Or maybe have hoverinfo: 'x+y+z+norm' by default and optionally up to: x+y+z+u+v+w+norm+text

Collaborator:

'x+y+z+norm' by default and optionally up to: x+y+z+u+v+w+norm+text

I like it!

Contributor Author:

done in e4a2035
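
A quick sketch of the resulting user-facing settings, assuming the hoverinfo flags landed as proposed above:

// Default: position plus vector norm.
Plotly.restyle(gd, {hoverinfo: 'x+y+z+norm'});

// Opting in to the full set of fields.
Plotly.restyle(gd, {hoverinfo: 'x+y+z+u+v+w+norm+text'});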

... w/o using cone2mesh, and use that to find cmin & cmax,
    which now correspond to the "absolute" vector norm.

    This makes cone/helpers.js obsolete.
... during autorange / scene bounds computation.

- This pad value is a "worst-case" solution (i.e. it pads too much
  for most situations), but gives nice visual results (I think).

colorscaleCalc(trace, meshData.vertexIntensity, '', 'c');
colorscaleCalc(trace, [normMin, normMax], '', 'c');
Contributor Author:

Shoutout to @alexcjohnson for the tip!

var obj = objects[j];
var objBounds = obj.bounds;
// add worse-case pad for cone traces
var pad = (obj._trace.data._normMax || 0) * dataScale[i];
Contributor Author:

This is what I came up with to solve the "cones out of scene" problem.

Plotly.newPlot(gd, [{
    type: 'cone',
    x: [1, 2, 3],
    y: [1, 2, 3],
    z: [1, 2, 3],
    u: [1, 0, 0],
    v: [0, 3, 0],
    w: [0, 0, 2]
}])

now gives:

[image]

Collaborator:

Looks good! I suppose based on gl-vis/gl-cone3d#6 we could drop it to 3/4 of that value, though then if we make that configurable it would depend on the offset... more trouble than it's worth, let's keep what you have here.

- x/y/z always refer to vector positions (their length should match
  u/v/w)
- if nothing else is provided, x/y/z correspond to the cone positions also
- to set cone positions, use `cones.(x|y|z)` for now
- update attribute descriptions and trace meta info
@etpinard (Contributor Author):

I still need to fix up this case brought up by @alexcjohnson:

// seems like something’s off with the axis padding - cones at 1 & 3 give axis ranges `[-2.46, 6.46]`? Shouldn’t that be `[0, 4]` if we’re padding by one full cell?

Plotly.newPlot(gd,[{type: 'cone', x: [1, 1, 1, 1, 3, 3, 3, 3], y: [1, 1, 3, 3, 1, 1, 3, 3], z: [1, 3, 1, 3, 1, 3, 1, 3],
u: [1, 2, 3, 4, 1, 2, 3, 4], v: [1, 1, 2, 2, 3, 3, 4, 4], w: [4, 4, 1, 1, 2, 2, 3, 3]}])

but other than that, this PR seems ready. Dropping the discussion needed label, tagging as reviewable.

@@ -508,7 +508,7 @@ proto.plot = function(sceneData, fullLayout, layout) {
var obj = objects[j];
var objBounds = obj.bounds;
// add worse-case pad for cone traces
- var pad = (obj._trace.data._normMax || 0) * dataScale[i];
+ var pad = 0.75 * (obj._trace.data._compMax || 0) * dataScale[i];
Contributor Author:

This gives a better estimate. The data from #2641 (comment) now gives:

[image]

Collaborator:

Great! Yeah, I was wrong with [0, 4] - this looks awesome.

Collaborator:

hmm, this is still not quite right though. If I double all the u, v, w values, the cones don't change (as they shouldn't) but the padding increases.

var t={type: 'cone', x: [1, 1, 1, 1, 3, 3, 3, 3], y: [1, 1, 3, 3, 1, 1, 3, 3], z: [1, 3, 1, 3, 1, 3, 1, 3],
u: [1, 2, 3, 4, 1, 2, 3, 4], v: [1, 1, 2, 2, 3, 3, 4, 4], w: [4, 4, 1, 1, 2, 2, 3, 3]}

// the resulting image should be independent of what I multiply by here
function m(v) { return v*2; }

t.u = t.u.map(m); t.v = t.v.map(m); t.w=t.w.map(m); Plotly.newPlot(gd,[t], {scene: {camera: {eye: {x: -0.01, y: 2.52, z: 0.42}}}, width: 800, height: 800})

v * 1 (as above): [screenshot]

v * 2: [screenshot]

I don't need to tell you what happens with v / 10 do I? 😉

Contributor Author:

latest attempt: 3d26d47

delete gd.data[0][k];

supplyAllDefaults(gd);
expect(gd._fullData[0].visible).toBe(!k, 'missing array ' + k);
Collaborator:

Clever, but maybe a little too compact to include '' in this set 😈

Contributor Author:

ok, fixed in 8dca9ae

})
.then(function() {
_assertAxisRanges('after adding one cone outside range but with norm-0',
[-0.45, 6.45], [-0.45, 6.45], [-0.45, 6.45]
Contributor Author:

@alexcjohnson unfortunately I can't get this to "stick" to the previous range[0] values without breaking other gl3d mocks that depend on

for(j = 0; j < objects.length; j++) {
    var objBounds = objects[j].bounds;
    sceneBounds[0][i] = Math.min(sceneBounds[0][i], objBounds[0][i] / dataScale[i]);
    sceneBounds[1][i] = Math.max(sceneBounds[1][i], objBounds[1][i] / dataScale[i]);
}

which scales the autorange bounds with the dataScale:

// dataScale[j] normalizes each dimension's data span to 1
// (guarding against degenerate or inverted bounds):
var dataScale = [1, 1, 1];
for(j = 0; j < 3; ++j) {
    if(dataBounds[0][j] > dataBounds[1][j]) {
        dataScale[j] = 1.0;
    }
    else {
        if(dataBounds[1][j] === dataBounds[0][j]) {
            dataScale[j] = 1.0;
        }
        else {
            dataScale[j] = 1.0 / (dataBounds[1][j] - dataBounds[0][j]);
        }
    }
}

Hopefully this iteration is close enough.


As a side note, we should spend some time replacing this dataScale hack. This thing was added to make date axes work properly in 3D. Timestamps don't fit in a Float32Array, so we can't just convert the datestrings to timestamps, but we do this better in scattergl, where we use Float64Array or split the data into two Float32Arrays when not supported.

Contributor Author:

New issue about the dataScale mess -> #2646

@@ -107,6 +108,9 @@ function convert(scene, trace) {
meshData.fresnel = trace.lighting.fresnel;
meshData.opacity = trace.opacity;

// stash autorange pad value
trace._pad = anchor2coneSpan[trace.anchor] * meshData.vectorScale * trace._normMax * trace.sizeref;
Collaborator:

Awesome, I think we have a winner now! I don't see any cases (as I change the cone spacing, cone count in each direction, and vector length) where this fails to encompass all the cones, nor any where the padding looks unpleasantly large. 🏆

@@ -102,7 +102,7 @@ describe('@gl Test cone autorange:', function() {
})
.then(function() {
_assertAxisRanges('scaled up',
- [-0.39, 4.39], [-0.39, 4.39], [-0.39, 4.39]
+ [-0.66, 4.66], [-0.66, 4.66], [-0.66, 4.66]
Collaborator:

Can you add one more test, to 🔒 this in a little more since it went through so many iterations: scale up the x/y/z arrays. At least one of the previous iterations had a problem with this but otherwise looked pretty good.

Contributor Author:

there's this one:

var trace = fig.data[0];
var x = trace.x.slice();
x.push(5);
var y = trace.y.slice();
y.push(5);
var z = trace.z.slice();
z.push(5);
var u = trace.u.slice();
u.push(0);
var v = trace.v.slice();
v.push(0);
var w = trace.w.slice();
w.push(0);
return Plotly.restyle(gd, {
x: [x], y: [y], z: [z],
u: [u], v: [v], w: [w]
});
})
.then(function() {
_assertAxisRanges('after adding one cone outside range but with norm-0',
[-0.72, 6.72], [-0.72, 6.72], [-0.72, 6.72]
);

am I missing anything?

Collaborator:

It's close, but there's a difference between adding another point that doesn't change the minimum spacing between cones, and expanding the whole x/y/z space and therefore expanding the minimum spacing as well.

Contributor Author:

got it. Thanks!

Contributor Author:

done in -> 2c08357

@alexcjohnson (Collaborator):

Alright! I'm happy if you are! 💃 💃 💃

@etpinard (Contributor Author):

Ok. Merging this thing!

Shoutout to @kig for this feature 🎉

Thanks very much @alexcjohnson for the help making this plotly-proof! 💪

This will be part of v1.38.0 set to be released on Tuesday. 🚀


By the way @kig, I published gl-cone3d@1.0.0 to npm.

@etpinard merged commit df67a27 into master on May 18, 2018
@etpinard deleted the cone-traces branch on May 18, 2018 at 21:07