PointsMaterial's sizeAttenuation property should default to false #10385

Open
ericdrobinson opened this Issue Dec 16, 2016 · 16 comments


ericdrobinson commented Dec 16, 2016

Description of the problem

The default PointsMaterial sets sizeAttenuation to true. This produces unexpected results, as drawing a point should theoretically produce a single pixel (unless size is set larger). This is how lines (another primitive) work - why are points different?

Further, this default really breaks when used with an Orthographic Camera. In theory, the sizeAttenuation property was added to allow points to feel somewhat more like "good citizens" when paired with a Perspective Camera. When paired with an Orthographic Camera, points become very bad citizens. On a personal note, this led to a bunch of lost time trying to track down what was going on in a project - at one point the entire drawing surface was covered by attenuated "points" because they happened to be too close to the camera.

At the very least, the documentation doesn't outline how the points attenuate in size: at what distance should we expect size to be 1:1? How quickly does size attenuate?

This is somewhat a continuation of #7517.

Three.js version

  • Dev
  • r83
  • ...

Browser

  • All of them
  • Chrome
  • Firefox
  • Internet Explorer

OS

  • All of them
  • Windows
  • Linux
  • Android
  • IOS

Hardware Requirements (graphics card, VR Device, ...)

Nothing specific.

Collaborator

looeee commented Dec 16, 2016

What should the documentation say here? If you let me know I'll update it.

ericdrobinson commented Dec 16, 2016

At the very least, I assume you're referring to this part, yes?

At the very least, the documentation doesn't outline how the points attenuate in size: at what distance should we expect size to be 1:1? How quickly does size attenuate?

I mean, I would write whatever makes sense based on what happens in the code, right? I'm not sure where scale comes from there, but you could figure out where the point would truly become a single pixel, no? At the very least, what is the expectation when the point is precisely 1 unit away from the camera?

I would also suggest that a note be added to the top calling out the fact that "points do not equate to pixels unless you set the sizeAttenuation = false and size = 1."

Collaborator

looeee commented Dec 17, 2016

Could you do some testing? If there is a simple formula for figuring out what size the points will be at a given distance from the camera it would be good to have a note of it.

With regard to sizeAttenuation being true by default, I would say it's because that is assumed to be the most likely setting that users will need.

ericdrobinson commented Dec 17, 2016

Could you do some testing? If there is a simple formula for figuring out what size the points will be at a given distance from the camera it would be good to have a note of it.

Huh? I'm a little confused by this. I believe I linked the exact line in the code that does the calculation, no?

#ifdef USE_SIZEATTENUATION
	gl_PointSize = size * ( scale / - mvPosition.z );
#else
	gl_PointSize = size;
#endif

As I mentioned in my previous comment, I'm not sure where the scale factor comes in so I'm not exactly certain how to describe how things should work. If we assume all values are 1 (excepting the -mvPosition.z [distance] value which is theoretically -1), then the gl_PointSize would be 1 at a distance of 1 from the camera. You're likely looking at a gl_PointSize of 2 at a distance of 0.5 and a gl_PointSize of 0.5 at a distance of 2. I did some quick tests with this JSFiddle but this didn't appear to play out. In short, I've no idea and would defer to @mrdoob for this.
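For concreteness, that inverse-distance intuition can be sketched as a tiny helper. This is only a sketch of the reasoning above, with scale assumed to be 1 (which, as it turns out later in this thread, is not what the renderer actually passes in):

```javascript
// Sketch of the shader's attenuation branch, with scale assumed to be 1.
// In the real shader, scale is a renderer-supplied uniform.
function attenuatedPointSize( size, distance ) {
	const scale = 1; // assumption for this sketch only
	return size * ( scale / distance );
}

attenuatedPointSize( 1, 1 );   // 1 at a distance of 1
attenuatedPointSize( 1, 0.5 ); // 2 at half the distance
attenuatedPointSize( 1, 2 );   // 0.5 at twice the distance
```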

With regard to sizeAttenuation being true by default, I would say it's because that is assumed to be the most likely setting that users will need.

@mrdoob Any thoughts on this? From personal experience, the default did not line up with expectations. Better documentation may of course help to alleviate the issue. However, perhaps you can speak to why sizeAttenuation is set to true by default.

From what I can tell, sizeAttenuation is a THREE.js-specific feature. The OpenGL Core Spec doesn't mention anything at all with respect to distance-from-the-camera (same for the OpenGL ES Manual). This, to me, would be a "cool optional feature that THREE.js can do if you need it" kind of thing.

Collaborator

WestLangley commented Dec 17, 2016

We can fix this. We can debate later whether sizeAttenuation should be false by default.

Here is the attenuation logic:

#ifdef USE_SIZEATTENUATION
	gl_PointSize = size * ( scale / - mvPosition.z );
#else
	gl_PointSize = size;
#endif

gl_PointSize has units of pixels. That means PointsMaterial.size has units of pixels.

scale is set in WebGLRenderer like so,

uniforms.scale.value = _height * 0.5; // this is the problem

where _height is the renderer's canvas height in pixels (without considering pixel ratio, which we will ignore for the purpose of this discussion),

and - mvPosition.z is the world depth of the point location.

So looking at units, we currently have something that does not make sense.

pixels = pixels * ( pixels / world_units ) // makes no sense

What we want is a formula that looks like this:

gl_PointSize = size * ( nominalDistance / - mvPosition.z ); // makes intuitive sense

where nominalDistance is the depth (in world units) at which the point is rendered at size pixels.

Note that ( nominalDistance / - mvPosition.z ) is unit-less, as it should be.

@ericdrobinson Clearly, the model has to be fixed. But for the purpose of experimenting, there is a work-around. Do something like this in your javascript:

var attenuation = {
    size: 50, // in pixels
    distance: 400, // in world units
    enabled: true
};

var material = new THREE.PointsMaterial( {
    size: attenuation.size * attenuation.distance / ( 0.5 * renderer.getSize().height ), // compensate for renderer
    sizeAttenuation: attenuation.enabled
} );

With a perspective camera, this will render Points that are 400 world units in front of the camera at a size of 50 pixels. Other points will have appropriate relative attenuation.
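As a sanity check on the arithmetic of that workaround (a sketch; the 600px canvas height is an assumed value, and the shader's scale uniform is reproduced inline):

```javascript
// Verify that compensating the material size cancels the renderer's scale.
const attenuation = { size: 50, distance: 400 }; // pixels, world units
const canvasHeight = 600;         // assumed renderer height in pixels
const scale = 0.5 * canvasHeight; // what WebGLRenderer feeds the shader

// size passed to the material, compensating for the renderer:
const materialSize = attenuation.size * attenuation.distance / scale;

// what the shader then computes for a point at the nominal distance:
const renderedSize = materialSize * ( scale / attenuation.distance );

console.log( renderedSize ); // 50 pixels, as intended
```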

We can debate "distance" vs "depth" terminology, but this is the basic idea.

@mrdoob The fix involves adding a property to PointsMaterial that specifies the value of the "nominal attenuation distance".

Suggestions for property names are welcome.

Owner

mrdoob commented Dec 17, 2016

So you guys are proposing this API?

var material = new THREE.PointsMaterial( {
    size: 50,
    sizeAttenuation: true, // false by default
    sizeAttenuationDistance: 400 // default?
} );
Collaborator

WestLangley commented Dec 17, 2016

Yes, but I'm not that happy with the nomenclature sizeAttenuationDistance. Can't think of anything better, though.

We are really dealing with the concept of "nominal size at nominal distance", in case that inspires a better property name.

ericdrobinson commented Dec 17, 2016

@WestLangley Thanks for the fantastic breakdown. That explains the peculiarities I was seeing with the simple tests I was throwing at the problem perfectly. And thanks for posting the workaround.

The nominalDistance property (or whatever it becomes) would definitely help clear things up, but will this solve the problem? My understanding is that the PointsMaterial is used for [particle?] billboards in Three.js and it would stand to reason that you'd want to rely on actual distance attenuation calculations for that, no? This will always be a poor approximation of actual distance, given that camera FOV (for perspective cameras) isn't considered. Unless you're writing a simple example instead of, say, a particle system in a game, I suspect that this would require quite a bit of hand-tuning to get feeling correct.

I'm having a tough time seeing how the proposed sizeAttenuation equation makes sense for Orthographic Cameras given that it works along the following curve:
[screenshot: graph of the 1/distance attenuation curve]
The inflection point above would be at nominalDistance. This makes some kind of sense for Perspective Cameras but far less sense for Orthographic (where one might expect linear or something custom, if anything at all).
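To make the shape of that curve concrete, here is a quick sample of the proposed size * ( nominalDistance / distance ) factor (a sketch; size and nominalDistance of 1 are assumed values for illustration):

```javascript
// Sample the 1/distance attenuation factor with nominalDistance = 1 (assumed).
const nominalDistance = 1;
const sample = ( distance ) => nominalDistance / distance;

[ 0.25, 0.5, 1, 2, 4 ].map( sample ); // [ 4, 2, 1, 0.5, 0.25 ]
// Sizes blow up hyperbolically inside nominalDistance and
// fall off slowly beyond it - the "knee" discussed above.
```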

So you guys are proposing this API?

@mrdoob Something like that, I guess? I'm still not convinced that the sizeAttenuation would really make good sense across camera settings yet. It feels... hacky.

Collaborator

WestLangley commented Dec 17, 2016

We are not talking about using sizeAttenuation with OrthographicCamera. It does not make sense.

This will always be a poor approximation of actual distance, given that camera FOV (for perspective cameras) isn't considered.

FOV is not relevant here. Also, sizeAttenuationDistance is an actual distance.

ericdrobinson commented Dec 17, 2016

We are not talking about using sizeAttenuation with OrthographicCamera. It does not make sense.

Cool. Agreed.

FOV is not relevant here.

Wait, how so? Changing the FOV changes the projected size of objects, no? If the purpose of sizeAttenuation is to enable Point Sprite billboards (as the examples seem to indicate), then a distance calculation that takes the projection matrix into account would seem necessary. An example of this (if I'm reading the math correctly) is described here. For purpose of discussion, I've included the relevant portion here:

uniform mat4 modelview;
uniform mat4 projection;
uniform vec2 screenSize;
uniform float spriteSize;

layout(location = 0) in vec4 position;

void main()
{
    vec4 eyePos = modelview * position;
    vec4 projVoxel = projection * vec4(spriteSize,spriteSize,eyePos.z,eyePos.w);
    vec2 projSize = screenSize * projVoxel.xy / projVoxel.w;
    gl_PointSize = 0.25 * (projSize.x+projSize.y);
    gl_Position = projection * eyePos;
}

I guess with something like this, it would be important to define a "pixels-per-unit" number to define what size means. This way you could define "points are 15 pixels/unit" and then use that number to determine "well, if it's 1 unit away, and the projection is x, then the math would change that 15 pixels to 24" (or however it works out).
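A rough JavaScript translation of that idea (a sketch only, not three.js API; it assumes a symmetric perspective projection, so the on-screen size follows from the frustum height at the point's depth):

```javascript
// Hypothetical helper: projected on-screen height, in pixels, of an object
// `worldSize` world units tall at `distance` units in front of a symmetric
// perspective camera. fov is the vertical field of view in radians.
function projectedPixelSize( worldSize, distance, fov, screenHeight ) {
	// height of the view frustum, in world units, at this depth:
	const frustumHeight = 2 * distance * Math.tan( fov / 2 );
	return worldSize * ( screenHeight / frustumHeight );
}

// e.g. a 1-unit object, 10 units away, 45-degree fov, 600px-tall canvas:
projectedPixelSize( 1, 10, Math.PI / 4, 600 ); // ~72.4 pixels
```

Note that fov appears directly in the formula, which is the sense in which the projection matters here.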

If it wasn't already clear, I should point out that I am not a graphics programmer. ;p

Also, sizeAttenuationDistance is an actual distance.

Yes, it is. It's an actual distance in world coordinates. But you're using that without considering the projection matrix to determine rendered size (points skip this part of the pipeline). Mixing the suggested equation into a customized world rendered with a perspective camera would require a lot of hand-tuning (if it's not actually impossible to get something consistent at all).

Collaborator

WestLangley commented Dec 17, 2016

FOV affects the rendered size of meshes whose world size is specified in world units.

Here, we are dealing with points, and we are specifying the rendered size in pixels.

Collaborator

WestLangley commented Dec 17, 2016

THREE.Sprite having scale 1 in world units is designed to render the same size as THREE.PlaneGeometry( 1, 1 ) of scale 1 at the same location. Use THREE.Sprite if you are concerned about that issue.

ericdrobinson commented Dec 18, 2016

FOV affects the rendered size of meshes whose world size is specified in world units.
Here, we are dealing with points, and we are specifying the rendered size in pixels.

Right. Which I guess is what I was trying to describe with respect to the whole "pixels-per-unit" thing. My suggestion would be to provide users with the option to specify the world size in world units of a point. This could very easily be performed by reinterpreting the size property as "world units" when sizeAttenuation is used.

The big point I'm trying to make here is that the purpose of the sizeAttenuation property appears to be to enable point sprite billboards whose size changes based on distance from the [perspective] camera. It's effectively fast and efficient billboarding. If you're using PointsMaterial for this purpose and you want sizeAttenuation because you expect "things to get smaller as they move away from the camera", then wouldn't you want to be able to set the size of a billboard? I could put a tree texture on those sprites. How does that help if I can't specify the size of the tree in world units but have to rely on some basic function that ignores the perspective transform?

If I'm using the PointsMaterial to render points-as-pixels then no special math is necessary and size simply becomes "pixel size".

ericdrobinson commented Dec 18, 2016

@mrdoob Any input on this? I've searched the three.js repo for uses of the PointsMaterial and use cases fall into one of two camps:

  1. Points as pixels - Uniformly sized squares per point.
  2. Points as Point Sprites - Used for fast/simple billboarding or "point clouds with depth".

I personally believe that some of the examples listed above have incorrect settings for their purpose (e.g. the "Camera" example, which uses points for background stars - no need for the attenuation there, right?).

The best example of what I'm trying to suggest comes with the Geometry Convex example. Points are drawn at vertices of the mesh. Currently, the size of those Point Sprite circles actually attenuate separately from the rendered size of the main mesh. Change the FOV of the camera and the mesh will appear larger/smaller on the screen but rendered Point Sprites will not similarly change (unless I'm mistaken - which is very possible).

The change I'm suggesting (interpreting the size property as world units instead of rendered pixels when (and only when) sizeAttenuation is true) would allow a user to say "my mesh is a total size of 2 units. I would like the vertex 'drawing' to be of size 0.2 units." With the example of the "tree billboard" I mentioned above, I could also easily say "my trees are all ~4.2 units tall. I will set the particles to that size and intersperse a bunch of those into the scene to improve render cost."

WestLangley commented Dec 18, 2016

@ericdrobinson You are using the wrong class. If you want to render a lot of trees efficiently, you can use InstancedBufferGeometry with a billboard shader created with RawShaderMaterial. All the trees will be sized in world units and rendered with a single draw call.

There is even a three.js example for you to learn from: http://threejs.org/examples/webgl_buffergeometry_instancing_billboards.html.

ericdrobinson commented Dec 18, 2016

You are using the wrong class. If you want to render a lot of trees efficiently, you can use InstancedBufferGeometry with a billboard shader created with RawShaderMaterial. All the trees will be sized in world units and rendered with a single draw call.

Good to know. I should stress, however, that I am not actually dealing with this problem myself. I am merely trying to present potential use cases to help with clarity.

@WestLangley You seem against the idea of having the size property be world units when sizeAttenuation is on. May I ask why that is? I think I may be missing something.

I'm still not convinced that the sizeAttenuationDistance setting makes much practical sense. Sure, we can at least say with certainty that "objects at exactly n-units from the camera will render at size pixels!" That's great. However, the perceived size will change dramatically depending on whether the distance is greater than or less than that distance (see the graph in this comment). By changing n (or sizeAttenuationDistance) you're merely changing the location of that inflection point.

So what I've been asking is "what is the purpose of the attenuation? Why would someone want to use it?" The best that I've been able to come up with is simple Point Sprites [billboards]. The followup question is: "what makes the most sense for changing size based on distance?" The most straight-forward answer to that question is: "to have the points sized in world units and attenuate the same way as any other geometry in the scene."

I understand that there's a measure of "we have other ways of achieving the same effect" but what do you gain by having this one special/odd-case for sizing?

If you do stick with the size * (inflectionDist/distFromCamera) (or ~1/dist) equation, I would suggest renaming the sizeAttenuation property to something like fauxSizeAttenuation and add a description of the equation to the documentation to make it readily apparent that this does not simply mean "turn on perspective" for points.
