Calculate an envmap reflection vector in a fragment shader #5058
Comments
Interesting! Could you try doing a test with a geometry of, say, 500,000 triangles and check what the performance difference is?
Making this change for …
A 500k-triangle geometry is here (the original approach): It is hard to measure a difference with stats. Do you know a more accurate tool? The cylinder also illustrates the visual difference between the two approaches. The camera is configured to stay perpendicular to the cylinder as it moves up and down. In the first case, the reflection is stretched when the camera is close to the cylinder edges. In the second case, the reflected image stays still, because the angle between the camera and the surface does not change.
Hmm... maybe 2 million triangles? :D
Still, maybe it's better to just go with per-pixel envmap handling for simplification?
Probably. But …
I initially tried to parametrize materials with … I gave up on this attempt and only enabled per-fragment reflection calculation for the …
The PR related to this issue is merged, so I think this can be closed. |
Currently, Three.js calculates the reflection vector for a material with an envmap (and without a bump map or normal map) in the vertex shader:
https://github.com/mrdoob/three.js/blob/dev/src/renderers/shaders/ShaderChunk/envmap_vertex.glsl#L14
The result is then interpolated and used by the fragment shader:
https://github.com/mrdoob/three.js/blob/dev/src/renderers/shaders/ShaderChunk/envmap_fragment.glsl#L26
This approach causes distortion for larger triangles (I think this is because the reflection vector does not change linearly across the surface, so the interpolation introduces errors). Please take a look here:
http://mixedbit.org/cubemap/orig/webgl_materials_cubemap.html
The reflection is distorted; when the camera moves, the boundary between two triangles becomes clearly visible.
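For reference, the per-vertex path boils down to something like this (a simplified sketch of the linked shader chunk, not the exact three.js source; uniform and varying names are illustrative):

```glsl
// --- vertex shader (simplified sketch of envmap_vertex.glsl) ---
varying vec3 vReflect;

void main() {
    vec4 worldPosition = modelMatrix * vec4( position, 1.0 );
    // Assumes no non-uniform scaling, so the upper 3x3 of modelMatrix
    // can be used to transform the normal into world space.
    vec3 worldNormal = normalize( mat3( modelMatrix[ 0 ].xyz,
                                        modelMatrix[ 1 ].xyz,
                                        modelMatrix[ 2 ].xyz ) * normal );
    vec3 cameraToVertex = normalize( worldPosition.xyz - cameraPosition );

    // Computed once per vertex, then linearly interpolated across the
    // triangle. reflect() is not a linear function of its inputs, so the
    // interpolated vReflect is NOT the reflection of the interpolated
    // normal -- which is where the visible seams come from.
    vReflect = reflect( cameraToVertex, worldNormal );

    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
```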
A better approach seems to be to calculate only vNormal in the envmap vertex shader, let the GPU interpolate vNormal, and calculate the reflection vector in the fragment shader (similarly to how this is done when a bump map or normal map is used). Then the reflection becomes mirror-like, without any distortion: http://mixedbit.org/cubemap/modified/webgl_materials_cubemap.html
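The proposed change could be sketched roughly as follows (an illustrative sketch, not the actual patch; it assumes cameraPosition is available as a uniform in the fragment shader, and ignores the refraction and flipped-cubemap cases the real shader chunks handle):

```glsl
// --- vertex shader: pass only the interpolation-safe ingredients ---
varying vec3 vNormal;
varying vec3 vWorldPosition;

void main() {
    vec4 worldPosition = modelMatrix * vec4( position, 1.0 );
    vWorldPosition = worldPosition.xyz;
    vNormal = mat3( modelMatrix[ 0 ].xyz,
                    modelMatrix[ 1 ].xyz,
                    modelMatrix[ 2 ].xyz ) * normal;
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}

// --- fragment shader: compute the reflection per fragment ---
uniform samplerCube envMap;
uniform vec3 cameraPosition;
varying vec3 vNormal;
varying vec3 vWorldPosition;

void main() {
    // Renormalize after interpolation (linear interpolation of unit
    // vectors shortens them), then reflect per fragment, the same way
    // the bump/normal-map code path already does.
    vec3 worldNormal = normalize( vNormal );
    vec3 cameraToVertex = normalize( vWorldPosition - cameraPosition );
    vec3 reflectVec = reflect( cameraToVertex, worldNormal );
    gl_FragColor = textureCube( envMap, reflectVec );
}
```

The extra cost is one normalize() and one reflect() per fragment instead of per vertex, which is the trade-off discussed in the comments above.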
I can work on a pull request if such a change seems good. I'm not sure, though, whether the proposed approach has drawbacks (besides the additional cost of calculating the reflection vector per fragment). Do you remember the reasoning behind calculating the reflection vector in the vertex shader?