Add support for normal maps in MeshPhongMaterial #2358

Closed
wants to merge 3 commits

Conversation

6 participants
Contributor

crobi commented Aug 28, 2012

Normal maps store the perturbation of the surface normal (vector field), as opposed to bump maps, which are height fields (scalar field).

crobi added some commits Jul 29, 2012

Add support for normal maps in MeshPhongMaterial.js
Normal maps store the perturbation of the surface normal (vector field), as opposed to bump maps, which are height fields (scalar field).
Collaborator

WestLangley commented Aug 28, 2012

You will have to merge from your dev branch into @mrdoob's dev branch, instead, making sure you have pulled recent updates first.

Also, the "normal" shader in ShaderUtils.js has phong lighting, so some version of this is already implemented. Your approach is different, though. Can you explain the differences -- advantages and disadvantages, in particular?

Contributor

crobi commented Aug 28, 2012

Ok, I'll try to merge the dev branch.

Advantages:

  • The "normal" shader from ShaderUtils needs precomputed per-vertex tangents, my shader (well, really this guy's) doesn't need them and constructs them on the fly in the fragment shader. Tangents can be computed using Geometry.computeTangents()
  • As part of one of the most commonly used materials, my shader will automatically support all other features like morph animations.

Disadvantages:

  • My shader needs the oes_standard_derivatives extension (available on all desktop graphics cards and - according to Google - probably on most mobile devices). That extension is already needed by the bump map shader.

Differences:

  • My shader needs a couple more instructions but fewer input data channels; I'm not sure which is faster.
  • The "normal" shader constructs an orthonormal (normal-binormal-tangent) basis at each vertex and interpolates those vectors across the triangle.
    • After re-normalization in the fragment shader, this probably stays orthonormal inside the triangle.
    • The tangent/binormal will not be parallel to the triangle surface.
    • The normal mapping will break down at isolated pixels inside the triangle if the normal at two neighboring vertices points to (exactly) opposite directions.
  • My shader constructs at each fragment two tangents in the S and T (texture coordinate major axes) directions, and takes the interpolated normal.
    • The tangent/binormal will be parallel to the triangle surface; the interpolated normal will not.
    • This is not an orthonormal basis.
    • The normal mapping will break down if the triangle maps to a point or line in texture space.
  • I am actually not sure which one is more correct. I would guess that the outputs should be very similar - at least it looks the same as the normal shader on my models.
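For reference, the uv-space linear system both approaches solve (per vertex with edge differences in Geometry.computeTangents(), per fragment with dFdx()/dFdy() in the derivative shader) can be sketched in plain JavaScript. This is an illustration of the underlying math, not the actual three.js code; function and variable names are made up:

```javascript
// Per-triangle tangent (T) and bitangent (B) from positions and UVs.
// Solves q0 = st0.s*T + st0.t*B, q1 = st1.s*T + st1.t*B, where q0/q1
// are position edges and st0/st1 the corresponding uv edges.
// Returns unnormalized vectors (normalize before use).
function triangleTangentBasis(p0, p1, p2, uv0, uv1, uv2) {
  const sub = (a, b) => a.map((v, i) => v - b[i]);
  const q0 = sub(p1, p0);     // position edge 1
  const q1 = sub(p2, p0);     // position edge 2
  const st0 = sub(uv1, uv0);  // uv edge 1
  const st1 = sub(uv2, uv0);  // uv edge 2
  const det = st0[0] * st1[1] - st1[0] * st0[1];
  // Degenerate in texture space (triangle maps to a point or line):
  // this is exactly the breakdown case mentioned above.
  if (Math.abs(det) < 1e-8) return null;
  const T = q0.map((v, i) => (v * st1[1] - q1[i] * st0[1]) / det);
  const B = q1.map((v, i) => (v * st0[0] - q0[i] * st1[0]) / det);
  return { T, B };
}
```

For an axis-aligned unit triangle with matching UVs, T comes out along x and B along y, as expected.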
Merge remote-tracking branch 'remotes/mrdood/dev'
Conflicts:
	build/three.js
	build/three.min.js
Contributor

crobi commented Aug 28, 2012

I have merged the dev branch, which added all those commits and people to this issue. Should have probably rebased instead. Sorry for the spam.

Collaborator

WestLangley commented Aug 29, 2012

@crobi +1 for the excellent explanation. :-)

Owner

mrdoob commented Aug 29, 2012

@alteredq what do you think? seems like this aligns the normal implementation with the bump one and simplifies a few things, but is likely to be slower?

alteredq added a commit to alteredq/three.js that referenced this pull request Aug 29, 2012

Contributor

crobi commented Aug 29, 2012

I'm not sure if it really will be that much slower, though. The following is some theorycrafting because I'm too lazy to build a real performance test (and run it on all relevant hardware).

I couldn't find any hard numbers, but from what I know, the partial derivative instructions (dFdx()) are implemented by taking a simple difference between the values of neighboring pixels. And since the GPUs process 2x2 or 3x3 pixel blocks in sync, they are likely to be incredibly cheap. So all in all, it's only about a difference of ~15 arithmetic operations in the fragment shader.
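The neighboring-pixel interpretation above can be made concrete with a tiny sketch (a conceptual model only, not how any driver literally implements it; names are illustrative):

```javascript
// Conceptual model of dFdx()/dFdy(): within a 2x2 pixel quad the GPU
// evaluates the shader in lockstep, so the screen-space derivative is
// just the difference between neighboring pixels - a single subtraction.
// f is any per-pixel quantity, here modeled as a function of pixel coords.
function dFdx(f, x, y) {
  return f(x + 1, y) - f(x, y); // horizontal neighbor difference
}
function dFdy(f, x, y) {
  return f(x, y + 1) - f(x, y); // vertical neighbor difference
}

// For a quantity that varies linearly across the screen (like
// interpolated positions or UVs over a triangle), the forward
// difference recovers the exact gradient:
const linear = (x, y) => 3 * x + 2 * y;
```

This is why the extension is cheap: the values for the neighboring pixels are already in flight.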

On the other hand, you save streaming 1 vector per vertex from the geometry buffer to the vertex shader, and an interpolation (varying variables) of 2 vectors per fragment from the vertex shader to the fragment shader. You also save skinning and transforming the tangent from world space to eye space, as well as computing the binormal in the vertex shader.

| instruction   | existing |   new   |
| ------------- | -------  | ------- |
| texture()     |    1     |    1    |
| normalize()   |    4     |    4    |
| dFdx()/dFdy() |    0     |    4    |
| matrix*vector |    1     |    1    |
| scalar*vector |    1     |    6    |
| vector+vector |    1     |    3    |

(only a rough guess, some arithmetic operations may be combined into a fused multiply-add, and it's missing move operations/register management)

Contributor

alteredq commented Aug 29, 2012

@alteredq what do you think? seems like this aligns the normal implementation with the bump one and simplifies a few things, but is likely to be slower?

It's definitely interesting.

I tried it on our ninja example - performance seems similar, the look is a tiny bit different. There are fine grain block artefacts similar to the ones in bumpmaps (though you need to look carefully for them, it's not very bad). Also UV seams can be a bit more visible, again similar to bumpmaps. As far as I understood, these issues come from how derivatives are computed.

I think we should keep both options (attribute tangents and derivative tangents), at least for the moment. It's good to have a simple way to add normal maps for standard materials, while the old way should have better compatibility and a more predictable look.

Contributor

alteredq commented Aug 29, 2012

I'm not sure if it really will be that much slower, though. The following is some theorycrafting because I'm too lazy to build a real performance test (and run it on all relevant hardware).

I did a bit of testing. Performance profile seems similar to bumpmaps via derivatives - it's very good on newer machines but you can feel the difference on older systems.

On my newer GPU (Nvidia Quadro 2000M) performance is similar or even slightly better with derivatives (give or take a few percent); on my older GPU (ATI Radeon 3650M) derivatives are slower by 15-25%.

Contributor

crobi commented Aug 29, 2012

Thanks for the performance test!

I just had to push another change to my branch, as I missed some bugs introduced by the merge (I had previously relied on build_all building a non-minified three.js file and didn't notice it's gone now).

There are fine grain block artefacts similar to ones in bumpmaps (though you need to look carefully for them, it's not very bad)

That's strange, I have never seen such artefacts with the new shader (tried it with box-like models, the LeePerrySmith head model, as well as this one).
What I sometimes see, though, is lighting discontinuities at triangle borders, indicating that the way I build the normal basis is inconsistent with what the normal map generator used, or generally that my approach is wrong.

What model/material configuration and hardware gave you those artefacts? I have used my viewer from my collada loader project to test the shader.

Contributor

alteredq commented Aug 29, 2012

I just had to push another change to my branch, as I missed some bugs introduced by the merge (I had previously relied on build_all building a non-minified three.js file and didn't notice it's gone now)

I already fixed these when merging.

What model/material configuration and hardware gave you those artefacts?

Just ninja from our normal map example. This is the old way:

https://dl.dropbox.com/u/26786471/examples/webgl_materials_normalmap_derivatives.html

And this is the new way:

https://dl.dropbox.com/u/26786471/examples/webgl_materials_normalmap_derivatives2.html

If you check carefully, for example the ridges at the top of the head, with derivatives there are jagged edges. Also if you keep moving the head, there is slight shimmering around some edges. It looks a bit like the difference between aliased and antialiased rendering. But it's ok.

Contributor

tapio commented Aug 29, 2012

I have been unable to get the regular normal map shader to work properly (see http://stackoverflow.com/questions/12180899/broken-lights-with-the-normal-map-shader), even though simply swapping out the ShaderMaterial for MeshPhongMaterial made the problem disappear (though obviously no normal mapping then).

I now tried this one also (by pulling alteredq's branch) and it works flawlessly - I didn't need to change my code at all apart from swapping in the new three.js build and adding the normalMap parameter to MeshPhongMaterial I was using to compare with the regular normal mapping. So working solution gets my vote :P

From a philosophical point of view, I can't decide if I prefer the derivatives or the pre-calculated tangents way of doing this, but I do feel that normal mapping should be a first-class citizen. After all, due to the material generator it should not have any overhead if not used (right?). It's also less maintenance, as the current normal map shader duplicates all the light handling code. Are there reasons not to add it into core, perhaps even with displacement mapping?

Contributor

alteredq commented Aug 29, 2012

Are there reasons not to add it into core, perhaps even with displacement mapping?

No no. Just to clarify - I think this solution is definitely worth having (and I already merged it "into the core"). I'm just wondering whether to still keep also the old one.

We still don't have that much experience with the derivatives extension in the wild. Since I released the bump mapped examples (which use a very similar approach), some people were complaining, machines were freezing, @mrdoob posted a screenshot with very broken rendering on Linux, it is kinda slow on older GPUs, and there are subtle artefacts.

The old approach with attribute tangents is messier, but it's a kinda proven "industry standard". That's what almost everybody else has been using for a long time. There is a higher chance it'll work as expected on a random system.

So I think for the moment we should keep both options, and sometime later the old normal map shader may get deprecated.

I have been unable to get the regular normal map shader work properly (see http://stackoverflow.com/questions/12180899/broken-lights-with-the-normal-map-shader),

This may be some lights simply turning off for some material configurations. I think somewhere deeper in the WebGL layer there is a bug which makes weird stuff happen for some shader code. It keeps appearing and disappearing with the normal map shader - I suspect because this is the most complex shader we use regularly. See #1292

Contributor

crobi commented Aug 29, 2012

Hm, interesting. Here is what those two files @alteredq posted look like on my machine (GeForce 580, Windows, Chrome 22.0.1229):

old way
new way

I don't see any jagged edges here - in fact, no matter how far I zoom in, I get smooth shading, while if I use the bump mapping shader, I get hard edges (it looks blocky and you see individual pixels of the bump map).

What I notice in the images is that the old way has noisy/aliased reflections, while the new way has smooth reflection lines. I also see small changes in the shading (most prominent at those small round bumps) - there the old way looks more correct, but neither of them shows artefacts.

Owner

mrdoob commented Aug 29, 2012

What model/material configuration and hardware gave you those artefacts?

Just ninja from our normal map example. This is the old way:

https://dl.dropbox.com/u/26786471/examples/webgl_materials_normalmap_derivatives.html

And this is the new way:

https://dl.dropbox.com/u/26786471/examples/webgl_materials_normalmap_derivatives2.html

Having a hard time comparing these because the light is moving... :S

Hm, interesting. Here is what those two files @alteredq posted look like on my machine (GeForce 580, Windows, Chrome 22.0.1229):

old way
new way

Forbidden :S

Contributor

crobi commented Aug 29, 2012

Oops, this here should work:

old way
new way

Contributor

alteredq commented Aug 29, 2012

Having a hard time comparing these because the light is moving... :S

Ok, I made the light static, just reload those links.

Here is how it looks on my Nvidia Quadro 2000M:

https://dl.dropbox.com/u/26786471/img/normalmap_new.png
https://dl.dropbox.com/u/26786471/img/normalmap_old.png

https://dl.dropbox.com/u/26786471/img/normalmap_new_bottom.png
https://dl.dropbox.com/u/26786471/img/normalmap_old_bottom.png

Differences I meant are mainly at ridges (plus now that @crobi mentioned it, also reflections are different).

I don't see any jagged edges here - in fact, no matter how far I zoom in, I get smooth shading, while if I use the bump mapping shader, I get hard edges (it looks blocky and you see individual pixels of the bump map).

That's what I meant - bilinear filtering seems to work ok for regular normal maps, but with bump/normal maps using derivatives you get a more "raw" look.

Morten Mikkelsen gave several solutions for mitigating this. If you don't do anything, the blocky artefacts are just terrible. Then there is the "double derivatives" solution which I used, and it looks quite ok (I think your normal map approach does something similar). And finally there is bicubic interpolation done explicitly in the shader, which has pretty hairy code; additionally, WebGL GLSL doesn't have the necessary instructions for more fine-grained access to mipmaps.

Contributor

crobi commented Aug 29, 2012

Hm, then we are probably talking about different kinds of artefacts. Here is what I was talking about - the bump mapping looks as if it used nearest neighbor filtering.

I also don't understand why bilinear filtering with derivatives should give a more raw look - note that I use the (GPU-provided) derivatives only to construct the tangent space basis vectors. The normal itself is given by a single trilinearly filtered texture lookup.
This is different from Morten Mikkelsen's basic approach, where he reads three pixels from the height map and computes his own forward differences. He uses the GPU-provided derivatives to determine at which texture coordinates he reads those three pixels.

Is it possible that the differences in the images you posted are from the fact that the new shader produces different normals which result in very narrow specular highlights, which look aliased?

That being said, the tangent space computation with derivatives can of course still break down, and my construction of the tangent space may be wrong, given that the reflections look different. After reading Morten Mikkelsen's code, I think the shader I implemented is equivalent to using his perturbNormalArb with dHdxy_derivmap instead of dHdxy_fwd, so I can try comparing those two results.

Contributor

alteredq commented Aug 29, 2012

I also don't understand why bilinear filtering with derivatives should give a more raw look - note that I use the (GPU-provided) derivatives only to construct the tangent space basis vectors. The normal itself is given by a single trilinearly filtered texture lookup.

Ah yes, I think you are right. I missed that there is just a single tap for the normal map.

Is it possible that the differences in the images you posted are from the fact that the new shader produces different normals which result in very narrow specular highlights, which look aliased?

Maybe - edges of small features seem sharper (while big features seem flatter, especially at certain view angles).

After reading Morten Mikkelsens code, I think the shader I implemented is equivalent to using his perturbNormalArb with dHdxy_derivmap instead of dHdxy_fwd, so I can try comparing those two results.

I did try also derivative maps and they looked worse than our current bump maps.

Owner

mrdoob commented Aug 30, 2012

It looks as if the new approach didn't have anisotropic filtering on ;) The effect looks cleaner and sharper. Maybe it's best to compare with Blender to see what the "expected" result is. But the more perpendicular the triangle is to the camera, the less the distortion. The reflection looks the same to me, just lacking distortion.

Contributor

alteredq commented Aug 30, 2012

Maybe it's best to compare with Blender to see what the "expected" result is.

Actually, we have something better - ninja came from the ATI MeshMapper tool. So the original intended look is this:

https://dl.dropbox.com/u/26786471/img/ninja-meshmapper.png
https://dl.dropbox.com/u/26786471/img/ninja-meshmapper-bottom.png

To me it looks much more like the old way (it's even more apparent when you rotate the mesh in MeshMapper - there isn't this camera vs triangle angle dependent flattening distortion). Which is kinda expected; until recently almost everybody did normal maps like this.

But I still think this new way is cool and good to have, in a similar way to bump maps. It's not perfect, but it gets like 95% there and it integrates well with our material system (the "classic" normal map shader is somewhat of a pain).

Owner

mrdoob commented Aug 30, 2012

But is the flattening an intended effect or a side effect? The derivatives-based bumpmap doesn't seem to have the flattening effect.

Contributor

alteredq commented Aug 30, 2012

But is the flattening an intended effect or a side effect? The derivatives-based bumpmap doesn't seem to have the flattening effect.

Now you've got me confused - I don't know anymore what you mean by "flattening".

Just check the screenshot from MeshMapper - these "spikey" things on the top of the head look like cones, basically no matter what angle there is between the camera and the triangles (the same behavior happens in the old normal map shader).

_/\_

While with derivatives normal maps, the closer the triangle orientation gets to the view direction, the weirder they get - they start to look less like cones and more like flat discs with a sharp spike in the middle:

_|_

spikes

Owner

mrdoob commented Aug 30, 2012

I'm talking about the "flattening" that occurs in the perpendicular areas, i.e. the closer a pixel is to the edge of the model:

flattening

In the picture, the red line on the third ninja shows the area where the normal distortion is fading, while on the first ninja the normal distortion still has an effect all the way to the edge of the model.

Contributor

crobi commented Aug 30, 2012

Yes, I noticed those errors as well. Something is definitely wrong. Unfortunately I won't have time during the next two weeks, but here is what I plan to do:

  1. Orthogonalize the tangent space (see here - T and S are currently not orthogonal to N). This shouldn't make a difference for flat shaded models, so maybe do a quick comparison of some flat shaded, normal mapped crate/brick wall first.
  2. Using my shader, output the above mentioned vectors T, S, and N as output colors (one RGB image for each vector, transforming each component to the range [0,1] first).
  3. Do the same with the normal map shader from ShaderUtils, but output the vectors vNormal, vTangent, and vBinormal (see here).

If the normals differ, then I have a bug. If either of the tangent vectors differs, then it's either a bug in my implementation or an inherent problem of the approach, and I have to think more carefully about it. If all three are equal, then the output should be the same and I again have a bug.
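The [-1,1] to [0,1] packing mentioned in step 2 is the inverse of how normal maps encode normals, and can be sketched like this (vectorToColor is a made-up helper name, not three.js API):

```javascript
// Debug visualization: pack a unit vector's components from the
// range [-1, 1] into RGB color channels in [0, 1], so tangent-space
// vectors can be dumped as gl_FragColor-style images for comparison.
function vectorToColor(v) {
  return v.map(c => c * 0.5 + 0.5);
}

// The flat "up" normal (0, 0, 1) maps to the familiar light-blue
// color of an unperturbed normal map texel:
const flatNormalColor = vectorToColor([0, 0, 1]); // [0.5, 0.5, 1]
```

Comparing such images per pixel between the two shaders isolates which basis vector disagrees.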

From what I understand, the normal mapping tangent space should be computed such that the tangent vector is aligned with the horizontal axis of the normal map image, and the binormal with the vertical axis of the normal map. Useful links:

Contributor

alteredq commented Aug 30, 2012

Yeah, then we are speaking about the same thing: the less "head-on" / "screen-plane" oriented the triangles are, the less correct the normals from derivatives are.

BTW I realized you could use the displacement map as a bump map, and here comes the twist:

https://dl.dropbox.com/u/26786471/examples/webgl_materials_normalmap_derivatives_bump.html

After seeing this, it's kinda obvious both normal map approaches are wrong :S

(disregard faint pattern all over the place, it is present in the displacement map)

Contributor

crobi commented Aug 30, 2012

Interesting find about the flattening. This brings me to another theory about a potential problem :)

At points where the object surface is almost aligned with the view direction, the GPU will choose a high mipmap level for the texture lookup to avoid aliasing. A high mipmap level is basically a low-pass filtered version of the original map, and for normal maps, I assume that results in pretty much a flat surface.
So either the two normal map shaders use different texture filtering/lod bias to look up the normal perturbation, or the mipmaps break the normal space computation using derivatives.
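Why low-pass filtering flattens normals can be shown with a couple of lines (a toy illustration of the theory above, not how mipmap generation is actually implemented; names are made up):

```javascript
// Averaging opposing perturbed normals - which is roughly what a
// low-pass mipmap level does - and renormalizing collapses them
// toward the flat surface normal, i.e. the perturbation fades out.
function normalize(v) {
  const len = Math.hypot(...v);
  return v.map(c => c / len);
}
function averageNormals(a, b) {
  return normalize(a.map((c, i) => (c + b[i]) / 2));
}

const s = Math.SQRT1_2; // two 45-degree perturbations, tilted left and right
const flattened = averageNormals([s, 0, s], [-s, 0, s]); // collapses to [0, 0, 1]
```

The opposing x components cancel, so at high mip levels the fetched normal points straight up and the surface shades as flat.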

Disabling mipmapping (apart from changing texture parameters) can be done by calling textureLod(map, uv, 0.0) instead of texture(map, uv)/texture2D(map, uv), though I'm not sure this works in WebGL.

Also, since I compute the texture coordinate gradients anyway, I might as well use textureGrad instead of texture and save the GPU the re-computation of those gradients for determining the mipmap level.

Contributor

alteredq commented Aug 30, 2012

WebGL is unfortunately quite limited in texture access, we keep bumping into these limitations.

Only these instructions are available:

only in vertex shaders:

vec4  texture2DLod(sampler2D sampler, vec2 coord, float lod)
vec4  texture2DProjLod(sampler2D sampler, vec3 coord, float lod)
vec4  texture2DProjLod(sampler2D sampler, vec4 coord, float lod)
vec4  textureCubeLod(samplerCube sampler, vec3 coord, float lod)

only in fragment shaders:

vec4  texture2D(sampler2D sampler, vec2 coord, float bias)
vec4  texture2DProj(sampler2D sampler, vec3 coord, float bias)
vec4  texture2DProj(sampler2D sampler, vec4 coord, float bias)
vec4  textureCube(samplerCube sampler, vec3 coord, float bias)

both:

vec4  texture2D(sampler2D sampler, vec2 coord)
vec4  texture2DProj(sampler2D sampler, vec3 coord)
vec4  texture2DProj(sampler2D sampler, vec4 coord)
vec4  textureCube(samplerCube sampler, vec3 coord)

Anyways, I think there is also something else going on, with a much bigger effect. Did you see the bump mapped version? Both normal mapped versions have completely messed up lighting orientation on the bumps. The bumpmapped version just feels right; everything is consistent.

Contributor

crobi commented Aug 30, 2012

Hm, it looks as if the bump map and normal map had inverted orientations of the displacement - elevations in the bump map are valleys in the normal map. Did we change anything about texture y axis flipping? I actually had to make some changes to my projects because textures were wrong after going from r49 to r50. If so, we might have to flip the y coordinate of the normal map. Could you try inverting normalTex.y after this line?

Owner

mrdoob commented Aug 30, 2012

BTW I realized you could use displacement map as bump map and here comes the twist:

https://dl.dropbox.com/u/26786471/examples/webgl_materials_normalmap_derivatives_bump.html

After seeing this, it's kinda obvious both normal map approaches are wrong :S

Nice catch.

Contributor

alteredq commented Aug 30, 2012

Could you try inverting normalTex.y after this line?

Yup, this makes old normal map shader look much closer to bump mapped one ;)

And this has similar effect on the new one:

vec3 T = normalize( q0 * st1.s - q1 * st0.s );

It used to be:

vec3 T = normalize( -q0 * st1.s + q1 * st0.s );

Phew, good that we poked into this - we completely missed this implication of the big uv unflipping from r49 to r50.

Edit: hmmm, but LeePerry is now messed up :S

alteredq added a commit to alteredq/three.js that referenced this pull request Aug 31, 2012

One possible workaround for normal map inconsistencies across models.
Use two component normalScale and set eventual flipping per-model. It's quite ugly but it could do the job if we don't find a better solution.

Do not merge yet.

See #2358

alteredq added a commit to alteredq/three.js that referenced this pull request Aug 31, 2012

Simpler workaround for normal map inconsistencies across models.
Keep just one number for normal scale and use its sign for flipping. This is simpler, though it doesn't give as many options for arbitrary flipping.

See #2358
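The sign-for-flipping idea in this commit can be sketched in a few lines; decodeNormalScale is a hypothetical helper for illustration, not the actual three.js API:

```javascript
// One scalar carries two pieces of information: its magnitude is the
// normal map strength, and its sign selects per-model Y flipping
// (instead of a two-component normalScale with independent X/Y signs).
function decodeNormalScale(normalScale) {
  return {
    strength: Math.abs(normalScale), // how strongly to perturb the normal
    flipY: normalScale < 0           // negative value requests a flipped map
  };
}
```

This keeps the material parameter a single number at the cost of not supporting X-only flips.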
Contributor

alteredq commented Aug 31, 2012

So I dug into the repo, and even before the UV unflipping it was wrong and models were not behaving consistently with normal maps - just the difference was more subtle, so we missed it (the normal map flipping was only happening on Lee Perry's neck).

I think we are hitting differences in tangent basis for models coming from different sources:

http://wiki.polycount.com/NormalMap#TangentBasis

Not sure what a good solution for this could be. See the two commits above for two possible options for letting the user select custom flipping per model (either per-component X or Y, or just Y, instead of the current X and Y tied together, which doesn't work).

The simpler solution works for our current trouble with ninja and Lee Perry, though it's possible some other models may need different flippings.

kaipr commented Sep 26, 2012

As this seems to be ongoing I'll add my problem here instead of creating a new ticket.

When I try to use a material loaded by SceneLoader with normalMap given, it results in WebGL errors:
GL ERROR :GL_INVALID_OPERATION : glDrawElements: attempt to access out of range vertices in attribute 2

It works fine when I create the material manually. The SceneLoader seems to do a lot more stuff for normalMap than for bumpMap etc., which is far beyond my current knowledge of WebGL. Do I have to pass more attributes in the scene file? Why does the SceneLoader behave so differently (or is it a bug)?

This is how I manually create it:

    var floorMaterial = new THREE.MeshPhongMaterial( {
      map: this.assets.textures.floor_diffuse,
      normalMap: this.assets.textures.floor_normal,
      specularMap: this.assets.textures.floor_specular,
      color: 0xE3701A,
      ambient: 0x333333,
      specular: 0xE3701A,
      shininess: 10,
      metal: false,
      shading: THREE.SmoothShading,
      perPixel: true
    } );

and this is how it's specified in the scene json:

    "floor": {
      "type": "MeshPhongMaterial",
      "parameters": {
        "map": "floor_diffuse",
        "normalMap": "floor_normal",
        "specularMap": "floor_specular",
        "color": 14905370,
        "ambient": 3355443,
        "specular": 14905370,
        "shininess": 10,
        "metal": false,
        "perPixel": true
      }

wvl pushed a commit to wvl/three.js that referenced this pull request Nov 28, 2012

wvl pushed a commit to wvl/three.js that referenced this pull request Nov 28, 2012

Contributor

crobi commented Nov 21, 2013

Closing this pull request since three.js has supported normal maps for some time now. Problems with the tangent basis are discussed in #3169.

@crobi crobi closed this Nov 21, 2013
