
refined the number of samples used in diffused and specular textures #6613

Merged: 5 commits merged into processing:main on Dec 22, 2023

Conversation

@sudhanshuv1 (Contributor) commented Dec 9, 2023

Resolves #6586

Changes:

  1. Changed the value of sampleDelta from 0.025 to 0.100 in src/webgl/shaders/imageLightDiffused.frag.
  2. Added dithering to the sample vector sampleVec in src/webgl/shaders/imageLightSpecular.frag.
  3. Changed the value of SAMPLE_COUNT from 1024 to 16, and the definition of vanDerCorput, in src/webgl/shaders/imageLightSpecular.frag (a sketch of a bit-op-free van der Corput/Hammersley helper is included below for reference).
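
For reference, a bit-op-free Hammersley/van der Corput helper of the kind this shader relies on typically looks like the sketch below (modeled on the widely used learnopengl.com variant; the exact definition and loop count in the PR's shader may differ):

  // Radical inverse in base 2, computed without bit operations:
  // reflect the binary digits of n around the decimal point.
  float VanDerCorput(int n, int base)
  {
    float invBase = 1.0 / float(base);
    float denom = 1.0;
    float result = 0.0;
    for (int i = 0; i < 32; ++i)
    {
      if (n > 0)
      {
        denom = mod(float(n), 2.0);
        result += denom * invBase;
        invBase = invBase / 2.0;
        n = int(float(n) / 2.0);
      }
    }
    return result;
  }

  // i-th point of an N-point Hammersley set in [0,1)^2.
  vec2 HammersleyNoBitOps(int i, int N)
  {
    return vec2(float(i) / float(N), VanDerCorput(i, 2));
  }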



@davepagurek (Contributor) commented Dec 15, 2023

Sorry for the delay, and thanks for taking this task on! This is a great start.

I'm just playing around with it now, and it looks like there are some speckles appearing in the diffused light:

[Comparison screenshots: this PR vs. 1.9.0, showing speckles in the diffused light]

Live: https://editor.p5js.org/davepagurek/sketches/YHiYxeM7B

I played around a bit and I think this is happening because of the addition of the random offset here:

vec3 sampleVec = tangentSample.x * right + tangentSample.y * up + tangentSample.z * normal + randomOffset;

The problem is that when we add a random offset to just the normal direction here, we end up with a vector that does not have length of 1, which breaks some later calculations. I tried out this change, which instead adds the random offset to theta and phi:

   const float sampleDelta = 0.100;
   float nrSamples = 0.0;

-  for(float phi = 0.0; phi < 2.0 * PI; phi += sampleDelta)
+  float randomOffset = random(gl_FragCoord.xy) * sampleDelta;
+  for(float rawPhi = 0.0; rawPhi < 2.0 * PI; rawPhi += sampleDelta)
   {
-    for(float theta = 0.0; theta < ( 0.5 ) * PI; theta += sampleDelta)
+    float phi = rawPhi + randomOffset;
+    for(float rawTheta = 0.0; rawTheta < ( 0.5 ) * PI; rawTheta += sampleDelta)
     {
+      float theta = rawTheta + randomOffset;
       // spherical to cartesian (in tangent space) // tangent space to world // add each sample result to irradiance
       float x = sin(theta) * cos(phi);
       float y = sin(theta) * sin(phi);
       float z = cos(theta);
       vec3 tangentSample = vec3(x, y, z);

-      float randomOffset = mix(-0.1, 0.1, random(gl_FragCoord.xy));
-      vec3 sampleVec = tangentSample.x * right + tangentSample.y * up + tangentSample.z * normal + randomOffset;
+      vec3 sampleVec = tangentSample.x * right + tangentSample.y * up + tangentSample.z * normal;
       irradiance += (texture2D(environmentMap, nTOE(sampleVec)).xyz) * cos(theta) * sin(theta);
       nrSamples++;
     }

This seems to look good again while using the same number of samples, so maybe we can try using that method of adding a random offset?

[Screenshot: diffused light with the random offset applied to theta and phi; no visible speckles]
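
For context, random(gl_FragCoord.xy) here is a screen-space pseudo-random hash defined in the shader. A typical GLSL hash of this kind (an illustrative sketch only, not necessarily the exact definition used in this shader) is:

  // Illustrative screen-space hash; the shader's actual random() may differ.
  float random(vec2 co) {
    return fract(sin(dot(co, vec2(12.9898, 78.233))) * 43758.5453);
  }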

I also noticed that the specular light still ends up looking a tad sharp even at low shininess (50 in this example):

[Comparison screenshots: this PR vs. 1.9.0, specular highlight at shininess 50]

Maybe we can try using more samples for the higher roughness levels? That probably involves using a higher loop limit, and then breaking out of the loop earlier for lower-roughness levels.

@sudhanshuv1 (Contributor, Author) commented Dec 19, 2023

Thanks for testing out the changes! To reduce the sharpness of the specular texture, I tried larger values of SAMPLE_COUNT. With SAMPLE_COUNT = 400, the sharpness was reduced significantly. I used this code to break out of the loop earlier for lower roughness levels:

  for (int i = 0; i < SAMPLE_COUNT; ++i)
  {
    float FLOAT_SAMPLE_COUNT = float(SAMPLE_COUNT);
    // break out at lower roughness levels
    if(FLOAT_SAMPLE_COUNT == pow(2.0,(roughness+0.1)*20.0))           
    {
      break;
    }
    vec2 Xi = HammersleyNoBitOps(i, SAMPLE_COUNT);
    vec3 H = ImportanceSampleGGX(Xi, N, roughness);
    vec3 L = normalize(2.0 * dot(V, H) * H - V);

    float NdotL = max(dot(N, L), 0.0);
    if (NdotL > 0.0)
    {
      prefilteredColor += texture2D(environmentMap, nTOE(L)).xyz * NdotL;
      totalWeight += NdotL;
    }
  }
  prefilteredColor = prefilteredColor / totalWeight;

  gl_FragColor = vec4(prefilteredColor, 1.0);
}

It breaks out of the loop at a sample count of 4 for roughness = 0.0, at 16 for roughness = 0.1, and so on. Compared with the current implementation, the sharpness of the specular light does not increase noticeably with SAMPLE_COUNT = 400:

[Comparison screenshots: SAMPLE_COUNT = 1024 (1.9.0) vs. SAMPLE_COUNT = 400 with the early break at lower roughness levels (this PR)]

With SAMPLE_COUNT = 400 the sharpness of the specular light was reduced, though there is a catch: the execution time increased by 0.5 seconds compared to the previous value of SAMPLE_COUNT = 16. I made a commit with the changes you recommended, updated SAMPLE_COUNT to 400, and added the early break for lower roughness levels. Please feel free to test this and suggest further improvements.

@davepagurek (Contributor) commented:
I guess the reason the randomness + dithering doesn't work as well for these specular textures is that they get progressively smaller and smaller, so single pixels become larger and larger relative to the whole texture.

Anyway, the quality in your image looks good! We can see if we can squeeze a bit more performance out of it by storing pow(2.0,(roughness+0.1)*20.0) in a variable outside the for loop to avoid recalculating it on every iteration. We probably also need to round it? Otherwise FLOAT_SAMPLE_COUNT might never exactly equal the number in question.
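
In other words, something roughly like this hypothetical sketch (rounding matters because pow() may return, say, 3.9999 instead of 4.0, so an exact equality check could never fire):

  // Hypothetical sketch: hoist the limit out of the loop and round it.
  float lowRoughnessLimit = floor(pow(2.0, (roughness + 0.1) * 20.0) + 0.5);

  for (int i = 0; i < SAMPLE_COUNT; ++i)
  {
    // Break early at lower roughness levels.
    if (float(i) >= lowRoughnessLimit) { break; }
    // ... importance sampling as before ...
  }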

@sudhanshuv1 (Contributor, Author) commented Dec 21, 2023

Thanks for checking out that code! It actually had a bug: I was checking for equality between pow(2.0,(roughness+0.1)*20.0) and FLOAT_SAMPLE_COUNT. Since FLOAT_SAMPLE_COUNT has the fixed value 400.0, the loop ran for 400 iterations at every roughness value instead of breaking out at the expected values of pow(2.0,(roughness+0.1)*20.0). I should have instead compared pow(2.0,(roughness+0.1)*20.0) with i, the current iteration of the loop.

Here is the corrected version of that code:

void main(){
  const int SAMPLE_COUNT = 400; // 4096
  int lowRoughnessLimit = int(pow(2.0,(roughness+0.1)*20.0));
  float totalWeight = 0.0;
  vec3 prefilteredColor = vec3(0.0);
  float phi = vTexCoord.x * 2.0 * PI;
  float theta = vTexCoord.y * PI;
  float x = sin(theta) * cos(phi);
  float y = sin(theta) * sin(phi);
  float z = cos(theta);
  vec3 N = vec3(x,y,z);
  vec3 V = N;
  for (int i = 0; i < SAMPLE_COUNT; ++i)
  { // break at smaller sample numbers for low roughness levels
    if(i == lowRoughnessLimit)
    {
      break;
    }
    vec2 Xi = HammersleyNoBitOps(i, SAMPLE_COUNT);
    vec3 H = ImportanceSampleGGX(Xi, N, roughness);
    vec3 L = normalize(2.0 * dot(V, H) * H - V);

    float NdotL = max(dot(N, L), 0.0);
    if (NdotL > 0.0)
    {
      prefilteredColor += texture2D(environmentMap, nTOE(L)).xyz * NdotL;
      totalWeight += NdotL;
    }
  }
  prefilteredColor = prefilteredColor / totalWeight;

  gl_FragColor = vec4(prefilteredColor, 1.0);
}

I cast pow(2.0,(roughness+0.1)*20.0) to an int and then compared it with i. The loop breaks at a different value of lowRoughnessLimit for each roughness value:

roughness | lowRoughnessLimit
0.0       | 4
0.1       | 16
0.2       | 64
0.3       | 256
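
For example, at roughness 0.0 the limit is int(pow(2.0, (0.0 + 0.1) * 20.0)) = 2^2 = 4, and at roughness 0.3 it is 2^8 = 256, matching the table above.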

Using this code, the specular light is slightly sharper than in 1.9.0, but the difference is not noticeable:

[Comparison screenshots: 1.9.0 vs. the latest commit in this PR (SAMPLE_COUNT = 400 with the early break at low roughness levels)]

Comparing the execution time of imageLight for my present code, the code in my first commit, and 1.9.0:

Version                                       | Execution time
1.9.0                                         | 1674
First commit in this PR (SAMPLE_COUNT = 16)   | 467
Latest commit in this PR (SAMPLE_COUNT = 400) | 406

It turns out that increasing SAMPLE_COUNT from 16 to 400 didn't increase the execution time, so we didn't lose the gain we had achieved earlier. Hopefully this corrected code makes the required changes.

@davepagurek (Contributor) commented:
Awesome, that's looking good, and it's great to hear that the performance ended up better than expected too! I'll play around with it a bit more tonight just to double check everything. Nice work!

@davepagurek (Contributor) left a comment

Sorry for the delay; I finally got a chance to pull and take a look, and everything looks great!

Thanks for your time iterating on this!

@davepagurek merged commit ddede18 into processing:main on Dec 22, 2023
2 checks passed
@sudhanshuv1 deleted the refining-sample-numbers branch on December 23, 2023
@sudhanshuv1 (Contributor, Author) commented:

@all-contributors please add @sudhanshuv1 for code

The all-contributors bot replied:

@sudhanshuv1

I've put up a pull request to add @sudhanshuv1! 🎉

Linked issue: #6586 (Refining the number of samples used when creating diffuse and specular textures)