
About formula 16.1 in the pbrt-v3 book #336

Open · LittleTheFu opened this issue Jul 27, 2024 · 7 comments


@LittleTheFu

Here is the link:
https://www.pbr-book.org/3ed-2018/Light_Transport_III_Bidirectional_Methods/The_Path-Space_Measurement_Equation

I marked it with red lines. I don't know why A_film is missing.
[screenshot: equation 16.1 with the red markings]

@rainbow-app

Yes, I'm late; it's been 2 months since you asked.

Let me say that this whole bidirectional topic is very vague in the book. After trying to understand it, I gave up and turned to Veach's PhD thesis. He describes the algorithm much better. Unfortunately, he's quite abstract and doesn't provide any concrete examples.

Anyway, I can't comment on those integrals and answer your question.

However, if you want to understand how we get to the result (the importance expression), I think I can help you. I can write the proper derivation (in my opinion; I'm a physicist by education) -- from the measurement. Do you want to understand that?


For now, I'll briefly describe why their derivation of the expression for the importance W is vague. Their two key arguments are:

  • the normalization 16.3 (why this norm??)
  • W * cos proportional to the density p (again, why??).

Weird arguments, in my opinion, but OK, whatever. What's missing is a demonstration that the W so defined really measures radiance. After all, that is importance's sole purpose! In fact, their W doesn't measure radiance (unless the image is 1x1, a single pixel); the code nevertheless works.
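
For reference, here are those two conditions written out as I read them from the linked page (my paraphrase, so treat the exact integration domains as an assumption on my part):

```latex
% pbrt's two defining requirements for the camera importance W_e
% (my paraphrase of the linked section; eq. 16.3 is the normalization):
\[
  \int_{A}\int_{H^2} W_e(p,\omega)\,|\cos\theta|\,d\omega\,dA(p) = 1,
  \qquad
  W_e(p,\omega)\,|\cos\theta| \;\propto\; p(p,\omega),
\]
% where p(p, omega) is the density with which the camera samples rays.
% Note that neither condition shows that W_e actually measures radiance.
```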

@LittleTheFu (Author)

Thank you for your reply.
I still can't grasp the meaning of "W".
I searched for this concept, but I still can't understand it.
It would be very nice if you could explain it in detail.


> However, if you want to understand how we get to the result (the importance expression), I think I can help you. I can write the proper derivation (in my opinion; I'm a physicist by education) -- from the measurement. Do you want to understand that?

Yes, I want to understand it!

@rainbow-app

Let me repeat: I found the pbrt book very vague on BDPT, so I use Veach's formulas and his notation (splitting W into W^0 = the spatial part and W^1 = the directional part). Don't let his measure-theory machinery scare you -- I found it easy to ignore.

Assume the camera is a pinhole = a point (I didn't consider realistic cameras). Assume it measures some radiance L from a remote area light.

See eq. 8.7 (p. 223) in Veach. The first term after the second equals sign gives us the measurement in our case. We'll derive the importance expression by equating it to L.

How the camera is set up (written out in the sketch after this list):

  • The camera position is modeled as a (2D) delta function on a surface (the A(x_1) in 8.7). The delta function is embedded in W^0. After analytically collapsing the integral over dA(x_1), the size of the surface can be made arbitrarily small and ignored for ray tracing.

  • The camera sensor is imaginary (it doesn't participate in ray tracing), and there's no integral over it. d_s is the distance from the sensor to the camera center; d_s = 1 in the pbrt code.
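
Written out (a sketch in my notation; x_c is the camera center and C is a constant fixed later by the measurement):

```latex
% Veach's split of the importance into spatial and directional parts,
% with the pinhole modeled as a 2D delta function at the camera center:
\[
  W(x_1, \omega) = W^0(x_1)\, W^1(\omega),
  \qquad
  W^0(x_1) = C\,\delta(x_1 - x_c),
\]
% so the integral over dA(x_1) in Veach 8.7 collapses analytically.
```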

First consider only 1 pixel on the sensor.

[image "importance-github": derivation sketch relating the pixel area and the remote surface]
s = area of one pixel, S = area of the remote surface; they are related as shown.

Now that term becomes: integral { L G W^1 C } dA(x_0), and it must equal L to measure brightness = radiance. This integral is only over the small remote surface S.

You should now be able to follow the simple arithmetic in the image.
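
In case the image above doesn't come through, here is my reconstruction of that arithmetic (the small-pixel and small-patch approximations, and the assumption that the surface hosting the camera center is parallel to the sensor, are mine):

```latex
% Notation: d = distance from the camera center x_1 to the remote patch,
% theta = off-axis angle at the camera (equal to theta_1 in G, assuming
% the hosting surface is parallel to the sensor), theta_0 = angle at the
% patch.
% Geometry factor:   G = cos(theta_0) cos(theta) / d^2
% Solid angle of a pixel of area s at distance d_s:
%                    Omega = s cos^3(theta) / d_s^2
% Remote patch seen through that cone:   S = Omega d^2 / cos(theta_0)
\begin{align*}
  \int_S L\,G\,W^1 C\,dA(x_0)
    &\approx L\,W^1 C\,\frac{\cos\theta_0\cos\theta}{d^2}\,S
     = L\,W^1 C\,\frac{\cos\theta_0\cos\theta}{d^2}
       \cdot\frac{s\cos^3\theta\,d^2}{d_s^2\cos\theta_0} \\
    &= L\,W^1 C\,\frac{s\cos^4\theta}{d_s^2}
     \;\stackrel{!}{=}\; L
    \quad\Longrightarrow\quad
    W^1 C = \frac{d_s^2}{s\cos^4\theta}.
\end{align*}
```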

Now consider the full sensor, M×N pixels.

This is BDPT = bidirectional, so for each pixel we start a light subpath and get splats. So the brightness of the scene will be M*N times larger than it was for one pixel. We need to compensate for this increase by dividing by M*N. This corresponds to dividing by the full sensor area A instead of s in the expression for C. And we get the expression from the pbrt textbook.

(This last argument is totally missing from the pbrt textbook, which is very sad.)
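
In symbols, that substitution is just (using the single-pixel result above):

```latex
% Dividing the single-pixel constant by the pixel count M N is the same
% as replacing the pixel area s by the full film area A = M N s:
\[
  W^1 C = \frac{d_s^2}{M N\, s\,\cos^4\theta}
        = \frac{d_s^2}{A\,\cos^4\theta}
        \;\xrightarrow{\;d_s = 1\;}\;
        \frac{1}{A\,\cos^4\theta},
\]
% which matches the pinhole importance expression in the pbrt textbook.
```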

A few additional comments:

  • The index "j" on W on the LHS is hidden inside W^1: (1) the dependence on the angle, and (2) the "j" could have been expressed on the RHS by a multiplier like {1 inside the j-th pixel, 0 outside}, but it's unnecessary (just keep in mind that we don't pick up any energy from outside the pixel).

  • W^1 (= the directional part) and C are dimensionless; the final W (and the delta function) has dimension 1/m^2.

  • C is delta-like: it goes to infinity as the pixel size goes to zero. That's fine: the smaller the pixel => the smaller the solid angle => the smaller S, so less energy reaches the pixel => the multiplier C has to become larger to measure the same brightness.
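
As a sanity check, here is a minimal C++ sketch of the resulting pinhole importance. This is my own illustration of the formula we just derived, not pbrt's actual PerspectiveCamera code; the function name, the `filmArea` parameter, and the skipped film-window clipping are assumptions:

```cpp
// Directional importance W^1 * C = 1 / (A cos^4 theta) of a pinhole
// camera, for a unit-length ray direction `dir` leaving the camera
// center. `forward` is the unit viewing axis; `filmArea` is the film
// area A at distance d_s = 1 behind the pinhole.
double pinholeImportance(const double dir[3], const double forward[3],
                         double filmArea) {
    double cosTheta = dir[0] * forward[0] + dir[1] * forward[1] +
                      dir[2] * forward[2];
    if (cosTheta <= 0.0) return 0.0;  // ray points away from the scene
    // A full implementation would also return 0 for directions that
    // miss the film window entirely.
    double cos2 = cosTheta * cosTheta;
    return 1.0 / (filmArea * cos2 * cos2);  // 1 / (A cos^4 theta)
}
```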

Hope this is detailed enough.

@LittleTheFu (Author)

Thank you for your comment, it's very kind of you.

But I'm stuck in the middle, namely on how to get W^0.
I know the definitions of W^0 and W^1, like this:
[image: the definitions of W^0 and W^1]

But I don't know how to get the W^0 used in this step:
[image: the derivation step in question]

@rainbow-app

Your second equation is good, but written the other way around. It should be W = W^0 * W^1; that's just how we split W (there's not much to think about there).

The first one is good angle-wise (all the cosines cancel). Magnitude-wise -- no. If we consider only one pixel, there's no integral. You do write the integral, so it seems you are considering the final W for the whole sensor. You can't do that: no magic jumps, please. You need to derive it in two steps: (1) one pixel, (2) the whole sensor.

Neither of your two equations can be used as a definition. W^0 and W^1 are not defined; they are derived.

Now to your question.

W^0 comes from the way you decide to model the camera. It doesn't follow from any equations. See "Camera position is modeled as a..." above. The general expression for W^0 (C times a delta function) follows from those words. I'm sure there can be other approaches; I just picked the simplest (to my taste) model that could fit into Veach's integrals.

@rainbow-app commented Oct 12, 2024

I guess that (or something else) is still not clear.

The measurement equation (Veach 8.7) gives us the freedom to choose a surface (one existing in the scene, or a newly introduced one) and a function (W). For a pinhole camera we don't need that much freedom: there's nothing to integrate (= to average with a weight W). Well, we almost don't need it: we would still like some averaging over the pixel for anti-aliasing purposes. But roughly speaking, yes, we don't need that freedom.

Remember, we are at step 1 of 2 = considering one pixel only.

The pixel value is determined by the energy from a very narrow (again, no integral = no averaging = the pixel is small) solid-angle cone. The cone is determined by the position of its origin (the camera center) and the position and size of the pixel.

Now there can be two approaches:

  1. Fix the origin, and integrate over the pixel.

  2. Fix the pixel, and integrate over the surface that hosts the cone origin.

1st approach. Introduce a small sensor surface, and integrate over it. We'd choose the spatial W^0 to be similar to a delta function for that pixel: approximate the integral by the product of the integrand and the small pixel area. The camera center is fixed somewhere else (behind the sensor = off the surface), but that doesn't matter because the camera is point-like anyway (point-like, yes, but in the implementation we can still set d_s = 1 -- it doesn't matter).

2nd approach. Introduce a small surface to host the camera center, and collapse the integral with a delta function (this time a true delta, at a mathematical point) in W^0 (this is our freedom). And there's no integral over the sensor surface. Well, roughly speaking: there will be an integral, but it's a different one, not like Veach 8.7; we'd approximate it as above.

We can choose either approach; each does its job (= measures the radiance for the j-th pixel) properly. It's easy to see that both lead to the same result. I originally chose the 2nd.

(In both cases we equip the sensor with a small ideal lens; none of this participates in ray tracing.)
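
A compact way to contrast the two approaches (my notation; f stands for the rest of the integrand in Veach 8.7, and x_p is a point in the pixel):

```latex
% Approach 1: W^0 acts like a delta smeared over one pixel of area s;
% the sensor integral is approximated by "integrand times pixel area":
\[
  \int_{\text{sensor}} f(x_1)\,W^0(x_1)\,dA(x_1)
    \;\approx\; f(x_p)\,W^0(x_p)\,s.
\]
% Approach 2: W^0 is an exact delta at the camera center x_c; the
% integral over the hosting surface collapses analytically:
\[
  \int_{A} f(x_1)\,C\,\delta(x_1 - x_c)\,dA(x_1) = C\,f(x_c).
\]
```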


I don't mind you taking breaks, reading Veach, or just living your life, but I was hoping you would confirm that you've resolved it.

UPD. Imagine you are given the task of finding a surface and a function W that would give (measure) the radiance for a single pixel of a pinhole camera. Try to do it on your own. Most likely you'll end up with the same arguments and the same expression for W^0.

@LittleTheFu (Author)

I'm sorry for taking such a long time.

After reading your post, I think I finally understand, but I'm not entirely sure. Let me repeat it to see if I've got it right.

The "W" we want is the intergal over the red region(cone?).
The yellow line is one line carries its own weight.And this is direct function(or W^1, or detla-function).
And pixel area is W^0.
Because the pixel is so small,we can get the intergal simply by multipying the weight yellow carries with pixel area.

See the markers in the picture.
The real meaning of "delta" here is that, given a point on the pixel, we can get the only one weight line.(As in the picture,the blue circle specify the only yellow line).

w
