depth and point cloud #19

Closed
LogWell opened this issue Apr 18, 2020 · 4 comments

LogWell commented Apr 18, 2020

I noticed that there are two relevant functions in PIFu/lib/renderer/gl/render.py:

get_color(self, color_id=0): ...
get_z_value(self): ...

When I try to recover the point cloud from the depth values, the result is offset along the depth direction: if I translate the point cloud, it matches the original mesh.

I don't know whether a misunderstanding of the camera model on my part makes this code incorrect; I hope you can help. Thanks!

import numpy as np

out_all_z = rndr.get_z_value()  # 512x512 normalized depth buffer; 1.0 marks background

v = []
for ii in range(512):
    for jj in range(512):
        if out_all_z[ii, jj] != 1.0:  # skip background pixels
            # orthographic back-projection of pixel (ii, jj) into camera space
            X = (jj - 256) * cam.ortho_ratio
            Y = (256 - ii) * cam.ortho_ratio
            Z = (1 - out_all_z[ii, jj] * 2) * (cam.far - cam.near) / 2
            # undo the scale/translation applied to the mesh before rendering
            P = np.array([X, Y, Z]) / y_scale + vmed
            v.append(P)
v = np.array(v)
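
For reference, an equivalent vectorized version of the same back-projection (a sketch only; it assumes the same rndr, cam, y_scale, and vmed as above):

import numpy as np

out_all_z = rndr.get_z_value()
ii, jj = np.nonzero(out_all_z != 1.0)        # row/column indices of foreground pixels
X = (jj - 256) * cam.ortho_ratio
Y = (256 - ii) * cam.ortho_ratio
Z = (1 - out_all_z[ii, jj] * 2) * (cam.far - cam.near) / 2
v = np.stack([X, Y, Z], axis=1) / y_scale + vmed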
@shunsukesaito (Owner)

Sorry for the late response. If you are interested in generating a point cloud, you can change the following line in lib/renderer/gl/data/prt.vs
VertexOut.Position = R * pos;
to
VertexOut.Position = pos;

Then you can directly obtain the point cloud rendered from the input view by
get_color(self, color_id=2)
You can refer to lib/renderer/gl/data/prt.fs to see which attribute corresponds to which color_id.
Hope it helps.
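
A minimal sketch of that workflow (assuming a frame has already been rendered with the modified shader, and using the alpha channel as a foreground mask):

pos_map = rndr.get_color(2)        # H x W x 4: per-pixel xyz in channels 0-2, alpha in 3
mask = pos_map[..., 3] != 0.0      # alpha marks pixels covered by the mesh
points = pos_map[mask][:, :3]      # N x 3 point cloud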


LogWell commented Apr 21, 2020

After changing VertexOut.Position = R * pos; to VertexOut.Position = pos; in lib/renderer/gl/data/prt.vs, and inserting the following code in apps/render_data.py,

import numpy as np

out_all_z = rndr.get_color(2)  # 512x512x4: xyz position in channels 0-2, alpha in 3

v1, v2 = [], []
for ii in range(512):
    for jj in range(512):
        if out_all_z[ii, jj][3] != 0.0:  # foreground pixels only
            # v1: positions read directly from the position map
            p1 = out_all_z[ii, jj, :3] / y_scale + vmed
            v1.append(p1)

            # v2: positions back-projected, using the fourth channel as depth
            X = (jj - 256) * cam.ortho_ratio
            Y = (256 - ii) * cam.ortho_ratio
            Z = - out_all_z[ii, jj, 3] * 100
            p2 = np.array([X, Y, Z]) / y_scale + vmed
            v2.append(p2)

v1 = np.array(v1)
v2 = np.array(v2)

# dump both point clouds as OBJ vertex lists
with open('v1.obj', 'w') as fp:
    fp.write(('v {:f} {:f} {:f}\n' * v1.shape[0]).format(*v1.reshape(-1)))
with open('v2.obj', 'w') as fp:
    fp.write(('v {:f} {:f} {:f}\n' * v2.shape[0]).format(*v2.reshape(-1)))
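
As an aside, np.savetxt can write the same OBJ vertex lists in one call each:

np.savetxt('v1.obj', v1, fmt='v %f %f %f')
np.savetxt('v2.obj', v2, fmt='v %f %f %f')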

I get the following results: a) is the original input mesh; b) shows v1 overlaid on a); c) is the side view, where they match perfectly; in d), the green point cloud is v1 and the blue one is v2 (there is a small translation between them).

If I instead use out_all_z = rndr.get_z_value() as in my first comment, I get the same result as v2.

I'm a bit confused by these results: is there something wrong with the calculation of v2, or with the first way of calculating the point cloud?

@shunsukesaito (Owner)

The output depth should be normalized to [0, 1], corresponding to [zNear, zFar] in the original space. As zNear = -100 and zFar = 100, your depth renormalization code seems incorrect to me.
How about
Z = - (out_all_z[ii, jj, 3] - 0.5) * 200
?


LogWell commented Apr 21, 2020

In the first comment:
I use out_all_z = rndr.get_z_value() and Z = (1 - out_all_z[ii, jj] * 2) * (cam.far - cam.near) / 2; you can check it.

In the third comment:
I use out_all_z = rndr.get_color(2) and Z = - out_all_z[ii, jj, 3] * 100.

As in your reply, the formula in my first comment is already equivalent to the one you wrote, but its result still has a small deviation from the input mesh. And if I replace Z in the third comment with your formula, the result is wrong.
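
For what it's worth, a quick sanity check that, with cam.near = -100 and cam.far = 100, the first-comment formula and your suggested one are algebraically identical (both reduce to 100 - 200 * d):

d = 0.3                                  # any normalized depth in [0, 1]
z1 = (1 - d * 2) * (100 - (-100)) / 2    # formula from my first comment
z2 = -(d - 0.5) * 200                    # formula you suggested
assert abs(z1 - z2) < 1e-9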

Thanks for the quick reply~

LogWell closed this as completed May 25, 2020