Photometric-Stereo

Given 3 images of the same object under different lighting conditions, recover the surface normal at each pixel and the overall shape of the object. This technique is used in the GelSight touch sensor, but my approach is simpler.

(Figure: the GelSight touch sensor.)

 

1. Find Light Direction

Assuming a Lambertian surface that reflects light equally in all directions, the light direction is found by locating the brightest spot on a calibration sphere; I used Matlab to find this position automatically. The direction of the brightest spot is L = (x, y, z). Dividing the vector by -z gives

L / (-z) = (-x/z, -y/z, -1)

where we define the p, q values of the light L as p = -x/z and q = -y/z.
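
A minimal Matlab sketch of this calibration step; the image file name and the sphere center and radius are placeholder assumptions, and the sphere geometry is only used to turn the brightest pixel into a 3D direction:

```matlab
% Sketch: estimate the light direction from one calibration-sphere image.
% The file name and the sphere geometry (cx, cy, r) are placeholder values.
I = imread('sphere_light1.png');
if size(I, 3) == 3, I = rgb2gray(I); end
I = im2double(I);

[~, idx] = max(I(:));                  % brightest pixel on the sphere
[row, col] = ind2sub(size(I), idx);

cx = 240; cy = 240; r = 200;           % assumed sphere center and radius in pixels
x = (col - cx) / r;                    % sphere normal at the brightest spot;
y = (row - cy) / r;                    % for a Lambertian sphere it points
z = sqrt(max(0, 1 - x^2 - y^2));       % toward the light source

pL = -x / z;                           % p, q representation of the light L
qL = -y / z;                           % (divide L = (x, y, z) by -z)
```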

 

2. Build Intensity Lookup Table

Each light L is represented by its p and q values. For a Lambertian surface, the dot product of the surface normal n and the light direction is proportional to the image irradiance E:

E ∝ n · L

Expanding with n = (-p, -q, 1) / sqrt(1 + p^2 + q^2) and L_i = (-p_i, -q_i, 1) / sqrt(1 + p_i^2 + q_i^2), we get:

E_i ∝ (1 + p p_i + q q_i) / ( sqrt(1 + p^2 + q^2) sqrt(1 + p_i^2 + q_i^2) )

The ratios E1/E2 and E2/E3 therefore depend only on the surface gradient (p, q) and the known light directions, and the unknown albedo cancels. In Matlab, I used scatteredInterpolant as the data structure for the lookup table that maps (E1/E2, E2/E3) back to (p, q). The p and q samples range from -10 to 10, with a step size of 0.1.
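
A minimal sketch of how such a lookup table can be built, assuming the three calibrated light directions are already available in p-q form; the light values below are placeholders:

```matlab
% Sketch: build interpolants that map intensity ratios (E1/E2, E2/E3) to (p, q).
% The light directions in p-q form are placeholder values from the calibration step.
pL = [ 0.5, -0.5,  0.0];
qL = [ 0.0,  0.5, -0.5];

[p, q] = meshgrid(-10:0.1:10, -10:0.1:10);     % candidate surface gradients

% Lambertian reflectance map for each light (the albedo cancels in the ratios)
R  = @(i) (1 + p*pL(i) + q*qL(i)) ./ ...
          (sqrt(1 + p.^2 + q.^2) * sqrt(1 + pL(i)^2 + qL(i)^2));
E1 = R(1); E2 = R(2); E3 = R(3);

valid = E1 > 0 & E2 > 0 & E3 > 0;              % keep orientations lit by all three lights
r12 = E1(valid) ./ E2(valid);
r23 = E2(valid) ./ E3(valid);

% Scattered-data interpolants: (E1/E2, E2/E3) -> p and (E1/E2, E2/E3) -> q
Fp = scatteredInterpolant(r12, r23, p(valid), 'linear', 'none');
Fq = scatteredInterpolant(r12, r23, q(valid), 'linear', 'none');
```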

 

3. Build pq Map

Since we have a lookup table to find the gradient (p, q) at each pixel, we can easily build a gradient map the same size as the image. Each pixel of the image corresponds to a (p, q) pair; I call this map pqMap. Each (p, q) pair can also be written as a surface normal, n = (-p, -q, 1) up to normalization.

I created a normal-drawer function to plot the surface normals from a pqMap. Some render results (a sketch of the lookup step follows the figures):

(Figures: rendered surface normals.)
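
A rough sketch of the per-pixel lookup and a quiver-style normal plot, reusing the interpolants Fp and Fq from the previous sketch; the input file names and the plotting details are placeholders:

```matlab
% Sketch: build the pqMap by querying the ratio lookup table at every pixel,
% then plot the surface normals with quiver. Fp and Fq come from the sketch above.
gray = @(f) im2double(rgb2gray(imread(f)));    % helper for the hypothetical inputs
I1 = gray('obj_light1.png');
I2 = gray('obj_light2.png');
I3 = gray('obj_light3.png');

eps0 = 1e-6;                                   % avoid division by zero in dark pixels
r12 = I1 ./ max(I2, eps0);
r23 = I2 ./ max(I3, eps0);

pMap = Fp(r12, r23);                           % gradient maps, same size as the image
qMap = Fq(r12, r23);
pMap(isnan(pMap)) = 0;                         % pixels outside the table get a flat normal
qMap(isnan(qMap)) = 0;

% Draw the normals n = (-p, -q, 1) on a sparse grid, projected onto the image plane
step = 10;
[X, Y] = meshgrid(1:step:size(I1, 2), 1:step:size(I1, 1));
quiver(X, Y, -pMap(1:step:end, 1:step:end), -qMap(1:step:end, 1:step:end));
axis ij equal tight;
```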

 

4. Recover the 3D Model by Integration

I integrate the surface gradients along two directions (top-left -> bottom-right and bottom-right -> top-left) and average the two results. Some results are shown below (a sketch of the integration follows the figures):

(Figures: surfaces recovered by integration.)
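
A minimal sketch of the two-pass integration, assuming the conventions dz/dx ≈ p and dz/dy ≈ q for the gradient maps built in the previous step:

```matlab
% Sketch: recover the height map by integrating the gradient field from two
% opposite corners and averaging. Sign and axis conventions are assumptions.
[H, W] = size(pMap);

% Pass 1: top-left -> bottom-right (down the first column, then across each row)
z1 = cumsum([0; qMap(2:end, 1)], 1) + cumsum([zeros(H, 1), pMap(:, 2:end)], 2);

% Pass 2: bottom-right -> top-left (flip the maps, negate, integrate, flip back)
pF = -flip(flip(pMap, 1), 2);                  % reversing the path flips the sign
qF = -flip(flip(qMap, 1), 2);
z2 = cumsum([0; qF(2:end, 1)], 1) + cumsum([zeros(H, 1), pF(:, 2:end)], 2);
z2 = flip(flip(z2, 1), 2);

z = (z1 + z2) / 2;                             % average the two passes
surf(z, 'EdgeColor', 'none');                  % render the recovered shape
axis equal tight;
```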

 

5. Future Work

The sample images were taken with a camera and the surface is not perfectly Lambertian, so specular reflections (highlights) introduce errors in the calibration stage. Also, if a pixel is black in all 3 input images, the equations in Step 2 have no solution, which causes errors in some dark areas. Carefully placing the light sources and using matte surface materials gives better results.
