- Reconstructed indoor scenes by converting depth images into point cloud models
- Built local scene fragments, registered (aligned) the fragments, and integrated them into triangle meshes using the Open3D scene reconstruction pipeline
- Captured RGB-D images with an Intel RealSense D435 and the iPhone dual wide camera
- python: 3.6.12
- jupyter-core: 4.6.3
- jupyter-notebook: 6.0.3
- numpy: 1.19.4
- librealsense SDK
- open3d: 0.10.0
- opencv-contrib-python: 4.4.0.46
Clone the repository:

```shell
git clone https://github.com/ashura1234/iOS-Depth-Camera.git
```
- Deploy the Depth Camera project to an iPhone 7 or newer using Xcode
- Open the app and record the intrinsic matrix and scale shown in the console
- Press the start button to start recording
- Press the start button again to stop recording
- Multiply each element of the intrinsic matrix by the scale (except the bottom-right 1.0)
- Save the scaled intrinsic matrix in camera_intrinsic.json
- Dump the color and data folders into the project folder using Xcode
- Create a new config JSON file
- Run main.ipynb
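The intrinsic-scaling step can be sketched as follows. The helper name is illustrative and the values are placeholders for the matrix and scale printed in the Xcode console; the output layout follows Open3D's camera_intrinsic.json format, which flattens the 3x3 matrix in column-major order.

```python
import json

def write_scaled_intrinsic(fx, fy, cx, cy, scale, width, height,
                           path="camera_intrinsic.json"):
    """Scale fx, fy, cx, cy by `scale` (the 1.0 stays) and write the
    result in Open3D's camera_intrinsic.json layout."""
    fx, fy, cx, cy = (v * scale for v in (fx, fy, cx, cy))
    data = {
        "width": width,
        "height": height,
        # Open3D expects the 3x3 matrix flattened column-major:
        # [fx, 0, 0, 0, fy, 0, cx, cy, 1]
        "intrinsic_matrix": [fx, 0.0, 0.0,
                             0.0, fy, 0.0,
                             cx, cy, 1.0],
    }
    with open(path, "w") as f:
        json.dump(data, f, indent=4)
    return data

# Typical usage (placeholder numbers, not real calibration values):
#   write_scaled_intrinsic(fx=3010.0, fy=3010.0, cx=2003.0, cy=1509.0,
#                          scale=0.1613, width=640, height=480)
```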
- Connect the RealSense camera
- Start recording:

  ```shell
  python realsense_recorder.py --record_imgs
  ```

- Dump the outputs into the project folder
- Create a new config JSON file
- Run main.ipynb
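The config file referenced above can follow the layout used by Open3D's reconstruction-system examples; the exact keys main.ipynb expects may differ, and all paths and tuning values below are placeholders:

```json
{
    "name": "Indoor scene",
    "path_dataset": "dataset/realsense/",
    "path_intrinsic": "dataset/realsense/camera_intrinsic.json",
    "max_depth": 3.0,
    "voxel_size": 0.05,
    "max_depth_diff": 0.07,
    "preference_loop_closure_odometry": 0.1,
    "preference_loop_closure_registration": 5.0,
    "tsdf_cubic_size": 3.0,
    "icp_method": "color",
    "global_registration": "ransac"
}
```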
- Hold the camera steady and avoid abrupt, jerky movement
- Pan across the whole scene first, then move in for details
- Avoid strong light sources
- Avoid mirrors
Open scene/integrated.ply to view the reconstructed mesh
The iPhone stereo camera has serious flickering and precision issues. Possible reasons include noise introduced by the built-in filtering, unit conversion (m to mm), and the small baseline between the lenses.
The RealSense camera has better depth precision, but it is still not precise enough for perfect loop closure; noise remains an issue.
- Turn off filtering in the DepthCapture app and handle nil Float values
- Add support for the Kinect and the iPhone 12 LiDAR camera
- Fine-tune parameters for loop closure