Multi-View Fusion for 3D Semantic Segmentation of Building Materials from Residential Building Facades
Hanshuo Wu's semester project at ETH Zürich.
Import `MatchPointPixel.py` as a module to create a point cloud object and build a mapping between points and pixels.
```python
import os

import numpy as np
import pye57
from MatchPointPixel import PointCloud

PC = PointCloud(your_file_path)  # a point cloud object

# Get the images from the scan
PC.image_list  # list of six RGB images in numpy.ndarray format

# Get the extrinsic and intrinsic matrix for each image
PC.transformation_matrices_list  # length 6, extrinsic matrix per image
PC.intrinsic_matrices_list       # length 6, intrinsic matrix per image

# Get the raw point cloud data
PC.to_world_system()  # returns X_array, Y_array, Z_array, R_array, G_array, B_array, I_array, transformation_matrix

# Match a point to a pixel
point = np.array([[0.753018], [15.486450], [4.570410], [1]], dtype=float)
PC.bridge_point_to_pixel(point)  # returns (image_index, (pixel_x, pixel_y)) and plots the result
```
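Under the hood, matching a homogeneous world point to a pixel is a standard pinhole projection: transform the point into the camera frame with the extrinsic matrix, apply the intrinsic matrix, then divide by depth. A minimal sketch with illustrative matrices (not the scanner's actual calibration):

```python
import numpy as np

def project_point(point_h, extrinsic, intrinsic):
    """Project a homogeneous world point (4x1) to pixel coordinates.

    extrinsic: 4x4 world-to-camera transform; intrinsic: 3x3 camera matrix.
    """
    cam = extrinsic @ point_h    # point in camera coordinates (4x1)
    uvw = intrinsic @ cam[:3]    # unnormalized pixel coordinates (3x1)
    u = uvw[0, 0] / uvw[2, 0]    # divide by depth to get pixel x
    v = uvw[1, 0] / uvw[2, 0]    # divide by depth to get pixel y
    return u, v

# Toy example: identity pose, simple intrinsics (illustrative values only)
extrinsic = np.eye(4)
intrinsic = np.array([[800.0,   0.0, 320.0],
                      [  0.0, 800.0, 240.0],
                      [  0.0,   0.0,   1.0]])
point = np.array([[1.0], [0.5], [2.0], [1.0]])
u, v = project_point(point, extrinsic, intrinsic)
# u = 800*1.0/2.0 + 320 = 720.0, v = 800*0.5/2.0 + 240 = 440.0
```

A point only maps to a valid pixel when its depth is positive and (u, v) falls inside the image bounds, which is why each point must be checked against all six camera poses.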
Use the `PointCloudProcess.ipynb` notebook to:
- segment materials in the images
- assign each pixel's label to the corresponding point
- visualize the result
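The label-assignment step amounts to a lookup: each 3D point is projected to its pixel, and the class ID at that pixel in the 2D segmentation mask becomes the point's label. A minimal sketch with hypothetical names (`seg_mask`, `point_pixels`), assuming one (x, y) pixel per point:

```python
import numpy as np

def labels_from_mask(seg_mask, point_pixels):
    """Assign each point the class ID of its projected pixel.

    seg_mask: (H, W) integer array of class IDs from 2D segmentation.
    point_pixels: (N, 2) array of (x, y) pixel coordinates, one per point.
    """
    xs = point_pixels[:, 0]
    ys = point_pixels[:, 1]
    return seg_mask[ys, xs]  # NumPy fancy indexing: row = y, column = x

# Toy mask: class 1 in the left half, class 2 in the right half
seg_mask = np.zeros((4, 4), dtype=int)
seg_mask[:, :2] = 1
seg_mask[:, 2:] = 2
point_pixels = np.array([[0, 0], [3, 3]])
labels = labels_from_mask(seg_mask, point_pixels)
# labels -> array([1, 2])
```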
Use the `Quantification.ipynb` notebook to voxelize the predicted results and quantify each material.
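Voxelization makes the per-material quantities comparable by counting occupied voxels instead of raw points (which are denser near the scanner). A simplified sketch of the idea — the notebook may resolve conflicting labels within a voxel differently (e.g. majority voting):

```python
import numpy as np

def quantify_by_voxel(points, labels, voxel_size=0.1):
    """Voxelize labeled points and count occupied voxels per material class.

    points: (N, 3) XYZ coordinates; labels: (N,) integer class IDs.
    Duplicate points inside the same voxel count only once per class.
    """
    voxels = np.floor(points / voxel_size).astype(int)
    # Deduplicate (voxel, label) pairs so voxel occupancy is counted once
    keys = np.concatenate([voxels, labels[:, None]], axis=1)
    unique_keys = np.unique(keys, axis=0)
    classes, counts = np.unique(unique_keys[:, 3], return_counts=True)
    return dict(zip(classes.tolist(), counts.tolist()))

# Toy example: two class-1 points share one voxel, one class-2 point elsewhere
points = np.array([[0.01, 0.02, 0.03],
                   [0.04, 0.05, 0.06],
                   [1.00, 1.00, 1.00]])
labels = np.array([1, 1, 2])
counts = quantify_by_voxel(points, labels, voxel_size=0.1)
# counts -> {1: 1, 2: 1}
```

Multiplying each count by the voxel volume then gives an approximate volume per material.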
The current 2D segmentation model is YOLOv8; its weights are saved as `best.pt`.
The `segment` folder contains 3D segmentation results from the pipeline.
`scan1-0829.npy` is the NumPy array of the result; it can be viewed with the `ResultVisualization.ipynb` notebook.
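The stored result can also be inspected directly with NumPy. The (N, 4) layout below — XYZ plus a label column — is an assumption for illustration; check `ResultVisualization.ipynb` for the actual format:

```python
import numpy as np

def summarize_result(path):
    """Count points per material class in a saved result.

    Assumes an (N, 4) array of [x, y, z, label] rows; this layout is
    a guess, not the confirmed format of scan1-0829.npy.
    """
    arr = np.load(path)
    classes, counts = np.unique(arr[:, 3].astype(int), return_counts=True)
    return dict(zip(classes.tolist(), counts.tolist()))

# Toy demonstration with a synthetic array instead of the real scan
demo = np.array([[0.0, 0.0, 0.0, 1.0],
                 [1.0, 0.0, 0.0, 1.0],
                 [0.0, 1.0, 0.0, 2.0]])
np.save("demo_result.npy", demo)
summary = summarize_result("demo_result.npy")
# summary -> {1: 2, 2: 1}
```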
This project is supervised by Deepika Raghu, Martin Bucher and Prof. Dr. Catherine De Wolf from the Chair of Circular Economy for Architecture at ETH.
Original point cloud data are collected by Deepika Raghu, Martin Bucher and Matthew Gordon.
The 2D segmentation model (YOLOv8 from Ultralytics) is trained on a dataset collected by Deepika Raghu.
The code is built on pye57.
Thanks for their support!