Revert main due to Issue-151: main is reported to be buggy and needs to be fixed. #152

Draft: wants to merge 7 commits into base: main
9 changes: 5 additions & 4 deletions .github/workflows/run_experiment.yml
@@ -15,7 +15,7 @@ jobs:
label_name: ${{ env.LABEL_NAME }}
steps:
- name: Checkout code
uses: actions/checkout@v2
uses: actions/checkout@v3

- id: check
run: |
@@ -40,9 +40,8 @@ jobs:
issues: write

steps:

- name: Check out code
uses: actions/checkout@v2
uses: actions/checkout@v3

- name: Get the branch name
id: get-branch-name
@@ -67,6 +66,8 @@ jobs:
# use the branch name for the Docker tag
IMAGE_TAG: ${{ steps.get-branch-name.outputs.BRANCH_NAME }}
run: |
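# free disk space on the runner (remove unused Docker images, containers, and volumes) before building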
docker system prune -af
docker volume prune -f
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin 763104351884.dkr.ecr.us-east-2.amazonaws.com
docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG -f Dockerfile.aws .
docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
@@ -92,4 +93,4 @@ jobs:
LABEL_NAME: ${{ needs.check-label-prefix.outputs.label_name }}
run: |
python3 ci/run_experiment.py


3 changes: 3 additions & 0 deletions .gitignore
@@ -6,6 +6,9 @@ Data/
*.egg
*.egg-info
imgui.ini
image/
*.qdstrm
*.nsys-rep
*.sqlite
build/
*.ll
22 changes: 22 additions & 0 deletions .vscode/launch.json
@@ -113,6 +113,28 @@
],
"preLaunchTask": "install"
},
{
"name": "tat truck with noise",
"type": "python",
"request": "launch",
"program": "${workspaceFolder}/experiment/camera_pose_optimization/camera_pose_optimization.py",
"console": "integratedTerminal",
"justMyCode": false,
"args": [
"--train_config",
"config/tat_truck_every_8_test.yaml",
],
"preLaunchTask": "install"
},
{
"name": "tat truck pose estimation",
"type": "python",
"request": "launch",
"program": "${workspaceFolder}/experiment/camera_pose_optimization/pose_estimation.py",
"console": "integratedTerminal",
"justMyCode": false,
"preLaunchTask": "install"
},
{
"name": "tat m60 training",
"type": "python",
4 changes: 3 additions & 1 deletion Dockerfile.aws
@@ -1,8 +1,10 @@
FROM 763104351884.dkr.ecr.us-east-2.amazonaws.com/pytorch-inference:2.0.1-gpu-py310-cu118-ubuntu20.04-sagemaker
# FROM 763104351884.dkr.ecr.us-east-2.amazonaws.com/pytorch-inference:2.0.1-gpu-py310-cu118-ubuntu20.04-sagemaker
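# pin the base image by digest so the build stays reproducible even if the tag above is re-pushed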
FROM 763104351884.dkr.ecr.us-east-2.amazonaws.com/pytorch-inference@sha256:69dcf4bccdf337e2eea1d195d63ec8f8528fab982a4697626684fe7f57a30a22
# preinstall dependencies for faster build
RUN pip install --upgrade pip && \
pip install --no-cache-dir -U taichi==1.6.0 matplotlib numpy pytorch_msssim dataclass-wizard pillow pyyaml pandas[parquet]==2.0.0 scipy argparse tensorboard
COPY . /opt/ml/code
WORKDIR /opt/ml/code
RUN pip install -i https://pypi.taichi.graphics/simple/ taichi-nightly
RUN pip install -r requirements.txt
RUN pip install -e .
7 changes: 5 additions & 2 deletions config/tat_truck_every_8_test.yaml
@@ -30,10 +30,13 @@ log-metrics-interval: 100
print-metrics-to-console: False
enable_taichi_kernel_profiler: False
log_taichi_kernel_profile_interval: 3000
iteration_start_camera_pose_optimization: 50000
camera_pose_optimization_batch_size: 500
log_validation_image: False
feature_learning_rate: 0.005
position_learning_rate: 0.00005
position_learning_rate_decay_rate: 0.9947
camera_pose_learning_rate: 1e-6
position_learning_rate_decay_interval: 100
loss-function-config:
lambda-value: 0.2
@@ -45,8 +48,8 @@ rasterisation-config:
depth-to-sort-key-scale: 10.0
far-plane: 2000.0
near-plane: 0.4
summary-writer-log-dir: logs/tat_truck_every_8_experiment
output-model-dir: logs/tat_truck_every_8_experiment
summary-writer-log-dir: logs/tat_truck_every_8_baseline
output-model-dir: logs/tat_truck_every_8_baseline
train-dataset-json-path: 'data/tat_truck_every_8_test/train.json'
val-dataset-json-path: 'data/tat_truck_every_8_test/val.json'
val-interval: 1000
4 changes: 4 additions & 0 deletions config/test_sagemaker.yaml
@@ -30,10 +30,14 @@ print-metrics-to-console: True
enable_taichi_kernel_profiler: True
log_taichi_kernel_profile_interval: 3000
log_validation_image: True
iteration_start_camera_pose_optimization: 50000
camera_pose_optimization_batch_size: 200
feature_learning_rate: 0.005
position_learning_rate: 0.00005
position_learning_rate_decay_rate: 0.9947
camera_pose_learning_rate_decay_rate: 0.9947
position_learning_rate_decay_interval: 100
camera_pose_learning_rate: 2e-5
loss-function-config:
lambda-value: 0.2
enable_regularization: False
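Note: the new position_learning_rate_decay_rate / position_learning_rate_decay_interval and camera_pose_learning_rate* keys in both configs suggest a stepwise exponential learning-rate schedule. The snippet below is a minimal sketch of that interpretation, assuming one decay step per completed interval; it is an assumption about the trainer's behaviour, not code from this PR.

def decayed_lr(base_lr, iteration, decay_rate=0.9947, decay_interval=100):
    # stepwise exponential decay: multiply by decay_rate once per completed interval
    return base_lr * decay_rate ** (iteration // decay_interval)

# example: the position learning rate from the config after 30k iterations
print(decayed_lr(5e-5, iteration=30000))  # 5e-5 * 0.9947**300, roughly 1.0e-5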
57 changes: 57 additions & 0 deletions experiment/camera_pose_optimization/camera_pose_optimization.py
@@ -0,0 +1,57 @@
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import argparse
from taichi_3d_gaussian_splatting.GaussianPointTrainer import GaussianPointCloudTrainer

DELTA_T_RANGE = 0.1
DELTA_ANGLE_RANGE = 0.01

def add_delta_to_se3(se3_matrix: np.ndarray):
# work on a copy so the caller's original matrix is not modified in place
se3_matrix = se3_matrix.copy()
delta_t = np.random.uniform(-DELTA_T_RANGE, DELTA_T_RANGE, size=(3,))
delta_angle = np.random.uniform(-DELTA_ANGLE_RANGE, DELTA_ANGLE_RANGE, size=(3,))
Rx = np.array([[1, 0, 0],
[0, np.cos(delta_angle[0]), -np.sin(delta_angle[0])],
[0, np.sin(delta_angle[0]), np.cos(delta_angle[0])]])
RY = np.array([[np.cos(delta_angle[1]), 0, np.sin(delta_angle[1])],
[0, 1, 0],
[-np.sin(delta_angle[1]), 0, np.cos(delta_angle[1])]])
Rz = np.array([[np.cos(delta_angle[2]), -np.sin(delta_angle[2]), 0],
[np.sin(delta_angle[2]), np.cos(delta_angle[2]), 0],
[0, 0, 1]])
delta_rotation = Rz @ RY @ Rx
se3_matrix[:3, :3] = se3_matrix[:3, :3] @ delta_rotation
se3_matrix[:3, 3] += delta_t
return se3_matrix


if __name__ == "__main__":
plt.switch_backend("agg")
parser = argparse.ArgumentParser("Train a Gaussian Point Cloud Scene")
parser.add_argument("--train_config", type=str, required=True)
parser.add_argument("--gen_template_only",
action="store_true", default=False)
args = parser.parse_args()
if args.gen_template_only:
config = GaussianPointCloudTrainer.TrainConfig()
# convert config to yaml
config.to_yaml_file(args.train_config)
exit(0)
config = GaussianPointCloudTrainer.TrainConfig.from_yaml_file(
args.train_config)

original_train_dataset_json_path = config.train_dataset_json_path

df = pd.read_json(original_train_dataset_json_path, orient="records")
df["T_pointcloud_camera"] = df["T_pointcloud_camera"].apply(lambda x: np.array(x).reshape(4, 4))
df["T_pointcloud_camera_with_noise"] = df["T_pointcloud_camera"].apply(lambda x: add_delta_to_se3(x))
# row-wise sampling: keep the noisy pose for roughly 20% of the frames, the original pose otherwise
df["T_pointcloud_camera"] = df.apply(lambda x: x["T_pointcloud_camera_with_noise"] if np.random.rand() < 0.2 else x["T_pointcloud_camera"], axis=1)


# save df to a temp json file
df.to_json("/tmp/temp.json", orient="records")
config.train_dataset_json_path = "/tmp/temp.json"

trainer = GaussianPointCloudTrainer(config)
trainer.train()
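Note: add_delta_to_se3 composes three small Euler-angle rotations (Rz @ RY @ Rx) with the existing rotation and offsets the translation, so the result should remain a valid SE(3) transform. The snippet below is an illustrative sanity check under that assumption, using the helper defined above; it is not part of the PR.

import numpy as np

def is_valid_se3(T, tol=1e-6):
    # rotation block must be orthonormal with determinant +1, and the last row must be [0, 0, 0, 1]
    R = T[:3, :3]
    return (np.allclose(R @ R.T, np.eye(3), atol=tol)
            and np.isclose(np.linalg.det(R), 1.0, atol=tol)
            and np.allclose(T[3], [0.0, 0.0, 0.0, 1.0], atol=tol))

# example: perturbing an identity pose should still yield a valid transform
assert is_valid_se3(add_delta_to_se3(np.eye(4)))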
160 changes: 160 additions & 0 deletions experiment/camera_pose_optimization/pose_estimation.py
@@ -0,0 +1,160 @@
# %%
import sys
sys.path.append("../..")
# %%
import argparse
import taichi as ti
from taichi_3d_gaussian_splatting.Camera import CameraInfo
from taichi_3d_gaussian_splatting.CameraPoses import CameraPoses
from taichi_3d_gaussian_splatting.GaussianPointCloudRasterisation import GaussianPointCloudRasterisation
from taichi_3d_gaussian_splatting.GaussianPointCloudScene import GaussianPointCloudScene
from taichi_3d_gaussian_splatting.ImagePoseDataset import ImagePoseDataset
from taichi_3d_gaussian_splatting.LossFunction import LossFunction
from taichi_3d_gaussian_splatting.utils import torch2ti, SE3_to_quaternion_and_translation_torch, quaternion_rotate_torch, quaternion_multiply_torch, quaternion_conjugate_torch, inverse_SE3_qt_torch
from dataclasses import dataclass
from typing import List, Tuple
import torch
import numpy as np
from scipy.spatial.transform import Rotation as R
import pandas as pd
import matplotlib.pyplot as plt
# %%
DELTA_T_RANGE = 0.2
DELTA_ANGLE_RANGE = 0.3


def add_delta_to_se3(se3_matrix: np.ndarray):
# work on a copy so the caller's original matrix is not modified in place
se3_matrix = se3_matrix.copy()
# fixed seed: every call, and therefore every pose, receives the identical perturbation
np.random.seed(0)
delta_t = np.random.uniform(-DELTA_T_RANGE, DELTA_T_RANGE, size=(3,))
delta_angle = np.random.uniform(-DELTA_ANGLE_RANGE,
DELTA_ANGLE_RANGE, size=(3,))
Rx = np.array([[1, 0, 0],
[0, np.cos(delta_angle[0]), -np.sin(delta_angle[0])],
[0, np.sin(delta_angle[0]), np.cos(delta_angle[0])]])
RY = np.array([[np.cos(delta_angle[1]), 0, np.sin(delta_angle[1])],
[0, 1, 0],
[-np.sin(delta_angle[1]), 0, np.cos(delta_angle[1])]])
Rz = np.array([[np.cos(delta_angle[2]), -np.sin(delta_angle[2]), 0],
[np.sin(delta_angle[2]), np.cos(delta_angle[2]), 0],
[0, 0, 1]])
delta_rotation = Rz @ RY @ Rx
se3_matrix[:3, :3] = se3_matrix[:3, :3] @ delta_rotation
# se3_matrix[:3, 3] += delta_t
return se3_matrix


# %%
ti.init(ti.cuda)
trained_parquet_path = "/home/kuangyuan/hdd/Development/taichi_3d_gaussian_splatting/logs/tat_truck_every_8_experiment/scene_29000.parquet"
dataset_json_path = "/home/kuangyuan/hdd/Development/taichi_3d_gaussian_splatting/data/tat_truck_every_8_test/train.json"

rasterisation = GaussianPointCloudRasterisation(
config=GaussianPointCloudRasterisation.GaussianPointCloudRasterisationConfig(
enable_grad_camera_pose=True,
near_plane=0.8,
far_plane=1000.,
depth_to_sort_key_scale=100.))
scene = GaussianPointCloudScene.from_parquet(
trained_parquet_path, config=GaussianPointCloudScene.PointCloudSceneConfig(max_num_points_ratio=None))
scene = scene.cuda()
train_dataset = ImagePoseDataset(
dataset_json_path=dataset_json_path)

loss_function = LossFunction(
config=LossFunction.LossFunctionConfig(
enable_regularization=False))

df = pd.read_json(dataset_json_path, orient="records")
df["T_pointcloud_camera_original"] = df["T_pointcloud_camera"].apply(
lambda x: np.array(x).reshape(4, 4))
df["T_pointcloud_camera"] = df["T_pointcloud_camera_original"].apply(
lambda x: add_delta_to_se3(x))

# save df to a temp json file
df.to_json("/tmp/temp.json", orient="records")
with_noise_dataset_json_path = "/tmp/temp.json"

camera_poses = CameraPoses(dataset_json_path=with_noise_dataset_json_path)
camera_poses = camera_poses.cuda()
camera_pose_optimizer = torch.optim.AdamW(
camera_poses.parameters(), lr=1e-3, betas=(0.9, 0.999))

distance_list = []
angle_list = []
loss_list = []

for i in range(1000):
# decay the learning rate by a factor of 0.9 every 50 iterations
if i % 50 == 0:
for param_group in camera_pose_optimizer.param_groups:
param_group['lr'] *= 0.9
camera_pose_optimizer.zero_grad()
image_gt, input_q_pointcloud_camera, input_t_pointcloud_camera, camera_pose_indices, camera_info = train_dataset[
200]
input_q_camera_pointcloud, input_t_camera_pointcloud = inverse_SE3_qt_torch(
q=input_q_pointcloud_camera, t=input_t_pointcloud_camera)
trained_q_camera_pointcloud, trained_t_camera_pointcloud = camera_poses(
camera_pose_indices)
print(
f"trained_q_camera_pointcloud: {trained_q_camera_pointcloud.detach().cpu().numpy()}")
print(
f"input_q_camera_pointcloud: {input_q_camera_pointcloud.detach().cpu().numpy()}")
print(
f"trained_t_camera_pointcloud: {trained_t_camera_pointcloud.detach().cpu().numpy()}")
print(
f"input_t_camera_pointcloud: {input_t_camera_pointcloud.detach().cpu().numpy()}")

image_gt = image_gt.cuda()
input_q_camera_pointcloud = input_q_camera_pointcloud.cuda()
input_t_camera_pointcloud = input_t_camera_pointcloud.cuda()
trained_q_camera_pointcloud = trained_q_camera_pointcloud.cuda()
trained_t_camera_pointcloud = trained_t_camera_pointcloud.cuda()
camera_info.camera_intrinsics = camera_info.camera_intrinsics.cuda()

delta_t = input_t_camera_pointcloud - trained_t_camera_pointcloud
distance = torch.norm(delta_t, dim=-1).item()
distance_list.append(distance)
delta_angle_cos = (input_q_camera_pointcloud *
trained_q_camera_pointcloud).sum(dim=-1)
# clamp to [-1, 1] so acos does not return NaN if the dot product overshoots numerically
delta_angle = torch.acos(delta_angle_cos.clamp(-1.0, 1.0)).item() * 180 / np.pi
angle_list.append(delta_angle)
camera_info.camera_width = int(camera_info.camera_width)
camera_info.camera_height = int(camera_info.camera_height)
gaussian_point_cloud_rasterisation_input = GaussianPointCloudRasterisation.GaussianPointCloudRasterisationInput(
point_cloud=scene.point_cloud.contiguous(),
point_cloud_features=scene.point_cloud_features.contiguous(),
point_object_id=scene.point_object_id.contiguous(),
point_invalid_mask=scene.point_invalid_mask.contiguous(),
camera_info=camera_info,
q_camera_pointcloud=trained_q_camera_pointcloud.contiguous(),
t_camera_pointcloud=trained_t_camera_pointcloud.contiguous(),
color_max_sh_band=3,
)
image_pred, image_depth, pixel_valid_point_count = rasterisation(
gaussian_point_cloud_rasterisation_input)
# clip to [0, 1]
image_pred = torch.clamp(image_pred, min=0, max=1)
# hxwx3->3xhxw
image_pred = image_pred.permute(2, 0, 1)
loss, l1_loss, ssim_loss = loss_function(
image_pred,
image_gt,
point_invalid_mask=scene.point_invalid_mask,
pointcloud_features=scene.point_cloud_features)
loss.backward()
camera_pose_optimizer.step()
camera_poses.normalize_quaternion()
loss_list.append(loss.item())

iteration = np.arange(len(distance_list))
# plt.subplots returns (figure, axes); name them accordingly
fig, axes = plt.subplots(1, 3, figsize=(15, 5))
axes[0].plot(iteration, distance_list, label="distance")
axes[0].set_title("distance")
axes[1].plot(iteration, angle_list, label="angle")
axes[1].set_title("angle")
axes[2].plot(iteration, loss_list, label="loss")
axes[2].set_title("loss")
plt.show()

# %%
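Note on the angle metric tracked above: the dot product of two unit quaternions equals cos(theta/2), where theta is the relative rotation angle, so acos(q1·q2) reports half the rotation and is sensitive to the q versus -q sign ambiguity. The sketch below computes the full geodesic angle for comparison; it is offered as an illustrative alternative, not a change to the script.

import numpy as np

def rotation_angle_deg(q1, q2):
    # geodesic rotation angle between two unit quaternions, in degrees
    dot = abs(float(np.dot(q1, q2)))   # abs() folds the q / -q double cover
    dot = min(dot, 1.0)                # guard against floating-point overshoot
    return 2.0 * np.degrees(np.arccos(dot))

# identical orientations give 0 degrees
q = np.array([0.0, 0.0, 0.0, 1.0])
print(rotation_angle_deg(q, q))  # 0.0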

3 changes: 1 addition & 2 deletions requirements.txt
@@ -1,4 +1,3 @@
taichi>=1.6.0
matplotlib
numpy
pytorch_msssim
@@ -9,4 +8,4 @@ pandas[parquet]>=2.0.0
scipy
argparse
tensorboard
plyfile
plyfile