Hi there!
Thank you so much for this amazing work!
I'm trying to register known CAD models of anatomy to their real-world counterparts (patients), based solely on the partially occluded, noisy point clouds I reconstruct from the ToF camera of the HoloLens 2 mixed reality headset (depth only, no RGB). I'm completely new to deep learning, and I was wondering how best to leverage the fact that I have the mesh of the 3D model I'm trying to register ahead of time. Ideally I want a pipeline that lets one upload a 3D model and runs some preprocessing beforehand to create the best possible network for registering that CAD model.
Before doing any retraining, is there anything I can change in my config file to optimize the pretrained network for aligning smaller objects, like the CT scan of a knee? In my test I used the indoor config and only changed dgcnn_k to 4 to avoid an error (see "What I have done so far"). Do you have advice on what else I could change or optimize? Unfortunately, out of the box the network occasionally registers the CAD model upside down.
Would it be a good idea to train this network solely on the one CAD model I'm trying to register? Should I also collect a lot of the noise one would encounter in my use case and then train a network that can generate realistic noise for use during this custom training step?
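To make the noise-simulation idea concrete, here is a minimal numpy sketch that corrupts clean points sampled from the CAD surface with Gaussian jitter and random dropout. The 5 mm sigma and 30% dropout are placeholder assumptions, not measured HoloLens 2 ToF characteristics; a learned or calibrated noise model would replace them.

```python
import numpy as np

def simulate_tof_noise(points, sigma=0.005, dropout=0.3, seed=0):
    """Corrupt a clean point cloud with Gaussian jitter and random point
    dropout, roughly mimicking a noisy, partial ToF scan.

    points : (N, 3) array in metres
    sigma  : per-axis Gaussian noise std (5 mm here, an assumption)
    dropout: fraction of points removed to imitate occlusion/missing returns
    """
    rng = np.random.default_rng(seed)
    keep = rng.random(len(points)) > dropout            # random dropout mask
    noisy = points[keep] + rng.normal(0.0, sigma, (keep.sum(), 3))
    return noisy

# Usage: corrupt 2000 points sampled on a unit sphere surface
pts = np.random.default_rng(1).normal(size=(2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
noisy = simulate_tof_noise(pts)
```

A real pipeline would sample `pts` from the CAD mesh surface (e.g. with Open3D) rather than from a sphere.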
Thank you so much for your consideration and for sharing your work!
What I have done so far (for any complete beginners wondering how to use this with custom data):
Setup before demo.py
On Ubuntu 20.04.3 LTS with the required dev tools (git, ninja, etc.) installed.
Due to the symmetry in your data, registering objects upside down is quite expected. If you check deep models that register the ModelNet40 dataset, they quite often restrict the test samples to rotation angles within 45 degrees, which directly avoids the ambiguity caused by symmetry. In large-scale data, we rely on larger context to disambiguate; I guess this would be challenging in your case.
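The 45-degree restriction mentioned above can be reproduced when generating training or evaluation rotations. A minimal numpy sketch using Rodrigues' formula; only the angle bound comes from the text, everything else is illustrative:

```python
import numpy as np

def random_rotation(max_angle_deg=45.0, seed=0):
    """Random rotation matrix whose rotation angle is bounded by
    max_angle_deg, built via Rodrigues' formula. Bounding the angle is
    how ModelNet40-style benchmarks sidestep symmetry ambiguity."""
    rng = np.random.default_rng(seed)
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)                       # unit rotation axis
    angle = np.deg2rad(rng.uniform(0.0, max_angle_deg))
    K = np.array([[0, -axis[2], axis[1]],              # cross-product matrix
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

R = random_rotation()
# recover the rotation angle from the trace: tr(R) = 1 + 2 cos(angle)
angle = np.degrees(np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0)))
```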
Without some hacks, the KPConv backbone is sensitive to hyper-parameter changes, so there is not much you can do with the pre-trained models. However, you can check the D3Feat paper to see how they hack the backbone to achieve generalisation, though only to some extent.
I think finetuning the pretrained model on your collected data is a good idea.
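One way to produce such finetuning data is to synthesise (source, target, ground-truth pose) pairs from the CAD mesh itself: crop a partial view, jitter it, and move it by a known, angle-bounded rigid transform. A minimal numpy sketch under assumed noise and crop parameters; a real pipeline would render depth from actual sensor viewpoints instead of a planar crop:

```python
import numpy as np

def make_training_pair(cad_points, max_angle_deg=45.0, sigma=0.005, seed=0):
    """Build one synthetic (source, target, ground-truth pose) sample
    from points sampled on the CAD model. Sketch only: the half-space
    crop and 5 mm noise are stand-ins for real ToF partiality/noise."""
    rng = np.random.default_rng(seed)
    # partial view: keep points on one side of a random plane through the centroid
    normal = rng.normal(size=3); normal /= np.linalg.norm(normal)
    partial = cad_points[(cad_points - cad_points.mean(0)) @ normal > 0]
    # bounded random rotation (Rodrigues) plus a small random translation
    axis = rng.normal(size=3); axis /= np.linalg.norm(axis)
    a = np.deg2rad(rng.uniform(0.0, max_angle_deg))
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(a) * K + (1 - np.cos(a)) * (K @ K)
    t = rng.uniform(-0.05, 0.05, 3)
    target = partial @ R.T + t + rng.normal(0.0, sigma, partial.shape)
    return cad_points, target, (R, t)

# Usage with stand-in sphere points (replace with points sampled on the mesh)
sphere = np.random.default_rng(1).normal(size=(5000, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)
src, tgt, (R, t) = make_training_pair(sphere)
```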
Let me know if you need more assistance. I was once asked about such application scenarios, and I am looking forward to seeing how it works.
demo.py works like a charm!
Using custom data:
python scripts/demo.py configs/test/testconfig.yaml
Results: