Help Needed - Data Format for Synthetic Data #27
Comments
Hi, could you provide more details? The triangulation phase needs images cropped by the bounding box (which is projected from 3D to 2D). Moreover, after the crop, the image intrinsics need to be adjusted accordingly, as in our demo code. Please check that these operations are performed properly.
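To make the crop/intrinsics adjustment concrete, here is a minimal sketch of the idea (names are illustrative, not the actual demo code): after cropping to the 2D box, shift the principal point by the crop offset; the focal lengths only change if you also resize the crop.

```python
import numpy as np

def crop_and_adjust_intrinsics(image, K, bbox_2d):
    """Crop `image` to a 2D box and shift the principal point accordingly.

    `bbox_2d` = (x0, y0, x1, y1) in pixels; `K` is the 3x3 intrinsic matrix.
    """
    x0, y0, x1, y1 = [int(round(v)) for v in bbox_2d]
    crop = image[y0:y1, x0:x1]

    K_new = K.astype(np.float64)  # astype returns a copy, original K is untouched
    # The crop drops the first x0 columns and y0 rows, so the principal
    # point moves by the same offset; fx and fy stay the same unless the
    # crop is also resized (then scale fx, cx by sx and fy, cy by sy).
    K_new[0, 2] -= x0
    K_new[1, 2] -= y0
    return crop, K_new
```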
Thank you for the answer. (@mizeller and I are working on the same project.) By directly extracting the transformation matrices ("T_wc" and "T_oc") from Blender and a little bit of hacking, we could crop the images without creating "Box.txt" and "ARposes.txt", and the OnePose++ pipeline worked successfully 😊
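The general idea was roughly the following (a simplified sketch, not our exact code): project the 8 corners of the object's 3D bounding box into the image with `T_oc` and the intrinsics `K`, and take the min/max of the projections as the crop box.

```python
import numpy as np

def bbox_2d_from_object_box(corners_obj, T_oc, K):
    """Project the object's 3D box corners into the image and return a 2D box.

    corners_obj: (8, 3) box corners in the object frame
    T_oc:        (4, 4) object-to-camera transform
    K:           (3, 3) camera intrinsics
    """
    corners_h = np.hstack([corners_obj, np.ones((len(corners_obj), 1))])  # (8, 4)
    cam = (T_oc @ corners_h.T)[:3]      # (3, 8) corners in the camera frame
    uv = (K @ cam) / cam[2]             # perspective division
    x0, y0 = uv[0].min(), uv[1].min()
    x1, y1 = uv[0].max(), uv[1].max()
    return x0, y0, x1, y1
```

The resulting box (padded a bit and clamped to the image bounds) is then used for cropping, with the intrinsics adjusted as described above.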
@Maemaemaeko would you kindly share your pipeline for making new objects?
Sure. This is our repository. |
Hello, have you solved the problem with Box.txt? Could you share the relevant solution?
Please refer to our small fix in parse_scanned_data.py. This might help you. |
Hey
I'm trying to use OnePose++ to estimate the pose of a novel object for which I created some synthetic data with Blender. I have this data in the standard BOP format and converted it to the format used in your demo data. It is almost working, but currently the cropped images are completely wrong, and the triangulation sometimes finds 0 matches, because I reused the Box.txt from your demo data (which obviously makes no sense for my object).
What I couldn't figure out is how to generate the Box.txt file correctly. Any help in that regard would be much appreciated!
Thanks in advance :-)
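For context, this is roughly how I build the object-to-camera transform from the BOP ground truth during my conversion (the `cam_R_m2c` / `cam_t_m2c` fields are from the BOP `scene_gt.json` spec; whether this is the convention your pipeline expects is exactly the part I'm unsure about):

```python
import json
import numpy as np

def load_T_oc_from_bop(scene_gt_path, im_id, gt_index=0):
    """Build a 4x4 object-to-camera transform from a BOP scene_gt.json entry.

    BOP stores cam_R_m2c as a row-major 3x3 rotation and cam_t_m2c in
    millimetres; the translation is converted to metres here.
    """
    with open(scene_gt_path) as f:
        scene_gt = json.load(f)

    gt = scene_gt[str(im_id)][gt_index]
    R = np.array(gt["cam_R_m2c"], dtype=np.float64).reshape(3, 3)
    t = np.array(gt["cam_t_m2c"], dtype=np.float64) / 1000.0  # mm -> m

    T_oc = np.eye(4)
    T_oc[:3, :3] = R
    T_oc[:3, 3] = t
    return T_oc
```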