Is there a pipeline to convert video image stream into kapture data format? #21
Hi,
The poses of the images go in trajectories.txt. I assumed that you didn't have them, in which case you would estimate them with kapture-localization (the mapping ends with kapture_colmap_build_map.py, which runs colmap point_triangulator when poses are known, or colmap mapper when they're not; you can then import them back into kapture with kapture_import_colmap.py). If you already know the poses, then depending on the format, you might have to create trajectories.txt yourself.
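If you do already have poses, creating trajectories.txt by hand can be sketched like this. The column layout is my reading of the kapture 1.1 text format (`timestamp, device_id, qw, qx, qy, qz, tx, ty, tz`); verify the pose convention (world-to-camera vs. camera-to-world) against the kapture format specification before trusting it:

```shell
# Sketch only: hand-writing a minimal trajectories.txt for two frames with
# known poses. Assumed column layout (kapture 1.1 text format):
#   timestamp, device_id, qw, qx, qy, qz, tx, ty, tz
# (unit quaternion + translation; check the spec for the exact convention)
mkdir -p mapping/sensors
cat > mapping/sensors/trajectories.txt <<'EOF'
# kapture format: 1.1
# timestamp, device_id, qw, qx, qy, qz, tx, ty, tz
0, cam0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0
1, cam0, 0.999, 0.0, 0.044, 0.0, 0.1, 0.0, 0.0
EOF
```

The device_id in each line must match a sensor declared in sensors.txt and the frames listed in records_camera.txt.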
Hi! That looks good for 3D reconstruction without known poses. Try running this script: https://github.com/naver/kapture-localization/blob/main/tools/kapture_colmap_build_map.py Best,
It's either https://github.com/naver/kapture-localization/blob/main/pipeline/kapture_pipeline_mapping.py or running the commands manually. I'd recommend the pipeline script. For this you should follow the recommended dataset structure, in particular storing what's currently in "reconstruction" outside of it; the pipeline script is responsible for reconstructing the kapture folder as needed with symlinks (in the code, they're called kapture_proxy). You could also run the commands manually:
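For reference, a mapping run with the pipeline script looks roughly like the following. The feature folder names and flags are illustrative, modeled on the kapture-localization tutorial; check `kapture_pipeline_mapping.py --help` against your installed version before copying:

```shell
# sketch, not verbatim: adapt the dataset and feature paths to your setup
kapture_pipeline_mapping.py -v info \
    -i ./mapping \
    -kpt ./local_features/r2d2_500/keypoints \
    -desc ./local_features/r2d2_500/descriptors \
    -gfeat ./global_features/AP-GeM-LM18/global_features \
    -matches ./local_features/r2d2_500/NN_no_gv/matches \
    -matches-gv ./local_features/r2d2_500/NN_colmap_gv/matches \
    --colmap-map ./colmap-sfm/r2d2_500/AP-GeM-LM18_top5 \
    --topk 5
```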
About the installation error: a comma was missing in setup.py, so it should be fixed now, sorry about that. About the assertion: all it says is that kapture_compute_matches didn't find any descriptors. First, make sure that you are running kapture 1.1.0; kapture 1.0.x would not be able to read the kapture data produced by kapture-localization 0.1.0. Then, in descriptors.txt, keypoints_type would have to be r2d2_500. The parameters to the pipeline command are identical to the tutorial in this configuration; note that --topk 5 is very low, usually we use --topk 20. After the crash, you can check colmap-sfm/r2d2_500/AP-GeM-LM18_top5/kapture_inputs/proxy_mapping
kapture_pipeline_localize.py is used to get poses for images that were not in the mapping data. Mapping and query data cannot have images with the same name. You didn't have any poses at the start, so I don't think you want to run it. After kapture_pipeline_mapping.py, you should have the reconstruction in the COLMAP format (sparse 3D + estimated poses for the images). You can also import it back into kapture if you want with kapture_import_colmap.py
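The import back into kapture can be sketched as follows; the -db/-txt/-o flag names are my assumption about kapture's COLMAP importer, so verify them with `kapture_import_colmap.py --help`:

```shell
# sketch: import the COLMAP database + sparse model back into kapture
kapture_import_colmap.py -v info \
    -db ./colmap-sfm/r2d2_500/AP-GeM-LM18_top5/colmap.db \
    -txt ./colmap-sfm/r2d2_500/AP-GeM-LM18_top5/reconstruction \
    -o ./mapping_with_poses
```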
Thank you! If I want to run kapture_pipeline_localize.py, is there any existing method in kapture I can use to estimate the poses? Also, I tried to use the resulting sparse 3D model to generate a dense model in COLMAP; however, the result is not as good as I expected (since the features captured using R2D2 and Deep Image Retrieval should be better than the features captured using SIFT). Do you have any idea why this is the case? Thank you so much for your time! Best!
First, about extraction parameters: for AP-GeM (Deep Image Retrieval) we use the Resnet101-AP-GeM-LM18.pt model with the default parameters. What is holding you back, though, is probably the camera intrinsics. With kapture_import_image_folder.py, one camera is added for each image, with the type UNKNOWN_CAMERA and garbage default parameters. First you would want to reduce the number of cameras: images that come from the same camera should reference the same sensor in records_camera.txt. You might have to write a script for your particular case, or do it manually, with rectangular selection for example. In the end, if there is no zoom or the like, you could end up with one sensor per video in sensors.txt. Alternatively, you could import the COLMAP reconstruction that you obtained with SIFT and COLMAP, and use those camera parameters for the r2d2/apgem pipeline.
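Concretely, "one sensor per video" could look like the excerpts below. The sensor id and SIMPLE_RADIAL intrinsics are illustrative, assuming the kapture 1.1 CSV layout; real parameters would come from calibration or from your SIFT/COLMAP run:

```
# sensors.txt: one camera entry per video instead of one per image
# sensor_id, name, sensor_type, [sensor_params]+
video0_cam, video0_cam, camera, SIMPLE_RADIAL, 1920, 1080, 1200.0, 960.0, 540.0, 0.0

# records_camera.txt: every frame of that video references the same sensor_id
# timestamp, device_id, image_path
0, video0_cam, video0/frame_000000.jpg
1, video0_cam, video0/frame_000001.jpg
```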
Thank you so much for your help! It was really helpful!
Hi, I am currently using COLMAP for 3D reconstruction, using video frames that I collected. I want to try kapture-localization for 3D reconstruction; however, I am not sure how I can convert my data into the kapture format. Do you have any suggestions? Thank you in advance.
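The conversion step asked about here is what kapture_import_image_folder.py (mentioned in the answers above) handles. A sketch of the frame extraction plus import, with illustrative paths; check the importer's --help for the exact flags, and expect to fix the intrinsics afterwards as discussed above:

```shell
# sketch: extract frames from a video, then import the folder into kapture
ffmpeg -i video0.mp4 -qscale:v 2 frames/video0/frame_%06d.jpg
kapture_import_image_folder.py -v info -i ./frames -o ./mapping
```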