
Is there a pipeline to convert video image stream into kapture data format? #21

Closed
charlesyuJY opened this issue May 19, 2021 · 15 comments

Comments

@charlesyuJY

Hi, I am currently using Colmap for 3D reconstruction (Using video frames that I collected). I want to try kapture-localisation for 3D reconstruction, however, I am not sure how I can convert my data into kapture format. Do you have any suggestions? Thank you in advance.

@yocabon
Contributor

yocabon commented May 20, 2021

Hi,
we have an import script for folder of images that could be used (see https://github.com/naver/kapture/blob/main/tools/kapture_import_image_folder.py).
Does this help ?

@charlesyuJY
Author

Hi,
Thank you so much for helping. I did manage to generate the records_camera.txt file. Just wondering whether I need to create the trajectories.txt file myself, or whether there is a way in kapture that I can use? Best, thank you!

@yocabon
Contributor

yocabon commented May 20, 2021

trajectories.txt holds the poses of the images. I assumed that you didn't have them, in which case you would estimate them with kapture-localization (the mapping ends with kapture_colmap_build_map.py, which runs colmap point_triangulator when poses are known, or colmap mapper when they are not; you can import them back to kapture with kapture_import_colmap.py).

If you already know the poses, then depending on the format, you might have to create the trajectories.txt yourself.
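If you do know the poses, a hand-rolled writer might look like the sketch below. The CSV layout (timestamp, device_id, qw, qx, qy, qz, tx, ty, tz) is my reading of the kapture 1.1 format, and the sensor id "cam0" and the pose values are made up; check kapture's own format documentation before relying on this.

```python
# Sketch: writing a minimal kapture trajectories.txt by hand from known poses.
# Layout assumed (kapture 1.1): timestamp, device_id, w-first quaternion, translation.
from pathlib import Path

# (timestamp, sensor_id, qw, qx, qy, qz, tx, ty, tz) -- hypothetical example poses
poses = [
    (0, "cam0", 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0),
    (1, "cam0", 0.995, 0.0, 0.0998, 0.0, 0.1, 0.0, 0.0),
]

def write_trajectories(path: Path, poses) -> None:
    lines = ["# kapture format: 1.1",
             "# timestamp, device_id, qw, qx, qy, qz, tx, ty, tz"]
    for ts, dev, *pose in poses:
        lines.append(", ".join([str(ts), dev] + [str(v) for v in pose]))
    path.write_text("\n".join(lines) + "\n")

write_trajectories(Path("trajectories.txt"), poses)
```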

@charlesyuJY
Author

Hi,
Again, thank you so much for such a fast reply. I obtained the local features and global features following the instructions in the repository. All the files I have are shown below. Is this sufficient for kapture-localization? Thank you so much for your support. It is really helpful! Best!
[screenshot]

@humenbergerm
Contributor

Hi!

Looks good for 3D reconstruction without known poses. Try to run this script: https://github.com/naver/kapture-localization/blob/main/tools/kapture_colmap_build_map.py

Best,
Martin

@charlesyuJY
Author

Hi, Thanks for the update.
I did try kapture_colmap_build_map.py; however, I got the issue shown below. I guess I am missing the matches folder. Any suggestions on how to generate it? Thanks. Best!
[screenshot]

@yocabon
Contributor

yocabon commented May 22, 2021

It's either https://github.com/naver/kapture-localization/blob/main/pipeline/kapture_pipeline_mapping.py or manually running the commands.

I'd recommend the pipeline script. For this you should follow the recommended dataset structure, in particular storing what is currently in "reconstruction" outside; the pipeline script is responsible for reconstructing the kapture folder as needed with symlinks (in the code, these are called kapture_proxy).

You could also run the commands manually, in succession:

1. tools/kapture_compute_image_pairs.py
2. tools/kapture_compute_matches.py
3. tools/kapture_run_colmap_gv.py (for this one you'd need a copy of your kapture_output folder, which will hold the verified matches; usually we do this with symlinks to avoid wasting space. You need sensors and reconstruction/keypoints)
4. tools/kapture_colmap_build_map.py
5. and maybe kapture/tools/kapture_import_colmap.py

@charlesyuJY
Author

Thank you so much, this is really helpful. Just discovered something else while trying to install Kapture this morning. Please see the issue shown below. Thank you!
[screenshot]

@charlesyuJY
Author

Hi yocabon, I tried to run kapture_pipeline_mapping.py using the recommended structure, as shown:
[screenshot]
However, I still get an AssertionError, as shown below:
[screenshot]
My structure is the same as the recommended one, except I did not split the data into training and testing folders. Thank you for helping! Best

yocabon added a commit that referenced this issue May 22, 2021
@yocabon
Contributor

yocabon commented May 22, 2021

About the installation error: a comma was missing in setup.py, so it should be fixed now, sorry for that.

About the assertion: all it says is that kapture_compute_matches didn't find any descriptors in kapture_inputs/proxy_mapping, so either the path is wrong or there were no descriptors in the folder.

First, make sure that you are running kapture 1.1.0; kapture 1.0.x would not be able to read the kapture data produced by kapture-localization 0.1.0.

Then, if your path is local_features/r2d2_500/descriptors, you must have:

```
local_features/r2d2_500/descriptors/descriptors.txt
local_features/r2d2_500/descriptors/yourimages_relative_path_withext.desc
```

In descriptors.txt, keypoints_type would have to be r2d2_500.
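A quick way to check that point is to read the first data line of descriptors.txt. The field layout used below (name, dtype, dsize, keypoints_type, metric_type) is my assumption about the kapture 1.1 format, and the sample file content is made up; compare against your actual descriptors.txt.

```python
# Sketch: sanity-checking which keypoints_type descriptors.txt declares.
# Field order (name, dtype, dsize, keypoints_type, metric_type) is an assumption.
from pathlib import Path

def declared_keypoints_type(descriptors_txt: Path) -> str:
    for line in descriptors_txt.read_text().splitlines():
        if line.startswith("#") or not line.strip():
            continue  # skip header/comment lines
        fields = [f.strip() for f in line.split(",")]
        return fields[3]  # keypoints_type column (assumed position)
    raise ValueError("no data line found")

# Hypothetical sample file matching the assumed layout.
sample = Path("descriptors.txt")
sample.write_text("# kapture format: 1.1\n"
                  "# name, dtype, dsize, keypoints_type, metric_type\n"
                  "r2d2, float32, 128, r2d2_500, L2\n")
print(declared_keypoints_type(sample))  # -> r2d2_500
```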

The parameters to the pipeline command are identical to the tutorial in this configuration. Note that --topk 5 is very low; we usually use --topk 20.

After the crash, you can check colmap-sfm/r2d2_500/AP-GeM-LM18_top5/kapture_inputs/proxy_mapping. It should contain:

```
sensors/records_camera.txt
sensors/records_data
sensors/sensors.txt
reconstruction/keypoints/r2d2_500/keypoints.txt
reconstruction/keypoints/r2d2_500/yourimages_relative_path_withext.kpt
reconstruction/descriptors/r2d2_500/descriptors.txt
reconstruction/descriptors/r2d2_500/yourimages_relative_path_withext.desc
reconstruction/global_features/Resnet-101-AP-GeM/global_features.txt
reconstruction/global_features/Resnet-101-AP-GeM/yourimages_relative_path_withext.gfeat
reconstruction/matches/r2d2_500/
```
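A checklist like the one above can be verified with a few lines of Python. The relative paths and feature names (r2d2_500, Resnet-101-AP-GeM) are taken from this thread; adapt them to your own extractor names. The snippet builds a hypothetical folder tree only so that it is self-contained.

```python
# Sketch: verifying that the expected proxy_mapping files exist.
from pathlib import Path

EXPECTED = [
    "sensors/records_camera.txt",
    "sensors/sensors.txt",
    "reconstruction/keypoints/r2d2_500/keypoints.txt",
    "reconstruction/descriptors/r2d2_500/descriptors.txt",
    "reconstruction/global_features/Resnet-101-AP-GeM/global_features.txt",
]

def missing_entries(root: Path) -> list:
    """Return the expected relative paths that are absent under root."""
    return [rel for rel in EXPECTED if not (root / rel).exists()]

# Build a hypothetical proxy folder, then check it.
root = Path("proxy_mapping")
for rel in EXPECTED:
    p = root / rel
    p.parent.mkdir(parents=True, exist_ok=True)
    p.touch()

print(missing_entries(root))  # -> []
```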

@charlesyuJY
Author

Thank you again. I managed to get kapture_pipeline_mapping.py running. When I try kapture_pipeline_localize.py, I get an error saying "database UNIQUE constraint failed: images.name", as shown below.
[screenshot]
Is it something to do with the way I name the images? Thank you so much for your support! Best!

@yocabon
Contributor

yocabon commented May 22, 2021

kapture_pipeline_localize.py is used to get positions for images that were not in the mapping data. Mapping and query data cannot have images with the same name.

You didn't have any poses at the start, so I don't think you want to run it. After kapture_pipeline_mapping.py, you should have the reconstruction in the colmap format (sparse 3D + estimated poses for the images): colmap-sfm/r2d2_500/AP-GeM-LM18_top5/colmap.db and colmap-sfm/r2d2_500/AP-GeM-LM18_top5/reconstruction. You can open them in colmap to take a look.

You can also import it back to kapture if you want with kapture_import_colmap.py:

```
kapture_import_colmap.py -v debug \
    --database colmap-sfm/r2d2_500/AP-GeM-LM18_top5/colmap.db \
    --reconstruction colmap-sfm/r2d2_500/AP-GeM-LM18_top5/reconstruction \
    --images mapping/sensors/records_data \
    -o colmap-sfm/r2d2_500/AP-GeM-LM18_top5/imported \
    -kpt r2d2_500 -desc r2d2_500
```

@charlesyuJY
Author

Thank you! If I want to run kapture_pipeline_localize.py, is there any existing method in kapture I can use to estimate the poses? Also, I tried to use the resulting sparse 3D model to generate a dense model in colmap, but the result is not as good as I expected (the features captured with R2D2 and deep image retrieval should be better than the SIFT features). Do you have any idea why this is the case? Thank you so much for your time! Best!

@yocabon
Contributor

yocabon commented May 25, 2021

First, about extraction parameters. For r2d2, this is what we usually use (model r2d2_WASF_N8_big.pt):

```
scale-f=2**0.25
min-scale=0.3
max-scale=1
min-size=128
max-size=9999
max-keypoints=40000
reliability-thr=0.7
repeatability-thr=0.7
```

For AP-GeM (deep image retrieval) we use the Resnet101-AP-GeM-LM18.pt model with the default parameters.
Usually top 20 is good enough; for some datasets top 50 is better.

What is probably holding you back, though, is the camera intrinsics. With kapture_import_image_folder.py, one camera is added for each image, with the type UNKNOWN_CAMERA and garbage default parameters.

First, you would want to reduce the number of cameras: images from the same camera should reference the same sensor in records_camera.txt. You might have to write a script for your particular case, or do it manually, with rectangular selection for example. In the end, if there is no zoom or similar, you could end up with one sensor per video in sensors.txt.
Second, kapture_pipeline_mapping.py was hardcoded to use what we call config 1, a set of parameters that tells colmap not to refine camera intrinsics; that is not what you want here. I added the --config option to kapture_pipeline_mapping.py, so you should pull and use --config 0 instead.

Alternatively, you could import the colmap reconstruction that you obtained with SIFT and colmap, and use these camera parameters for the r2d2/apgem pipeline.

@charlesyuJY
Author

Thank you so much for your help! It was really helpful!
