
How do I train my own dataset? #23

Open
iFimo opened this issue Feb 14, 2024 · 12 comments

Comments


iFimo commented Feb 14, 2024


First of all, some feedback.

After days of trial and error, @dm-de provided the final step to compile it with a running viewer. That saved me.
Originally posted by @dm-de in #15 (comment)


And it looks really amazing. Great work, @lfranke + team!


With the “new” instructions in the readme, everything works fine.

Anyway, I have two comments:

1:
In the Windows conda prompt shell, I can't use ./build/bin/RelWithDebInfo/viewer.exe --scene_dir scenes/tt_train.
I have to use the full path instead, e.g., C:\Users\USERNAME\TRIPS\build\bin\RelWithDebInfo\viewer.exe --scene_dir scenes/tt_train.

The same goes for the training command.

This could be added to the readme file, as well.

2:
It's a bit confusing that a scene is named "train" while the training command is also called "train". Maybe you could rename the "train" scene to something like "locomotive" (in the readme as well), so it is clear when "train" means the command and when it means the scene.


Now the training.

I've created really cool stuff with 3D Gaussian Splatting, but I'm still failing with TRIPS at the moment.

I've tried a lot and tried to write everything here, but I think reading all of this will be more confusing and time-consuming than helpful.

It starts with the "scenes" folder and the "experiments" folder. I thought "scenes" was for raw training data and "experiments" for training results. However, this doesn't seem right, since the viewer only needs the "scenes" folder to work.
So how do these two folders relate to each other, and what is each one for?

And much more importantly:

Can someone explain, step by step and with commands for the Windows conda prompt shell, what I have to do to train with my own training data, i.e. my photos?
Please start with COLMAP, because I can't get a "dense_point_cloud.ply" file from it, which I seem to need. I get several .ply files, but no "dense_point_cloud.ply". I'm using COLMAP-3.9.1-windows-cuda.


If you are still interested in how I tried to start the training but failed, click here.


First of all, I tried to train the playground scene, and that seems to work fine.
But I can't manage to prepare my own images correctly.

I have COLMAP ("COLMAP-3.9.1-windows-cuda"), where I use the "Automatic reconstruction". I tried a scene with 22 pictures just to understand the workflow; my scene is called "audih3" in this case.
After automatic reconstruction in Colmap, I save the project to my ColmapScenes folder.

Here’s a screenshot of what I get.
PS: I work on Windows 10 (screenshot just made with Mac to see the file structure).

Screenshot_1: Colmap output.
Screen 1


I put the ColmapScenes folder in my TRIPS root (makes things easier for me).


Then I used the following commands in my TRIPS root folder:

C:\Users\USERNAME\TRIPS\build\bin\RelWithDebInfo\colmap2adop.exe --sparse_dir ColmapScenes\cm_audih3\sparse\0\ --image_dir ColmapScenes\cm_audih3\images\ --point_cloud_file ColmapScenes\cm_audih3\dense_point_cloud.ply --output_path scenes\tt_audih3 --scale_intrinsics 1 --render_scale 1

What I am missing is the "dense_point_cloud.ply" in my COLMAP output that the command is looking for. Shouldn't it be there? Is it saved under another filename, and do I need to rename one of the existing files?


When I move on and ignore that there is no "dense_point_cloud.ply", the command creates a tt_audih3 folder in my \TRIPS\scenes and writes some data there.

Screenshot_2: Created data in scenes folder
Screen 2

Here's a list of which files I would expect and which were actually created.

dataset.ini 		[required]  	Check
Camera files 		[required]  	Check
images.txt		[required]   	Check
camera_indices.txt 	[required]   	Check

point_cloud.ply 	[required]   	Fail
poses.txt 		[required]	Fail

masks.txt 		[optional]  	Fail
exposure.txt 		[optional]	Fail

(source: https://github.com/lfranke/TRIPS/blob/main/scenes/README.md)

Why is there no "point_cloud.ply" and no "poses.txt"? Will this be created, when I have the "dense_point_cloud.ply" in my ColmapScenes folder?


If I ignore the missing files again, do I then have to use:

C:\Users\USERNAME\TRIPS\build\bin\RelWithDebInfo\preprocess_pointcloud.exe --scene_path scenes/tt_audih3 --point_factor 2

Will this create a "point_cloud.bin" from my "point_cloud.ply", like in the pre-trained models?


If everything is correct, I should then be able to execute the final command:

C:\Users\USERNAME\TRIPS\build\bin\RelWithDebInfo\train.exe --config configs/train_normalnet.ini --TrainParams.scene_names tt_train --TrainParams.name audih3

Am I correct?

Where are my mistakes?


I would be very thankful if someone could explain the training in detail.


dm-de commented Feb 14, 2024

@iFimo
@iFimo
I tested COLMAP using this tutorial and was able to create a dense cloud with the GUI:
https://www.youtube.com/watch?v=mUDzWCuopBo
But I haven't trained yet for the moment.
See also my linked post below for some training hints.
Can you try COLMAP and post some feedback?


Can you also give me some hints?

  1. Can you confirm that "tt_experiments.zip" works? There are no checkpoints in the zip.
    See my post here: viewer.exe not working #22

  2. What is your GPU memory usage with viewer.exe & the playground scene?
    I have an issue with the viewer and I'm not sure if it happens because I only have 8 GB VRAM or because my exe is compiled wrong. I think this has something to do with Saiga & CUDA compute capability.
    My particular problem is that I compile on one machine and copy the release to another machine (where I have no admin rights).
    For that I need to set the CUDA version manually... It is hard to find out what is wrong.


iFimo commented Feb 14, 2024

@iFimo I tested COLMAP using this tutorial and was able to create a dense cloud with the GUI: https://www.youtube.com/watch?v=mUDzWCuopBo But I haven't trained yet for the moment. See also my linked post below for some training hints. Can you try COLMAP and post some feedback?

@dm-de thanks for the tip, but which one is the "dense_point_cloud.ply"? I did exactly the same as in the video, but there is no file named "dense_point_cloud.ply". Do I need to rename one of the COLMAP output files, or should there be a "dense_point_cloud.ply"?
Is the "meshed-poisson.ply" what I'm looking for?

You can click on the little arrow at the end of my first post. There you will find Screenshot_1 with my Colmap output. Is this correct as it is?

If it is correct, what would be the next command after I rename "meshed-poisson.ply" to "dense_point_cloud.ply"?
I would try:

C:\Users\USERNAME\TRIPS\build\bin\RelWithDebInfo\colmap2adop.exe --sparse_dir ColmapScenes\cm_audih3\sparse\0\ --image_dir ColmapScenes\cm_audih3\images\ --point_cloud_file ColmapScenes\cm_audih3\dense_point_cloud.ply --output_path scenes\tt_audih3 --scale_intrinsics 1 --render_scale 1

Correct? Or does the first command have to be different?


Can you also give me some hints?

Yes, of course, I tried it. Here - #22 (comment)


dm-de commented Feb 15, 2024

Hmmm,
"Poisson surface reconstruction" is used to create a triangulated mesh from the point data,
so meshed-poisson.ply is not a point cloud.


lh-dm commented Feb 15, 2024

fused.ply looks like the dense cloud to me.

Did you read this?
https://github.com/lfranke/TRIPS/tree/main/scenes

more about colmap cli
https://colmap.github.io/cli.html


lfranke commented Feb 15, 2024

Hi,

yes, fused.ply is the dense point cloud output from COLMAP. The rest of the COLMAP reconstruction looks fine, and the commands should work.
Thanks for all the feedback, I will include it in the READMEs!
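For readers landing here: the block below mocks up the workspace layout that COLMAP's "Automatic reconstruction" writes (directory names taken from the screenshots and paths in this thread; the files are empty stubs, not a real reconstruction), to show where fused.ply ends up:

```shell
# Mock of the COLMAP workspace layout discussed above (stub files only).
ws="$(mktemp -d)/cm_audih3"
mkdir -p "$ws/sparse/0" "$ws/dense/0" "$ws/images"
touch "$ws/dense/0/fused.ply"           # the dense point cloud TRIPS wants
touch "$ws/dense/0/meshed-poisson.ply"  # a triangle mesh, NOT a point cloud
find "$ws" -name 'fused.ply'            # prints .../cm_audih3/dense/0/fused.ply
```

Note the extra dense/0/ level: the dense cloud sits one folder deeper than the workspace root, which is why a path like ColmapScenes\cm_audih3\dense_point_cloud.ply finds nothing.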


iFimo commented Feb 15, 2024

@lh-dm Thanks, yes I saw both. It was also just updated by @lfranke.


@lfranke, great, then I have the file.
You also updated the readme directly, but I have a question about the command itself:
should --point_cloud_file SCENE_BASE/fused.ply \ be --point_cloud_file SCENE_BASE/dense/0/fused.ply \, or does the command itself search the entire folder structure for fused.ply?


I tried the following:
I left out the variable completely to avoid errors and specified the full path to "fused.ply", better safe than sorry.

C:\Users\USERNAME\TRIPS\build\bin\RelWithDebInfo\colmap2adop.exe --sparse_dir C:\Users\USERNAME\TRIPS\ColmapScenes\cm_audih2\sparse\0\ --image_dir C:\Users\USERNAME\TRIPS\scenes\tt_audih2\images\ --point_cloud_file C:\Users\USERNAME\TRIPS\ColmapScenes\cm_audih2\dense\0\fused.ply --output_path scenes\tt_audih2 --scale_intrinsics 1 --render_scale 1

But I also get the same incomplete result:

dataset.ini 		[required]  	was created in my folder
Camera files 		[required]  	was created in my folder
images.txt		[required]   	was created in my folder
camera_indices.txt 	[required]   	was created in my folder

point_cloud.ply 	[required]   	was NOT created in my folder
poses.txt 		[required]	was NOT created in my folder

Or is that correct?


At the end of the log, after it has gone through the cameras and images properly, I get the following error:

Der Befehl "cp" ist entweder falsch geschrieben oder
konnte nicht gefunden werden.
(German: The command "cp" is either misspelled or could not be found.)
Assertion 'Copy failed!' failed!
  File: C:\Users\USERNAME\TRIPS\src\apps\colmap2adop.cpp:139
  Function: class std::shared_ptr<class SceneData> __cdecl ColmapScene(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,double,double)
FULL LOG


(trips) C:\Users\USERNAME\TRIPS>C:\Users\USERNAME\TRIPS\build\bin\RelWithDebInfo\colmap2adop.exe --sparse_dir C:\Users\USERNAME\TRIPS\ColmapScenes\cm_audih4\sparse\0\ --image_dir C:\Users\USERNAME\TRIPS\scenes\tt_audih4\images\ --point_cloud_file C:\Users\USERNAME\TRIPS\ColmapScenes\cm_audih4\dense\0\fused.ply --output_path scenes\tt_audih4 --scale_intrinsics 1 --render_scale 1
register neural render info
register TnnInfo
Preprocessing Colmap scene C:\Users\USERNAME\TRIPS\ColmapScenes\cm_audih4\sparse\0\ -> scenes\tt_audih4
Num cameras 22
id: 0 camera model: 2  K: 2828.28 2828.28 1877.5 1056 0  Dis: 0.0012025 0 0 0 0 0 0 0
id: 1 camera model: 2  K: 2832.26 2832.26 1880.5 1057.5 0  Dis: 0.00213251 0 0 0 0 0 0 0
id: 2 camera model: 2  K: 2833.47 2833.47 1882.5 1058.5 0  Dis: 0.00277437 0 0 0 0 0 0 0
id: 3 camera model: 2  K: 2835.28 2835.28 1882 1058 0  Dis: -0.000290257 0 0 0 0 0 0 0
id: 4 camera model: 2  K: 2834.72 2834.72 1886.5 1061 0  Dis: 0.00190884 0 0 0 0 0 0 0
id: 5 camera model: 2  K: 2826.71 2826.71 1886.5 1061 0  Dis: -0.000541296 0 0 0 0 0 0 0
id: 6 camera model: 2  K: 2808.17 2808.17 1888 1061.5 0  Dis: 0.000702892 0 0 0 0 0 0 0
id: 7 camera model: 2  K: 2825.88 2825.88 1876 1055 0  Dis: 0.00202279 0 0 0 0 0 0 0
id: 8 camera model: 2  K: 2827.03 2827.03 1877.5 1056 0  Dis: 0.00165332 0 0 0 0 0 0 0
id: 9 camera model: 2  K: 2829.3 2829.3 1879 1056.5 0  Dis: 0.00180999 0 0 0 0 0 0 0
id: 10 camera model: 2  K: 2829.6 2829.6 1879 1056.5 0  Dis: 0.00235088 0 0 0 0 0 0 0
id: 11 camera model: 2  K: 2831.56 2831.56 1880 1057 0  Dis: 0.00213916 0 0 0 0 0 0 0
id: 12 camera model: 2  K: 2835.38 2835.38 1881.5 1058 0  Dis: 0.00175842 0 0 0 0 0 0 0
id: 13 camera model: 2  K: 2834.71 2834.71 1881 1058 0  Dis: 0.00201243 0 0 0 0 0 0 0
id: 14 camera model: 2  K: 2835.71 2835.71 1881.5 1058 0  Dis: 0.00018535 0 0 0 0 0 0 0
id: 15 camera model: 2  K: 2836.08 2836.08 1882.5 1058.5 0  Dis: 0.000484706 0 0 0 0 0 0 0
id: 16 camera model: 2  K: 2837.54 2837.54 1883.5 1059.5 0  Dis: -0.000119496 0 0 0 0 0 0 0
id: 17 camera model: 2  K: 2836.53 2836.53 1884 1059.5 0  Dis: -0.000137174 0 0 0 0 0 0 0
id: 18 camera model: 2  K: 2829.23 2829.23 1884 1059.5 0  Dis: 0.000890753 0 0 0 0 0 0 0
id: 19 camera model: 2  K: 2826.52 2826.52 1884 1059.5 0  Dis: 0.000988389 0 0 0 0 0 0 0
id: 20 camera model: 2  K: 2812.04 2812.04 1884.5 1060 0  Dis: 0.00269993 0 0 0 0 0 0 0
id: 21 camera model: 2  K: 2810.05 2810.05 1886.5 1061 0  Dis: 0.00238842 0 0 0 0 0 0 0
Num images 22
VC-Video (178).jpg 2 -> 0 0  Position: 4.56865 0.402428 3.81118
VC-Video (179).jpg 1 -> 1 1  Position: 3.24059 0.388295 2.04673
VC-Video (180).jpg 4 -> 2 2  Position: 1.36288 0.356355 1.21645
VC-Video (181).jpg 5 -> 3 3  Position: -0.627224 0.355005 0.965031
VC-Video (182).jpg 3 -> 4 4  Position: -2.60588 0.363977 1.08652
VC-Video (183).jpg 6 -> 5 5  Position: -4.50373 0.362534 1.70273
VC-Video (184).jpg 7 -> 6 6  Position: -6.20436 0.385515 2.82511
VC-Video (234).jpg 12 -> 7 7  Position: 4.71995 -0.222849 2.60902
VC-Video (235).jpg 8 -> 8 8  Position: 4.20835 -0.212521 2.00833
VC-Video (236).jpg 13 -> 9 9  Position: 3.63185 -0.201384 1.5362
VC-Video (237).jpg 9 -> 10 10  Position: 3.03337 -0.192326 1.17043
VC-Video (238).jpg 14 -> 11 11  Position: 2.37779 -0.189329 0.934978
VC-Video (239).jpg 16 -> 12 12  Position: 1.66998 -0.183902 0.750663
VC-Video (240).jpg 11 -> 13 13  Position: 0.986149 -0.170495 0.601038
VC-Video (241).jpg 10 -> 14 14  Position: 0.252979 -0.168022 0.539481
VC-Video (242).jpg 15 -> 15 15  Position: -0.615593 -0.163119 0.475913
VC-Video (243).jpg 17 -> 16 16  Position: -1.64748 -0.165234 0.546774
VC-Video (244).jpg 18 -> 17 17  Position: -2.78187 -0.159715 0.749306
VC-Video (245).jpg 19 -> 18 18  Position: -3.88287 -0.158461 1.16343
VC-Video (246).jpg 20 -> 19 19  Position: -4.99325 -0.148023 1.62255
VC-Video (247).jpg 21 -> 20 20  Position: -6.01765 -0.134135 2.23865
VC-Video (248).jpg 22 -> 21 21  Position: -6.81641 -0.118056 3.09814
Num Point3D 10521
No EXIF exposure value found for image VC-Video (178).jpg
No EXIF exposure value found for image VC-Video (179).jpg
No EXIF exposure value found for image VC-Video (180).jpg
No EXIF exposure value found for image VC-Video (181).jpg
No EXIF exposure value found for image VC-Video (182).jpg
No EXIF exposure value found for image VC-Video (183).jpg
No EXIF exposure value found for image VC-Video (184).jpg
No EXIF exposure value found for image VC-Video (234).jpg
No EXIF exposure value found for image VC-Video (235).jpg
No EXIF exposure value found for image VC-Video (236).jpg
No EXIF exposure value found for image VC-Video (237).jpg
No EXIF exposure value found for image VC-Video (238).jpg
No EXIF exposure value found for image VC-Video (239).jpg
No EXIF exposure value found for image VC-Video (240).jpg
No EXIF exposure value found for image VC-Video (241).jpg
No EXIF exposure value found for image VC-Video (242).jpg
No EXIF exposure value found for image VC-Video (243).jpg
No EXIF exposure value found for image VC-Video (244).jpg
No EXIF exposure value found for image VC-Video (245).jpg
No EXIF exposure value found for image VC-Video (246).jpg
No EXIF exposure value found for image VC-Video (247).jpg
No EXIF exposure value found for image VC-Video (248).jpg
EV Statistic:
Num         = [22]
Min,Max     = [0,0]
Mean,Median,Rms = [0,0,0]
sdev,var    = [0,0]
dynamic range: 1
  Image Size 3755x2112
  Aspect     1.77794
  K          2828.28 2828.28 1877.5 1056 0
  ocam       3755x2112 affine(1, 0, 0, 0, 0) cam2world() world2cam()
  ocam cut   1
  normalized center 0 0
  dist       0.0012025 0 0 0 0 0 0 0
  Image Size 3761x2115
  Aspect     1.77825
  K          2832.26 2832.26 1880.5 1057.5 0
  ocam       3761x2115 affine(1, 0, 0, 0, 0) cam2world() world2cam()
  ocam cut   1
  normalized center 0 0
  dist       0.00213251 0 0 0 0 0 0 0
  Image Size 3765x2117
  Aspect     1.77846
  K          2833.47 2833.47 1882.5 1058.5 0
  ocam       3765x2117 affine(1, 0, 0, 0, 0) cam2world() world2cam()
  ocam cut   1
  normalized center 0 0
  dist       0.00277437 0 0 0 0 0 0 0
  Image Size 3764x2116
  Aspect     1.77883
  K          2835.28 2835.28 1882 1058 0
  ocam       3764x2116 affine(1, 0, 0, 0, 0) cam2world() world2cam()
  ocam cut   1
  normalized center 0 0
  dist       -0.000290257 0 0 0 0 0 0 0
  Image Size 3773x2122
  Aspect     1.77804
  K          2834.72 2834.72 1886.5 1061 0
  ocam       3773x2122 affine(1, 0, 0, 0, 0) cam2world() world2cam()
  ocam cut   1
  normalized center 0 0
  dist       0.00190884 0 0 0 0 0 0 0
  Image Size 3773x2122
  Aspect     1.77804
  K          2826.71 2826.71 1886.5 1061 0
  ocam       3773x2122 affine(1, 0, 0, 0, 0) cam2world() world2cam()
  ocam cut   1
  normalized center 0 0
  dist       -0.000541296 0 0 0 0 0 0 0
  Image Size 3776x2123
  Aspect     1.77862
  K          2808.17 2808.17 1888 1061.5 0
  ocam       3776x2123 affine(1, 0, 0, 0, 0) cam2world() world2cam()
  ocam cut   1
  normalized center 0 0
  dist       0.000702892 0 0 0 0 0 0 0
  Image Size 3752x2110
  Aspect     1.7782
  K          2825.88 2825.88 1876 1055 0
  ocam       3752x2110 affine(1, 0, 0, 0, 0) cam2world() world2cam()
  ocam cut   1
  normalized center 0 0
  dist       0.00202279 0 0 0 0 0 0 0
  Image Size 3755x2112
  Aspect     1.77794
  K          2827.03 2827.03 1877.5 1056 0
  ocam       3755x2112 affine(1, 0, 0, 0, 0) cam2world() world2cam()
  ocam cut   1
  normalized center 0 0
  dist       0.00165332 0 0 0 0 0 0 0
  Image Size 3758x2113
  Aspect     1.77851
  K          2829.3 2829.3 1879 1056.5 0
  ocam       3758x2113 affine(1, 0, 0, 0, 0) cam2world() world2cam()
  ocam cut   1
  normalized center 0 0
  dist       0.00180999 0 0 0 0 0 0 0
  Image Size 3758x2113
  Aspect     1.77851
  K          2829.6 2829.6 1879 1056.5 0
  ocam       3758x2113 affine(1, 0, 0, 0, 0) cam2world() world2cam()
  ocam cut   1
  normalized center 0 0
  dist       0.00235088 0 0 0 0 0 0 0
  Image Size 3760x2114
  Aspect     1.77862
  K          2831.56 2831.56 1880 1057 0
  ocam       3760x2114 affine(1, 0, 0, 0, 0) cam2world() world2cam()
  ocam cut   1
  normalized center 0 0
  dist       0.00213916 0 0 0 0 0 0 0
  Image Size 3763x2116
  Aspect     1.77836
  K          2835.38 2835.38 1881.5 1058 0
  ocam       3763x2116 affine(1, 0, 0, 0, 0) cam2world() world2cam()
  ocam cut   1
  normalized center 0 0
  dist       0.00175842 0 0 0 0 0 0 0
  Image Size 3762x2116
  Aspect     1.77788
  K          2834.71 2834.71 1881 1058 0
  ocam       3762x2116 affine(1, 0, 0, 0, 0) cam2world() world2cam()
  ocam cut   1
  normalized center 0 0
  dist       0.00201243 0 0 0 0 0 0 0
  Image Size 3763x2116
  Aspect     1.77836
  K          2835.71 2835.71 1881.5 1058 0
  ocam       3763x2116 affine(1, 0, 0, 0, 0) cam2world() world2cam()
  ocam cut   1
  normalized center 0 0
  dist       0.00018535 0 0 0 0 0 0 0
  Image Size 3765x2117
  Aspect     1.77846
  K          2836.08 2836.08 1882.5 1058.5 0
  ocam       3765x2117 affine(1, 0, 0, 0, 0) cam2world() world2cam()
  ocam cut   1
  normalized center 0 0
  dist       0.000484706 0 0 0 0 0 0 0
  Image Size 3767x2119
  Aspect     1.77773
  K          2837.54 2837.54 1883.5 1059.5 0
  ocam       3767x2119 affine(1, 0, 0, 0, 0) cam2world() world2cam()
  ocam cut   1
  normalized center 0 0
  dist       -0.000119496 0 0 0 0 0 0 0
  Image Size 3768x2119
  Aspect     1.7782
  K          2836.53 2836.53 1884 1059.5 0
  ocam       3768x2119 affine(1, 0, 0, 0, 0) cam2world() world2cam()
  ocam cut   1
  normalized center 0 0
  dist       -0.000137174 0 0 0 0 0 0 0
  Image Size 3768x2119
  Aspect     1.7782
  K          2829.23 2829.23 1884 1059.5 0
  ocam       3768x2119 affine(1, 0, 0, 0, 0) cam2world() world2cam()
  ocam cut   1
  normalized center 0 0
  dist       0.000890753 0 0 0 0 0 0 0
  Image Size 3768x2119
  Aspect     1.7782
  K          2826.52 2826.52 1884 1059.5 0
  ocam       3768x2119 affine(1, 0, 0, 0, 0) cam2world() world2cam()
  ocam cut   1
  normalized center 0 0
  dist       0.000988389 0 0 0 0 0 0 0
  Image Size 3769x2120
  Aspect     1.77783
  K          2812.04 2812.04 1884.5 1060 0
  ocam       3769x2120 affine(1, 0, 0, 0, 0) cam2world() world2cam()
  ocam cut   1
  normalized center 0 0
  dist       0.00269993 0 0 0 0 0 0 0
  Image Size 3773x2122
  Aspect     1.77804
  K          2810.05 2810.05 1886.5 1061 0
  ocam       3773x2122 affine(1, 0, 0, 0, 0) cam2world() world2cam()
  ocam cut   1
  normalized center 0 0
  dist       0.00238842 0 0 0 0 0 0 0
Der Befehl "cp" ist entweder falsch geschrieben oder
konnte nicht gefunden werden.
Assertion 'Copy failed!' failed!
  File: C:\Users\USERNAME\TRIPS\src\apps\colmap2adop.cpp:139
  Function: class std::shared_ptr<class SceneData> __cdecl ColmapScene(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,double,double)

What am I doing wrong?
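The German lines in the log mean "The command 'cp' is either misspelled or could not be found": colmap2adop apparently shells out to the Unix cp, which cmd.exe does not provide. The failing step looks like a plain file copy; a sketch of doing it by hand is below (temp paths as stand-ins, and the point_cloud.ply target name is an inference from the missing-files list above, not confirmed from the source):

```shell
# Stand-in for the copy that colmap2adop.cpp:139 fails to perform on Windows:
# copying the dense cloud into the scene folder (target name is an assumption).
src="$(mktemp -d)"; dst="$(mktemp -d)"
printf 'ply\nend_header\n' > "$src/fused.ply"   # dummy stand-in for the cloud
cp "$src/fused.ply" "$dst/point_cloud.ply"      # the Unix cp that cmd.exe lacks
ls "$dst"                                       # prints point_cloud.ply
```

Running the tool from a shell that provides cp (such as Git Bash) sidesteps the problem, as discussed further down.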


jonstephens85 commented Feb 20, 2024

@iFimo I ran into the same issue as you last night. The issue is that colmap2adop is written to work in Linux. Fortunately, there is a quite easy workaround. Here is what I did on my Windows 11 machine:

Requirement: Git Bash
You will need Git Bash, which comes with Git. I assume you already have it if you used Git to pull the project code.

I also used the colmap2adop.sh script, which you will need to edit with a text editor ahead of time (I used Notepad++).

Edit colmap2adop.sh

  1. Open the file in a text editor
  2. On line 9 change ./build/bin/colmap2adop to C:/Users/USERNAME/TRIPS/build/bin/RelWithDebInfo/colmap2adop.exe
  3. Check the other paths in the script to ensure they make sense for your colmap data - it looks like you might need to update the point cloud file path to add a 0 folder after the dense folder.

Run the conversion script

  1. Launch Git Bash
  2. cd to the TRIPS folder - for you, it's probably cd TRIPS, as Git Bash launches into your home directory
  3. Run chmod +x colmap2adop.sh; this makes the script executable
  4. Run ./colmap2adop.sh [input_directory] [output_directory] (based on your setup that would be
    ./colmap2adop.sh ColmapScenes/cm_audih2 scenes/tt_audih2)
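The run steps above boil down to a chmod plus an invocation. The mechanics can be sketched with a dummy script (colmap2adop.sh itself needs the TRIPS tree and COLMAP data, so a stand-in script is used here):

```shell
# Demonstrates the chmod + ./script invocation pattern on a stand-in script.
cd "$(mktemp -d)"
printf '#!/bin/sh\necho "converting $1 -> $2"\n' > colmap2adop.sh
chmod +x colmap2adop.sh                          # make the script executable
./colmap2adop.sh ColmapScenes/cm_audih2 scenes/tt_audih2
# prints: converting ColmapScenes/cm_audih2 -> scenes/tt_audih2
```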

Everything should work from there! Just a tip, pasting into Git Bash is Shift+Ins not Ctrl+V

Last note, if you are going to train the data, ensure you edit the config file to find your custom data.
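On that last note, here is a hypothetical sketch of what pointing the config at custom data might look like. The [TrainParams] section name and keys are inferred from the train.exe flags used earlier in this thread (--TrainParams.scene_names, --TrainParams.name), not copied from the actual file:

```ini
; configs/train_normalnet.ini (hypothetical fragment; key names inferred
; from the flags --TrainParams.scene_names / --TrainParams.name)
[TrainParams]
scene_names = tt_audih2
name        = audih2_run1
```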


iFimo commented Feb 20, 2024

@jonstephens85

Yeah!!!! thank you very much! It works with that.
I was just playing around with this exact file, but I would never have thought of the Git Bash commands (I just used Git Bash for the first time).

Check the other paths in the script to ensure they make sense for your colmap data - it looks like you might need to update the point cloud file path to add a 0 folder after the dense folder.

Right, there had to be a 0.

Very cool. Then I'll probably train a complete set tomorrow and run COLMAP on my large image set tonight.

@jonstephens85

I am training my first model now. I should have run one of the test datasets first. It is taking quite a while on 315 images and a 9.7-million-point dense cloud. I left the parameters at default; not sure if I was supposed to modify a parameter for an outdoor scene.


iFimo commented Feb 21, 2024

So here's short feedback. Everything now works on Windows from training to the viewer. Many thanks again to @jonstephens85 and all the other helpers.

One last question/request for @lfranke . For me too, a PLY export is crucial in order to continue using it in other software. Maybe you've already answered the question somewhere else and I missed it. Is there a rough timetable for when something like this might happen?


lfranke commented Feb 21, 2024

One last question/request for @lfranke . For me too, a PLY export is crucial in order to continue using it in other software. Maybe you've already answered the question somewhere else and I missed it. Is there a rough timetable for when something like this might happen?

This is currently a bit difficult, as our point colors/descriptors have no real RGB meaning without the neural network convolutions that follow. It would be possible to export a .ply with point descriptors and point sizes, but the network.pth would still be required to get the weights for the neural network.

@Crush1111
Copy link

(@Crush1111 quoted @jonstephens85's Git Bash workaround from above, machine-translated to Chinese.)

Hello, I used my own COLMAP dataset and converted the data format through ./colmap2adop.sh [input_directory] [output_directory]. Then ./build/bin/train --config configs/train_normalnet.ini trained my data, but when I look at the results there is only one graph, which is different from the visualization of the official dataset. Why?
(two screenshots attached)
