
Do you have any universal solution without inputting parameters in "sensor_width_database.txt"? #1977

Closed
zinuoli opened this issue Dec 3, 2021 · 20 comments

@zinuoli commented Dec 3, 2021

Hello, sorry to bother you again. I have gradually reconstructed a 3D model with my own dataset.
I notice that the sensor parameters must first be set in the file sensor_width_database.txt before running openMVG to compute features, but that can be difficult for users who do not know how to find their camera's sensor width.
So I am wondering whether there is a universal solution for those who don't know the sensor width.
Do you have any hints on this?
Thanks for your reply.

Sincerely.

@pmoulon (Member) commented Dec 4, 2021

Hi, there is no perfect universal solution. For photogrammetry, it matters to know your camera system: the better you know it, the more compelling the results will be.
We deliberately expose this to our community. However, if you really have no idea about the camera you are dealing with, you can still set an approximate focal length by using the -f X option, where X is the focal value (in pixels) you want to provide.

See here to learn how to compute an approximate value for your images: #669 (comment)

We will most likely introduce a focal_ratio parameter in the command line soon, for cameras that are not found in the camera DB, while still displaying a warning that it is better to know the camera sensor size.

Hoping this answers your question.
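The two ways to arrive at the value passed to -f can be sketched as follows. This is a rough sketch of the rule of thumb referenced above, not openMVG's actual code: when the sensor width is unknown, a focal of roughly 1.2 × max(image width, image height) pixels is a common starting guess (the 1.2 multiplier matches the default mentioned later in this thread); when the sensor width is known, the usual conversion from millimeters to pixels applies. The function names and example numbers are hypothetical.

```python
def approximate_focal_px(width_px: int, height_px: int, multiplier: float = 1.2) -> float:
    """Rough focal guess in pixels when the camera sensor is unknown."""
    return multiplier * max(width_px, height_px)

def focal_px_from_sensor(focal_mm: float, sensor_width_mm: float,
                         width_px: int, height_px: int) -> float:
    """Convert a known focal (mm) and sensor width (mm) into pixels."""
    return max(width_px, height_px) * focal_mm / sensor_width_mm

# Example: a 4000x3000 photo with no DB entry
focal = approximate_focal_px(4000, 3000)
print(focal)  # 4800.0 -> could be passed as `-f 4800`
```

With a known sensor (say a 4.15 mm focal on a 4.8 mm-wide sensor, numbers chosen purely for illustration), `focal_px_from_sensor` gives a value around 3458 pixels instead of the 4800-pixel guess, which is why the maintainers still recommend knowing your camera.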

@zinuoli (Author) commented Dec 4, 2021

Thanks for your reply! This resolves my confusion.
I will be looking forward to the introduction of the focal_ratio parameter; thanks for your contribution.
May I ask the approximate release date?
Anyway, thanks a lot.

@pmoulon (Member) commented Dec 4, 2021

We can potentially try it in the coming week, so definitely before Christmas ;-)

@zinuoli (Author) commented Dec 4, 2021

Wonderful! That will be exciting for us!
Thanks for your great contribution!

@pmoulon (Member) commented Dec 4, 2021

Please monitor the created task in the coming weeks ;-)
Your testing of the branch will most likely be beneficial!

@zinuoli (Author) commented Dec 4, 2021

Hello, sorry for another question. We are trying to compute features on a dataset of a leg. We notice that the result shows only a few features; it seems that SfM does not work well on human skin.
Do you have any solution for this?
Thanks a lot.

@pmoulon (Member) commented Dec 4, 2021

Skin can be challenging since it can appear featureless.
You can either try to extract more features per image by using -p HIGH on ComputeFeatures, or try to give the scene more features, by either using a projector (and asking your patient to stay still) or temporarily adding markings on the skin (a grid or dots).
@cogitas3d can explain his protocol and tests for skin reconstruction (faces, ...) better.

@zinuoli (Author) commented Dec 4, 2021

I will study it carefully and try to draw some markings on the legs.
As always, thanks.

@pmoulon (Member) commented Dec 4, 2021

Another, less intrusive solution could be to use tight socks/leggings with patterns.

@zinuoli (Author) commented Dec 4, 2021

Great! I will try it as soon as possible.
It seems that SfM performs better on objects with rough, textured materials.

@zinuoli (Author) commented Dec 5, 2021

Hello, sorry for coming back again; may I ask a question about OpenMVS?
The author of OpenMVS doesn't seem to be available.
I found that OpenMVS doesn't seem to run normally after a first calculation.
The first run was fine and I got a compelling result .ply, but the second time, when I tried to compute a different scene.mvs with another dataset, the call .\DensifyPointCloud.exe scene.mvs ran so quickly that the next call to .\ReconstructMesh failed.
How can I solve this problem?
Any hints on this?
Thanks a lot.

@zinuoli (Author) commented Dec 5, 2021

Does the OpenMVS pipeline include something like the -f 1 option in openMVG?
I want to ensure that every time I run OpenMVS the point cloud is recomputed; maybe that will solve my problem.
I am looking forward to your reply; it's a little bit urgent.
Sincerely.

@pmoulon (Member) commented Dec 5, 2021

To force depth-map recomputation, either remove the *.dmap files in the working folder or use a temp folder created for each launch.
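The two options above can be sketched in a small helper script. This is a minimal sketch, not part of OpenMVS itself; the function names are hypothetical and the only assumption is the fact stated in the thread, that DensifyPointCloud caches its depth maps as *.dmap files in the working folder.

```python
import glob
import os
import tempfile

def clear_depth_maps(working_folder: str) -> int:
    """Option 1: delete cached *.dmap files so depth maps are recomputed.

    Returns the number of files removed.
    """
    removed = 0
    for path in glob.glob(os.path.join(working_folder, "*.dmap")):
        os.remove(path)
        removed += 1
    return removed

def fresh_working_folder(prefix: str = "mvs_run_") -> str:
    """Option 2: create a new, empty folder for each launch.

    Point OpenMVS at this folder so no stale cache can be found.
    """
    return tempfile.mkdtemp(prefix=prefix)
```

Either approach guarantees a clean run; the per-launch folder is the safer of the two, since it also avoids mixing artifacts from different scenes.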

@zinuoli (Author) commented Dec 5, 2021

Thanks for your quick reply! I have created a temporary folder and used the -o flag to ensure the result is output to that folder.
However, the result was still different from the one obtained with a completely fresh OpenMVS build: the latter produced a nice result, but for the former some depth-map information was always missing.
Should I always delete the *.dmap files as well, in addition to outputting results to a temp folder?

@zinuoli (Author) commented Dec 6, 2021

I found that even if I set an output folder for the results, *.dmap files are still created in the directory containing .\DensifyPointCloud.exe of OpenMVS.

@pmoulon (Member) commented Dec 6, 2021

There is an option called working_folder

@zinuoli (Author) commented Dec 6, 2021

Please forgive my negligence; I will check that as soon as possible.

@zinuoli zinuoli closed this as completed Dec 9, 2021
@pmoulon (Member) commented Dec 24, 2021

@lzn1273180880 Please see #1981.
We created a branch with a --focal_multiplier option and a default value of 1.2.
So if your camera is not in the sensor camera DB, it will now default to an approximate focal-length value.
We still recommend knowing your camera in order to maximize result quality.

@zinuoli (Author) commented Dec 24, 2021

Thanks a million, I'll check it out as soon as possible.

@zinuoli (Author) commented Dec 24, 2021

By the way, MERRY CHRISTMAS in advance! Thanks for all your great contributions!
