
[question] How to use a different set of images for texturing? #1559

Closed
aeoleader opened this issue Oct 25, 2021 · 7 comments
Labels: stale, type:question

Comments

@aeoleader

I am trying to use downscaled image sets for SfM and mesh reconstruction, and the original images for texturing. Since the FeatureExtraction node doesn't offer image downscale options, I need to preprocess the input images first and then perform feature extraction and the following steps. However, to ensure the final quality of the output mesh, I want to compute the texture from the original image set. Is there a way to do so?

I checked the previous issues and the closest matching problem is #763, but that uses a previous version of Meshroom; in the new version we need to provide Dense SfMData to the Texturing node.
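
For the preprocessing step, something like this rough sketch is what I have in mind (folder names are placeholders, and it assumes JPEG inputs with Pillow available; EXIF is copied across because Meshroom reads focal length information from it):

```python
# Rough preprocessing sketch (assumptions: JPEG inputs, Pillow installed,
# folder names are placeholders). EXIF metadata is copied to the downscaled
# copies because Meshroom reads focal length / sensor data from it.
from pathlib import Path
from PIL import Image

SRC = Path("images_full")   # originals, kept for texturing
DST = Path("images_half")   # downscaled copies, used for SfM/meshing
SCALE = 0.5

DST.mkdir(exist_ok=True)
for src in sorted(SRC.glob("*.jpg")):
    img = Image.open(src)
    w, h = img.size
    small = img.resize((int(w * SCALE), int(h * SCALE)), Image.LANCZOS)
    exif = img.info.get("exif")  # raw EXIF bytes, if present
    if exif:
        small.save(DST / src.name, quality=95, exif=exif)
    else:
        small.save(DST / src.name, quality=95)
```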

@natowi (Member) commented Oct 25, 2021

> I am trying to use downscaled image sets

You can, but why?

Use the method described here: https://github.com/alicevision/meshroom/wiki/Projected-Light-Patterns
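
The general idea of that approach is to reconstruct on one image set and then point Texturing at the other. As a rough sketch (not necessarily the exact wiki procedure; the file locations and the assumption of identical filenames are only illustrative), the per-view path entries in an SfMData JSON such as cameras.sfm can be rewritten to reference the full-resolution originals:

```python
# Rough sketch: point the views of an SfMData file (e.g. cameras.sfm from the
# StructureFromMotion node, which is JSON) at a folder of full-resolution
# originals with identical filenames. Paths are placeholders; keep a backup.
import json
from pathlib import Path

SFM_FILE = Path("cameras.sfm")            # SfMData to retarget (placeholder)
FULLRES_DIR = Path("/data/images_full")   # folder with the originals (placeholder)

sfm = json.loads(SFM_FILE.read_text())
for view in sfm["views"]:
    old = Path(view["path"])
    view["path"] = str(FULLRES_DIR / old.name)  # same filename, new folder
# Caveat: each view also stores the image width/height; if the new images have
# a different resolution, those stored values will no longer match the files.

SFM_FILE.with_name("cameras_fullres.sfm").write_text(json.dumps(sfm, indent=4))
```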

@aeoleader (Author)

> > I am trying to use downscaled image sets
>
> You can, but why?
>
> Use the method described here: https://github.com/alicevision/meshroom/wiki/Projected-Light-Patterns

Because my images are too large for the PopSIFT method. It always fails with a "not enough CUDA memory" error, but I still want the best texture quality. I followed the method you mentioned but got a dimension mismatch error.
Below is my workflow and the error message.
[screenshot: workflow]
[screenshot: error message]

@natowi (Member) commented Oct 31, 2021

PopSIFT has a limit on the image size: alicevision/popsift#77
I don't know how to solve your dimension mismatch problem for now. Maybe dsp-sift works with the full images?
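
As far as I know, dsp-sift is a CPU describer, so the PopSIFT image-size limit should not apply. In Meshroom it can be selected via Describer Types on the FeatureExtraction node; a rough equivalent with the standalone CLI (flag names are an assumption and may differ between aliceVision versions, check --help) would look like:

```python
# Rough sketch: CPU dsp-sift extraction via the standalone aliceVision binary.
# Assumption: flag names match your aliceVision version (verify with --help).
import subprocess

subprocess.run(
    [
        "aliceVision_featureExtraction",
        "--input", "cameraInit.sfm",    # SfMData from the CameraInit node (placeholder)
        "--output", "features/",        # output folder for descriptors
        "--describerTypes", "dspsift",  # CPU describer, avoids the PopSIFT limit
    ],
    check=True,
)
```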

@aeoleader (Author)

> PopSIFT has a limit on the image size: alicevision/popsift#77. I don't know how to solve your dimension mismatch problem for now. Maybe dsp-sift works with the full images?

Thanks for your reply!

Would switching to a GPU with more VRAM help?

@aeoleader (Author)

BTW, maybe a feature request could be made to add a downscale factor to the FeatureExtraction node, because it is sometimes inefficient to use the full-size images for feature tracking, while the full-resolution images are usually still required at the later reconstruction stages in practice.

@stale (bot) commented Apr 16, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale (bot) commented Apr 28, 2022

This issue is closed due to inactivity. Feel free to re-open if new information is available.
