Texturing with an alternative set of photos #763

Closed

tpieco opened this issue Jan 19, 2020 · 27 comments
tpieco commented Jan 19, 2020

Is there a way to generate a mesh from one set of photos but then texture with another set? I've tried a number of things, like unplugging the imagesFolder connector from the Texturing node and pointing it to the directory of the new photos, and swapping the generated images in the PrepareDenseScene folder with the alternative photos, but both approaches cause the process to fail.

Is this possible at all now, or will it be in the future?

Thanks

natowi (Member) commented Jan 19, 2020

Use a structured-light dataset for the reconstruction, for example, then set the image dataset without structured light in the PrepareDenseScene node (Images Folders (+)). Use the same image names. Sample datasets

(correction: duplicate the PrepareDenseScene node and connect it to Texturing input)

[image: im1]
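The "same image names" requirement is easy to get wrong, so it can help to verify both folders before computing. A minimal sketch, assuming two hypothetical folder paths (not from this thread):

```python
from pathlib import Path

# Hypothetical folder names -- substitute the locations of your own sets.
with_pattern = {p.name for p in Path("photos/structured_light").iterdir() if p.is_file()}
no_pattern = {p.name for p in Path("photos/no_pattern").iterdir() if p.is_file()}

# The substitute images are matched to the originals by file name,
# so both sets must contain exactly the same names.
missing = with_pattern - no_pattern
extra = no_pattern - with_pattern
if missing or extra:
    print("mismatch -- missing:", sorted(missing), "unexpected:", sorted(extra))
else:
    print(f"OK: {len(with_pattern)} matching image names")
```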

tpieco (Author) commented Jan 20, 2020

Thanks for the reply.
The process completed, but with mesh errors, seen in the image below. I might be completely wrong in thinking it would work, but I was trying to make more accurate roughness maps by taking polarized and unpolarized photos of an object, then using the texture generated from the unpolarized photos for the roughness map.

[image: additionalphotos]

natowi (Member) commented Jan 20, 2020

So the "without" sample is computed from polarized images?

tpieco (Author) commented Jan 20, 2020

Yes. The object on the right (without) is the result using polarized images. The object on the left uses the polarized images but with the unpolarized images added as Image Folders in the PrepareDenseScene node. Hope that makes sense.

natowi (Member) commented Jan 20, 2020

I think in your case it is best to use the polarized images only. The results are good and there is no reason to add the normal images for texturing, as they may contain reflections we want to avoid.

tpieco (Author) commented Jan 20, 2020

I can't fault the results I'm getting with polarized images; they're brilliant. It's just that creating roughness maps is a frustration for me, and I'm trying any tricks I can think of to make them more simply and accurately.

Thanks.

natowi (Member) commented Jan 20, 2020

Projecting a pattern onto the surface and using it for the reconstruction could improve the quality of the surface reconstruction, but this can be difficult to set up.

You can also try adding powder or paint to your model to highlight more features (capture an image set for later texturing beforehand).

If you only need part of the model in high resolution for later use in a 3D modelling tool, you can generate a texture using Reflectance Transformation Imaging (RTI) and export the normal map.

hargrovecompany commented
I need the capability to swap from projected-pattern lighting images for the mesh to normal lighting for the texture. I can't find the image that shows the entire workflow. I found one in your wiki, but it's cut off on the left and doesn't show the start of that part of the workflow. I believe another issue here once had an image of the full workflow... can anyone help? Thanks!

natowi (Member) commented Jan 20, 2020

hargrovecompany commented
Natowi, I finally found the post I was looking for: https://groups.google.com/forum/#!topic/alicevision/Y1mde4F1KmU

I'm a bit confused... do I need to actually modify the workflow as in that link, or simply add the file location as in the link you provided above?

Also, I have some really good sample image sets now. My rig is giving me great results. If anyone here might need a set of images for testing, let me know.

I haven't tuned in here in quite some time... I noticed something today about software to support a rig. I have a pretty good system, written in Python. It supports Pi cameras and a bunch of DSLR cameras, and it can mix them. It has color-correction (post-capture processing) capabilities. Pretty nifty stuff. If you guys need any of it, let me know.

natowi (Member) commented Jan 21, 2020

@hargrovecompany Oh yes, thank you, I forgot; sorry for the mix-up. @tpieco, you could try this:
[image: node graph]
I corrected it in the wiki.
(without the new node, the images for texturing will also be used for depth map generation)
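For anyone who would rather script this change than redo it in the UI: Meshroom project (.mg) files are plain JSON, so the duplicated node can also be added with a short script. This is only a sketch; the node keys (PrepareDenseScene_1, Texturing_1), attribute names (imagesFolders, imagesFolder, output), and the "{Node.attr}" connection syntax are assumptions based on this thread and may differ between Meshroom versions, so inspect your own project file first.

```python
import copy
import json

# Sketch only: the node and attribute names below are assumptions and
# may differ between Meshroom versions -- check your own .mg file.
with open("project.mg") as f:
    project = json.load(f)
graph = project["graph"]

# Duplicate the existing PrepareDenseScene node and point its extra
# image folders at the no-pattern photo set (hypothetical path; the
# image names must match the originals).
dup = copy.deepcopy(graph["PrepareDenseScene_1"])
dup["inputs"]["imagesFolders"] = ["photos/no_pattern"]
graph["PrepareDenseScene_2"] = dup

# Rewire Texturing to read its images from the new node, so the
# no-pattern photos are used for texturing only while the original
# set still drives the depth maps.
graph["Texturing_1"]["inputs"]["imagesFolder"] = "{PrepareDenseScene_2.output}"

with open("project_retextured.mg", "w") as f:
    json.dump(project, f, indent=4)
```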

Also, I have some really good sample image sets now. My rig is giving me great results. If anyone here might need a set of images for testing, let me know.

Having a camera-rig sample dataset for Meshroom under a Creative Commons license would be nice, so it could be used for testing and for tutorials, similar to the Monstree dataset.

I noticed something today about software to support a rig

Yes, there is some basic rig support. https://github.com/alicevision/meshroom/wiki/Multi-Camera-Rig

I have a pretty good system, written in python. It supports pi cameras and a bunch of DSLR cameras, and it has the ability to mix them. It has color correction (post capture processing) capabilities. Pretty nifty stuff. If you guys need any of it let me know.

Yes, we talked about this in #480. Your contribution is welcome.

tpieco (Author) commented Jan 21, 2020

@natowi Fantastic! That does exactly what I was looking to do.
@hargrovecompany Thanks for the link. I wasn't aware of that forum before.

hargrovecompany commented Jan 22, 2020

I just ran a projected/normal sequence, adding only the NORMAL images location under the PrepareDenseScene node. (I edited this; I accidentally entered bad info earlier.) Simply adding the normal image folder location under the PrepareDenseScene folder location does not result in texture from the normal-lighting images.
I will run it again and try to map the node connectors like the picture showing the mapping above. I did notice that the node connector options are different in the latest version of the software...

hargrovecompany commented Jan 22, 2020

Here is a link to one of my sample sets: projected and normal lighting, full-size human scan.

https://www.dropbox.com/sh/3zmb8hqtd84at2n/AACU-YMBZmBT2lXusyXQ_FuIa?dl=0

Feel free to use it however you would like.

hargrovecompany commented
I tried adding the second PrepareDenseScene node and added the folder location for the normal images. I'm using version 2019.2.0. Here are a few shots of the workflow. My patterned-lighting images still seem to be used as the texture...
[images: nodes, folder]

tpieco (Author) commented Jan 22, 2020

@hargrovecompany The photos in the Normal folder are also projected photos.

hargrovecompany commented
Wow... well, that's embarrassing... sorry about that.

tpieco (Author) commented Jan 23, 2020

lol. No probs.

canonex commented Jan 24, 2020

Why does PrepareDenseScene have two outputs in some screenshots and only one in others? The same goes for Texturing. I can't exactly reproduce the node setup in Projected Light Patterns...

I'm also having trouble because of this:
#614 (comment)
As I commented there, the node is causing an error.

Here is my setup:
[image: PrepareDense]

Thank you,
Riccardo

natowi (Member) commented Jan 24, 2020

I have updated the wiki.

canonex commented Jan 25, 2020

I made the stupid mistake of not renaming the files correctly...
The bold text in the wiki helped me.

Thank you,
Riccardo
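Renaming by hand is error-prone; if both sets were captured in the same order, the alternative shots can be copied under the original names with a few lines. A sketch with hypothetical paths; pairing by sorted order only works when the capture sequence was identical:

```python
import shutil
from pathlib import Path

# Hypothetical folders: the originals used for reconstruction, and the
# alternative (e.g. unpatterned) shots that need matching names.
originals = sorted(p for p in Path("photos/structured_light").iterdir() if p.is_file())
alternates = sorted(p for p in Path("photos/no_pattern_raw").iterdir() if p.is_file())
out = Path("photos/no_pattern")
out.mkdir(exist_ok=True)

assert len(originals) == len(alternates), "both sets need the same number of images"

# Pair by sorted order (assumes an identical capture sequence) and copy
# each alternative image under the corresponding original name.
for original, alternate in zip(originals, alternates):
    shutil.copy(alternate, out / original.name)
```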

hargrovecompany commented Feb 9, 2020

Here's another set of images of a test subject (a real American rodeo cowboy).
I checked to make certain that it's correct: a set with pattern projection and a set with normal lighting, 0.35 seconds apart.
https://www.dropbox.com/sh/rbegeqgihpp6xwj/AAAWZFLvBCG5PlPIk059vVJpa?dl=0
These are my images, so anyone here has my permission to use them.

natowi (Member) commented Feb 9, 2020

@tpieco can you share your results with your polarized and unpolarized photos like you did before?
I would like to add the comparison to the wiki/documentation.

tpieco (Author) commented Feb 9, 2020

@natowi here are some screenshots, but I'm not sure they're going to be useful.

The first screenshot is the result of using cross polarized photos.

The second screenshot is the result using unpolarized photos, which surprisingly gives a better mesh than polarized photos.

The third screenshot is the result of using polarized photos, with unpolarized photos used for texturing without adding a new PrepareDenseScene node.

The fourth screenshot is the result of using polarized photos, with unpolarized photos used for texturing with a new PrepareDenseScene node added.

[images: polarized, unpolarized, combined, combined2]

natowi (Member) commented Feb 10, 2020

@tpieco Thank you for this nice example.
I see you have a red icon on your images, which points to missing metadata or sensor information. Meshroom does a good estimation job, but adding this information can improve the overall accuracy.

tpieco (Author) commented Feb 10, 2020

@natowi No problem. I shoot RAW photos and export them to PNGs using Capture One, which doesn't include the EXIF data. However, I work out the correct FoV using an online calculator. Thanks for the heads-up.
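For anyone wanting to skip the online calculator: the horizontal field of view of a pinhole camera follows directly from the focal length and sensor width. A small sketch (the numbers are illustrative, not tpieco's):

```python
import math

def horizontal_fov(focal_mm: float, sensor_width_mm: float) -> float:
    """Horizontal field of view, in degrees, of an ideal pinhole camera."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

# Illustrative values: a 50 mm lens on a full-frame (36 mm wide) sensor.
print(f"{horizontal_fov(50, 36):.1f} degrees")  # ~39.6 degrees
```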

natowi (Member) commented Feb 24, 2020

Solved and added to the wiki.
https://github.com/alicevision/meshroom/wiki/Projected-Light-Patterns
