Texturing with an alternative set of photos #763
Use a structured light dataset, for example, then set the image dataset without structured light in the PrepareDenseScene node (Images Folders (+)). Use the same image names. Sample datasets (correction: duplicate the PrepareDenseScene node and connect it to the Texturing input).
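The "use the same image names" requirement above can be sketched as a small helper script: it stages each alternative photo under the same filename as its meshing counterpart, so the folder can be pointed at from PrepareDenseScene's Images Folders. The folder names and `.jpg` pattern here are hypothetical placeholders, not part of the thread.

```python
import shutil
from pathlib import Path

def stage_texture_images(original_dir, alternative_dir, output_dir, pattern="*.jpg"):
    """Copy alternative photos into output_dir, keeping the original filenames.

    Meshroom matches texturing images to reconstructed views by filename, so
    only alternative photos whose names also exist in original_dir are staged.
    Returns the list of staged filenames.
    """
    original_dir = Path(original_dir)
    alternative_dir = Path(alternative_dir)
    output_dir = Path(output_dir)
    output_dir.mkdir(parents=True, exist_ok=True)

    staged = []
    for src in sorted(alternative_dir.glob(pattern)):
        if (original_dir / src.name).exists():
            shutil.copy(src, output_dir / src.name)
            staged.append(src.name)
        else:
            # An alternative photo with no meshing counterpart cannot be
            # matched to any reconstructed view, so it is skipped.
            print(f"warning: no meshing image named {src.name}")
    return staged

# Usage (hypothetical paths): point the duplicated PrepareDenseScene node's
# "Images Folders" at texturing_images/ after running, e.g.:
# stage_texture_images("photos_structured_light", "photos_plain", "texturing_images")
```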
Thanks for the reply.
So the "without" sample is computed from polarized images?
Yes. The object on the right (without) is the result using polarized images. The object on the left uses the polarized images but with the unpolarized images added as Image Folders in the PrepareDenseScene node. Hope that makes sense.
I think in your case it is best to use the polarized images only. The results are good and there is no reason to add the normal images for texturing, as they may contain reflections we want to avoid.
I can't fault the results I'm getting with polarized images, they're brilliant. It's just that creating roughness maps is a frustration to me and I'm trying any tricks I can think of to make them more simply and accurately. Thanks.
Projecting a pattern onto the surface and using it for the reconstruction could improve the quality of the surface reconstruction, but this can be difficult to set up. You can also try adding powder or paint to your model to highlight more features (capture an image set beforehand for later texturing). If you only need part of the model in high resolution for later use in a 3D modelling tool, you can generate a texture using Reflectance Transformation Imaging (RTI) and export the normal map.
I need to use the capability to swap from projected-pattern lighting images for the mesh to normal lighting for the texture. I can't find the image that shows the entire workflow. I found one in your wiki, but it's cut off on the left and doesn't show the start of that part of the workflow. I believe there was another issue here at one time that had an image of the full workflow... can anyone help? Thanks!
Natowi, I finally found the post that I was looking for: https://groups.google.com/forum/#!topic/alicevision/Y1mde4F1KmU I'm a bit confused... do I need to actually modify the workflow as in the link, or simply add the file location as in the link that you provided above?

Also, I have some really good sample image sets now. My rig is giving me great results. If anyone here needs a set of images for testing, let me know.

I haven't tuned in here in quite some time... I noticed something today about software to support a rig. I have a pretty good system, written in Python. It supports Pi cameras and a bunch of DSLR cameras, and it can mix them. It has color correction (post-capture processing) capabilities. Pretty nifty stuff. If you guys need any of it, let me know.
@hargrovecompany oh yes, thank you, I forgot, sorry for the mixup. @tpieco you could try this
Having a camera rig sample dataset for Meshroom under a creative commons license would be nice, so it could be used for testing and for tutorials similar to the monstree dataset.
Yes, there is some basic rig support. https://github.com/alicevision/meshroom/wiki/Multi-Camera-Rig
Yes, we talked about this in #480. Your contribution is welcome.
@natowi Fantastic! That does exactly what I was looking to do.
I just ran a projected/normal sequence adding only the NORMAL images location under the PrepareDenseScene node. I just edited this... I entered bad info accidentally earlier: simply adding the normal image folder location to the PrepareDenseScene node does not result in texture from the normal lighting images.
Here is a link to one of my sample sets....projected and normal lighting, full size human scan https://www.dropbox.com/sh/3zmb8hqtd84at2n/AACU-YMBZmBT2lXusyXQ_FuIa?dl=0 feel free to use it however you would like...
@hargrovecompany The photos in the Normal folder are also projected photos.
wow.....well, that's embarrassing....sorry about that |
lol. No probs. |
Why does PrepareDenseScene have two outputs in some screenshots and only one in others? I'm also having trouble because of this. Thank you,
I have updated the wiki. |
I made the stupid mistake of not renaming the files correctly... Thank you, |
Here's another set of images of a test subject (real American Rodeo Cowboy) |
@natowi here are some screenshots but I'm not sure if they're going to be useful. The first screenshot is the result of using cross polarized photos. The second screenshot is the result using unpolarized photos, which surprisingly gives a better mesh than polarized photos. The third screenshot is the result of using polarized photos, with unpolarized photos used for texturing without adding a new PrepareDenseScene node. The fourth screenshot is the result of using polarized photos, with unpolarized photos used for texturing with a new PrepareDenseScene node added.
@tpieco Thank you for this nice example.
@natowi No problem. I shoot RAW photos and export them to PNGs using Capture One, which doesn't include the EXIF data. However, I work out the correct FoV using an online calculator. Thanks for the lookout.
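The FoV calculation mentioned above is straightforward to do without an online calculator. A minimal sketch of the standard rectilinear-lens formula follows; the example numbers (50 mm lens, 36 mm full-frame sensor) are illustrative, not taken from this thread.

```python
import math

def horizontal_fov_degrees(focal_length_mm, sensor_width_mm):
    """Horizontal field of view for a rectilinear lens: 2 * atan(w / (2 * f))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# e.g. a 50 mm lens on a full-frame (36 mm wide) sensor:
# horizontal_fov_degrees(50, 36)  # ≈ 39.6 degrees
```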
Solved and added to the wiki. |
Is there a way to generate a mesh from one set of photos but then texture with another set? I've tried a number of things, like unplugging the imagesFolder connector from the texturing node and pointing to the directory of the new photos, and swapping the generated images in the PrepareDenseScene folder with the alternative photos but it causes the process to fail.
Is it possible at all now or will be in the future?
Thanks