Model Inference Output Issue | Base Color not Showing #20
It seems that the values of albedo, roughness, and metallic are still at their initial settings. Could you please check if the material representation is indeed being optimized?
I appreciate your prompt response! One more thing, does it have something to do with the
Previously, someone encountered a similar problem when using Docker images, and he mentioned that pulling the image again solved the issue. Although I haven't personally experienced this, you might want to try setting up the environment directly by installing the packages in a Conda environment.
I switched to building a Dockerfile since the Conda environment was not being set up, due to issues caused by the dependencies that are based on other GitHub repos, e.g.
The Dockerfile had some issues too, but fewer than with the Conda environment. I can try two things if the issue is related to the environment setup.
About this: you can check the file by opening the following URL. Lastly, I don't think it is an environment-related issue but rather a command/code-related issue, where the model is getting some default parameters instead of the actual ones (as you stated). And no matter whether I start training or testing, both commands run the code, which means my environment has been set up properly. Anyway, please keep this issue open until I experiment further, keeping your suggestions in view. I'll update you accordingly. Thanks!
Could you please specify what exact problems you encountered when installing the other GitHub repositories in the Conda environment? One potential solution could be to try cloning the repositories and installing them.
I apologize for getting back to you a bit late; I missed your message and was a bit busy. My main concern is reproducing the exact inference results, as discussed earlier, which I feel is due to some initialized value being passed (as you stated earlier) or some other code-related issue (and NOT an environment issue). Can you please confirm whether inference for both steps, i.e. PBR material generation and output generation (the 3D textured model), is done in one step via the command below, or whether it requires running multiple commands?
No, you should run this single command for both PBR material generation and output generation.
Okay. Just let me know in the meantime: on average, how much time should it take to generate the PBR materials for one example on an RTX 3090 (24 GB) GPU?
The training process, which corresponds to the SDS procedure, does indeed take some time. Pre-rendering with Blender typically requires around 15 minutes, though this duration may vary depending on your CPU performance. Once pre-rendering is done, the generation process should complete within 20 minutes on an RTX 3090 GPU.
Alright, thanks for the clarification! Right now, the command you shared is working. I'll update you here once the outputs are generated. Moreover, I'll try to set up the Conda environment as well and let you know the exact issues.
Hi again, I still haven't been able to set up the Conda environment since I was busy, so please keep the issue open till then. Right now, I have three small questions on which I need your assistance.
Alright!
Thanks for the tip! Will try to explore it further, and test it!
Thanks a lot for the insight! I was wondering today why the mesh size of the output changed 😅, so it makes sense now! I will try to experiment with those as well!
Hi again, Yuqing,
I would love to hear your opinion on the above-mentioned questions, thanks!
About the quality of the generated textures not being "up to the mark," I'm not entirely sure what specific issues you're encountering. While I cannot guarantee that adjusting cond_scale will definitely improve the quality, since DreamMat isn't a perfect material generation scheme, I would be very interested in seeing some failure cases. These could provide valuable directions for future improvements.

The scaling adjustments we make are applied on a normalized mesh. The determination of scale primarily depends on the shape of the object, to ensure it occupies a reasonable proportion of the overall scene. I appreciate your feedback on the parameters, and I agree that 0.7-0.8 seems suitable for most models.

We do not alter the topology of the input model; we only change the model's scale. For models without UVs, we use xatlas to generate UV mapping. The underlying model can be replaced in the config file.

For image-guided texture generation, please look forward to our work at SIGGRAPH Asia.

Regarding the environment setup issues, feel free to create a new issue for that. You might also find solutions in the closed issues if others have encountered similar problems. Thank you for your questions, and please do not hesitate to contact me if you have more.
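For readers following along, the normalization described above (center the mesh, then scale it so it occupies a reasonable proportion of the scene) can be sketched roughly as follows. This is my own illustration with hypothetical names (`normalize_mesh`, `target_scale`), not DreamMat's actual code; the repository's implementation may differ.

```python
import numpy as np

def normalize_mesh(vertices: np.ndarray, target_scale: float = 0.7) -> np.ndarray:
    """Center a mesh at the origin and scale its longest bounding-box
    edge to `target_scale`. Hypothetical helper for illustration only."""
    vmin = vertices.min(axis=0)
    vmax = vertices.max(axis=0)
    center = (vmin + vmax) / 2.0        # bounding-box center
    extent = (vmax - vmin).max()        # longest bounding-box edge
    return (vertices - center) * (target_scale / extent)

# Example: tetrahedral corner points of a unit cube shifted away from the origin
verts = np.array([[2.0, 2.0, 2.0], [3.0, 2.0, 2.0],
                  [2.0, 3.0, 2.0], [2.0, 2.0, 3.0]])
norm = normalize_mesh(verts, target_scale=0.7)
# longest bounding-box edge is now 0.7, centered at the origin
```

A `shape_init_params` value like the 0.6-0.8 discussed above would play the role of `target_scale` in this sketch.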
I agree that it isn't perfect, but it is still one of the best available models. I'll definitely share the failure cases. Mostly, either the generated texture isn't up to the mark because the mesh is also somewhat low-polygon (maybe due to normalization), or in some cases inference takes even more than 24 GB of memory, e.g. for cube/box shapes, giving an out-of-memory error.
Thanks for the clarification. And yes, I was considering applying the generated PBR material to the original mesh, but it doesn't fit properly on it, which is why I asked.
I will check it out and let you know if I have any queries. Upon some research, I believe it's an upcoming event, so is there any arXiv preprint or pre-release of that work available?

=============================================
About the environment issues: thankfully, after a tedious effort, I finally managed to set it up successfully. There were some CUDA/GCC version incompatibility issues on my server/machine side, but I feel some further instructions could be added to the provided guidelines for DreamMat too. I'll let you know after documenting them if I find any major change. And yes, I checked all the issues too; they did help to some extent initially. Thanks once again for the detailed answers to my queries!
Hi Yuqing, I'm awaiting your response to my previous comment. In the meantime, I'll add my observations regarding the Conda environment setup. The order of installation of files/packages/libraries mentioned in the Dockerfile seems more appropriate and relevant; however, there are some things missing from it, which I'll try to add:
Our geometry- and light-aware ControlNet is designed to encourage the generated textures to stay consistent with the current geometry. Therefore, it indeed might struggle with very low-polygon models, such as cubes. Additionally, if the front and back of a model have similar geometries, the multi-face issue might occur, which is a common problem with SDS methods.

In DreamMat, we utilize xatlas to obtain UV mappings. For the generated textures, we have incorporated some post-processing techniques, such as UV padding, to address issues with seams.

Our subsequent work has not yet been released. We've been quite busy recently, and we expect to release it in November. You can find some existing research on arXiv, such as FlexiTex (https://arxiv.org/pdf/2409.12431) and EASI-Tex (https://arxiv.org/abs/2405.17393).

Lastly, thank you very much for your comments and observations regarding the Conda environment setup.
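As a side note, the UV padding mentioned above works by bleeding chart-border colors into unmapped texels, so that bilinear filtering near UV seams doesn't sample background color. A naive sketch of the idea, with a hypothetical `uv_pad` helper (this is my own illustration, not DreamMat's actual post-processing code):

```python
import numpy as np

def uv_pad(texture: np.ndarray, mask: np.ndarray, iterations: int = 2) -> np.ndarray:
    """Bleed colors from mapped texels (mask==True) into neighbouring
    unmapped texels, one ring per iteration. Illustration only."""
    tex = texture.copy()
    filled = mask.copy()
    h, w = filled.shape
    for _ in range(iterations):
        new_filled = filled.copy()
        for y in range(h):
            for x in range(w):
                if filled[y, x]:
                    continue
                # average the already-filled 4-neighbours, if any
                neigh = [tex[y + dy, x + dx]
                         for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                         if 0 <= y + dy < h and 0 <= x + dx < w
                         and filled[y + dy, x + dx]]
                if neigh:
                    tex[y, x] = np.mean(neigh, axis=0)
                    new_filled[y, x] = True
        filled = new_filled
    return tex

# 4x4 texture with a single mapped red texel; padding bleeds it outward
tex = np.zeros((4, 4, 3))
mask = np.zeros((4, 4), dtype=bool)
tex[1, 1] = [1.0, 0.0, 0.0]
mask[1, 1] = True
out = uv_pad(tex, mask, iterations=1)
```

Real implementations typically use fast morphological dilation (e.g. via OpenCV or a GPU kernel) rather than a Python pixel loop.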
Hi, I couldn't get back to you earlier. Moreover, I have some questions for you regarding the generated output of DreamMat.
P.S. You may answer these questions for now and test the box issue too. Moreover, I do have some queries regarding fine-tuning, so I'll hopefully ask them in a separate issue soon. Thanks again for your assistance!
Hi, any update on the highlighted matter? Also, I would appreciate your response on the queries I mentioned, in the two issues. |
Thank you for the tip! I'll check it for all such cases.
In this command, did you generate conditional maps by setting
Does the Linear setting apply to Roughness and Metallic too?
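For context, this question concerns color spaces: by common PBR convention (e.g. in glTF), base color/albedo maps are sRGB-encoded, while roughness and metallic maps are stored as linear, non-color data. Whether DreamMat follows this exact convention is for the maintainers to confirm; the standard sRGB/linear conversion itself can be sketched as:

```python
import numpy as np

def srgb_to_linear(c: np.ndarray) -> np.ndarray:
    # Piecewise sRGB decode (IEC 61966-2-1)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c: np.ndarray) -> np.ndarray:
    # Inverse: encode linear values back to sRGB
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * np.power(c, 1 / 2.4) - 0.055)

# Albedo texels would be decoded before lighting math; roughness/metallic
# values are already linear and are used as-is.
srgb_mid = np.array([0.5])
linear_mid = srgb_to_linear(srgb_mid)   # ~0.214
roundtrip = linear_to_srgb(linear_mid)  # back to 0.5
```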
Alright. I was looking forward to it, but I have some doubts, as stated in Issue #25, so I request your valuable comments there as well (whenever convenient for you). And thanks again, I really appreciate your support!
Got it, thanks!
Just two small queries:
Your assistance will be appreciated, thanks!
Hi, kudos for the amazing work; the results shown in the paper look promising!
However, I'm facing issues while generating/reproducing the exact output as highlighted in the results.
I have created a Docker image and uploaded the provided ControlNet weights, as you suggested.
Now, when I run the inference, the generated 3D model lacks color. It seems the PBR materials are not being generated properly, especially the albedo output (third image in the GIF), which should include the base color. Here is the 'a brown basketball' evaluation.gif.
I am using this command (the same as in the guide), so I don't know where the issue lies. Maybe the trained model is not working properly, or there is some parameter-related issue.
```shell
python launch.py --config configs/dreammat.yaml --test --gradio --gpu 0 \
    system.prompt_processor.prompt="A brown basketball" \
    system.geometry.shape_init=mesh:load/shapes/objs/basketball.obj \
    system.geometry.shape_init_params=0.6 \
    data.blender_generate=true
```
I'm aiming to achieve these results:

Please help me resolve the issue, as I am doing research in the 3D domain and am really impressed by your work. I look forward to a quick response from your end, so I can incorporate your work into my pipeline accordingly. Thanks!