
Retrieve all configurable config/gin parameters. #195

Open
Enderfga opened this issue Feb 7, 2024 · 5 comments
Labels
enhancement New feature or request

Comments

@Enderfga

Enderfga commented Feb 7, 2024

Are all the parameters in the gin files under infinigen_examples/configs/scene_types currently supported by Infinigen, and can the effects shown in Infinigen's official promotional video be achieved by adjusting these parameters? I hope to figure out how many customizable options there are in total.

Enderfga added the enhancement label on Feb 7, 2024
@araistrick
Contributor

araistrick commented Mar 12, 2024

Hello,

Unfortunately there is no single, easily accessible list of all the customization options present in the current version of Infinigen. The original CVPR paper reports customizable-parameter counts and provides a somewhat exhaustive list of the parameters available at that time, but it is now out of date (the codebase is roughly twice as large). We are working on standardizing the random-parameter format into something uniform across all assets, which would make the parameters easy to count, but at the moment the only way to know for sure is to go into the implementations and count them (as we did for CVPR).
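There is no official counter, but as a rough, unofficial sketch you could scan the .gin files for binding lines. The regex below is a heuristic (it will miss gin macros and imports, and will count commented-out bindings as absent but conditional ones as present), so treat the result as an estimate, not a definitive parameter count:

```python
# Heuristic sketch: count distinct gin bindings ("name.param = value" lines)
# under a configs folder, e.g. infinigen_examples/configs/scene_types.
import re
from pathlib import Path

BINDING = re.compile(r"^\s*([\w.]+)\s*=", re.MULTILINE)

def count_gin_bindings(config_dir):
    """Return the number of distinct binding names across all .gin files."""
    names = set()
    for gin_file in Path(config_dir).rglob("*.gin"):
        names |= set(BINDING.findall(gin_file.read_text()))
    return len(names)

# Demo on an inline snippet rather than a real checkout:
sample = "ground_ice.shader_kwargs = {'is_ice': 1}\nkeep_ice.probability = 0.8\n"
print(len(set(BINDING.findall(sample))))  # prints 2 (two distinct bindings)
```

Running this over the whole configs tree would give a lower bound on the number of user-tunable knobs, since many parameters are only exposed in asset code rather than in .gin files.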

The majority of clips in the intro video come directly from running manage_jobs with monocular_video on our Slurm cluster. The exceptions I can think of are:

  • any videos with fluid/fire sims came from a separate run following the GeneratingFluidSimulations.md docs
  • for scenes with effects designed only for the video (stopping time, dissolving the whole scene, etc.) we wrote some extra scene-specific code to randomly generate these, but they are achievable with small amounts of additional Python/Geometry Nodes.
  • the videos towards the end with asset parameters being adjusted are generated by stitching together many runs of infinigen_examples/generate_asset_demo.py with different parameters each time.
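For reference, the monocular_video run mentioned above is launched from the command line. This sketch only assembles the command; the exact flags and config names come from Infinigen's documentation at the time of writing and may differ in your checkout, so treat it as illustrative rather than authoritative:

```python
# Sketch of a manage_jobs invocation for video generation (flags per the
# Infinigen docs; "forest" is an example scene-type gin, not a requirement).
import subprocess

cmd = [
    "python", "-m", "infinigen.datagen.manage_jobs",
    "--output_folder", "outputs/videos",
    "--num_scenes", "10",
    "--pipeline_configs", "slurm", "monocular_video",  # Slurm cluster + video pipeline
    "--configs", "forest",  # e.g. a gin from infinigen_examples/configs/scene_types
]
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment inside an Infinigen checkout
```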

Let me know if you have further questions. I wish I could provide a more concrete number, as I think measuring the overall complexity would be useful. It is also worth noting that Infinigen contains millions more random numbers than just the user-interpretable ones: besides values like "tree branching frequency" and "bark noise magnitude", there are also millions like "exact angle of every leaf", "exact displacement of every vertex", or "how the light bounced during raytracing".

@Enderfga
Author

Thank you very much for your patient reply; it has been a huge help to me! Additionally, I would like to ask if Infinigen supports importing content from an existing 3D material library.

@araistrick
Contributor

Yes, importing existing materials is not hard at all; you can apply them to Infinigen objects the same way you would to any other Blender object.
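As a sketch of what that looks like in practice (this is plain Blender scripting, not Infinigen-specific code; `bpy.data.libraries.load` is Blender's standard mechanism for pulling data-blocks out of another .blend, and the paths and names below are placeholders):

```python
# Sketch: append a material from an existing .blend library and assign it to
# an object. Requires Blender's `bpy`; the import is guarded so the pure
# helper can also be exercised outside Blender.
try:
    import bpy  # available inside Blender / Infinigen's environment
except ImportError:
    bpy = None

def pick_material(library_names, wanted):
    """Pure helper: return `wanted` if the library offers it, else None."""
    return wanted if wanted in library_names else None

def apply_library_material(obj, blend_path, mat_name):
    """Append `mat_name` from `blend_path` and add it to `obj` (Blender only)."""
    assert bpy is not None, "must run inside Blender"
    with bpy.data.libraries.load(blend_path, link=False) as (src, dst):
        name = pick_material(src.materials, mat_name)
        dst.materials = [name] if name else []
    mat = bpy.data.materials.get(mat_name)
    if mat is not None:
        obj.data.materials.append(mat)
```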

@Enderfga
Author

I mean to ask whether it's possible to call the material library directly within Infinigen when generating scenes (which might require writing the corresponding asset-class code), rather than generating a .blend file and then having modelers add materials manually. Can this be achieved?

@Enderfga
Author

For example, I have a batch of FBX files that contain motion information. I hope to implement character movement in the rendered scene.
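(For illustration only, not a maintainer answer: Blender's built-in FBX importer preserves armatures and animation data, so it is one plausible route for bringing externally animated characters into a generated scene. `import_fbx` and `is_fbx` below are hypothetical helper names, not Infinigen API.)

```python
# Sketch: import an animated FBX into the current Blender scene. Requires
# `bpy`; guarded so the tiny pure helper works outside Blender too.
try:
    import bpy
except ImportError:
    bpy = None

def is_fbx(path):
    """Pure helper: naive extension check, usable outside Blender."""
    return path.lower().endswith(".fbx")

def import_fbx(path):
    """Import an FBX (including its animation) and return the new objects."""
    assert bpy is not None, "must run inside Blender"
    before = set(bpy.data.objects)
    bpy.ops.import_scene.fbx(filepath=path)  # Blender's built-in FBX importer
    return [o for o in bpy.data.objects if o not in before]
```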
