Investigate high RAM usage during smoke export #164

Theverat opened this Issue May 19, 2018 · 1 comment



Theverat commented May 19, 2018

One idea I had:
Currently there is no "AddAllHalf" or "AddAllByte" function for LuxCore properties (and I am not even sure whether properties can store half-precision floats or byte values at all).

So if we, for example, bake the smoke in Blender with half precision and then render with LuxCore, the following chain of conversions happens:
smoke grid (half) -> Python list (double) -> LuxCore property (float) -> LuxCore densitygrid texture (half)

To actually use half (or byte) precision throughout the whole conversion, we would need:

  • A python list (or array) that supports half/byte (e.g. numpy arrays)
  • AddAllHalf and AddAllByte functions in pyluxcore, and the ability for properties to store half/byte
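The per-value storage cost of the options above can be sketched like this (a minimal sketch, assuming numpy is available; the grid values are hypothetical stand-ins for baked smoke data, and CPython stores each list element as a full float object holding a double):

```python
import array
import numpy as np

values = [0.1] * 1000  # hypothetical stand-in for a smoke density grid

as_list = values                                 # list of Python float objects (doubles)
as_float = array.array("f", values)              # 32-bit C floats
as_half = np.asarray(values, dtype=np.float16)   # half precision, only possible via numpy

print(as_float.itemsize)  # 4 bytes per value
print(as_half.itemsize)   # 2 bytes per value
```

So even before touching pyluxcore, a numpy float16 array would hold the grid in a quarter of the space a 64-bit representation needs, but it only pays off end to end if the property side can accept half/byte data without promoting it.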

Note that I am not sure if the half/byte issue is the only thing causing the RAM usage to skyrocket during smoke export.

Here is an example of the RAM usage during smoke export:


@Theverat Theverat added this to the BlendLuxCore v2.1 milestone Nov 15, 2018

@Theverat Theverat closed this in cb3ef25 Nov 15, 2018




Theverat commented Nov 15, 2018


  • Unfortunately, the bpy_prop_array that represents the smoke data in Blender supports neither foreach_get nor the Python buffer interface, so reading it is a bit on the slow side
  • We now use a Python float array instead of a list (whose elements are doubles), cutting memory usage by 50%
  • The conversion from bpy_prop_array to array takes longer than the old conversion to list did; however, the rest of the smoke export code now runs faster, so the total export time stayed about the same.
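The list-to-array change above can be illustrated with a rough sketch (assumptions: CPython object sizes via sys.getsizeof; the grid is a plain list standing in for data read from bpy_prop_array):

```python
import array
import sys

# Hypothetical stand-in for smoke data read element by element
# from a bpy_prop_array (distinct float objects, as in real exports).
grid = [i * 0.5 for i in range(100_000)]

# List storage: the list's pointer table plus one float object per value.
list_bytes = sys.getsizeof(grid) + sum(sys.getsizeof(v) for v in grid)

# array('f') storage: one contiguous buffer of 32-bit C floats.
packed = array.array("f", grid)
array_bytes = sys.getsizeof(packed)

print(array_bytes < list_bytes)  # the packed array is far more compact
```

The 50% figure in the comment refers to the property storage (float vs. double); the list form is even worse than a factor of two once per-object overhead is counted, which is why the array pays off despite the slower conversion step.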