prep materials very slow [on high-res resource pack, due to grayscale check] #178
Appreciate you identifying the code sample, and indeed the suggestion of using PIL would run much faster; your time estimates sound correct for images of that size. Unfortunately, because of the way Blender exposes image pixel data, reading even one pixel value copies the whole pixel buffer into memory, hence the lag. So the sampling limit only helps so much.

Something I have to take into consideration is cross-compatibility: building a system to import, package, and keep up to date a library that doesn't ship with Blender adds friction to installing the addon.

One thing worth noting: results are cached with the material, meaning it should only be slow the first time prep materials is used. That cache is specific to the blend file, though. A possible improvement would be to cache the grayscale status more "globally", e.g. by placing a JSON file next to the texture pack itself, recording which materials have been tested for grayscale and what the result was. This doesn't solve the speed of the very first run, but every other time that same resource pack is used, the prior results would be reused.

Does that seem fair, and would it alleviate your issue? Or is it frustrating enough, or do you switch between texture packs often enough, that it's really unusable even for a first-time load? Keen to hear your thoughts.
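To make the "global" cache idea concrete, here is a minimal sketch of a JSON sidecar cache placed next to the pack. Everything here is hypothetical: the filename `mcprep_grayscale_cache.json`, the function names, and the `slow_check` callback standing in for the real per-pixel grayscale test are all illustrative, not the addon's actual API.

```python
import json
import os

# Assumed sidecar filename, stored next to the resource pack itself.
CACHE_NAME = "mcprep_grayscale_cache.json"

def load_cache(pack_dir):
    """Return the cached {texture_name: is_grayscale} map, or {} if absent."""
    path = os.path.join(pack_dir, CACHE_NAME)
    if os.path.isfile(path):
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)
    return {}

def save_cache(pack_dir, cache):
    """Persist the grayscale results next to the pack for reuse across blend files."""
    path = os.path.join(pack_dir, CACHE_NAME)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(cache, f, indent=2)

def is_grayscale_cached(pack_dir, tex_name, slow_check):
    """Consult the sidecar cache before running the expensive pixel check."""
    cache = load_cache(pack_dir)
    if tex_name not in cache:
        cache[tex_name] = slow_check(tex_name)  # only paid on first encounter
        save_cache(pack_dir, cache)
    return cache[tex_name]
```

The expensive check then runs at most once per texture per pack, regardless of which blend file is open.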
Minor edit: worth actually confirming this while we can. Are subsequent prep materials runs in the same file faster once the first run has completed?
I did not find a tutorial on how to use Swap Texture Pack. Since I feel that one 4K texture saves memory compared to 300 64x64 textures, I did not look further into Swap Texture Pack.
The tab symbols may be displayed incorrectly :P
Sorry for the delayed reply. So, in your experience, this code change is sufficient even though it still just uses Blender's image pixel access? Or is the ~5 seconds about the same as before? Also, am I understanding your code correctly that you take a 30 by 30 grid of evenly spaced samples across the image? If so, that is at the very least an improvement over my current method.

Another thing I could attempt is to thread this method so it runs multiple image checks in parallel. Most likely it would make sense to parallelize at the level of materials: pre-fetch the list of materials to be checked, then rejoin the threads once all materials have been checked, at the point where the normal grayscale check happens.
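For reference, the evenly spaced sampling described above might look like the following sketch. This is not the addon's actual `is_image_grayscale`; it is written against a plain flat RGBA float list like the one Blender's `image.pixels` yields, and assumes the caller has already done a single bulk copy (e.g. `list(image.pixels)`), since each individual `image.pixels[i]` access re-copies the whole buffer.

```python
# Hypothetical sketch: check a ~samples x samples grid of evenly spaced
# pixels for R == G == B, on a flat RGBA float list (4 floats per pixel).

def is_grayscale_sampled(pixels, width, height, samples=30, thresh=0.01):
    """Return True if every sampled pixel is (near) grayscale."""
    step_x = max(width // samples, 1)
    step_y = max(height // samples, 1)
    for y in range(0, height, step_y):
        for x in range(0, width, step_x):
            i = (y * width + x) * 4  # RGBA stride
            r, g, b = pixels[i], pixels[i + 1], pixels[i + 2]
            if abs(r - g) > thresh or abs(r - b) > thresh:
                return False  # found a colored pixel, bail out early
    return True
```

On a 4096x4096 image this touches roughly 900 pixels instead of ~16.7 million, at the cost of possibly missing color that falls between sample points.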
Side note to self: in a separate project I found that, silly as it sounds, a faster alternative to sampling pixels loaded into Python memory was to use the image resize function (which calls a lower-level routine) and literally resize the image down to something very small, maybe 16x16 or at most 32x32. Then check all of those pixels for grayscale, and reload the source image afterward. The cost is the time spent reloading. However, if we are able to duplicate the image datablock of an already-loaded 2K etc. texture, this would run quite fast I think, entirely negating the need for sampling here (rather, the sampling just gets shifted into an interpolated average over nearby pixels).
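A rough sketch of that resize trick, under stated assumptions: in Blender this would be something like duplicating the datablock and calling its scale method before reading pixels, but since `bpy` can't run here, a nearest-neighbor downscale on a flat RGBA list stands in for the lower-level resize. The function names are illustrative only.

```python
# Stand-in for a lower-level image resize: nearest-neighbor downscale of a
# flat RGBA float list (4 floats per pixel) to a tiny thumbnail.

def downscale(pixels, width, height, new_w=16, new_h=16):
    """Return a new flat RGBA list of size new_w x new_h."""
    out = []
    for y in range(new_h):
        src_y = y * height // new_h
        for x in range(new_w):
            src_x = x * width // new_w
            i = (src_y * width + src_x) * 4
            out.extend(pixels[i:i + 4])
    return out

def is_grayscale_small(pixels, width, height, thresh=0.01):
    """Shrink first, then check every remaining pixel for R == G == B."""
    small = downscale(pixels, width, height)
    for i in range(0, len(small), 4):
        if abs(small[i] - small[i + 1]) > thresh or abs(small[i] - small[i + 2]) > thresh:
            return False
    return True
```

After shrinking, the exhaustive check covers only 256 pixels, so there is no risk of the sampling grid skipping over colored regions larger than one thumbnail pixel.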
Addressed this in the referenced commit; the fix will be in the next release, which greatly improves the speed of swapping textures for large texture packs.
When I used Mineways to generate a map with a texture pack, a texture with a resolution of 4096x4096 was generated.
Prep materials was then very slow.
I checked the source code and found that the problem is the function is_image_grayscale in \materials\generate.py.
It traverses and checks the pixels of the entire image. Even with the sampling interval added, it is still very slow; it seems that accessing the large pixel list is what is slow.
I tested it: accessing a single pixel of a 4096x4096 image takes 0.1~0.2 s,
and the time to read a whole row of pixels is the same as the time to read one pixel.
So could it read line by line to do the check?
Or scale the image down before checking, or use the PIL library to increase the speed.
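The "read line by line" suggestion above could be sketched roughly like this. This is an illustrative stand-in, not the addon's code: it slices one row at a time out of a flat RGBA float list (the layout Blender's `image.pixels` uses), exiting as soon as any non-gray pixel is found, so a colored image near the top costs only a few row reads.

```python
# Hypothetical row-by-row grayscale check on a flat RGBA float list.

def is_grayscale_by_rows(pixels, width, height, thresh=0.01):
    """Check rows in order, bailing out at the first colored pixel."""
    row_stride = width * 4  # 4 floats (RGBA) per pixel
    for y in range(height):
        row = pixels[y * row_stride:(y + 1) * row_stride]
        for i in range(0, len(row), 4):
            if abs(row[i] - row[i + 1]) > thresh or abs(row[i] - row[i + 2]) > thresh:
                return False
    return True
```

Note this only helps if the row slice avoids re-copying the whole buffer; as discussed above, with `bpy` pixel access each read copies all pixel data, which is why a single bulk copy up front (or a resize) tends to win.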