
Out of core rendering isn't working ... at least not for me. #651

Open

AndreasReschGGL opened this issue Mar 17, 2021 · 6 comments

@AndreasReschGGL
OS: Windows 10, GTX 1060 and GTX 980 Ti (different PC) - 6GB GPU MEM each
Blender version: Blender 2.91, 2.92
LuxCore version: 2.5 RC1

I've tested a scene that is just big enough not to fit into VRAM on two PCs. The out-of-core (OOC) option doesn't work on either of them: rendering always quits with an out-of-memory error. In Cycles the same scene pushes shared GPU memory to about 7.5 GB, but with LuxCore it reaches about 6.1 GB and then the error is triggered.

Here's a screenshot of the output in the command line window ...

And here's the graph from the task manager shortly before the error kicked in ...

@AndreasReschGGL (Author)

Here are the last few lines of the command window output. Maybe it helps.

RuntimeError: CUDA driver API error CUDA_ERROR_OUT_OF_MEMORY (code: 2, file:D:\a\1\Luxcore\src\luxrays\devices\cudadevice.cpp, line: 516): out of memory

ERROR: CUDA driver API error CUDA_ERROR_OUT_OF_MEMORY (code: 2, file:D:\a\1\Luxcore\src\luxrays\devices\cudadevice.cpp, line: 516): out of memory

[LuxCore][1180.203] [GeForce GTX 980 Ti CUDAIntersect] Memory used for hardware image pipeline: 405000Kbytes
[LuxRays][1180.203] [Optix][4][DISK CACHE] Closed database: "C:\Users\S\AppData\Local\NVIDIA\OptixCache\cache7.db"
[LuxRays][1180.203] [Optix][4][DISK CACHE] Cache data size: "0 Bytes"

@AndreasReschGGL (Author)

I ran a few more tests, with no luck. Neither OpenCL nor CUDA respects the out-of-core setting. With OpenCL, Blender simply crashes every time; with CUDA there is either an error or a crash.

I first thought the issue was the large number of textures, but that doesn't seem to be it. I tried smaller textures and ended up with very simple scenes. The real trigger is the image output resolution: once it gets really big (around 8000 px on either side), LuxCore can't handle it, even if I just try to render a cube. The command line shows that "OUT OF CORE" is used somehow, but the error is triggered anyway.

Here's the command line readout when the scene crashes or throws a CUDA error.

Luxcore_OOC_Error_01.txt

@AndreasReschGGL (Author)

Out of curiosity I started a ridiculous rendering in Cycles (18000 × 12000 px) and GPU memory only went up from 2.4 GB to 2.7 GB (it occasionally jumps to 4.7 GB but quickly drops back to 2.7 GB).
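For scale: the same back-of-envelope estimate (my own arithmetic, assuming one RGBA float32 buffer, which is an assumption about the internal format) says a single full-resolution result buffer at 18000 × 12000 would already exceed the ~300 MB growth observed in Cycles, which is consistent with the result buffer living in host rather than GPU memory at that size:

```python
# Hypothetical estimate: one RGBA float32 buffer at 18000 x 12000.
# Channel count and precision are assumptions for illustration.
width, height = 18000, 12000
channels, bytes_per_channel = 4, 4
size_bytes = width * height * channels * bytes_per_channel
print(f"{size_bytes / 2**30:.2f} GiB")  # → 3.22 GiB
```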

@Dade916 (Member)

Dade916 commented Mar 22, 2021

Out of core rendering works only with CUDA and only on a specific list of buffers: https://forums.luxcorerender.org/viewtopic.php?f=5&t=2102

Film frame buffer is not one of them.

@AndreasReschGGL (Author)

AndreasReschGGL commented Mar 22, 2021

The film frame buffers are on that list, as they should be.

Looking at other readouts, the "GPUTaskState buffer" seems to be the buffer that triggers the error, and it's usually a pretty big block. I'm not sure what it contains, but it should probably be added to the OOC list as well, if possible.

And if you look at my TXT file above, up to the moment the error is triggered, the accumulated memory that is NOT moved out of core isn't very much. So it has to be that one "GPUTaskState buffer" that takes up a lot of memory.

Maybe this should be moved over to the regular LuxCore issue tracker; it might not be Blender-specific.

@AndreasReschGGL (Author)

AndreasReschGGL commented Mar 22, 2021

Here's a side-by-side comparison of the command window output: the left side shows the output with OOC enabled, the right side with OOC disabled. From what I can see there's no real difference, except the "OUT OF CORE" labels.

It's interesting that the CUDA error is triggered at the same point, even though with OOC enabled some huge blocks are declared as "OUT OF CORE" and the error should presumably occur later (or not at all). Looking at that, it seems as if there's no difference. But I'm not an expert, just observing.

Luxcore_Bug_14
