@wouterlucas wouterlucas commented Jan 28, 2025

Why?

The Renderer was off in its GFX memory accounting: in most cases it would report 50 MB of GFX usage while the real usage was >150 MB.

This was due to:

  • mipmaps being incorrectly enabled
  • no baseline; calculations started at 0
  • no texture padding

What changed?

Introduced a baselineMemoryAllocation memory setting to set the baseline usage of your device. For example, on my test device (RPI3b+ with WPEWebKit 2.46) an empty canvas with a WebGL1 context uses 25 MB. Every device has a base allocation for the WebGL context alone; adjust this setting to account for that immutable baseline.

All textures subsequently loaded will go on top of that baseline.
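As a sketch of the arithmetic (the setting name comes from this PR; the numbers are the RPI3b+ figures mentioned above, and the texture total is hypothetical):

```typescript
// Illustrative arithmetic only: the baseline shifts all reported GFX usage
// up, so an empty canvas no longer reports 0 bytes.
const baselineMemoryAllocation = 25 * 1024 * 1024; // empty WebGL1 canvas on RPI3b+ (measured)
const loadedTextureBytes = 50 * 1024 * 1024;       // hypothetical texture total
const reportedUsage = baselineMemoryAllocation + loadedTextureBytes;
// reportedUsage is 75 MiB instead of the 50 MiB the old calculation would report
```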

Removed mipmaps on textures. We're using fixed-scale textures and are not making use of level-of-detail, yet enabling mipmaps on a texture generates a ~33% extra memory allocation for functionality we were never going to use. This PR thus reduces the GPU memory usage of textures with power-of-two width and height.
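To see where the ~33% comes from: a full mipmap chain adds levels of 1/4, 1/16, ... of the base size, a geometric series that converges to 4/3 of the base allocation. A sketch (function names are illustrative, not the Renderer's actual API):

```typescript
// Bytes for a single texture level at 4 bytes per pixel.
function textureBytes(width: number, height: number, bytesPerPixel = 4): number {
  return width * height * bytesPerPixel;
}

// Bytes for the full mipmap chain: each level halves width and height
// (clamped to 1) until the 1x1 level is reached.
function mipmapChainBytes(width: number, height: number, bytesPerPixel = 4): number {
  let total = 0;
  let w = width;
  let h = height;
  for (;;) {
    total += w * h * bytesPerPixel;
    if (w === 1 && h === 1) break;
    w = Math.max(1, w >> 1);
    h = Math.max(1, h >> 1);
  }
  return total;
}

const base = textureBytes(1024, 1024);      // 4 MiB single level
const chain = mipmapChainBytes(1024, 1024); // ~5.33 MiB, roughly 33% more
```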

Locked textures to 4 bytes per pixel, regardless of alpha channel. The 3-byte format was introduced on the assumption it would save memory, but in tests on my device it made no difference: all textures were aligned to 4 bytes anyway. So we might as well use 4 bytes everywhere, which lets the Renderer estimate memory usage more accurately. With 3 bytes for non-alpha textures the calculation was always way off.

Added a default 10% texture padding. This is a guesstimate: sometimes it's 5%, sometimes more, and it very much depends on the texture you're loading. With WebGL 1 there is no way to read back the actual texture width/height or gather any metrics on the texture's allocation on the GPU. The unfortunate side effect is that we always have to "guess" our allocations and can never know the GFX memory usage with 100% accuracy. If I missed something, or you know of a way, please let me know.
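The two rules above combine into a simple per-texture estimate. A sketch, with names that are illustrative rather than the Renderer's actual API:

```typescript
const BYTES_PER_PIXEL = 4;   // locked to 4, even for textures without alpha
const TEXTURE_PADDING = 1.1; // default 10% padding guesstimate

// Estimated GPU bytes for one texture: width x height x 4, plus padding.
function estimateTextureBytes(width: number, height: number): number {
  return Math.ceil(width * height * BYTES_PER_PIXEL * TEXTURE_PADDING);
}

estimateTextureBytes(256, 256); // ~288 KB for a 256x256 texture
```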

Plus some random bits:

  • Noise texture support in Canvas2D (useful for testing)
  • .svg detection now also works with bla.svg?query=string, which I used for loading unique SVGs in testing
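A minimal sketch of extension detection that tolerates a query string (the regex is illustrative of the behavior, not the exact code in the PR):

```typescript
// Match a URL whose path ends in .svg, with or without a trailing
// ?query=string, case-insensitively.
function isSvg(url: string): boolean {
  return /\.svg(\?.*)?$/i.test(url);
}

isSvg("bla.svg");              // true
isSvg("bla.svg?query=string"); // true
isSvg("bla.png");              // false
```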

Tested on an RPI3b+ with WPEWebKit and Thunder:

Prior to these changes:

[screenshot]

The Renderer assumed we were using 76 MB of GFX memory while in reality this was 109 MB.

[screenshot]

Calculated memory by the Renderer on the left and actual device GPU usage on the right, with 500 textures.

It gets even worse with power-of-two textures and mipmaps enabled. Prior to these changes:

[screenshot]

With these changes:

[screenshot]

@wouterlucas wouterlucas added this pull request to the merge queue Jan 30, 2025
Merged via the queue into main with commit 3f5e0ef Jan 30, 2025
2 checks passed
@wouterlucas wouterlucas mentioned this pull request Feb 13, 2025
github-merge-queue bot pushed a commit that referenced this pull request Feb 14, 2025
Hunted, patched and pushed the following fixes for memory leaks:
* RTT nodes were always spawned regardless of their boundary state
* Child RTT nodes were never updated with the correct boundary state
* `framebuffer`s were never deleted on `RenderTexture.free()` calls
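For the third leak, a minimal sketch of what freeing the framebuffer looks like in WebGL terms (the class shape and names are illustrative, not the Renderer's actual RenderTexture):

```typescript
// Minimal interface for the GL calls this sketch needs.
interface GlLike {
  deleteFramebuffer(fb: object): void;
  deleteTexture(tex: object): void;
}

class RenderTextureSketch {
  constructor(
    private gl: GlLike,
    private framebuffer: object | null,
    private texture: object | null,
  ) {}

  free(): void {
    // Before the fix, only the texture was released; the framebuffer
    // object lived on in GPU memory.
    if (this.framebuffer !== null) {
      this.gl.deleteFramebuffer(this.framebuffer);
      this.framebuffer = null;
    }
    if (this.texture !== null) {
      this.gl.deleteTexture(this.texture);
      this.texture = null;
    }
  }
}
```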

Fixes #518

Plus bonus memMonitor overlay fixes for the baseline handling in the bar, which was introduced with #505 (the baseline now shows as a grey bottom bar that can't be cleared).

Tested on RPI3b+ with Thunder/WPEWebKit 2.46 using modified
`texture-memory-allocation` test, this is the result of scrolling
through 20 rows of RTT textures inside a clipped container:

![image_2025_02_13T13_20_42_822Z](https://github.com/user-attachments/assets/f9951463-e88b-4d51-8a3e-1b33611f0ad7)

The above image uses the `Device Info` graph feature of ThunderUI, which directly reflects the RPI3 GPU memory allocation. I could not for the life of me test this properly on a Chrome desktop instance.