With buildpack 4.x we took a significant step forward toward more refined memory configuration. We took all the memory regions we could reasonably identify as fixed (metaspace, thread stacks, direct memory, etc.) and set them at fixed sizes so that more of a container's memory could stay with the heap. However, we've since discovered that applications in larger containers are now more frequently overrunning the bounds of the container.
We've been able to identify a major component of this as GC memory, which grows with the heap (#512). However, this memory region turned out to be difficult to account for. I suspect there are others as well that may be even harder to track down.
In the meantime I propose we add a simple `jvm_overhead` config value to the JDK: a percentage that the JBP and memory calculator will use as a catch-all, in an attempt to account for these unaccountable JVM memory pools that scale with the size of the heap.
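For illustration, here's a rough sketch of what such a setting could look like in the JBP's JRE configuration. The key name `jvm_overhead` and its placement are my assumptions; nothing about the shape of this config is settled:

```yaml
# Hypothetical sketch only: a jvm_overhead key alongside the existing
# memory_calculator settings (key name and placement are assumptions).
memory_calculator:
  stack_threads: 300
  jvm_overhead: 11   # reserve 11% as a catch-all for untracked JVM pools
```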
In my fork of the JBP I added an 11% overhead. Though in my buildpack fork this problem may have been a bit exaggerated, because we set `-Xss256k`, leaving less unused thread-stack overhead to act as a JVM memory overhead buffer.
In my fork we also calculate the 11% off of `MEMORY_LIMIT`, since that was easier. I'm not sure whether this value should be calculated off of the heap size or off of `MEMORY_LIMIT`; I could go either way.
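To make the difference between the two options concrete, here's a small arithmetic sketch. All the numbers (container size, fixed-region sizes, the 11%) are illustrative, not values taken from the buildpack:

```python
# Sketch: overhead taken as a share of MEMORY_LIMIT vs. a share of the heap.
# All sizes in MiB; fixed regions are illustrative placeholders.
MEMORY_LIMIT = 4096
fixed = 240 + 300 * 0.25 + 10  # e.g. metaspace + 300 thread stacks at -Xss256k + direct memory

# Option A (what my fork does): overhead is 11% of MEMORY_LIMIT.
overhead_a = MEMORY_LIMIT * 0.11
heap_a = MEMORY_LIMIT - fixed - overhead_a

# Option B: overhead is 11% of the heap itself, i.e. solve
#   heap + 0.11 * heap + fixed == MEMORY_LIMIT
heap_b = (MEMORY_LIMIT - fixed) / 1.11

print(f"heap (overhead off MEMORY_LIMIT): {heap_a:.1f} MiB")
print(f"heap (overhead off heap):         {heap_b:.1f} MiB")
```

Calculating off `MEMORY_LIMIT` reserves more (here, 11% of the whole container rather than 11% of the smaller heap), so it is the more conservative of the two.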
If this issue is accepted, then #512 can also be closed.
youngm changed the title from "Add support for simple jvm overhead configurable percentage" to "Add support for simple configurable jvm memory overhead percentage" on Apr 24, 2018.
@glyn Can you please take this as a task to update the memory calculator to take the argument? When that's available I'll add the functionality to the buildpack itself.
@nebhale When you've added the functionality to the buildpack itself, please would you remember to update how to configure the buildpack in the memory calculator's README.