Container memory limits, Java -Xms, any recommendation? #57

Closed

stefcl opened this issue Nov 23, 2015 · 15 comments

@stefcl

stefcl commented Nov 23, 2015

Hello,
Until recently I used to run my dockerized Java apps without specifying the --memory and --memory-swap parameters. I thought that passing Xms and Xmx to the containerized Java app and setting them to the same value (e.g. 1024m) would be enough to keep memory usage predictable and under control.
However, after a few days of intense activity (our apps are Tomcat-based web services), docker stats would report several times the Xmx amount under the MEM USAGE column.

I started specifying limits the following way:

 --memory=<Xmx + 70m>
 --memory-swap=0

70m may seem a bit restrictive, but the actual container memory limit (which you can see in docker stats) always ends up 40-50M larger than the number you specify. Even so, I ran into stability issues: the app would eventually crash without notice. I'm not sure what a safe margin would be...
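
For reference, the full invocation looks something like this (image name and command are only illustrative; the sizes match the numbers above):

 docker run -d \
   --memory=1094m \
   --memory-swap=0 \
   my-tomcat-image \
   java -Xms1024m -Xmx1024m -jar app.jar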

Any advice for setting these limits?

@tianon
Member

tianon commented Jan 22, 2016

Is it possible that Java's -Xmx value is per-process, and that your application ends up launching multiple processes? Does docker top on the container show anything suspicious or helpful?
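
Something along these lines might help (container name is a placeholder):

 docker top my-app        # list the processes running in the container
 docker top my-app aux    # same, with full ps aux output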

@stefcl
Author

stefcl commented Jan 25, 2016

Hi, thanks for the hint, but no, it's a single-process application. I finally set my memory limit to Xmx + 120mb and it seems safe; it may be too much, but it's really difficult to tell.
Perhaps at some point (not right after startup) some of the JDK packages I am using make the system lazy-load native libs (adding a few megabytes to the footprint), or this could be a memory leak in my code; but in the latter case I'd expect to see an OutOfMemoryError somewhere in my logs or on stdout. The only message I can see while inspecting a stopped container with docker logs is something like (14) killed.
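
To check whether it was the kernel OOM killer that stopped the container, docker inspect can report it (container name is a placeholder):

 docker inspect -f '{{.State.OOMKilled}}' my-app

OOMKilled is true when the container was killed for exceeding its memory limit.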

It may also be relevant to note that I am setting the swap limit to "0". Perhaps that's not the best idea for a stable container, but on the other hand I would not want my apps to silently rely on swapping to stay operational.

I am going to give it another try with the most recent version of Docker and only the Xmx value specified, without container-level memory limits, and check what docker stats says.

@mjaverto

mjaverto commented Feb 1, 2016

@stefcl we seem to have similar issues with Java running in Docker on AWS. Your description matches our problem to a T.

I tried exactly what you have here, Xmx + around 120mb. However, on long-running tasks the Java process's RES memory still grows and eventually hits the limit.

We still haven't found a great solution other than increasing that 120mb buffer, and the container still ends up crashing on long-running tasks.

@carlossg

The JVM will always consume memory off-heap, and depending on your application the off-heap usage can be even larger than the heap; Elasticsearch, for instance, will use at least twice as much.
How much margin you need depends entirely on your app.
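
Some of the off-heap areas can at least be capped explicitly if you want a more predictable upper bound, for example (all sizes are purely illustrative):

 java -Xms1024m -Xmx1024m \
   -XX:MaxMetaspaceSize=256m \
   -XX:MaxDirectMemorySize=128m \
   -Xss512k \
   -jar app.jar

Native allocations made by the JVM itself and by JNI libraries still sit outside those caps, which is why a fixed margin is hard to pick.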

@tianon
Member

tianon commented Mar 16, 2016

@carlossg ah, thanks for clarifying 👍 Sounds like we can't really create a generic solution to this 😞

@carlossg

I have added a script in #71 that makes it easy to set Xmx when running inside a container. With JVM_HEAP_RATIO you can define how much of the container's memory you want to assign to the heap.
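
The idea is roughly the following sketch (this is only an illustration of the approach, not the actual script from #71; it assumes the cgroup v1 memory controller path):

 #!/bin/sh
 # Read the container's memory limit and hand a fraction of it to the heap.
 LIMIT=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)
 RATIO=${JVM_HEAP_RATIO:-0.5}
 XMX_MB=$(awk -v l="$LIMIT" -v r="$RATIO" 'BEGIN { printf "%d", l * r / (1024 * 1024) }')
 exec java -Xmx"${XMX_MB}m" "$@"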

@rmaugusto

I am looking for answers as well... but I think there is no magic number. I don't know whether Java 8 behaves exactly the same way, but before Java 8 you should consider:

Max memory = [-Xmx] + [-XX:MaxPermSize] + number_of_threads * [-Xss]
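
For example (numbers purely illustrative): with -Xmx1024m, -XX:MaxPermSize=256m and 200 threads at the default -Xss1m, that already comes to roughly 1024 + 256 + 200 = 1480 MB, before any other native allocations.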

@jmkgreen

jmkgreen commented Jun 7, 2016

See also moby/moby#15020, which may be relevant...?

@jhovell

jhovell commented Aug 5, 2016

No one has mentioned this blog post yet, so I thought I'd share it, as I found it very helpful and informative:

http://matthewkwilliams.com/index.php/2016/03/17/docker-cgroups-memory-constraints-and-java-cautionary-tale/

@megastef

Hi, there might be a related memory leak in OpenJDK 8, "HotSpot leaking memory in long-running requests":
https://bugs.openjdk.java.net/browse/JDK-8164293

From the bug report: the application does not crash unless placed inside a cgroup (for example) with a hard memory limit.

Customer-submitted workarounds:
 Revert to Java 7
 Disable the HotSpot JIT compiler completely with -Xint
 Set the MALLOC_ARENA_MAX environment variable to 2, which slows the memory growth but does not halt it.
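
Applied to a container, that last workaround would look something like this (image name and limit are placeholders):

 docker run -d -m 1200m -e MALLOC_ARENA_MAX=2 my-java-image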

@stefcl
Author

stefcl commented Jan 18, 2017

Thanks megastef 👍, this bug report could well be related to our issue here. However, the issue is marked as fixed in a future release due in... Q4 :-(.

I tested adding the -Xint flag, but it did not seem to make any difference (I am running the latest Java 8).

I found another related issue here:
https://bugs.openjdk.java.net/browse/JDK-8146115

Along with some documentation found elsewhere on the web, it seems to indicate that there is a general problem with Java in cgroups, partly because the JVM may make wrong assumptions about the resources available to it. Somebody suggested setting -XX:MaxRAM to match the actual memory limit of the cgroup/container, but it's unclear whether this value has any use other than determining the default heap size when you don't specify -Xmx.
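
For what it's worth, you can see what heap size the JVM derives from -XX:MaxRAM when no explicit -Xmx is given with something like this (image name and sizes are placeholders):

 docker run --rm -m 1g my-java-image \
   java -XX:MaxRAM=1g -XX:+PrintFlagsFinal -version | grep MaxHeapSize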

In earlier posts I mentioned the formula Xmx + 70m for setting memory limits, but my actual production settings are now more like Xmx + 250m for a Play2 web app.

@ant8e

ant8e commented May 2, 2017

JDK 8u131 has an experimental new VM option that may solve this: Experimental support for cgroup memory limits in container (i.e. Docker) environments

@rcoup

rcoup commented May 2, 2017

From one of the JDK issues linked from @ant8e's comment:

Default behaviour in 128MB Docker container:

root@4b4024ad1b4d:/# ./jdk8/bin/java -XX:+PrintGCDetails -XX:+Verbose -version 
  Maximum heap size 16870012928 
  Initial heap size 1054375808 
  Minimum heap size 6815736 

New behaviour:

root@4b4024ad1b4d:/# ./jdk8/bin/java -XX:+PrintGCDetails -XX:+Verbose \
  -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -version 
Setting phys_mem to the min of cgroup limit (128MB) and initial phys_mem (64353MB) 
  Maximum heap size 67108864 
  Initial heap size 6815736 
  Minimum heap size 6815736 

There are no 8u131 Docker images yet, but trying with 9-b161:

$ docker run --rm -it -m 128m openjdk:9-jre

root@a5f7fcc4a95e:/# java -XX:+PrintGCDetails -version
[0.001s][warning][gc] -XX:+PrintGCDetails is deprecated. Will use -Xlog:gc* instead.
[0.007s][info   ][gc,heap] Heap region size: 1M
[0.009s][info   ][gc     ] Using G1
[0.009s][info   ][gc,heap,coops] Heap address: 0x00000000e0c00000, size: 500 MB, Compressed Oops mode: 32-bit
openjdk version "9-Debian"
OpenJDK Runtime Environment (build 9-Debian+0-9b161-1)
OpenJDK 64-Bit Server VM (build 9-Debian+0-9b161-1, mixed mode)
[0.168s][info   ][gc,heap,exit ] Heap
[0.168s][info   ][gc,heap,exit ]  garbage-first heap   total 32768K, used 1024K [0x00000000e0c00000, 0x00000000e0d00100, 0x0000000100000000)
[0.168s][info   ][gc,heap,exit ]   region size 1024K, 2 young (2048K), 0 survivors (0K)
[0.168s][info   ][gc,heap,exit ]  Metaspace       used 3715K, capacity 4480K, committed 4480K, reserved 1056768K
[0.168s][info   ][gc,heap,exit ]   class space    used 322K, capacity 384K, committed 384K, reserved 1048576K

root@a5f7fcc4a95e:/# java -XX:+PrintGCDetails -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -version
[0.001s][warning][gc] -XX:+PrintGCDetails is deprecated. Will use -Xlog:gc* instead.
[0.001s][info   ][gc,heap] Setting phys_mem to the min of cgroup limit (128MB) and initial phys_mem (1999MB)
[0.008s][info   ][gc,heap] Heap region size: 1M
[0.012s][info   ][gc     ] Using G1
[0.013s][info   ][gc,heap,coops] Heap address: 0x00000000fc000000, size: 64 MB, Compressed Oops mode: 32-bit
openjdk version "9-Debian"
OpenJDK Runtime Environment (build 9-Debian+0-9b161-1)
OpenJDK 64-Bit Server VM (build 9-Debian+0-9b161-1, mixed mode)
[0.147s][info   ][gc,heap,exit ] Heap
[0.148s][info   ][gc,heap,exit ]  garbage-first heap   total 8192K, used 0K [0x00000000fc000000, 0x00000000fc100040, 0x0000000100000000)
[0.149s][info   ][gc,heap,exit ]   region size 1024K, 1 young (1024K), 0 survivors (0K)
[0.149s][info   ][gc,heap,exit ]  Metaspace       used 3710K, capacity 4480K, committed 4480K, reserved 1056768K
[0.150s][info   ][gc,heap,exit ]   class space    used 322K, capacity 384K, committed 384K, reserved 1048576K

@tianon
Member

tianon commented May 8, 2017

Related docs PR: docker-library/docs#900

@tianon
Member

tianon commented Jan 3, 2018

Closing given the great upstream features / recommendations that are now documented in docker-library/docs#900. 👍

@tianon tianon closed this as completed Jan 3, 2018