
Windows image build Docker runs out of memory on Windows 10 #31604

Closed
Sarafian opened this Issue Mar 7, 2017 · 23 comments


Sarafian commented Mar 7, 2017

I'm opening this issue as a bug because, from what I've read, docker is not supposed to enforce memory or disk space limitations during a container's build or run actions.

I have a Dockerfile that does a lot during the build, and I can't really share it to help with reproducing. It behaves differently on different hosts: on one of them it runs out of memory, which contradicts the above. To help troubleshoot the issue, I've added a command that reports the free memory just before it runs out of memory:

(Get-Counter -Counter "\Memory\Available MBytes").CounterSamples[0].CookedValue
  1. On my Windows 10 host (my workstation laptop), the container fails to build. The workstation has 16GB of memory. The reported free memory before the out-of-memory crash is 200MB.
  2. On a Windows Server 2016 Hyper-V instance hosted on my workstation, the container builds successfully. The Hyper-V instance is assigned 4GB of memory. The reported free memory at the same point in the build is 538MB.
  3. On a Windows Server 2016 host on Azure, the container builds successfully. The Azure VM is running with 7GB. The reported free memory at the same point in the build is 3000MB.

Each host reports the following version information:
1. Windows 10 Host

Client:
 Version:      17.03.0-ce
 API version:  1.26
 Go version:   go1.7.5
 Git commit:   60ccb22
 Built:        Thu Feb 23 10:40:59 2017
 OS/Arch:      windows/amd64

Server:
 Version:      17.03.0-ce
 API version:  1.26 (minimum version 1.24)
 Go version:   go1.7.5
 Git commit:   60ccb22
 Built:        Thu Feb 23 10:40:59 2017
 OS/Arch:      windows/amd64
 Experimental: true

2. Windows 2016 Host on Hyper-V

Client:
 Version:      17.03.0-ee-1
 API version:  1.26
 Go version:   go1.7.5
 Git commit:   9094a76
 Built:        Wed Mar  1 00:49:51 2017
 OS/Arch:      windows/amd64

Server:
 Version:      17.03.0-ee-1
 API version:  1.26 (minimum version 1.24)
 Go version:   go1.7.5
 Git commit:   9094a76
 Built:        Wed Mar  1 00:49:51 2017
 OS/Arch:      windows/amd64
 Experimental: false

3. Windows 2016 Host on Azure

Client:
Version:      1.12.2-cs2-ws-beta
API version:  1.25
Go version:   go1.7.1
Git commit:   050b611
Built:        Tue Oct 11 02:35:40 2016
OS/Arch:      windows/amd64

Server:
Version:      1.12.2-cs2-ws-beta
API version:  1.25
Go version:   go1.7.1
Git commit:   050b611
Built:        Tue Oct 11 02:35:40 2016
OS/Arch:      windows/amd64

I understand I've not provided all the information necessary to reproduce, but the artifacts referenced in the container are not freely available. What I can additionally say is that the same build succeeds on the Windows 10 host (case 1) when building on microsoft/windowsservercore:latest but fails when building on asarafian/mssql-server-windows-express:2014SP2. The difference between the two is the extra disk space required for SQL Server 2014SP2 and the memory that the sql server process takes. Keep in mind that the sql server has one very small database attached, so it's strange that it makes that big of a difference.

I'm more than willing to help troubleshoot this issue, but I need some help on how. My feeling is that docker and containers behave differently between Windows 10 and Windows Server hosts. The Windows 10 machine has the most memory available of all of them and reports 8-9GB free when the out of memory error is thrown. On the other hand, Windows Server 2016 manages better with less memory available to the host. As setting up docker differs between Windows 10 and Windows Server, is it possible that there are some limitations on Windows 10? If so, then I would consider a documentation fix this bug's resolution, because I can't find any relevant information besides limiting the memory available to the container.


thaJeztah commented Mar 7, 2017

@jhowardmsft


jhowardmsft commented Mar 7, 2017

I think this is a case of documenting how memory management differs between Hyper-V and Windows Server containers. See MicrosoftDocs/Virtualization-Documentation#477

ping @PatrickLang

thaJeztah added the area/docs label Mar 7, 2017


Sarafian commented Mar 7, 2017

Aren't Hyper-V containers necessary only when running Linux containers on Windows 10?

When running Windows based containers on Windows hosts, I don't believe Hyper-V is involved. At least I don't see any references to it.


friism commented Mar 7, 2017

@Sarafian no, on Windows 10, Docker uses Hyper-V isolation to run Windows containers.


Sarafian commented Mar 7, 2017

@friism I'm aware of the hyper-v isolation, but that is a special case of containers: one where the docker host literally creates a new VM and launches the operating system. But that only happens when the --isolation=hyperv flag is specified. In the examples above I didn't specify --isolation, so I should only be limited by the maximum memory the operating system can offer to the container host.

To help keep context: in my three examples, the docker engine is running on

  1. Windows 10 with 16GB RAM.
  2. Windows Server 2016 in Hyper-V VM with 4GB RAM assigned that is hosted on same windows 10 with 16GB RAM.
  3. Windows Server 2016 on Azure VM with 7GB RAM.

Although case 1 seems to have the most memory available to the host operating system, and therefore to the container's processes, it is the one that fails and runs out of memory.


friism commented Mar 7, 2017

@Sarafian on Windows 10 running Windows containers, --isolation=hyperv is the default and the only isolation mode that works.

C:\code> docker info -f "{{ .Isolation }}"
hyperv

johnstep commented Mar 7, 2017

@Sarafian Without specifying --isolation, you get the default:

Specify isolation technology for container (--isolation)

Here is a sample of the error message from Docker on Windows 10:

$ docker run --isolation=process --rm microsoft/nanoserver ipconfig
docker: Error response from daemon: Windows client operating systems only support Hyper-V containers.
See 'docker run --help'.

jhowardmsft commented Mar 7, 2017

FWIW - on client, you can increase the memory of the Hyper-V container similar to Dockerfile.windows, e.g. see https://github.com/docker/docker/blob/master/Dockerfile.windows#L69


Sarafian commented Mar 8, 2017

@friism, @johnstep and @jhowardmsft thank you for the explanation; with a better understanding, the problem is fixed. I pass 2GB of memory as a parameter when a client OS is detected. For anyone landing on this issue with the same problem, here is my example from Invoke-DockerBuild.ps1:

# Determine whether the host is a client or server edition of Windows
$caption=(Get-CimInstance Win32_OperatingSystem).Caption
$regex="Microsoft Windows (?<Server>(Server) )?((?<Version>[0-9]+( R[0-9]?)?) )?(?<Type>.+)"
if($caption -match $regex)
{
    $isWindowsClient=$Matches["Server"] -eq $null
}
else
{
    throw "Could not determine if the operating system is client or not"
}

if($isWindowsClient)
{
    $memory="2GB"
    Write-Warning "Client operating system detected. Container will run with Hyper-V isolation. Increasing the memory size to $memory"
    $dockerArgs+=@(
        "-m"
        $memory
    )
}

With regards to this issue, it can be closed if you want, because it was my misunderstanding. As a suggested enhancement, I would try to make the isolation difference between the Client and Server variants of Windows clearer up front. Before creating this issue I had the wrong understanding, and other people seem to have it too. Therefore, I would suggest increasing the scope of the message for when working with Windows.

Thank you all!


thaJeztah commented Mar 10, 2017

@jhowardmsft @johnstep if the -m / --memory option is not supported with --isolation=process, can we produce an error if someone tries to use it? Alternatively, if only process is supported, show a warning in the output of docker info, like we do for Linux here:

https://github.com/thaJeztah/docker/blob/19215597982232f65dcbc873e54e632b99cddecc/cli/command/system/info.go#L280-L316


jhowardmsft commented Mar 10, 2017

@darrenstahlmsft PTAL ^^ AFAIK it is supported on both. For Hyper-V containers, it is applied to the memory of the utility VM with an algorithm as per the link I provided above. For Windows Server containers, it is a constraint on the job object.


darstahl commented Mar 10, 2017

@jhowardmsft Correct. Windows Server container memory is an optional limit, with the default being no limit. You can specify -m 2GB in the case of both Hyper-V containers and Windows Server containers and both will respect the limit.


thaJeztah commented Mar 10, 2017

That's good news, thanks for clarifying!


Sarafian commented Mar 11, 2017

I would like to clarify a small difference in the -m/--memory option for an out-of-the-box configuration between Client and Server OS and their respective default isolation modes.

  • With a server OS and the default isolation mode process, it is only used to limit the memory available to the container.
  • With a client OS and the default isolation mode 'hyperv', it is used to increase the available memory, as the default setting of 1GB (I presume) is already too low, especially for microsoft/windowsservercore based images.

This is one of the reasons I got confused initially. I've read the general statements that

  • docker doesn't enforce memory restrictions by default.
  • -m/--memory enforces optional memory restrictions.

And I made the logical deduction that something was wrong and created this issue. In my opinion, the documentation needs to improve a bit to make it clearer when, as a developer, you are working with windows containers on a client OS, e.g. Windows 10.
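The asymmetry described above can be sketched as a small wrapper that only adds -m when the daemon uses Hyper-V isolation. This is POSIX shell purely for illustration; the image name is an assumption, and the isolation value is hard-coded where a real script would query docker info -f "{{ .Isolation }}":

```shell
#!/bin/sh
# A real script would query the daemon:
#   ISOLATION=$(docker info -f '{{ .Isolation }}')
# Hard-coded here so the sketch runs without a Docker daemon.
ISOLATION="hyperv"

DOCKER_ARGS="build -t myimage ."
if [ "$ISOLATION" = "hyperv" ]; then
    # Hyper-V isolation (the client-OS default): raise the utility VM
    # memory above the implicit 1GB default.
    DOCKER_ARGS="build -m 2GB -t myimage ."
fi

echo "docker $DOCKER_ARGS"
```

On a server OS reporting process isolation, the same -m flag would instead act as an upper limit rather than an increase.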


paulsapps commented Apr 4, 2017

I agree that it is completely unclear that containers ONLY work with Hyper-V isolation on Windows 10, and thus have the "hidden" 1GB memory limit.

Can I also assume we are probably missing a CPU usage/cores limit if Hyper-V is involved? Is there a 1-core limit by default?


FFLSH commented May 15, 2017

+1 - This caused days of head scratching, wondering why our containers were near-constantly sluggish when the host machine had gigabytes of memory free and we had not specified any limits on the containers. Adding mem_limit=4G to the service in the compose file fixed this.

The appropriate fix would be to remove the magical 1GB limit. Until then, at least update the documentation, which currently states contradictory information:

By default, a container has no resource constraints and can use as much of a given resource as the host’s kernel scheduler will allow
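For reference, the compose-file workaround mentioned above looks roughly like this. This is a sketch using compose file version 2 syntax (mem_limit moved under deploy.resources in version 3); the service name and image are illustrative:

```yaml
version: '2.4'
services:
  app:                  # illustrative service name
    image: microsoft/windowsservercore
    mem_limit: 4G       # lifts the Hyper-V container above the 1GB default
```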


thaJeztah commented May 15, 2017

@FFLSH can you open an issue for that in the documentation repository? (There are buttons to "request documentation changes" or to "edit the page", which allow you to open a pull request.) That part of the documentation describes the behaviour on Linux, which is different in this case.


MikhailTymchukDX commented Aug 29, 2017

Not sure where to post my question, because it involves both docker-compose and this memory issue.

I try to increase available memory of a container on Windows 10 using docker-compose.yml, like @FFLSH did:

version: '3.1'
...
deploy:
  resources:
    limits:
      memory: 3g

The problem is that when I check total memory inside the container, it reports 1GB.
So what is wrong here: is this setting not mapped to the docker run --memory option, or did I use it in the wrong way?

> docker version
Client:
 Version:      17.06.1-ce
 API version:  1.30
 Go version:   go1.8.3
 Git commit:   874a737
 Built:        Thu Aug 17 22:48:20 2017
 OS/Arch:      windows/amd64

Server:
 Version:      17.06.1-ce
 API version:  1.30 (minimum version 1.24)
 Go version:   go1.8.3
 Git commit:   874a737
 Built:        Thu Aug 17 23:03:03 2017
 OS/Arch:      windows/amd64
 Experimental: true

fizxmike commented Apr 25, 2018

What's the status on this? Is there some workaround I'm missing? I'm on Windows Server Core 1709, running the Docker EE preview. Neither the -m nor the --memory parameter works for my Linux images (with or without LCOW enabled).

PS C:\> docker run -it -m 2g --rm busybox free -m
             total       used       free     shared    buffers     cached
Mem:           972        152        819         20          0         20
-/+ buffers/cache:        132        840

I have an image that peaks at 1.5GB of memory usage (when I docker stats it on a linux host). On docker for windows EE, the container crashes due to an out of memory error (obviously due to only 972MB being available).


jhowardmsft commented Apr 25, 2018

Hold on, there are three completely separate scenarios here from the added me-toos. Let's try to keep them isolated and not overload one issue. From a pure moby/moby perspective:

a) Windows containers running on Server OS's using the default "process" isolation.
b) Windows containers running on Client OS's using the default (and only available) "Hyper-V" isolation, or running on Server OS's using Hyper-V isolation (i.e. --isolation=hyperv)
c) Linux containers on Windows (aka LCOW) which are by definition Hyper-V isolation using a 'utility' VM.

There's also d) which is the Docker-For-Windows non-experimental "LCOW" where a "real" or "not 'utility'" Linux VM is used to host containers running on Windows. I'm not going to comment on that mode in which D4W runs as it's a closed-source solution owned/maintained by Docker Inc.

On top of that, there's docker compose which is a different beast entirely, and I'm also not going to comment on. Any issues there need to be addressed in that repo.

a) The -m option should work fine
b) The -m option should work fine - here, the memory is applied to the utility VM hosting the container. The container itself is not constrained inside the UVM.
c) The -m option (along with many other parameters) is not hooked up and defaults to 1GB. Note also LCOW is experimental (requires the daemon to be started with --experimental) and is NOT production ready. Many pieces remain missing from this still.

For c), while you can build a private daemon to set the memory, the ultimate answer isn't that simple.
There really needs to be two "memory" CLI/API parameters - one for the size of the utility VM, one to apply to the container (or containers from RS5+) running inside the utility VM. This work is still undefined and requires agreement from docker maintainers on how this should look. HCS (the underlying interface to the Windows platform) is capable of supporting this, and I'm doing work to enable this in the go-binding for HCS (HCSShim) for RS5 support. It will still be a while before agreement is reached on what any docker (moby) API/CLI should look like for this. An interim PR (I can dig out link, but should be easy enough to find) to force -m to refer to both the UVM and container memory was not accepted and remains pending.

In other words, the only workaround for c) is to build your own private dockerd.exe.


fizxmike commented Apr 25, 2018

Sorry to add noise... I've been fishing on various issues for any information I could find, or for anyone who could give me feedback. Thank you @jhowardmsft for detailing the scenarios so well, and confirming my suspicion about c). Can anyone point me at information about d), the "non-experimental" LCOW? Google might have a hard time with that...

... I'm only concerned with Linux images on windows server since I'm unfortunate enough to have a customer who will only sysadmin windows server.

(I won't comment on the failure of D4W to conform to the principle of least surprise. I lied. I'd submit that -m should be the only API needed, and the user should be informed of the behavior/consequences in the documentation. No one expects Windows to behave like Linux, and I'm sure we all have a hunch that the Windows kernel will be replaced with a custom Linux kernel eventually, so why add a second memory API to accommodate a temporary situation? Just say "in situation c), -m will allocate fixed memory in the utility VM; the default is 1GB." -- The situation now is that -m does nothing and containers run out of memory and crash.)

Also, @jhowardmsft, is this the PR you mention?


anarkia7115 commented Mar 15, 2019

In my case, docker run -m <larger memory> ... is not working.
The following setting works for me instead:
Docker Settings -> Advanced -> Memory

ref: https://stackoverflow.com/a/55174331/8936782


jhowardmsft commented Mar 15, 2019

This looks like a D4W issue rather than moby. Can you open the issue there? Thanks, @anarkia7115.
