
Question on Resource Limits? #471

Closed
qrpike opened this Issue Apr 24, 2013 · 30 comments


qrpike commented Apr 24, 2013

I see there is now memory limiting.

I was curious if/how/when we can limit CPU and storage on the containers.

Thanks,

Contributor

jpetazzo commented Apr 24, 2013

CPU limiting will be straightforward, thanks to the cpu.shares cgroup
setting.
However, storage is trickier: cgroups don't have hooks for that.
Solutions include:

  1. use an underlying filesystem supporting per-directory quotas (e.g. XFS +
    project quotas)
  2. switch from AUFS to btrfs or zfs, which support per-branch quotas
  3. meter usage on a regular basis (hourly, daily...) and flag containers
    for abuse

Solution (1) requires some hooks within the container creation code.
Solution (2) is obviously even more invasive.
Solution (3) is expensive (I/O-wise) and can still let a container fill a disk
(if it fills it faster than the check cycle).
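
Solution (3) above is simple enough to sketch. The following is a toy
metering pass, assuming container filesystems live under per-container
directories (the directory layout and threshold are invented for the example):

```python
import os

def dir_usage_bytes(path):
    """Walk a directory tree and sum the sizes of all files under it."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.lstat(os.path.join(root, name)).st_size
            except OSError:
                pass  # file vanished between listdir and stat; skip it
    return total

def flag_abusers(container_dirs, limit_bytes):
    """Return the container directories whose usage exceeds the limit."""
    return [d for d in container_dirs if dir_usage_bytes(d) > limit_bytes]
```

Run from cron (hourly, daily...), this is exactly the "expensive and late"
check described above: a container can still blow past the limit between
two runs.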


qrpike commented Apr 24, 2013

Sounds like the storage limitation is a little ways off.

Also, is it possible to mount a shareable file/drive between containers, e.g. NFS shares or something?

Thanks,

Contributor

jpetazzo commented Apr 24, 2013

Hmm, in which direction do you want the share to happen?


qrpike commented Apr 25, 2013

So basically, to have a PaaS the nodes/drones need to be able to read/write to a central location. Obviously some choose object storage like S3 or GridFS. But for PHP and older applications, the apps usually use the disk to store things.

So basically I would do a docker run... and then mount the folder/drive/NFS share to that container at mount point X. And if I mount that same thing to another container, they can both see/read/write it.

Thanks for the fast responses!

Contributor

jpetazzo commented Apr 25, 2013

Indeed!
This is probably within the scope of persistent data storage (issue #111,
dotcloud#111).
Warning: this is a very long thread; I advise you to brace yourself before
reading it in full!


Contributor

steakknife commented May 3, 2013

Read this and #111.

Metering IOPS (priorities, limits) on storage devices would clearly be awesome.

Most of the time, apps aren't CPU- or network-bound; all other things being equal, they are usually IOPS-bound, especially on boxen lacking SSDs or operating from non-local storage.

Contributor

jpetazzo commented May 3, 2013

Regarding metered IOPS, there is the cgroups blkio controller.
It lets you define per-device limits in iops and bps, separately for reads
and writes.
It has one severe downside, though: it applies only to synchronous
operations, which means that it will be wildly inaccurate:

  • most accesses done through mapped memory won't be accounted for,
  • even normal write() calls will only be accounted for once the dirty ratio
    is exceeded (and the kernel turns them into synchronous writes).

However, this might improve in future kernel versions.
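
Concretely, those blkio limits are set by writing "major:minor value" lines
into the cgroup's throttle files. A minimal sketch that only builds the
writes (so the file names and line format are visible), assuming a cgroup v1
mount at /sys/fs/cgroup/blkio and a made-up cgroup name:

```python
BLKIO_ROOT = "/sys/fs/cgroup/blkio"  # assumption: cgroup v1 mount point

def blkio_throttle_writes(cgroup, dev, read_iops=None, write_iops=None,
                          read_bps=None, write_bps=None):
    """Build (file, line) pairs for the blkio throttle knobs.

    dev is a "major:minor" device number string, e.g. "8:0".
    Each knob is set by writing "major:minor value" to the matching file.
    """
    knobs = {
        "blkio.throttle.read_iops_device": read_iops,
        "blkio.throttle.write_iops_device": write_iops,
        "blkio.throttle.read_bps_device": read_bps,
        "blkio.throttle.write_bps_device": write_bps,
    }
    return [("%s/%s/%s" % (BLKIO_ROOT, cgroup, fname), "%s %d" % (dev, val))
            for fname, val in knobs.items() if val is not None]
```

Actually applying the pairs (open each file and write the line) requires
root and, per the caveat above, only throttles synchronous I/O.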


Contributor

steakknife commented May 5, 2013

Probably should continue this part of the thread on the LKML and cc: zfsonlinux, Ts'o.


cantino commented May 7, 2013

What's the status of setting CPU limits on containers?

Contributor

jpetazzo commented May 7, 2013

@cantino: pull request #551 provides CPU quotas.

cantino commented May 13, 2013

Awesome, thanks. Looks like it got merged!

Contributor

creack commented May 13, 2013

Indeed :) Closing.

@creack creack closed this May 13, 2013

chenyf commented Aug 9, 2013

In our product, we added disk and network limits to Docker; the idea is from Warden.

For the disk limit: we create a 'worker' user account in each container and assign a user id to 'worker'; on the host, we use setquota to limit this userid.

For the network limit: we use tc to control the host-side veth device.
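
A rough sketch of how those two host-side commands could be assembled. The
setquota and tc/tbf argument orders are the standard ones, but the uid,
rates, device names, and filesystem path are invented examples:

```python
def setquota_cmd(uid, soft_kb, hard_kb, filesystem):
    """setquota -u <uid> <block-soft> <block-hard> <inode-soft> <inode-hard> <fs>

    Inode limits are left at 0 (unlimited) in this sketch.
    """
    return ["setquota", "-u", str(uid),
            str(soft_kb), str(hard_kb), "0", "0", filesystem]

def tc_rate_limit_cmd(veth, rate="10mbit", burst="32kbit", latency="400ms"):
    """Attach a simple TBF qdisc to the host-side veth to cap its bandwidth."""
    return ["tc", "qdisc", "add", "dev", veth, "root",
            "tbf", "rate", rate, "burst", burst, "latency", latency]
```

The returned argument lists would be handed to something like
subprocess.check_call on the host, with root privileges.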

Debnath Sinha commented Aug 10, 2013

@chenyf Can multiple users in Linux have the same username but different userids? When a user/userid is created in the container, isn't it in its own namespace? I'm curious if you could provide a little more detail because I need to address the same issue. Thx!

Contributor

jpetazzo commented Aug 11, 2013

Hi,

In fact, you don't "create a user" on a UNIX system. When you "create a
user", it just means that you add an entry in /etc/passwd, to map the
numeric ID with the user name. But you can chown and setuid
numerically, even if the user id does not exist in the user database. In
fact, when you chown with a user name, it will first resolve the user
name to a numeric ID.

It also means that when you do e.g. ls -l, it will show the user name
according to the local database. In other words, if UID 1000 is "joe" in
your host system, but "jack" in a container, when you do ls -l in the
container you will see that the files belong to "jack" but if you check the
same directory from within the host, it will show that they belong to
"joe". For that reason, if you use rsync across chroot or container
boundaries, it is strongly recommended to use --numeric-ids to avoid a
user mapping error.

I don't know the specific details of the solution implemented by @chenyf;
but it is likely that they use the same username in each container, but
mapped to a different numeric ID, with a different entry in /etc/passwd
for each container.

Note: the newer "user namespace" lets you map container user IDs to
different host user IDs, using a translation mapping in /etc/subuid in
the host. It is another possibility.
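
To make the uid-vs-name point concrete, here is a toy resolver over
/etc/passwd-style lines. The two sample databases are invented, but they
show how the same numeric ID maps to "joe" on the host and "jack" in the
container:

```python
def uid_to_name(passwd_text, uid):
    """Resolve a numeric uid to a login name from /etc/passwd-style text.

    Returns the bare uid as a string when there is no entry, which is
    what ls shows when it cannot resolve an owner.
    """
    for line in passwd_text.splitlines():
        fields = line.split(":")  # name:passwd:uid:gid:gecos:home:shell
        if len(fields) >= 3 and fields[2] == str(uid):
            return fields[0]
    return str(uid)

# Invented sample databases: same uid 1000, different names.
host_passwd = "root:x:0:0:root:/root:/bin/bash\njoe:x:1000:1000::/home/joe:/bin/bash"
container_passwd = "root:x:0:0:root:/root:/bin/sh\njack:x:1000:1000::/home/jack:/bin/sh"
```

The files on disk store only the number 1000; which name you see depends
entirely on which database does the lookup.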


chenyf commented Aug 15, 2013

Hi, actually the idea is from Cloud Foundry Warden. We maintain a UID pool; we get a uid from the pool when we create a container. Inside the container, we create a user account with this uid; outside the container, we use setquota to set a disk limit for this uid. We return the uid to the pool when we delete the container.

chenyf commented Aug 15, 2013

Jerome is correct; each container can have the same user account, like 'worker', but with a different uid.

Contributor

jpetazzo commented Aug 15, 2013

Thanks for the explanation!

Thanks for all the details! Was very helpful because I think I may need
something similar in the future...


My colleague just sent this to the mailing list:

As a fun side project, some friends and I will be adding support for cpu resource limiting / guarantees through cgroups and cpuset. The existing solution provides for relative cpu resource limiting, but we want absolute resource guarantees. A sample use case for this (and indeed, our use case) is for creating environments in which to grade code for performance challenges; we want every container to have exactly 2 cpus and X memory (allowing extra cpu could artificially improve the performance of some programs, etc).
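
For the "exactly 2 CPUs" use case, the cpuset controller is driven by
writing a CPU list and a memory-node list into the container's cgroup. A
minimal sketch that builds those writes, assuming a cgroup v1 mount at
/sys/fs/cgroup/cpuset and an invented cgroup name:

```python
CPUSET_ROOT = "/sys/fs/cgroup/cpuset"  # assumption: cgroup v1 mount point

def cpuset_pin_writes(cgroup, cpus, mems="0"):
    """Build the (file, value) writes that pin a cgroup to specific CPUs.

    cpus is a list of CPU indices; cpuset.cpus takes a comma-separated
    list (the kernel also accepts ranges like "0-1").
    """
    base = "%s/%s" % (CPUSET_ROOT, cgroup)
    return [
        (base + "/cpuset.cpus", ",".join(str(c) for c in cpus)),
        # cpuset.mems must be set before any task can join the cgroup
        (base + "/cpuset.mems", mems),
    ]
```

Unlike cpu.shares (relative weights), a cpuset is an absolute assignment:
a cgroup pinned to CPUs 2 and 3 can never run anywhere else, which is the
guarantee the grading use case wants.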

Taytay commented Oct 23, 2013

@chenyf, any chance you could elaborate or show how you track your UID pool? This sounds like a great workaround until true disk limits are in place, but I think this stuff is a bit over my head, so any other details or examples would be greatly appreciated.

Contributor

denibertovic commented Nov 29, 2013

Does anyone have any info on this? I think the right way would be to use user namespaces and then quotas. The problem is that I'm not seeing them enabled in stock debian/ubuntu kernels. :(

Contributor

jpetazzo commented Dec 2, 2013

You can also use the devicemapper plugin; each container will be allowed to use up to a certain amount of disk space.

chenyf commented Dec 9, 2013

@Taytay We have a daemon process which maintains a UID pool and persists it to disk.
Each time we create a container, we get a UID from this daemon;
each time we delete a container, we return the UID to this daemon.
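
A toy version of such a UID pool, persisted as JSON; the uid range and file
layout are invented for the sketch:

```python
import json
import os

class UidPool:
    """Hands out uids from a fixed range and remembers allocations on disk."""

    def __init__(self, path, lo=10000, hi=10999):
        self.path, self.lo, self.hi = path, lo, hi
        self.used = set()
        if os.path.exists(path):  # reload state after a daemon restart
            with open(path) as f:
                self.used = set(json.load(f))

    def _save(self):
        with open(self.path, "w") as f:
            json.dump(sorted(self.used), f)

    def allocate(self):
        """Container creation: hand out the lowest free uid."""
        for uid in range(self.lo, self.hi + 1):
            if uid not in self.used:
                self.used.add(uid)
                self._save()
                return uid
        raise RuntimeError("UID pool exhausted")

    def release(self, uid):
        """Container deletion: return the uid to the pool."""
        self.used.discard(uid)
        self._save()
```

A real daemon would also need locking around allocate/release; this sketch
only shows the allocate-on-create / release-on-delete cycle and the
persistence across restarts.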

Contributor

kieslee commented Mar 20, 2014

@jpetazzo, if I use devicemapper, how can I set the amount of disk space for each container?

Contributor

jpetazzo commented Mar 24, 2014

@kies, by default, each container will get 10 GB of total disk usage.

If you need to change that amount, you can check http://jpetazzo.github.io/2014/01/29/docker-device-mapper-resize/, section "Growing containers". (There are other and better ways to do it, but it will get you started in the right direction!)

jpetazzo, great write up on growing containers!! Great docker class in San Fran as well :)

@jpetazzo, I followed your blog on growing containers and I am getting an error while using resize2fs.
Following is the error message:

resize2fs: Device or resource busy while trying to open /dev/mapper/docker-202:2-5113011-8c2ec367dbd416c013fbeecd2425cc515973742f74f7b0f73a7c6c78084527fe
Couldn't find valid filesystem superblock.

I am running Docker on an Oracle Linux VM. Please let me know how this can be solved!

We are trying the uid concept for limiting disk quotas. Since Swarm is the scheduler we use, the way I am trying to tackle this is to keep a mapping file of uids at the Swarm server level, and use constraints at the Docker daemon level so that Swarm schedules containers with a specific uid onto the node where that quota is applied. Thoughts/suggestions?

Member

thaJeztah commented Apr 1, 2016

@shashankmjain the GitHub issue tracker is not a general discussion / support forum. Can you ask your question in the #docker IRC channel, on https://forums.docker.com, or on StackOverflow? Those are more suitable for these questions.
