--blkio-weight doesn't take effect in Docker version 1.8.1 #16173

SunWeicheng0001 opened this Issue Sep 9, 2015 · 14 comments

SunWeicheng0001 commented Sep 9, 2015

Description of problem:
docker version:

Client:
 Version:      1.8.1
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   d12ea79
 Built:        Thu Aug 13 02:40:42 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.8.1
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   d12ea79
 Built:        Thu Aug 13 02:40:42 UTC 2015
 OS/Arch:      linux/amd64

docker info:

Containers: 3
Images: 211
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 217
 Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.19.0-15-generic
Operating System: Ubuntu 15.04
CPUs: 1
Total Memory: 7.791 GiB
Name: ubuntu-docker
ID: LKJY:A7BM:K4AQ:KBAL:2UK2:7M6T:ZUAP:3IUR:K7KJ:SNYW:FU6E:5SRF
WARNING: No swap limit support

uname -a: 3.19.0-15-generic

I started two containers on an Ubuntu 14.04 host, a virtual machine on Hyper-V (I also tested on a VMware VM, with the same result). The containers were assigned different blkio weights, as follows:

$ docker run -ti --name c1 --blkio-weight 300 ubuntu:14.04 /bin/bash
$ docker run -ti --name c2 --blkio-weight 600 ubuntu:14.04 /bin/bash

Then I ran the following dd command in both containers at the same time to compare their throughput:

time dd if=/dev/zero of=test.out bs=1M count=1024 oflag=direct

However, the throughput of the two containers is almost the same.
C1:

root@0fb961adc196:/# time dd if=/dev/zero of=test.out bs=1M count=1024 oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.8467 s, 581 MB/s

real    0m1.850s
user    0m0.000s
sys 0m0.416s

C2:

root@3e961e42121a:/# time dd if=/dev/zero of=test.out bs=1M count=1024 oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.83314 s, 586 MB/s

real    0m1.840s
user    0m0.000s
sys 0m0.412s

The Docker documentation says: "You'll find that the proportion of time is the same as the proportion of blkio weights of the two containers."
Can anyone figure out what the problem is? Thank you!

coolljt0725 (Contributor) commented Sep 9, 2015

Please make sure your IO scheduler is cfq:

cat /sys/block/sdx/queue/scheduler

(replace sdx with your block device)
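The bracketed entry in that file is the active scheduler. A minimal sketch of extracting it (the sample line is illustrative; the device name varies by system — sda, vda, xvda, ...):

```shell
# Print the active I/O scheduler, i.e. the [bracketed] entry in
# /sys/block/<dev>/queue/scheduler. A sample line is used for illustration;
# in practice, pipe the real file through the same filter.
line="noop [deadline] cfq"
active=$(echo "$line" | grep -o '\[[a-z_-]*\]' | tr -d '[]')
echo "$active"   # prints: deadline
```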

SunWeicheng0001 commented Sep 9, 2015

@coolljt0725
I followed your suggestion and changed the IO scheduler to cfq, which does seem to take effect. However, it is not the result I expected.
I started two containers as follows

$ docker run -ti --name c1 --blkio-weight 10 ubuntu:14.04 /bin/bash
$ docker run -ti --name c2 --blkio-weight 1000 ubuntu:14.04 /bin/bash

but the output is as follows
C1:

root@75dcf0be5526:/# time dd if=/dev/zero of=test.out bs=1M count=1024 oflag=direct      
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.49889 s, 430 MB/s

real    0m2.501s
user    0m0.000s
sys 0m0.404s

C2:

root@41435bed761f:/# time dd if=/dev/zero of=test.out bs=1M count=1024 oflag=direct      
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.28073 s, 838 MB/s

real    0m1.286s
user    0m0.000s
sys 0m0.396s

Are there any other potential problems? Or is this just how the weight proportion behaves? Thanks!
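For reference (not from the thread): under CFQ proportional weighting, each cgroup is entitled to roughly weight_i / Σweights of the device time while both are continuously contending, so weights of 10 vs 1000 should give about a 1:100 split, not the ~1:2 seen above. A quick arithmetic sketch:

```shell
# Expected bandwidth shares under CFQ proportional weighting (integer %).
w1=10; w2=1000
total=$((w1 + w2))
echo "c1: $((100 * w1 / total))%"   # prints: c1: 0%  (~1%, rounds down)
echo "c2: $((100 * w2 / total))%"   # prints: c2: 99%
```

A single 1 GB dd finishes quickly, so the faster container stops contending partway through and the slower one then gets the whole device, which compresses the observed ratio.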

coolljt0725 (Contributor) commented Sep 10, 2015

@SunWeicheng0001 run this loop in the container:

while true;do time dd if=/dev/zero of=test.out bs=1M count=100 oflag=direct;done

This is what I see in my testing:

104857600 bytes (105 MB) copied, 2.79353 s, 37.5 MB/s
real    0m2.795s
user    0m0.000s
sys     0m0.053s
100+0 records in
100+0 records out
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 206.645 s, 507 kB/s
real    3m26.648s
user    0m0.002s
sys     0m0.060s
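To quantify the split between the two containers, one option (not from the thread) is to redirect each loop's dd stderr to a per-container log and average the MB/s figures with awk. The sample log lines below are illustrative:

```shell
# Average the MB/s values from dd "copied" lines captured in a log.
# In practice: run the dd loop with 2>>dd.log inside each container,
# then feed that log through the awk filter below.
log='104857600 bytes (105 MB) copied, 2.0 s, 50.0 MB/s
104857600 bytes (105 MB) copied, 1.0 s, 100.0 MB/s'
echo "$log" | awk '/copied/ { sum += $(NF-1); n++ } END { printf "%.1f MB/s\n", sum/n }'
# prints: 75.0 MB/s
```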

SunWeicheng0001 commented Sep 10, 2015

@coolljt0725 Wow! How did you do that?
Did you just start two containers with blkio.weight 10 and 1000 and run the command while true;do time dd if=/dev/zero of=test.out bs=1M count=100 oflag=direct;done in both containers at the same time?
However, I get the same result as before. Please tell me how to reproduce your test, thanks a lot!

coolljt0725 (Contributor) commented Sep 10, 2015

@SunWeicheng0001

Just start two containers with 10 and 1000 blkio.weight and run the command while true;do time dd if=/dev/zero of=test.out bs=1M count=100 oflag=direct;done in both containers in the same time?

Yes, that's exactly what I did.

sumitkgaur commented Sep 22, 2015

Does it work for you too, @SunWeicheng0001? It never works for me; the bandwidth just gets divided 50-50.

I tried this on an AWS Ubuntu VM, as below:

$ docker run -ti --name c1 --blkio-weight 300 ubuntu:14.04 /bin/bash
$ docker run -ti --name c2 --blkio-weight 600 ubuntu:14.04 /bin/bash
and
while true;do time dd if=/dev/urandom of=test1.out bs=1M count=50 oflag=direct; rm test1.out;done

jessfraz (Contributor) commented Oct 2, 2015

See also my comment here: #14466 (comment)

odedpriva commented Oct 31, 2017

This is still not working on:

 Version:      17.10.0-ce
 API version:  1.33 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   f4ffd25
 Built:        Tue Oct 17 19:05:23 2017
 OS/Arch:      linux/amd64
 Experimental: true


thaJeztah (Member) commented Oct 31, 2017

I'm also seeing the same result as @odedpriva on a DigitalOcean droplet; following @coolljt0725's suggestion, it looks like no scheduler is set:

cat /sys/block/vda/queue/scheduler 
none

The cgroup looks to be set correctly for the container:

cat /sys/fs/cgroup/blkio/docker/a144caf4fa2b6dd1878685028f4c035af21919f903ef9d14aefe17c3f3275949/blkio.weight
10

This droplet is using overlay2 as storage driver (in case it's relevant here);

ping @coolljt0725 any suggestions?

coolljt0725 (Contributor) commented Oct 31, 2017

@odedpriva Have you set your IO scheduler to cfq?

odedpriva commented Oct 31, 2017

This is my current configuration; is it OK?

/ # cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
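Worth noting (editorial aside): in that output the bracketed entry marks the active scheduler, so "noop [deadline] cfq" means deadline, not cfq, is in use, and blkio weights only apply under cfq. A sketch of switching at runtime (sda is a placeholder for your device; requires root; not persistent across reboots — this is a system configuration change, not a runnable test):

```shell
# Activate cfq on the device so --blkio-weight can take effect.
# Replace sda with your block device; run as root; resets on reboot.
echo cfq > /sys/block/sda/queue/scheduler

# Verify: cfq should now be the bracketed (active) entry.
cat /sys/block/sda/queue/scheduler
```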

hzxuzhonghu commented Nov 23, 2017

I'm also seeing the same issue. No matter how I set --blkio-weight-device "/dev/xvda:100" or --blkio-weight 100, the write speed of the two containers is the same.

root@SZX1000353068:/mnt/go/src/k8s.io/kubernetes# docker version
Client:
 Version:      17.06.1-ce
 API version:  1.30
 Go version:   go1.8.3
 Git commit:   874a737
 Built:        Thu Aug 17 22:53:09 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.06.1-ce
 API version:  1.30 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   874a737
 Built:        Thu Aug 17 22:51:03 2017
 OS/Arch:      linux/amd64
 Experimental: false

hzxuzhonghu commented Nov 23, 2017

root@SZX1000353068:/sys/fs/cgroup/blkio/docker# cat /sys/block/xvda/queue/scheduler
noop deadline [cfq]

shalini-b commented Apr 30, 2018

Is there any update on this issue? I am facing the same problem :(
