Performance degradation in data volume inside docker with kernel 3.16 #25656

Closed
arthurlogilab opened this Issue Aug 12, 2016 · 7 comments

arthurlogilab commented Aug 12, 2016

Output of docker version:

# docker version
Client:
 Version:      1.11.1
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   5604cbe
 Built:        Tue Apr 26 23:11:07 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.11.1
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   5604cbe
 Built:        Tue Apr 26 23:11:07 2016
 OS/Arch:      linux/amd64

Output of docker info:

# docker info
Containers: 39
 Running: 9
 Paused: 0
 Stopped: 30
Images: 1095
Server Version: 1.11.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 846
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins: 
 Volume: local
 Network: bridge null host
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 8 (jessie)
OSType: linux
Architecture: x86_64
CPUs: 16
Total Memory: 31.47 GiB
Name: dorado
ID: OMV7:74M4:C4W7:NKA7:2QYO:LP5R:PA6S:55L3:IHJD:6WGL:ZO3S:KUB2
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No memory limit support
WARNING: No swap limit support
WARNING: No kernel memory limit support
WARNING: No oom kill disable support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support

Additional environment details (AWS, VirtualBox, physical, etc.):

Physical server (PowerEdge R410).

Steps to reproduce the issue:

  1. apt-get install dbench
  2. cd /var/lib/docker/volumes/8a803618d4ac2047857254d0dae4bc0107ba08c22c63251ee7eb447849a22951/_data
  3. dbench -s 10
    [snip]
    Throughput 164.96 MB/sec (sync open) 10 clients 10 procs max_latency=2871.258 ms
  4. docker exec -ti SNIP bash
  5. apt-get install dbench
  6. cd /var/lib/postgresql/data
  7. dbench -s 10
    Throughput 50.0133 MB/sec (sync open) 10 clients 10 procs max_latency=183.647 ms
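
The long hash under /var/lib/docker/volumes/ in step 2 is the volume's backing directory on the host. If you are reproducing this with your own setup, the host-side path can be looked up instead of guessed; a minimal sketch (the volume and container names below are hypothetical, not taken from this report):

# on the host: print the backing directory of a named volume
docker volume inspect --format '{{ .Mountpoint }}' pgdata

# list the mounts of a running container (source and destination paths)
docker inspect --format '{{ json .Mounts }}' my_postgres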

Describe the results you received:

On bare metal we are getting 164 MB/sec; inside the bind mount of the data volume in Docker, we are getting 50 MB/sec.

Describe the results you expected:

Comparable performance. Some overhead is acceptable, but a factor of 3 is a bit much.

Additional information you deem important (e.g. issue happens only occasionally):

I am not talking about the performance of a storage driver, but about the data volume, which is supposed to be a simple bind mount.

Upgrading to kernel 4.3 (from Debian backports) solves the problem (but requires migrating to a new storage driver).
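
A quick way to confirm that the data volume really is a plain bind mount (and that the benchmark is not going through the aufs layer) is to check the mount table from inside the container; a sketch reusing the path from the reproduction steps above:

# inside the container: the data directory should appear as its own ext4 mount,
# not as part of the container's aufs root filesystem
findmnt /var/lib/postgresql/data
# or, if findmnt is not available in the image:
grep ' /var/lib/postgresql/data ' /proc/mounts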

thaJeztah commented Aug 12, 2016

Performance in a volume should be exactly the same as performance on the host, but a similar issue was reported a while back. I'm not sure if it's the same, but the discussion there may include information that helps narrow down the cause. Could you have a look? #21485

arthurlogilab commented Aug 12, 2016

@thaJeztah thanks for the link, the problem seems to be the same. I'll try the scheduler tweaks suggested in the comment #21485 (comment) when I get a chance.
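
(For reference, one common tweak of that kind is changing the I/O scheduler of the block device backing /var/lib/docker. The sketch below is an illustration of what such a tweak looks like, not a quote from #21485, and the device name sda is a placeholder:)

# show the available schedulers; the active one is shown in brackets
cat /sys/block/sda/queue/scheduler

# switch schedulers for testing; not persistent across reboots (run as root)
echo deadline > /sys/block/sda/queue/scheduler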

thaJeztah commented Aug 12, 2016

Awesome, keep us posted; especially interested to hear if this is something other people may run into (in which case, it may be something worth including in the docs)

dynamicnet commented Sep 6, 2016

Same problem here.
Huge I/O degradation under 3.16.0-4-amd64 (stock Debian 8.3) using AUFS.
Solved by upgrading to 4.6.0-0.bpo.1-amd64 from jessie-backports and switching to overlayfs.
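
(For anyone making the same switch, the storage driver can be set explicitly in the daemon configuration. A minimal sketch, assuming a daemon recent enough to read /etc/docker/daemon.json; on older engines the same thing is done with the --storage-driver flag passed to the daemon. Note that images and containers created under aufs are not migrated automatically and will need to be re-pulled or re-created:)

# /etc/docker/daemon.json ("overlay2" may be preferred on newer kernels/engines)
{
  "storage-driver": "overlay"
}

# then restart the daemon and confirm
systemctl restart docker
docker info | grep 'Storage Driver'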

thaJeztah commented Sep 13, 2016

@arthurlogilab have you been able to verify if this is indeed a duplicate of #21485? Is it ok to close this issue?

arthurlogilab commented Sep 14, 2016

@thaJeztah sorry, haven't got round to trying it out.

thaJeztah commented Sep 14, 2016

Let me go ahead and close this for now, but happy to reopen if it's a different issue

thaJeztah closed this Sep 14, 2016
