
docker service logs stops showing logs from containers on different nodes #33183

@benturner

Description

Running docker service logs foo on a swarm manager, where foo is a service with replicas spread across multiple nodes, eventually stops merging the logs from the other nodes. It always seems to work fine right after the service is created.

Steps to reproduce the issue:

  1. Create a service foo with replicas across multiple nodes
  2. Run docker service logs --follow foo
  3. Initially observe logs from multiple containers across different nodes
  4. Go away and do something else for a while
  5. Run docker service logs --follow foo
  6. Observe that old log lines from containers on all nodes are still replayed, but new lines come only from containers on the node where you ran the command
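The steps above can be sketched as a shell session; the service name "foo", the image, and the replica count are illustrative, and this needs a live swarm with at least two nodes:

```shell
# 1. Create a service whose tasks land on multiple nodes
#    (name, image, and replica count are placeholders).
docker service create --name foo --replicas 4 \
  alpine sh -c 'while true; do date; sleep 5; done'

# 2-3. Follow the merged logs; the task prefixes (foo.1, foo.2, ...)
#      should initially include containers on every node.
docker service logs --follow foo

# 4-6. After some time has passed, follow again: old lines from all
#      nodes are replayed, but new lines arrive only from tasks
#      running on the local node.
docker service logs --follow foo
```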

Describe the results you received:
Logs from containers on the current node only

Describe the results you expected:
Logs from all containers on all nodes

Additional information you deem important (e.g. issue happens only occasionally):
Works fine at first, but stops within some amount of time. I've tried both the json-file and journald log drivers.

Output of docker version:

Client:
 Version:      17.05.0-ce
 API version:  1.29
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May  4 22:10:54 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.05.0-ce
 API version:  1.29 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May  4 22:10:54 2017
 OS/Arch:      linux/amd64
 Experimental: true

Output of docker info:

Containers: 7
 Running: 7
 Paused: 0
 Stopped: 0
Images: 6
Server Version: 17.05.0-ce
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 57
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins: 
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
Swarm: active
 NodeID: jbsbgj3on5coa7f996rle8bpk
 Is Manager: true
 ClusterID: 7uzbzxfjt8nf6p18wbzv8ek84
 Managers: 1
 Nodes: 2
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
 Node Address: 172.16.0.5
 Manager Addresses:
  172.16.0.5:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9048e5e50717ea4497b757314bad98ea3763c145
runc version: 9c2d8d184e5da67c95d601382adf14862e4f2228
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-75-generic
Operating System: Ubuntu 16.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.636GiB
Name: swarmm-master-94917428-0
ID: NNL7:YHDL:5ALU:4ZXF:J3BL:VAIV:UI2T:TV5U:UGQL:UCQC:WWCP:TQDO
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 149
 Goroutines: 310
 System Time: 2017-05-12T22:02:26.629917059Z
 EventsListeners: 7
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support

Additional environment details (AWS, VirtualBox, physical, etc.):
Running on Azure using an acs-engine template (https://github.com/Azure/acs-engine). I'm currently just testing this, so I'm using one manager and one worker node; the replicas for my service get split across both nodes.

Metadata

Assignees

No one assigned

    Labels

    area/swarm  kind/bug  version/17.05
