
Failed to restart daemon after 24.0.2 -> 24.0.3 upgrade #45898

Closed
kotso opened this issue Jul 7, 2023 · 15 comments · Fixed by #45902
Labels
area/daemon area/volumes kind/bug Bugs are bugs. The cause may or may not be known at triage time so debugging may be needed. version/24.0

Comments

@kotso

kotso commented Jul 7, 2023

Description

After the update on all hosts (20+, all that I'm running), the Docker daemon fails to start.

The error is:

Jul 07 03:03:17 server systemd[1]: Starting Docker Application Container Engine...
Jul 07 03:03:17 server dockerd[52024]: time="2023-07-07T03:03:17.076517762Z" level=info msg="Starting up"
Jul 07 03:03:17 server dockerd[52024]: time="2023-07-07T03:03:17.086137356Z" level=info msg="[graphdriver] using prior storage driver: fuse-overlayfs"
Jul 07 03:03:17 server dockerd[52024]: time="2023-07-07T03:03:17.103391077Z" level=info msg="Loading containers: start."
Jul 07 03:03:17 server dockerd[52024]: time="2023-07-07T03:03:17.411734779Z" level=info msg="there are running containers, updated network configuration will not take affect"
Jul 07 03:03:17 server dockerd[52024]: panic: runtime error: invalid memory address or nil pointer dereference
Jul 07 03:03:17 server dockerd[52024]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x564e9be4c7c0]
Jul 07 03:03:17 server dockerd[52024]: goroutine 607 [running]:
Jul 07 03:03:17 server dockerd[52024]: github.com/docker/docker/daemon.(*Daemon).prepareMountPoints(0xc00054b7b8?, 0xc0007c2500)
Jul 07 03:03:17 server dockerd[52024]:         /root/rpmbuild/BUILD/src/engine/.gopath/src/github.com/docker/docker/daemon/mounts.go:24 +0x1c0
Jul 07 03:03:17 server dockerd[52024]: github.com/docker/docker/daemon.(*Daemon).restore.func5(0xc0007c2500, 0x0?)
Jul 07 03:03:17 server dockerd[52024]:         /root/rpmbuild/BUILD/src/engine/.gopath/src/github.com/docker/docker/daemon/daemon.go:552 +0x271
Jul 07 03:03:17 server dockerd[52024]: created by github.com/docker/docker/daemon.(*Daemon).restore
Jul 07 03:03:17 server dockerd[52024]:         /root/rpmbuild/BUILD/src/engine/.gopath/src/github.com/docker/docker/daemon/daemon.go:530 +0x8d8
Jul 07 03:03:17 server dockerd[52024]: panic: runtime error: invalid memory address or nil pointer dereference
Jul 07 03:03:17 server dockerd[52024]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x564e9be4c7c0]
Jul 07 03:03:17 server dockerd[52024]: goroutine 601 [running]:
Jul 07 03:03:17 server dockerd[52024]: github.com/docker/docker/daemon.(*Daemon).prepareMountPoints(0xc00054b7b8?, 0xc000cb2000)
Jul 07 03:03:17 server dockerd[52024]:         /root/rpmbuild/BUILD/src/engine/.gopath/src/github.com/docker/docker/daemon/mounts.go:24 +0x1c0
Jul 07 03:03:17 server dockerd[52024]: github.com/docker/docker/daemon.(*Daemon).restore.func5(0xc000cb2000, 0x0?)
Jul 07 03:03:17 server dockerd[52024]:         /root/rpmbuild/BUILD/src/engine/.gopath/src/github.com/docker/docker/daemon/daemon.go:552 +0x271
Jul 07 03:03:17 server dockerd[52024]: created by github.com/docker/docker/daemon.(*Daemon).restore
Jul 07 03:03:17 server dockerd[52024]:         /root/rpmbuild/BUILD/src/engine/.gopath/src/github.com/docker/docker/daemon/daemon.go:530 +0x8d8
Jul 07 03:03:17 server dockerd[52024]: panic: runtime error: invalid memory address or nil pointer dereference
Jul 07 03:03:17 server dockerd[52024]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x564e9be4c7c0]
Jul 07 03:03:17 server dockerd[52024]: goroutine 603 [running]:
Jul 07 03:03:17 server dockerd[52024]: github.com/docker/docker/daemon.(*Daemon).prepareMountPoints(0xc00054b7b8?, 0xc000187680)
Jul 07 03:03:17 server dockerd[52024]:         /root/rpmbuild/BUILD/src/engine/.gopath/src/github.com/docker/docker/daemon/mounts.go:24 +0x1c0
Jul 07 03:03:17 server dockerd[52024]: github.com/docker/docker/daemon.(*Daemon).restore.func5(0xc000187680, 0x0?)
Jul 07 03:03:17 server dockerd[52024]:         /root/rpmbuild/BUILD/src/engine/.gopath/src/github.com/docker/docker/daemon/daemon.go:552 +0x271
Jul 07 03:03:17 server dockerd[52024]: created by github.com/docker/docker/daemon.(*Daemon).restore
Jul 07 03:03:17 server dockerd[52024]:         /root/rpmbuild/BUILD/src/engine/.gopath/src/github.com/docker/docker/daemon/daemon.go:530 +0x8d8
Jul 07 03:03:17 server dockerd[52024]: panic: runtime error: invalid memory address or nil pointer dereference
Jul 07 03:03:17 server dockerd[52024]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x564e9be4c7c0]
Jul 07 03:03:17 server dockerd[52024]: goroutine 600 [running]:
Jul 07 03:03:17 server dockerd[52024]: github.com/docker/docker/daemon.(*Daemon).prepareMountPoints(0xc00054b7b8?, 0xc00063e500)
Jul 07 03:03:17 server dockerd[52024]:         /root/rpmbuild/BUILD/src/engine/.gopath/src/github.com/docker/docker/daemon/mounts.go:24 +0x1c0
Jul 07 03:03:17 server dockerd[52024]: github.com/docker/docker/daemon.(*Daemon).restore.func5(0xc00063e500, 0xc00005e600?)
Jul 07 03:03:17 server dockerd[52024]:         /root/rpmbuild/BUILD/src/engine/.gopath/src/github.com/docker/docker/daemon/daemon.go:552 +0x271
Jul 07 03:03:17 server dockerd[52024]: created by github.com/docker/docker/daemon.(*Daemon).restore
Jul 07 03:03:17 server dockerd[52024]:         /root/rpmbuild/BUILD/src/engine/.gopath/src/github.com/docker/docker/daemon/daemon.go:530 +0x8d8

I'm running the daemon with this config:

{
  "live-restore": true
}

Is there any way to restart the daemon without restarting the containers?

Reproduce

systemctl restart docker

Expected behavior

systemd service to start

docker version

unable to provide, as service is down

docker info

unable to provide, as service is down

Additional Info

I can't provide docker info, as the service does not start and I have containers running on the servers.

If I reboot the server (i.e. the containers get stopped), Docker starts without issues.

@kotso kotso added kind/bug Bugs are bugs. The cause may or may not be known at triage time so debugging may be needed. status/0-triage labels Jul 7, 2023
@calvinbui

calvinbui commented Jul 7, 2023

The panic is caused by updating the network configs of running containers. I worked around it by:

  1. rolling back to the previous version: sudo apt-get install docker-ce="5:24.0.2-1~ubuntu.22.04~jammy"
  2. stopping all containers: docker stop $(docker ps -a -q)
  3. performing the Docker upgrade
  4. starting all containers: docker start $(docker ps -a -q)

However, restarting the docker service causes the issue again, so I recommend staying on the previous version.

@tomsiewert

tomsiewert commented Jul 7, 2023

For already broken systems, we used ctr -n moby task ls -q | xargs ctr -n moby task kill ; apt install -f to recover them.

@kotso
Author

kotso commented Jul 7, 2023

thank you @tomsiewert @calvinbui

If anyone finds a solution to apply this update without restarting containers, please share.


@ap-wtioit

ap-wtioit commented Jul 7, 2023

This has already happened to me on 3 systems (I have 2 more to offer for debugging before I run out of test environments).

Do you need any more info?

docker info before it happened:

Client: Docker Engine - Community
 Version:    24.0.2
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.10.5
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.18.1
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 19
  Running: 19
  Paused: 0
  Stopped: 0
 Images: 11
 Server Version: 24.0.2
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 3dce8eb055cbb6872793272b4f20ed16117344f8
 runc version: v1.1.7-0-g860f061
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 5.15.0-75-generic
 Operating System: Ubuntu 22.04.2 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 12
 Total Memory: 62.71GiB
 Name: redacted
 ID: bdb15c83-d798-44a3-aacf-72a8ed931294
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: true
 Default Address Pools:
   Base: 172.16.0.0/12, Size: 26
   Base: 192.168.0.0/16, Size: 26

Running the upgrade with sudo apt update && sudo apt upgrade && sudo apt autoremove:

Hit:1 http://mirror.hetzner.com/ubuntu/packages jammy InRelease
Hit:2 http://mirror.hetzner.com/ubuntu/packages jammy-updates InRelease                                                                                                                 
Hit:3 http://mirror.hetzner.com/ubuntu/packages jammy-backports InRelease                                                                                                               
Hit:4 http://mirror.hetzner.com/ubuntu/packages jammy-security InRelease                                                                                                                
Hit:5 http://de.archive.ubuntu.com/ubuntu jammy InRelease                                                                                                                               
Hit:6 https://artifacts.elastic.co/packages/oss-7.x/apt stable InRelease                                                                                                                
Hit:7 http://security.ubuntu.com/ubuntu jammy-security InRelease                                                                                                                        
Hit:8 https://download.docker.com/linux/ubuntu jammy InRelease                                                                                                      
Hit:9 http://de.archive.ubuntu.com/ubuntu jammy-updates InRelease                         
Hit:10 http://de.archive.ubuntu.com/ubuntu jammy-backports InRelease
Hit:12 https://packages.graylog2.org/repo/debian sidecar-stable InRelease                                  
Hit:11 https://packages.gitlab.com/runner/gitlab-runner/ubuntu jammy InRelease                            
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
9 packages can be upgraded. Run 'apt list --upgradable' to see them.
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
  docker-buildx-plugin docker-ce docker-ce-cli docker-ce-rootless-extras docker-compose-plugin filebeat iotop libmm-glib0 ubuntu-drivers-common
9 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 110 MB of archives.
After this operation, 12.7 MB of additional disk space will be used.
Do you want to continue? [Y/n] 
Get:1 http://mirror.hetzner.com/ubuntu/packages jammy-updates/main amd64 ubuntu-drivers-common amd64 1:0.9.6.2~0.22.04.4 [58.3 kB]
Get:2 http://mirror.hetzner.com/ubuntu/packages jammy-updates/main amd64 iotop amd64 0.6-24-g733f3f8-1.1ubuntu0.1 [23.1 kB]                                                        
Get:3 http://mirror.hetzner.com/ubuntu/packages jammy-updates/main amd64 libmm-glib0 amd64 1.20.0-1~ubuntu22.04.2 [263 kB]                                         
Get:4 https://download.docker.com/linux/ubuntu jammy/stable amd64 docker-buildx-plugin amd64 0.11.1-1~ubuntu.22.04~jammy [28.2 MB]                                     
Get:5 https://artifacts.elastic.co/packages/oss-7.x/apt stable/main amd64 filebeat amd64 7.17.11 [24.0 MB]                          
Get:6 https://download.docker.com/linux/ubuntu jammy/stable amd64 docker-ce-cli amd64 5:24.0.3-1~ubuntu.22.04~jammy [13.3 MB]
Get:7 https://download.docker.com/linux/ubuntu jammy/stable amd64 docker-ce amd64 5:24.0.3-1~ubuntu.22.04~jammy [22.9 MB]
Get:8 https://download.docker.com/linux/ubuntu jammy/stable amd64 docker-ce-rootless-extras amd64 5:24.0.3-1~ubuntu.22.04~jammy [9032 kB]
Get:9 https://download.docker.com/linux/ubuntu jammy/stable amd64 docker-compose-plugin amd64 2.19.1-1~ubuntu.22.04~jammy [11.9 MB]
Fetched 110 MB in 1s (104 MB/s)                
Preconfiguring packages ...
(Reading database ... 62173 files and directories currently installed.)
Preparing to unpack .../0-ubuntu-drivers-common_1%3a0.9.6.2~0.22.04.4_amd64.deb ...
Unpacking ubuntu-drivers-common (1:0.9.6.2~0.22.04.4) over (1:0.9.6.2~0.22.04.3) ...
Preparing to unpack .../1-docker-buildx-plugin_0.11.1-1~ubuntu.22.04~jammy_amd64.deb ...
Unpacking docker-buildx-plugin (0.11.1-1~ubuntu.22.04~jammy) over (0.10.5-1~ubuntu.22.04~jammy) ...
Preparing to unpack .../2-docker-ce-cli_5%3a24.0.3-1~ubuntu.22.04~jammy_amd64.deb ...
Unpacking docker-ce-cli (5:24.0.3-1~ubuntu.22.04~jammy) over (5:24.0.2-1~ubuntu.22.04~jammy) ...
Preparing to unpack .../3-docker-ce_5%3a24.0.3-1~ubuntu.22.04~jammy_amd64.deb ...
Unpacking docker-ce (5:24.0.3-1~ubuntu.22.04~jammy) over (5:24.0.2-1~ubuntu.22.04~jammy) ...
Preparing to unpack .../4-docker-ce-rootless-extras_5%3a24.0.3-1~ubuntu.22.04~jammy_amd64.deb ...
Unpacking docker-ce-rootless-extras (5:24.0.3-1~ubuntu.22.04~jammy) over (5:24.0.2-1~ubuntu.22.04~jammy) ...
Preparing to unpack .../5-docker-compose-plugin_2.19.1-1~ubuntu.22.04~jammy_amd64.deb ...
Unpacking docker-compose-plugin (2.19.1-1~ubuntu.22.04~jammy) over (2.18.1-1~ubuntu.22.04~jammy) ...
Preparing to unpack .../6-iotop_0.6-24-g733f3f8-1.1ubuntu0.1_amd64.deb ...
Unpacking iotop (0.6-24-g733f3f8-1.1ubuntu0.1) over (0.6-24-g733f3f8-1.1build2) ...
Preparing to unpack .../7-libmm-glib0_1.20.0-1~ubuntu22.04.2_amd64.deb ...
Unpacking libmm-glib0:amd64 (1.20.0-1~ubuntu22.04.2) over (1.20.0-1~ubuntu22.04.1) ...
Preparing to unpack .../8-filebeat_7.17.11_amd64.deb ...
Unpacking filebeat (7.17.11) over (7.17.10) ...
Setting up ubuntu-drivers-common (1:0.9.6.2~0.22.04.4) ...
Setting up docker-buildx-plugin (0.11.1-1~ubuntu.22.04~jammy) ...
Setting up iotop (0.6-24-g733f3f8-1.1ubuntu0.1) ...
Setting up docker-compose-plugin (2.19.1-1~ubuntu.22.04~jammy) ...
Setting up docker-ce-cli (5:24.0.3-1~ubuntu.22.04~jammy) ...
Setting up libmm-glib0:amd64 (1.20.0-1~ubuntu22.04.2) ...
Setting up docker-ce-rootless-extras (5:24.0.3-1~ubuntu.22.04~jammy) ...
Setting up filebeat (7.17.11) ...
Setting up docker-ce (5:24.0.3-1~ubuntu.22.04~jammy) ...
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
invoke-rc.d: initscript docker, action "restart" failed.
● docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
     Active: activating (auto-restart) (Result: exit-code) since Fri 2023-07-07 09:42:53 UTC; 6ms ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
    Process: 2998807 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=2)
   Main PID: 2998807 (code=exited, status=2)
        CPU: 623ms
dpkg: error processing package docker-ce (--configure):
 installed docker-ce package post-installation script subprocess returned error exit status 1
Processing triggers for man-db (2.10.2-1) ...
Processing triggers for libc-bin (2.35-0ubuntu3.1) ...
Errors were encountered while processing:
 docker-ce
needrestart is being skipped since dpkg has failed
E: Sub-process /usr/bin/dpkg returned an error code (1)

I "restored" with:

reboot # worked fine (all containers up and socket working)
# note: this is "just" a build system; do not perform the next 2 steps if your data matters
# docker ps | awk '($1 != "CONTAINER"){print $1}' | xargs docker stop | xargs docker rm
# docker system prune -af --volumes 
sudo apt update && sudo apt upgrade && sudo apt autoremove
# restore containers with build config

docker info from another server (same config, after upgrade + "restore")

Client: Docker Engine - Community
 Version:    24.0.3
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.11.1
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.19.1
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 17
  Running: 17
  Paused: 0
  Stopped: 0
 Images: 7
 Server Version: 24.0.3
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc io.containerd.runc.v2
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 3dce8eb055cbb6872793272b4f20ed16117344f8
 runc version: v1.1.7-0-g860f061
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 5.15.0-76-generic
 Operating System: Ubuntu 22.04.2 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 12
 Total Memory: 62.71GiB
 Name: redacted2
 ID: 8fa2ec4e-18fd-4214-a814-dea47415ee95
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: true
 Default Address Pools:
   Base: 172.16.0.0/12, Size: 26
   Base: 192.168.0.0/16, Size: 26

Also happened on my Ubuntu 23.04 Workstation.

The containers seem fine while in the failed upgrade state (tested by checking some service ports that I could find with sudo iptables -S).

sudo journalctl -xeu docker.service shows the same messages as the original poster's (skipping all the systemd retries of starting Docker again):

Jul 07 09:49:44 redacted dockerd[3003578]: time="2023-07-07T09:49:44.524579471Z" level=info msg="Starting up"
Jul 07 09:49:44 redacted dockerd[3003578]: time="2023-07-07T09:49:44.546107947Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Jul 07 09:49:44 redacted dockerd[3003578]: time="2023-07-07T09:49:44.562624662Z" level=info msg="Loading containers: start."
Jul 07 09:49:45 redacted dockerd[3003578]: time="2023-07-07T09:49:45.908333644Z" level=info msg="there are running containers, updated network configuration will not take affect"
Jul 07 09:49:45 redacted dockerd[3003578]: panic: runtime error: invalid memory address or nil pointer dereference
Jul 07 09:49:45 redacted dockerd[3003578]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x55709c724020]
Jul 07 09:49:45 redacted dockerd[3003578]: goroutine 793 [running]:
Jul 07 09:49:45 redacted dockerd[3003578]: github.com/docker/docker/daemon.(*Daemon).prepareMountPoints(0xc00091a0f0?, 0xc000b90c80)
Jul 07 09:49:45 redacted dockerd[3003578]:         /go/src/github.com/docker/docker/daemon/mounts.go:24 +0x1c0
Jul 07 09:49:45 redacted dockerd[3003578]: github.com/docker/docker/daemon.(*Daemon).restore.func5(0xc000b90c80, 0xc000bfca00?)
Jul 07 09:49:45 redacted dockerd[3003578]:         /go/src/github.com/docker/docker/daemon/daemon.go:552 +0x271
Jul 07 09:49:45 redacted dockerd[3003578]: created by github.com/docker/docker/daemon.(*Daemon).restore
Jul 07 09:49:45 redacted dockerd[3003578]:         /go/src/github.com/docker/docker/daemon/daemon.go:530 +0x8d8
Jul 07 09:49:45 redacted systemd[1]: docker.service: Main process exited, code=exited, status=2/INVALIDARGUMENT

@ap-wtioit

After rolling back the update with sudo apt-get install docker-ce="5:24.0.2-1~ubuntu.22.04~jammy" (from @calvinbui's #45898 (comment)), our server (which had the pending update) seems to be working fine again, without needing to stop or remove the containers.

On Ubuntu/Debian, the previously installed version can be found with sudo apt-cache madison docker-ce.

@neersighted
Member

Debug logs, as well as information on what kinds of volumes you have mounted would be helpful. Are you making use of any Volume plugins or Swarm CSI drivers, or are they all 'local' volumes?

@thaJeztah
Member

Looks like the panic happens here:

"volume": config.Volume.Name(),

Which likely originates from here:

moby/daemon/volumes.go

Lines 265 to 272 in 1d9c861

type volumeWrapper struct {
	v *volumetypes.Volume
	s volumeMounter
}

func (v *volumeWrapper) Name() string {
	return v.v.Name
}

This would panic if volumeWrapper.v is nil (it's a pointer).
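The nil dereference can be reproduced outside the daemon. Here is a minimal, self-contained Go sketch, where volume and volumeWrapper are hypothetical stand-ins for the real volumetypes.Volume and volumeWrapper, showing that calling Name() through a nil inner pointer panics with exactly this class of runtime error:

```go
package main

import "fmt"

// Hypothetical mirror of volumeWrapper from daemon/volumes.go: v may be
// left nil for mounts that were never fully initialized.
type volume struct{ name string }

type volumeWrapper struct {
	v *volume
}

// Name dereferences w.v with no nil check, mirroring volumeWrapper.Name.
func (w *volumeWrapper) Name() string {
	return w.v.name
}

// callName reports whether calling Name panicked, recovering the runtime
// error so the program itself survives.
func callName(w *volumeWrapper) (panicked bool) {
	defer func() {
		if recover() != nil {
			panicked = true
		}
	}()
	_ = w.Name()
	return false
}

func main() {
	fmt.Println(callName(&volumeWrapper{}))                         // true: nil v panics
	fmt.Println(callName(&volumeWrapper{v: &volume{name: "data"}})) // false: non-nil is fine
}
```

In the daemon there is no recover around these restore goroutines, which is why each goroutine's panic takes the whole process down, matching the SIGSEGV traces above.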

@ap-wtioit

No Swarm CSI drivers, no volume drivers (afaik):
docker volume ls

local     3b88079ba2faf2c155d2c1e211d46a48993fb13aba7ff1979a4e0c137a624045
local     4a9635073be636d8aa3cc015252610aba85a8a8317e7016c9cf51c8d485ff448
local     5af16cb82ac9697f48d73a3ea8606fd988744d7202e62766b0dfbdae09134398
local     8f37325f4109dcd617dd3e31226b3b55e07a34082f37eb45dc5aa78d20fec914
local     29b87479a68961c2ca2efc420fffc4ab6cd826634619ca1e498cc867deca3e95
local     666c829cbb9eb9b888f165568be3f0b0fae4a2e63239218a52df2ca3948ac8b0
local     48917ae2929ba20e74e586db7862e612081c800efcf18595470f2dae6b45e44c
local     a0949226237fcf2945e9e60d0e0637abecc080803b0032c094d9b0e9ec495793
local     af213695e4c4c618d5e836ddfe364a4aefe9777773d7843dd1a77514ef2ca9d2
local     c23da8331dc4967a4fc837c909f0727264663ff1764f44463fae2ad4277ee4f4
local     c26ea74766cb1984ae04b5aa60ea073543dc699534fa2decab39c49e1224e4b4
local     daf407dccef38ef9cc6bb4e21dbe06d5a0302ad88b3d26416bc5299113a5c3b0
local     redacted_name
local     redacted_name2
local     f1ef13a0fbe7c672c3dfbcb67b0c85ab93594a282fc073a8422e66e8988f273a
local     ff6b60f57037e60eeb520e67a745b692a629ee5215bc3ab483cec6803403adf1
local     runner-bdn9e36m-project-20-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70
local     runner-bdn9e36m-project-20-concurrent-0-cache-c33bcaa1fd2c77edfc3893b41966cea8
local     runner-bdn9e36m-project-118-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70
local     runner-bdn9e36m-project-118-concurrent-0-cache-c33bcaa1fd2c77edfc3893b41966cea8
local     runner-bdn9e36m-project-148-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70
local     runner-bdn9e36m-project-148-concurrent-0-cache-c33bcaa1fd2c77edfc3893b41966cea8
local     runner-bdn9e36m-project-221-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70
local     runner-bdn9e36m-project-221-concurrent-0-cache-c33bcaa1fd2c77edfc3893b41966cea8
local     runner-bdn9e36m-project-222-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70
local     runner-bdn9e36m-project-231-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70
local     runner-bdn9e36m-project-233-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70
local     runner-bdn9e36m-project-254-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70
local     runner-bdn9e36m-project-254-concurrent-0-cache-c33bcaa1fd2c77edfc3893b41966cea8
local     runner-bdn9e36m-project-271-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70
local     runner-bdn9e36m-project-271-concurrent-0-cache-c33bcaa1fd2c77edfc3893b41966cea8
local     runner-bdn9e36m-project-288-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70
local     runner-bdn9e36m-project-350-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70
local     runner-bdn9e36m-project-350-concurrent-0-cache-c33bcaa1fd2c77edfc3893b41966cea8
local     runner-bdn9e36m-project-387-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70
local     runner-bdn9e36m-project-422-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70
local     runner-bdn9e36m-project-422-concurrent-0-cache-c33bcaa1fd2c77edfc3893b41966cea8
local     runner-bdn9e36m-project-437-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70
local     runner-bdn9e36m-project-437-concurrent-0-cache-c33bcaa1fd2c77edfc3893b41966cea8
local     runner-bdn9e36m-project-447-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70
local     runner-bdn9e36m-project-447-concurrent-0-cache-c33bcaa1fd2c77edfc3893b41966cea8
local     runner-bdn9e36m-project-462-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70
local     runner-bdn9e36m-project-497-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70
local     runner-bdn9e36m-project-497-concurrent-0-cache-c33bcaa1fd2c77edfc3893b41966cea8
local     runner-bdn9e36m-project-530-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70
local     runner-bdn9e36m-project-530-concurrent-0-cache-c33bcaa1fd2c77edfc3893b41966cea8
local     runner-bdn9e36m-project-532-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70
local     runner-bdn9e36m-project-532-concurrent-0-cache-c33bcaa1fd2c77edfc3893b41966cea8
local     runner-bdn9e36m-project-535-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70
local     runner-bdn9e36m-project-535-concurrent-0-cache-c33bcaa1fd2c77edfc3893b41966cea8
local     runner-bdn9e36m-project-548-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70
local     runner-bdn9e36m-project-548-concurrent-0-cache-c33bcaa1fd2c77edfc3893b41966cea8
local     runner-bdn9e36m-project-550-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70
local     runner-bdn9e36m-project-550-concurrent-0-cache-c33bcaa1fd2c77edfc3893b41966cea8
local     runner-bdn9e36m-project-558-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70

(No GitLab runner caches were present on my local workstation; only local volumes there as well.)

@thaJeztah
Member

I think the problem is here: https://github.com/moby/moby/blob/1d9c8619cded4657af1529779c5771127e8ad0e7/daemon/volumes.go#L243-L252C2

That function only looks at the volume if Driver is set and Volume is nil; if there's no Driver, it doesn't touch the Volume and returns nil.

And because there's no error, the code continues into the branch below that function call, which includes printing a log message:

moby/daemon/mounts.go

Lines 18 to 25 in 1d9c861

if err := daemon.lazyInitializeVolume(container.ID, config); err != nil {
	return err
}
if alive {
	log.G(context.TODO()).WithFields(logrus.Fields{
		"container": container.ID,
		"volume":    config.Volume.Name(),
	}).Debug("Live-restoring volume for alive container")
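Given that analysis, the crash is avoided by checking the mount point's Volume before dereferencing it. A minimal Go sketch of that guard, with mountPoint and restoreMount as hypothetical stand-ins for the daemon's types (this is an illustration, not the actual patch in #45902):

```go
package main

import "fmt"

// Hypothetical mirror of a MountPoint whose Volume is nil for bind mounts,
// which lazyInitializeVolume skips without populating.
type volume struct{ name string }

func (v *volume) Name() string { return v.name }

type mountPoint struct {
	Volume *volume
}

// restoreMount sketches the guarded log path: check Volume before calling
// Name(), instead of dereferencing it unconditionally as mounts.go did.
func restoreMount(m *mountPoint) string {
	if m.Volume == nil {
		return "skipped: no volume to restore"
	}
	return "live-restoring volume " + m.Volume.Name()
}

func main() {
	fmt.Println(restoreMount(&mountPoint{}))                           // bind mount: skipped
	fmt.Println(restoreMount(&mountPoint{Volume: &volume{name: "data"}})) // named volume: restored
}
```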

@neersighted
Member

neersighted commented Jul 7, 2023

Agreed; though I'm not familiar enough with the volume code to know what the correct fix is. Naively, I think it's possibly to always create a volumeWrapper even when the driver is empty (which is what I assume we use for local volumes?).

Edit: no, the driver for local volumes is local; a nil driver seems to signify 'unknown, try all possible drivers.'


@thaJeztah
Member

Edit: no, the driver for local volumes is local; a nil driver seems to signify 'unknown, try all possible drivers.'

It's empty for bind mounts etc., which don't need restoring.

@thaJeztah
Member

See

func (m *MountPoint) LiveRestore(ctx context.Context) error {
	if m.Volume == nil {
		logrus.Debug("No volume to restore")
		return nil
	}

@kotso
Author

kotso commented Jul 10, 2023

Issue fixed with the 24.0.4 update.
