
fix Bind-mounts not properly registering after daemon restart #10147

Merged

Conversation

cpuguy83
Member

Fixes #9629 #9768

  1. Volume config is not restored if we couldn't find it with the graph
     driver, but bind-mounts would never be found by the graph driver since
     they aren't in that directory.

  2. Container volumes were only being restored if they were found in the
     volumes repo, but volumes created by old daemons won't be in the repo
     until the container has been started at least once (a rough sketch of
     the resulting fallback follows below).
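
To make the fallback in point 2 concrete, here is a minimal, self-contained Go sketch. Only FindOrCreateVolume, AddContainer, and VolumesRW come from the diff excerpts quoted later in this thread; the stand-in types, the Get method, and the registerVolumes helper are assumptions made for illustration, not the daemon's actual API.

package main

import "fmt"

// Hypothetical stand-ins for the daemon's volume types; only FindOrCreateVolume
// and AddContainer mirror names from the diff excerpts in this thread.
type Volume struct {
	Path       string
	Writable   bool
	containers map[string]struct{}
}

func (v *Volume) AddContainer(id string) {
	if v.containers == nil {
		v.containers = map[string]struct{}{}
	}
	v.containers[id] = struct{}{}
}

type Repository struct {
	volumes map[string]*Volume
}

func (r *Repository) Get(path string) *Volume { return r.volumes[path] }

func (r *Repository) FindOrCreateVolume(path string, writable bool) (*Volume, error) {
	if v := r.volumes[path]; v != nil {
		return v, nil
	}
	v := &Volume{Path: path, Writable: writable}
	r.volumes[path] = v
	return v, nil
}

// registerVolumes shows the fallback the PR adds: if the repo has no entry for
// a path (a bind-mount, or a volume created by a pre-1.3 daemon), create one
// instead of silently skipping it.
func registerVolumes(repo *Repository, containerID string, volumes map[string]string, rw map[string]bool) error {
	for path := range volumes {
		if v := repo.Get(path); v != nil {
			v.AddContainer(containerID)
			continue
		}
		writable := true // default to read-write when the old config has no entry
		if rw != nil {
			if w, ok := rw[path]; ok {
				writable = w
			}
		}
		v, err := repo.FindOrCreateVolume(path, writable)
		if err != nil {
			return err
		}
		v.AddContainer(containerID)
	}
	return nil
}

func main() {
	repo := &Repository{volumes: map[string]*Volume{}}
	// A bind-mount that the graph driver would never find on restart.
	containerVolumes := map[string]string{"/data": "/host/data"}
	if err := registerVolumes(repo, "abc123", containerVolumes, nil); err != nil {
		fmt.Println("restore failed:", err)
		return
	}
	fmt.Println("registered volumes:", len(repo.volumes)) // 1
}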

@cpuguy83 cpuguy83 changed the title 9629 volumes from unregistered container Bind-mounts not properly registered after daemon restart Jan 17, 2015
@cpuguy83 cpuguy83 changed the title Bind-mounts not properly registered after daemon restart fix Bind-mounts not properly registering after daemon restart Jan 17, 2015
@cpuguy83 cpuguy83 force-pushed the 9629_volumes-from_unregistered-container branch from 477b6f9 to 16f9a04 Compare January 17, 2015 16:22
@tiborvass tiborvass added this to the 1.5.0 milestone Jan 19, 2015
@cpuguy83 cpuguy83 force-pushed the 9629_volumes-from_unregistered-container branch from 16f9a04 to a3de790 Compare January 19, 2015 17:51
	v.AddContainer(container.ID)
} else {
	// if container was created with an old daemon, this volume may not be registered
	v, err := container.daemon.volumes.FindOrCreateVolume(path, container.VolumesRW[path])
Contributor

What if the volume is not an rw volume in VolumesRW?

Contributor

Ahh, from what I can tell this is only used for very old daemons.

Member Author

It would be pre-1.3 daemons, so not that old.

I'll update this VolumesRW thing to make sure we don't panic here. It shouldn't be nil, but it could be.
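
For what it's worth, reading from a nil map in Go returns the zero value rather than panicking, but that zero value is false, which would wrongly mark an old volume read-only; the later revision of the patch defaults to read-write instead. A small illustration (the variable names here are assumptions, not the daemon's):

package main

import "fmt"

func main() {
	// VolumesRW can be nil (or missing entries) for containers created by
	// pre-1.3 daemons. Indexing a nil map is safe in Go, but it yields the
	// zero value (false), which would silently mark the volume read-only.
	var volumesRW map[string]bool // nil, as it could be in an old container config
	fmt.Println(volumesRW["/data"]) // false: no panic, but the wrong default

	// Default to read-write unless the config explicitly says otherwise.
	writable := true
	if w, ok := volumesRW["/data"]; ok {
		writable = w
	}
	fmt.Println(writable) // true
}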

@cpuguy83 cpuguy83 force-pushed the 9629_volumes-from_unregistered-container branch from a3de790 to 7b2752b Compare January 20, 2015 02:27
@cpuguy83
Member Author

@crosbymichael Updated

@LK4D4
Contributor

LK4D4 commented Jan 20, 2015

@cpuguy83 Is it possible to write a test for this? We have a test daemon for restarting.

@cpuguy83
Member Author

@LK4D4 There is a test there.

@LK4D4
Contributor

LK4D4 commented Jan 20, 2015

@cpuguy83 Yup, sorry :)

	v.AddContainer(container.ID)
} else {
	// if container was created with an old daemon, this volume may not be registered
	var mode = true
Contributor

Maybe mode := true? Also, mode is a pretty weird name; maybe writable?
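
In Go the two declarations are equivalent; the suggestion is purely about style and naming. A trivial runnable example of the suggested form (the surrounding program is illustrative only):

package main

import "fmt"

func main() {
	// Short variable declaration with a descriptive name, as suggested,
	// instead of `var mode = true`.
	writable := true // volumes from old daemons default to read-write
	fmt.Println(writable)
}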

@cpuguy83 cpuguy83 force-pushed the 9629_volumes-from_unregistered-container branch from 7b2752b to c837416 Compare January 20, 2015 20:54
Fixes moby#9629 moby#9768

A couple of issues:

1) Volume config is not restored if we couldn't find it with the graph
driver, but bind-mounts would never be found by the graph driver since
they aren't in that directory.

2) Container volumes were only being restored if they were found in the
volumes repo, but volumes created by old daemons won't be in the repo
until the container has been started at least once.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
@cpuguy83 cpuguy83 force-pushed the 9629_volumes-from_unregistered-container branch from c837416 to e744b0d Compare January 20, 2015 20:55
@cpuguy83
Member Author

@LK4D4 Updated

@LK4D4
Contributor

LK4D4 commented Jan 20, 2015

LGTM

@crosbymichael
Contributor

LGTM

@jessfraz
Contributor

LGTM

crosbymichael added a commit that referenced this pull request Jan 20, 2015
…d-container

fix Bind-mounts not properly registering after daemon restart
@crosbymichael crosbymichael merged commit cc66fa3 into moby:master Jan 20, 2015
@grossws
Contributor

grossws commented Jan 20, 2015

Does it fix #9801?

@jessfraz
Contributor

Thanks @grossws, I have closed that issue.

@HuKeping
Contributor

It will cause a panic with this patch.

@cpuguy83
Member Author

How so?

@HuKeping
Contributor

Hi @cpuguy83, when you restart the daemon without using -g to specify a different storage path, you get this:

goroutine 8 [running]:
runtime.panic(0xb22d80, 0x164e5e8)
    /usr/local/go/src/pkg/runtime/panic.c:266 +0xb6
github.com/docker/docker/volumes.(*Volume).createIfNotExist(0xc2101b16e0, 0x0, 0x27)
    /go/src/github.com/docker/docker/volumes/volume.go:102 +0x2b2
github.com/docker/docker/volumes.(*Volume).initialize(0xc2101b16e0, 0x0, 0x0)
    /go/src/github.com/docker/docker/volumes/volume.go:130 +0x83
github.com/docker/docker/volumes.(*Repository).restore(0xc210245810, 0x0, 0xc210245840)
    /go/src/github.com/docker/docker/volumes/repository.go:110 +0x6fe
github.com/docker/docker/volumes.NewRepository(0xc21013d160, 0x17, 0x7f29b9c9c928, 0xc21017d0b0, 0x17, ...)
    /go/src/github.com/docker/docker/volumes/repository.go:39 +0x178
github.com/docker/docker/daemon.NewDaemonFromDirectory(0x1670b20, 0xc21008da50, 0x0, 0x47, 0xb9)
    /go/src/github.com/docker/docker/daemon/daemon.go:892 +0xed6
github.com/docker/docker/daemon.NewDaemon(0x1670b20, 0xc21008da50, 0x0, 0x2, 0x0)
    /go/src/github.com/docker/docker/daemon/daemon.go:779 +0x31
main.func·001()
    /go/src/github.com/docker/docker/docker/daemon.go:55 +0x43
created by main.mainDaemon
    /go/src/github.com/docker/docker/docker/daemon.go:78 +0x290

goroutine 1 [chan receive]:
github.com/docker/docker/api/server.ServeApi(0xc210047880, 0x7f29b9c91040)
    /go/src/github.com/docker/docker/api/server/server.go:1639 +0x440
github.com/docker/docker/engine.(*Job).Run(0xc210047880, 0x0, 0x0)
    /go/src/github.com/docker/docker/engine/job.go:83 +0x76e
main.mainDaemon()
    /go/src/github.com/docker/docker/docker/daemon.go:93 +0x514
main.main()
goroutine 3 [syscall]:
os/signal.loop()
    /usr/local/go/src/pkg/os/signal/signal_unix.go:21 +0x1e
created by os/signal.init·1
    /usr/local/go/src/pkg/os/signal/signal_unix.go:27 +0x31

goroutine 4 [syscall]:
runtime.goexit()
    /usr/local/go/src/pkg/runtime/proc.c:1394

goroutine 10 [runnable]:
runtime.gosched()
    /usr/local/go/src/pkg/runtime/proc.c:1368 +0x27
runtime.gc(0x1)
    /usr/local/go/src/pkg/runtime/mgc0.c:2047 +0x27c
runfinq()
    /usr/local/go/src/pkg/runtime/mgc0.c:2325 +0x1f2
runtime.goexit()
    /usr/local/go/src/pkg/runtime/proc.c:1394

goroutine 7 [chan receive]:
github.com/docker/docker/pkg/signal.func·002()
    /go/src/github.com/docker/docker/pkg/signal/trap.go:30 +0x6c
created by github.com/docker/docker/pkg/signal.Trap
    /go/src/github.com/docker/docker/pkg/signal/trap.go:53 +0x26c

goroutine 9 [chan receive]:
github.com/docker/docker/pkg/listenbuffer.(*defaultListener).Accept(0xc210185720, 0x0, 0x1000000a349c0, 0xc210185dc0, 0x2100efd70)
    /go/src/github.com/docker/docker/pkg/listenbuffer/buffer.go:43 +0x7c
net/http.(*Server).Serve(0xc210265dc0, 0x7f29b9c9c7e0, 0xc210185720, 0x0, 0x0)
    /usr/local/go/src/pkg/net/http/server.go:1622 +0x91
github.com/docker/docker/api/server.(*HttpServer).Serve(0xc210185dc0, 0x4, 0xc2100b5967)
    /go/src/github.com/docker/docker/api/server/server.go:51 +0x3a
github.com/docker/docker/api/server.func·006()
    /go/src/github.com/docker/docker/api/server/server.go:1634 +0x206
created by github.com/docker/docker/api/server.ServeApi
    /go/src/github.com/docker/docker/api/server/server.go:1635 +0x3f1

And I think this was caused by a nil pointer dereference in the createIfNotExist() function in volumes/volume.go: since the Path has already been removed by the patch, the nil.xxx() call causes a panic.
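
A minimal, hypothetical reduction of the kind of nil dereference described (the real code in volumes/volume.go differs; the types, fields, and directory handling here are illustrative only):

package main

import "os"

// Illustrative types only; not the real volumes package API.
type volumeConfig struct {
	Path string
}

type volume struct {
	config *volumeConfig
}

func (v *volume) createIfNotExist() error {
	// If the config was never restored, v.config is nil and this field
	// access panics with a nil pointer dereference, similar to the crash
	// at daemon startup reported above.
	return os.MkdirAll(v.config.Path, 0755)
}

func main() {
	v := &volume{} // config left nil, as after a failed restore
	_ = v.createIfNotExist()
	// panic: runtime error: invalid memory address or nil pointer dereference
}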

@cpuguy83
Member Author

@HuKeping Hi, thanks for the report! This got resolved in #10243

@HuKeping
Contributor

OK, that was fast.

@cpuguy83 cpuguy83 deleted the 9629_volumes-from_unregistered-container branch September 20, 2017 16:59