
No container locks on `docker ps` #31273

Merged
merged 18 commits on Jun 23, 2017

Conversation

@fabiokung
Contributor

fabiokung commented Feb 22, 2017

- What I did

Fixes #28183

We are seeing a fair number of lockups on `docker ps`. More details here.

While things seem to get better with every new version of Docker, there may always be a legitimate reason to hold container locks during some not-so-quick operations. Container queries (`docker ps`) currently try to grab a lock on every single container being inspected. The risk of lockups is high.

I started by trying to pick up @cpuguy83's work on #30225 (the memdb branch). Completely moving the container in-memory store to a consistent ACID DB is a noble cause, but I quickly realized it would be a huge undertaking (references that would need to be deep copied, structs that would need to be broken apart, etc.).

This is a more incremental step towards that goal. Containers keep being stored in the existing in-memory store, and mutations still grab a Lock() to avoid races. We then keep a consistent view of all containers rendered during all of these mutations, so readers (queries) do not need to Lock() anything.

Queries and `docker ps` are now very cheap. There is now virtually no chance of them getting stuck in a lockup, at the expense of some more lock contention during some mutation operations, because the current MemDB implementation (using hashicorp/go-memdb) takes a table-level Lock() during write transactions.

These write locks are very short and hopefully won't be a problem (memdb.Save()). If they are, another in-memory ACID implementation that supports row-level locking could be investigated in the future. Or replication could be done asynchronously (and optimistically), reducing lock contention but causing the read-only view to be eventually consistent.

There is also admittedly some risk of missing parts of the code that mutate containers without replicating state when they should. However, we already replicate container state to disk (container.ToDisk()), so any of these cases should be treated as bugs and covered by tests.

- How I did it

All data that is necessary to serve queries (docker ps) is snapshotted during operations that mutate that data. Typically these mutations already hold a lock on the container object they are mutating. All places in the code calling container.ToDisk() and container.ToDiskLocking() are good candidates to also replicate state to the in-memory rendered consistent view.

Queries use a read-only transaction on the replicated in-memory DB, and don't need to grab locks on each individual container being inspected.
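
To make the data flow concrete, here is a minimal, hypothetical sketch of the pattern described above, using hashicorp/go-memdb directly. The Snapshot fields, table name, and helper names (checkpoint, listAll) are illustrative only and are not the PR's actual code:

package main

import (
	"fmt"
	"sync"

	"github.com/hashicorp/go-memdb"
)

// Snapshot is a flat, pointer-free copy of the container fields that
// queries (docker ps) need.
type Snapshot struct {
	ID      string
	Name    string
	Running bool
}

// Container stands in for the real *container.Container.
type Container struct {
	sync.Mutex
	ID      string
	Name    string
	Running bool
}

const containerTable = "containers"

var schema = &memdb.DBSchema{
	Tables: map[string]*memdb.TableSchema{
		containerTable: {
			Name: containerTable,
			Indexes: map[string]*memdb.IndexSchema{
				"id": {Name: "id", Unique: true, Indexer: &memdb.StringFieldIndex{Field: "ID"}},
			},
		},
	},
}

// snapshot renders the query-visible fields; callers must hold c.Mutex.
func (c *Container) snapshot() *Snapshot {
	return &Snapshot{ID: c.ID, Name: c.Name, Running: c.Running}
}

// checkpoint replicates the container's state into the read-only view. It is
// meant to be called from mutation paths that already hold the container lock,
// mirroring the places that call container.ToDisk().
func checkpoint(db *memdb.MemDB, c *Container) error {
	txn := db.Txn(true) // table-level write lock, held only for this short transaction
	if err := txn.Insert(containerTable, c.snapshot()); err != nil {
		txn.Abort()
		return err
	}
	txn.Commit()
	return nil
}

// listAll serves a query without touching any container locks.
func listAll(db *memdb.MemDB) []*Snapshot {
	txn := db.Txn(false) // read-only transaction over a consistent view
	it, err := txn.Get(containerTable, "id")
	if err != nil {
		return nil
	}
	var out []*Snapshot
	for obj := it.Next(); obj != nil; obj = it.Next() {
		out = append(out, obj.(*Snapshot))
	}
	return out
}

func main() {
	db, err := memdb.NewMemDB(schema)
	if err != nil {
		panic(err)
	}

	c := &Container{ID: "abc123", Name: "web", Running: true}
	c.Lock()
	_ = checkpoint(db, c) // mutation path: replicate while holding the container lock
	c.Unlock()

	for _, s := range listAll(db) { // query path: no container locks needed
		fmt.Println(s.ID, s.Name, s.Running)
	}
}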

In the future, more read operations can be served from the in-memory ACID store, e.g.:

  • docker inspect (shouldn't be too hard, but will require deep copying some pointers currently being held by the container during Snapshot())
  • healthcheck probe results
  • stats collection
  • container.RWLayer
  • ExecConfigs

This way we can incrementally move things as needed/wanted and potentially, one day, completely eliminate container locks (#30225). Eventually, container.ToDisk() could also be replaced by checkpointing the in-memory DB to disk.

I explicitly left out some code paths that mutate parts of the container that queries don't currently care about:

  • container.RemovalInProgress
  • container.BaseFS (all calls to mount.Mount() may mutate the container: daemon.ContainerArchivePath, daemon.ContainerCopy, daemon.ContainerExtractToDir, daemon.ContainerStatPath, ... )
  • container.HasBeenManuallyStopped
  • container.HostsPath, container.ResolvConfPath: being mutated by some networking code paths

- How to verify it

All related docker ps and GetContainerApi cli-integration tests are passing. This should be treated as an internal refactoring; no functionality is being added or changed.

- Description for the changelog

lock-free docker ps, reducing the chances of daemon lockups during queries

- A picture of a cute animal (not mandatory but encouraged)

My dog, Jilly:

@fabiokung
Contributor

fabiokung commented Feb 23, 2017

also related to #28754

@thaJeztah
Member

thaJeztah commented Feb 23, 2017

Thanks for working on this @fabiokung!

@aaronlehmann
Contributor

aaronlehmann commented Feb 23, 2017

I like the idea. This looks way simpler than the original iteration that involved deep copies.

It looks like daemon.containersReplica.Save(c.Snapshot()) is a common pattern, and it needs to be protected by the container lock to avoid transactions overwriting each other. Maybe it would be a good idea to define a method on Container that does this, and add a prominent comment that it should only be called with the lock held (or it could acquire the lock itself, if this pattern works for the call sites).
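
For illustration, here is a minimal sketch of the kind of helper being suggested, in the variant where the method acquires the lock itself. CheckpointTo, ViewDB, and the Snapshot fields are hypothetical names, not the PR's actual API:

package container

import "sync"

// Snapshot is the pointer-free view saved into the replicated store.
type Snapshot struct {
	ID   string
	Name string
}

// ViewDB stands in for whatever backs daemon.containersReplica.
type ViewDB interface {
	Save(*Snapshot) error
}

type Container struct {
	sync.Mutex
	ID   string
	Name string
}

// snapshot renders the fields queries care about; callers must hold the lock.
func (c *Container) snapshot() *Snapshot {
	return &Snapshot{ID: c.ID, Name: c.Name}
}

// CheckpointTo replicates the container's current state into the store. It
// takes the container lock itself, so two concurrent saves cannot interleave
// and overwrite each other with stale data.
func (c *Container) CheckpointTo(store ViewDB) error {
	c.Lock()
	defer c.Unlock()
	return store.Save(c.snapshot())
}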

@fabiokung
Contributor

fabiokung commented Feb 23, 2017

Good point on moving the replication to Container. I'll take a stab at it.

@tonistiigi
Member

tonistiigi commented Feb 23, 2017

I'm a bit torn about this. I stand by my earlier comment that we shouldn't be afraid of locks, just of the wrong implementation of them. This problem doesn't only show up in performance; most places where we call IsRunning/IsPaused/IsRestarting are clear races. If we had proper synchronization between lifecycle states, a plain RWMutex should probably even perform better than this, without the extra maintenance cost. But I don't see anyone taking on this refactor, at least before everything about containerd integration is cleared up. This version does look simpler than the earlier one. Not going to block if other maintainers want to go forward with it.

@fabiokung
Contributor

fabiokung commented Feb 23, 2017

@tonistiigi fair point. This doesn't block going back to locking for reads when/if you are comfortable enough that all concurrency and locking is being properly handled across the codebase. It would be a bit of throwaway work, but at least it fixes the problem we are having today in the short term, until more long-term solutions land (what you propose with proper synchronization between lifecycle states, or "lock-free" transactional mutations everywhere, ...).

I'm not proposing we lock down a decision on how concurrency will be handled long term, but this is a big pain right now and I don't feel we can wait much longer for a big re-architecture to happen.

@fabiokung
Contributor

fabiokung commented Feb 23, 2017

@aaronlehmann I moved the checkpointing/replication operation to *Container. How does it look now?

I like it more, since it is now clearer that checkpointing should probably be done when saving state to disk. It also allowed container.snapshot() to not be public.

@aaronlehmann
Contributor

aaronlehmann commented Feb 24, 2017

Thanks, I think that's an improvement.

@dnephin
Member

dnephin commented Mar 9, 2017

This needs a rebase, but it looks like some of the requested changes were made, so it could use another review as well.

@fabiokung
Contributor

fabiokung commented Mar 10, 2017

Rebased.

@cpuguy83
Contributor

cpuguy83 commented Mar 17, 2017

I wonder if instead of having this intermediate object, we can store a duplicate of the container object like so:

(warning, pseudo-code follows)

func (c *Container) ToDisk() {
    // normal stuff

    snapshot := &Container{ID: c.ID}
    snapshot.FromDisk()
    c.snapshotsStore.Add(snapshot)
}
@fabiokung
Contributor

fabiokung commented Mar 20, 2017

@cpuguy83 isn't that what we discussed here: #31273 (comment)?

Or is it something else I'm missing?

@cpuguy83
Contributor

cpuguy83 commented Mar 20, 2017

@fabiokung Not the same; this would be the actual *container.Container type that gets stored in the snapshot, which is fully copied by definition since it'll be unmarshalled from the on-disk JSON.

@fabiokung
Contributor

fabiokung commented Mar 20, 2017

@cpuguy83 ah, I see, I misunderstood.

It seems like a matter of taste, or of what you want to optimize for. Keeping the Snapshot struct means it will be easier to drive durable persistence off the ACID in-memory store someday, i.e. replace container.ToDisk() with containerstore.SnapshotToDisk(), or even make it transparent by using a persistent implementation of the ACID container store (sqlite3, leveldb, etc.).

Feeding the in-memory store from the current JSON-serialized files means a bit less duplication, but makes it harder to move away from that serialization strategy. It would also mean putting objects with pointers in the store, which is a bit riskier since any code path could start modifying state kept by the store and break its consistency guarantees.

@fabiokung
Contributor

fabiokung commented Mar 20, 2017

... and my personal opinion is that risking breaking the consistency of the in-memory store is not worth the small amount of deduplication.

@cpuguy83
Contributor

cpuguy83 commented Mar 20, 2017

@fabiokung The snapshot object you created is storing reference types as it is, not to mention that the real container object itself can be written to without proper locking and break consistency, potentially even crashing the daemon.

I'm not sure I'm following how this intermediate object helps move us towards a different solution. The on-disk representation is the snapshot, just not particularly optimized for random access like we'd get from a true database.

@fabiokung
Contributor

fabiokung commented Mar 21, 2017

@cpuguy83 did you make up your mind, or are you still thinking about it? This is a simple change, I think, and I'd rather get it merged than try to force a strong opinion.

@cpuguy83
Contributor

cpuguy83 commented Mar 21, 2017

I would prefer the simplest change for this intermediate solution, which is to snapshot from the unmarshalled JSON.
Later we can change ToDisk to actually go to something like bolt.

@tonistiigi @aaronlehmann

@aaronlehmann
Contributor

aaronlehmann commented Mar 21, 2017

@cpuguy83: Is the idea to make a deep copy by marshalling to JSON and unmarshalling back into another object?

@cpuguy83
Contributor

cpuguy83 commented Mar 21, 2017

@aaronlehmann Yes; in particular, we're already marshalling it, so it would just be unmarshalling back into a new object.
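
For illustration, a minimal sketch of the JSON round-trip deep copy being discussed, with a stand-in Container type; in the daemon, the marshalled JSON would be the same data ToDisk() already writes:

package main

import (
	"encoding/json"
	"fmt"
)

type Container struct {
	ID    string
	Name  string
	Ports []int
}

// deepCopyViaJSON marshals src and unmarshals into a fresh object, so slices,
// maps, and nested structs are copied rather than shared with the original.
func deepCopyViaJSON(src *Container) (*Container, error) {
	data, err := json.Marshal(src)
	if err != nil {
		return nil, err
	}
	dst := &Container{}
	if err := json.Unmarshal(data, dst); err != nil {
		return nil, err
	}
	return dst, nil
}

func main() {
	orig := &Container{ID: "abc", Name: "web", Ports: []int{80}}
	dup, _ := deepCopyViaJSON(orig)
	dup.Ports[0] = 8080                      // mutating the copy...
	fmt.Println(orig.Ports[0], dup.Ports[0]) // ...leaves the original untouched: 80 8080
}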

@aaronlehmann
Contributor

aaronlehmann commented Mar 21, 2017

I guess. It feels less than ideal to rely on reflection for this copying, but if we're already marshalling the object for other reasons, it's (hopefully) not happening in any fast paths.

Avoiding hard-to-maintain code that creates a copy of the container for the store would be good.

There's still the issue of the store not being able to make copies on demand to return to callers. However, I don't think the current PR is doing this completely either. I notice that the struct it's returning has some slices, and the contents of these won't be copied by a struct copy.

Could we define an interface that provides read-only access to the applicable Container fields, and return this interface type from the store instead of an actual Container struct? Then it would be way harder to inadvertently make changes to an object in the store. This would also save the overhead of a deep copy on every access.
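
For illustration, a minimal sketch of the read-only accessor idea; the interface name and method set are hypothetical, not what the PR eventually used:

package container

// ReadOnlyContainer exposes only getters, so callers of the view cannot
// mutate objects that live inside the store.
type ReadOnlyContainer interface {
	ID() string
	Name() string
	IsRunning() bool
}

// snapshot is the store's internal representation; it satisfies
// ReadOnlyContainer without exposing its fields for mutation.
type snapshot struct {
	id      string
	name    string
	running bool
}

func (s *snapshot) ID() string      { return s.id }
func (s *snapshot) Name() string    { return s.name }
func (s *snapshot) IsRunning() bool { return s.running }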

@gyliu513

gyliu513 commented Jun 29, 2017

Is it possible to provide a workaround patch for old versions? Many customers hit a docker ps hang due to a problematic container. In this situation, I cannot kill the problematic container, and the docker daemon also cannot be restarted, so I have to restart my docker server. Sometimes it is not acceptable to restart a host in a production environment. @thaJeztah

@thaJeztah
Member

thaJeztah commented Jun 29, 2017

@gyliu513 there's no "one size fits all" workaround. As suggested in your other issue, I'd first recommend upgrading to a more recent version of docker. If the issue still remains, try to narrow down what actually causes the deadlock. Even though deadlocks can happen, they should really not be frequent. When hitting this situation, please open an issue and send a SIGUSR1 to the docker daemon. This should create a stack dump in the logs, which may help in investigating what's causing the deadlock.

@gyliu513

gyliu513 commented Jun 29, 2017

Thanks @thaJeztah, but the problem is that with the current 1.6.2, when I run sudo kill -SIGUSR1 <docker pid>, there is no output...

@thaJeztah
Member

thaJeztah commented Jun 29, 2017

The dump shows up in the daemon logs, not on the command line. I'm not sure docker 1.6.2 had this already; possibly it needs debug to be enabled on the daemon.

@thaJeztah
Member

thaJeztah commented Jun 29, 2017

That debugging functionality was added in docker 1.7 (#10786).

Anyway, this is not really the best location for this discussion - feel free to message me on Slack if you need assistance.

@gyliu513

gyliu513 commented Jun 29, 2017

I googled the docker Slack channel before but found no helpful info; what is the link to the docker Slack channel? @thaJeztah

@thaJeztah
Member

thaJeztah commented Jun 29, 2017

Here's more info about the Slack channel: https://blog.docker.com/2016/11/introducing-docker-community-directory-docker-community-slack/. Register here for the docker community and to get an invitation for the Slack channel: http://dockr.ly/community

@gyliu513

gyliu513 commented Jun 29, 2017

Thanks @thaJeztah. As I cannot get approved for the community, I want to ask you one last question here, hope that is OK. ;-)

Regarding debug mode, it is really helpful for troubleshooting. The question is: is it a good idea to enable debug mode in a production environment? I do want to enable it, but I'm not sure whether there is any overhead, especially in production.

@gyliu513

gyliu513 commented Jun 29, 2017

Got the answer myself: we can turn on debug mode automatically, cool!

@gyliu513

gyliu513 commented Jun 30, 2017

@fabiokung

Container queries (docker ps) currently try to grab a lock on every single container being inspected

One question about this: before your fix, why did docker ps need to grab a lock for each individual container? Why do read-only operations require a lock?

Another issue I want to mention is that when this happens, not only does docker ps hang, but docker images hangs as well. Does your fix cover this? Also, do you know why docker images hangs? Other commands such as docker inspect <container id> and docker logs <container id> work fine.

@fabiokung
Contributor

fabiokung commented Jun 30, 2017

@gyliu513 it uses locks to prevent partial reads and data corruption, since container data is being concurrently modified. Even in read-only operations, locking is how it ensures the state being read is consistent.

This fix only applies to docker ps and was a first step towards fixing the others you mentioned; I did mention them in the description. docker inspect will also hang if you happen to hit one of the containers whose lock is held somewhere else.
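
To illustrate why a single stuck container could block the whole listing before this change, here is a simplified, hypothetical version of the old pattern; the names are illustrative, not the daemon's actual code:

package daemon

import "sync"

type Container struct {
	sync.Mutex
	ID      string
	Running bool
}

// listContainersOld mirrors the pre-fix behaviour: every container is locked
// while its fields are read, so if any single lock is held elsewhere (e.g. by
// a long-running operation), the whole docker ps request blocks behind it.
func listContainersOld(all []*Container) []string {
	var running []string
	for _, c := range all {
		c.Lock() // blocks here if this container's lock is stuck
		if c.Running {
			running = append(running, c.ID)
		}
		c.Unlock()
	}
	return running
}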

@krmayankk

krmayankk commented Aug 1, 2017

@fabiokung which docker versions is this fix available in?

@thaJeztah
Member

thaJeztah commented Aug 1, 2017

@krmayankk this will be included in docker 17.07 and up

@thiagoalves

thiagoalves commented Sep 6, 2017

@thaJeztah How about docker-ee? Will we have it in 17.06-ee3 or ee4? Or should we wait until 17.09?

@cpuguy83
Contributor

cpuguy83 commented Sep 6, 2017

@thiagoalves It will not be in EE until the next major release. I do not see this one getting backported.

@cpuguy83
Contributor

cpuguy83 commented Sep 6, 2017

@thiagoalves btw, if you have a specific case where you are seeing a deadlock, please report it so it can be fixed. This patch is just making the deadlock less apparent, not actually fixing deadlocks.

@robertglen

robertglen commented Nov 21, 2017

17.07 sounds great; when can we expect to see it? I'm only seeing 17.05, with the last commit from May of this year, and old-school docker 1.13.1 from February of this year.

How long do critical fixes affecting an entire ecosystem of projects typically bake before a release is cut?

@cpuguy83
Contributor

cpuguy83 commented Nov 21, 2017

@robertglen 17.07 was released in July...

@tonistiigi
Member

tonistiigi commented Nov 21, 2017

@robertglen You probably need to update your apt/yum repositories from dockerproject.org to download.docker.com. https://docs.docker.com/engine/installation/

@robertglen

robertglen commented Nov 22, 2017

Heh, OK, I just went to the code portion of this repo and looked at branches and tags, which stop at 17.05, and had completely dismissed visiting docker's website :\ My bad.

@euank
Contributor

euank commented Nov 22, 2017

@robertglen

The Moby project doesn't make releases of this repo; rather, Docker Inc. makes releases of the Docker Community Edition software (which includes code from this repo). Those releases are tagged in the docker-ce repository.
