Warning message for lvm devmapper running on top of loopback devices #13266

Merged
merged 1 commit into moby:master from shishir-a412ed:docker_lvm_devmapper_loopbackdevices on Aug 31, 2015

Conversation

@shishir-a412ed (Contributor) commented May 15, 2015

Signed-off-by: Shishir Mahajan shishir.mahajan@redhat.com

@shishir-a412ed (Contributor) commented May 15, 2015
There are 2 changes in this PR:

  1. We want to warn users if they are running devmapper on loopback devices. This is not production ready and should be discouraged.
  2. Users should have the ability to suppress this warning and not be constantly pestered by it.

Docs are updated with a more elaborate description.


@thaJeztah (Member) commented May 16, 2015

ping @vbatts

@vbatts (Contributor) commented May 17, 2015
It doesn't expressly say "it is not supported", but hopefully it will once we decide to never fall back to devicemapper on loop-lvm, because upstream lvm absolutely does not support this loop-lvm setup.

LGTM


@vbatts (Contributor) commented May 17, 2015

@rhvgoyal (Contributor) commented May 18, 2015
This patch takes the warning all the way to the client. The assumption here is that not many people bother to read syslogs and would notice the warning there.

LGTM.


@shishir-a412ed (Contributor) commented May 18, 2015
The following code and doc changes are taken from PR #12404:

  1. daemon/graphdriver/devmapper/README.md: doc change.
  2. lvm devmapper on loopback devices check: daemon/graphdriver/devmapper/deviceset.go
api/client/run.go (outdated diff)
@@ -188,10 +188,15 @@ func (cli *DockerCli) CmdRun(args ...string) error {
 	}()
 	//start the container
-	if _, _, err = readBody(cli.call("POST", "/containers/"+createResponse.ID+"/start", nil, nil)); err != nil {
+	var obj []byte
+	if obj, _, err = readBody(cli.call("POST", "/containers/"+createResponse.ID+"/start", nil, nil)); err != nil {

@runcom (Member) commented May 18, 2015
nit, but can we avoid the var declaration and split it into `obj, _, err := ...` and `if err != nil {}`, then use obj otherwise?


@crosbymichael (Contributor) commented May 18, 2015
Why should this be returned to the client? It's a daemon option and the client should not know about it. The warning should go to the admin who configured and started docker, and we should not have a specific flag to suppress this warning.


@vbatts (Contributor) commented May 18, 2015
@crosbymichael while I understand what you're saying, devicemapper loop-lvm is what so many distros let users default to; this is not an admin specifically configuring to use it. They install docker, get an awful experience with devicemapper on loop-lvm, and have no idea why. We could hope for users that intentionally configure the storage driver for their optimal use case ...


@cgwalters (Contributor) commented May 19, 2015
Can we land #13323 first, then rebase this on top?


@cgwalters (Contributor) commented Jun 10, 2015
One issue with this is that for any usage of the client binary in scripts, all of a sudden we're injecting new output to stderr.

# docker run -ti centos echo hello |& wc -l
2

I hope not too many people are parsing stderr, but it's something to be aware of.

One compromise we might make is to only emit the warning when the client is connected to a tty.


@icecrime (Contributor) commented Jun 26, 2015
Agree with Michael: error message shouldn't be shown on the client side (and especially not mixed with regular output).


api/server/server.go (outdated diff)
@@ -1047,6 +1048,11 @@ func (s *Server) postContainersStart(version version.Version, w http.ResponseWri
 		}
 		return err
 	}
+	if devmapper.WarnOnLoopback && devmapper.LoopbackInUse {
+		fmt.Fprintln(w, "Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.")

@icecrime (Contributor) commented Jun 26, 2015
This should be a log.


@rhatdan (Contributor) commented Jun 27, 2015
I am not a big fan of this either. The problem is our file system team wants it badly, to make sure customers do not accidentally put this into production, and they feel that a log message will be ignored. I believe there is another patch that writes this message to the syslog system.


@shishir-a412ed (Contributor) commented Jun 27, 2015
Seems like we are reaching an impasse :)
We understand that a warning message on the client side may not be the best place, but do we really want docker customers to be unhappy with the experience they get running docker?

Scenario: I am a docker user, not a storage guy. I just started docker (which is running on loopback devices by default). I didn't set any options. Now my docker is running slow. It didn't blow up, it's just running slow. I have no clue why, but I am not happy with the experience!

Honestly, this is not the first time we are having a warning message on the client side. Try updating an image with the docker load command: you'll get a warning message that the old image is getting renamed to none:none. The load will still go through, as it's a warning message. I think this one is much more critical.


@mrjana (Contributor) commented Jul 22, 2015
@shishir-a412ed Based on the discussion @tiborvass and I had, the warning definitely needs to be part of the daemon log, and it can also be part of docker info so users can see it straight away when they check either the syslog or docker info. Showing it on the client side is tricky because the user cannot distinguish whether the message is coming from the daemon or from the application running in the container, unless the container was started in detached mode.


@rhatdan (Contributor) commented Jul 23, 2015
I agree this should be split into two different pull requests: one to reveal the information in the docker logs and docker info, and one to output the data to the client. If docker only accepted the first patch, I would be fine with it. We are also working on a tool to help users migrate from one back end to another; this would give someone who went into production using loopback a way to migrate off without losing their containers. It should also allow people to move between back ends generally, say to try out Overlayfs.


@rhvgoyal (Contributor) commented Jul 23, 2015
docker info already shows that we are using loop devices.

Putting a warning in there too feels a little odd to me. I feel the warning should be part of syslogs and, if possible, carried all the way to the client.


@rhatdan (Contributor) commented Jul 23, 2015
I look at that data and have no idea what it means. Adding (Developer Mode) or (No Production Mode) would be useful.


@rhvgoyal (Contributor) commented Jul 23, 2015
I am afraid this mode field would start getting overloaded soon, and one would have to go through the logs anyway to figure out why their docker is in developer mode and not in production mode. And if one has to go through the logs anyway, why introduce more fields in docker info?


@rhvgoyal (Contributor) commented Jul 23, 2015
And then there will be arguments about when something is considered developer mode and when it goes into production mode. For example:

  • Is using overlay developer mode or production mode?
  • Say there is some lvm thin pool configuration which is not recommended but the user has set up. Should we then transition docker into developer mode?

IOW, what I am trying to say is that trying to classify docker/storage configuration into "Developer/Production" mode is not going to be sufficient. There can be many reasons for docker to be in a certain mode, and the user will have to walk through the logs anyway. It is not a replacement for that.


@icecrime (Contributor) commented Aug 10, 2015
@shishir-a412ed Some concerns were raised a while ago by @crosbymichael and @mrjana (with @tiborvass), especially regarding the fact that this error is produced to the client as part of a docker run. Can you please address them? Thanks.


@shishir-a412ed (Contributor) commented Aug 10, 2015
@rhvgoyal @rhatdan
What do you guys want to do? If you are okay with putting the message in the daemon log, I can quickly make the changes.

If not, I can just close this PR. I see a lot of back and forth happening on this one, with no consensus.


@rhatdan (Contributor) commented Aug 11, 2015
I think we should just log to the syslog upstream. Our file system teams insist on us logging to the client, so we need to carry the patch until we can change the default.


@rhvgoyal (Contributor) commented Aug 11, 2015
Agreed. One step at a time. Let us first log it in syslog and then see what else can be done to make it more explicit.


@shishir-a412ed (Contributor) commented Aug 11, 2015
@icecrime I have made the changes.


daemon/graphdriver/devmapper/deviceset.go (outdated diff)
@@ -1397,6 +1397,12 @@ func (devices *DeviceSet) initDevmapper(doInit bool) error {
 		}
 	}
 	if devices.thinPoolDevice == "" {
+		if devices.metadataLoopFile != "" || devices.dataLoopFile != "" {
+			logrus.Warnf("Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev`.")

@tiborvass (Collaborator) commented Aug 11, 2015
It would be great to output a URL to the docs where users can read more about how to properly configure devmapper.


@shishir-a412ed (Contributor) commented Aug 12, 2015
@tiborvass I have made some changes. Take a look now and see if this is good.


@cpuguy83 (Contributor) commented Aug 24, 2015
LGTM


@calavera (Contributor) commented Aug 25, 2015
LGTM. Moving to docs review


@bfirsh (Contributor) commented Aug 28, 2015
@SvenDowideit

This comment has been minimized.

Show comment
Hide comment
@SvenDowideit
Contributor

SvenDowideit commented Aug 29, 2015

daemon/graphdriver/devmapper/README.md (outdated diff)
As a fallback if no thin pool is provided, loopback files will be
created. Loopback is very slow, but can be used without any
pre-configuration of storage. It is *strongly recommended* to not use

@jamtur01 (Contributor) commented Aug 29, 2015

This phraseology is very awkward - perhaps: "It is strongly recommended that you do not use loopback in production." Ditto below.


daemon/graphdriver/devmapper/README.md (outdated diff)
`--storage-opt dm.thinpooldev` argument provided.
In loopback, each devicemapper graph location (typically
`/var/lib/docker/devicemapper`, $graph below) a thin pool is created

@jamtur01 (Contributor) commented Aug 29, 2015

The $graph reference isn't clear - I'd rewrite this sentence.


docs/reference/commandline/daemon.md (outdated diff)
As a fallback if no thin pool is provided, loopback files will be
created. Loopback is very slow, but can be used without any
pre-configuration of storage. It is *strongly recommended* to not use
loopback in production. Ensure your docker daemon has a

@jamtur01 (Contributor) commented Aug 29, 2015

Docker


daemon/graphdriver/devmapper/README.md (outdated diff)
As a fallback if no thin pool is provided, loopback files will be
created. Loopback is very slow, but can be used without any
pre-configuration of storage. It is *strongly recommended* to not use
loopback in production. Ensure your docker daemon has a

@jamtur01 (Contributor) commented Aug 29, 2015

Docker


@shishir-a412ed (Contributor) commented Aug 31, 2015
@jamtur01 Please check now.


daemon/graphdriver/devmapper/README.md (outdated diff)
one for metadata. By default these block devices are created
automatically by using loopback mounts of automatically created sparse
[…]
module (dm-thinp) to implement CoW snapshots. The preferred model is
to have a thin pool reserved outside of Docker, and passed to the

@jamtur01 (Contributor) commented Aug 31, 2015

You don't need a comma here after Docker, but no big deal.


@jamtur01 (Contributor) commented Aug 31, 2015
Docs broadly LGTM


Warning message for lvm devmapper running on top of loopback devices
Signed-off-by: Shishir Mahajan <shishir.mahajan@redhat.com>
@thaJeztah (Member) commented Aug 31, 2015
docs LGTM, thanks @shishir-a412ed!


thaJeztah added a commit that referenced this pull request Aug 31, 2015

Merge pull request #13266 from shishir-a412ed/docker_lvm_devmapper_loopbackdevices

Warning message for lvm devmapper running on top of loopback devices

@thaJeztah thaJeztah merged commit 81d6f35 into moby:master Aug 31, 2015

@shishir-a412ed shishir-a412ed deleted the shishir-a412ed:docker_lvm_devmapper_loopbackdevices branch Oct 5, 2015
