Bind mount implementation and tests #602

Merged
merged 4 commits into from
@gabrtv

Despite the well-intentioned reluctance toward this feature expressed in #111, being able to mount host directories into containers is critical for many use cases. I've completed a naive implementation (with tests). I hope we can get something like this into master soon!

Usage is as follows:

docker run -b /host base ls /host                     # mounts /host through to the container
docker run -b /host:/container base ls /container     # mounts /host to /container
docker run -b /host:/container:ro base ls /container  # mounts /host to /container in read-only mode

The bind mounts default to read-write but also support read-only mode. I worked this out by modifying lxc.mount.entry manually and working backwards. Given my unfamiliarity with the code-base, this should get close review before being merged.
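
For reference, each bind ends up as an LXC mount entry along these lines (a sketch; <rootfs> stands for the container's root filesystem path on the host):

lxc.mount.entry = /host <rootfs>/container none bind,ro 0 0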

@flaub

+1

@unclejack
Collaborator

There were some objections concerning how this feature works. I've written an article documenting some needs around an alternative way to do this, plus a few things that would make volumes more useful both during development and in production:
Volumes & persistent data storage

@shykes shykes was assigned
@creack creack was assigned
@shykes
Owner

From a UI standpoint, I'm happy to merge this, except I would like to force /foo:/bar instead of allowing /foo, to make the workflow less host-specific.

@unclejack
Collaborator

@shykes Are you saying that you'd like /foo:/bar to work like the volumes? If so, we could save these bind mounts to the config of the image / container and reuse them.

I'm not sure whether this PR could be moved to be merged into a new branch so that it could be developed further there. Of course, it'd also be possible to merge it into master and develop it further in a feature branch.

@shykes
Owner
@niclashoyer

@shykes @gabrtv @unclejack is anyone actively working on this? Would love to see this in the next release :heart:

@heavenlyhash

Obligatory +1. Docker is gorgeous and I want to all-the-things with it, but this ticket and its predecessors are absolute blockers for me.

Breaking container portability a bit with external mounts is an acceptable tradeoff, I think. If I'm using this feature to keep a database on fast media, for example, I don't necessarily expect the semantics of the process in the container to survive without that data anyway. As long as the container can either make it through docker run myimage /bin/bash, or give me an error message telling me which directories it expects, I'm happy. (The former behavior could involve just mounting an unwritable empty filesystem in the container if it can't find the desired paths outside the container, perhaps?)

@solomonstre
@pasky
@solomonstre
@srobertson

Just playing around with this patch, as mounting an external location is important enough to me to try it without waiting for docker 0.5

Thought I'd share this note if anyone else decides to give it a spin.

You need to ensure the mount point inside the container exists, or else you'll get a cryptic error.

For example if you execute

docker run -b /dir1/dir2 

The directory /dir1/dir2 must already exist in the container or else you'll receive

failed to mount '/dir1/dir2' on '/usr/lib/lxc/root//dir1/dir2'
lxc-start: failed to setup the mount entries for '24920f312c543a2cef6c76ab4fac7ae45345a54d938e3c0418ea71900a4dd42b'
lxc-start: failed to setup the container
lxc-start: invalid sequence number 1. expected 2
lxc-start: failed to spawn '24920f312c543a2cef6c76ab4fac7ae45345a54d938e3c0418ea71900a4dd42b'

It would be nice if the final version of this feature was smart enough to create the mount points inside the container if they are missing.

@gabrtv

Glad to see this bumped in priority! I'm happy to continue working it. It sounds like there are four things we should be addressing:

  1. Force explicit source and destination arguments (e.g. /foo:/bar)
  2. Improve error handling around missing paths (both source and dest)
  3. Update the tests accordingly
  4. Add documentation

Let me know if you'd like me to continue on this, or you'd rather one of the core devs handle it. @shykes @unclejack

@unclejack
Collaborator

@gabrtv Please feel free to continue working on this. I wrote above about exploring other options for this PR in case you weren't interested in developing it further, but I'm happy to see that's not the case.

The list of changes you've proposed is reasonable.

We can also discuss on IRC if you need some input or just need to talk about something.

@gabrtv

@unclejack Sorry I've been incommunicado on the PR.. just got back from a trip abroad without Internet access (wipes brow). I hope to get back to this later this week after I catch up on vacation backlog.

@gabrtv

@unclejack I've cleaned up the PR as follows:

  • Merged current master (0.4.0+)
  • Now forcing /foo:/bar per @shykes request above
  • Added a comment and fixed newlines in the LXC template
  • Clarified usage in run docs

With regard to error handling on missing host/container directories, I find the errors produced by lxc-start to be sufficient, but I'll let others be the judge.

Missing host directory:

$ docker run -b /missing:/tmp -t -i lucid64 bash
lxc-start: No such file or directory - failed to mount '/missing' on '/usr/lib/lxc/root//tmp'
lxc-start: failed to setup the mount entries for '365878af00f71ee024692755c43a94d39c7bc7a0905186d9668f93897d564399'
lxc-start: failed to setup the container
lxc-start: invalid sequence number 1. expected 2
lxc-start: failed to spawn '365878af00f71ee024692755c43a94d39c7bc7a0905186d9668f93897d564399'

Missing container directory:

$ docker run -b /tmp:/missing -t -i lucid64 bash
lxc-start: No such file or directory - failed to mount '/tmp' on '/usr/lib/lxc/root//missing'
lxc-start: failed to setup the mount entries for 'ddb1888e82f3a7672f365123e67559bca4b707afe2ca7e06da36ee8cfb24c75b'
lxc-start: failed to setup the container
lxc-start: invalid sequence number 1. expected 2
lxc-start: failed to spawn 'ddb1888e82f3a7672f365123e67559bca4b707afe2ca7e06da36ee8cfb24c75b'

It's important to note that running the container in daemon mode (-d) swallows the error:

$ docker run -b /tmp:/missing -d lucid64 sleep 60
193bab2528b4
$ docker ps -a
ID                  IMAGE               COMMAND                CREATED              STATUS              PORTS
193bab2528b4        lucid64:latest      sleep 60               3 seconds ago        Exit 255            

...however, given the advanced nature of this feature, I can live with the current behavior.

@solomonstre
@gabrtv

Agreed. We certainly don't want this inherited. I'm happy to take a closer look, but in the meantime is there a different strategy you have in mind?

@gabrtv

@solomonstre After taking a closer look, I can't see a quick & easy way to remove binds from the Config object. The relevant code path seems to be:

CLI
  • func (cli *DockerCli) CmdRun(args ...string) error
    • func ParseRun(args []string, capabilities *Capabilities) (*Config, *flag.FlagSet, error)
    • func (cli *DockerCli) call(method, path string, data interface{}) ([]byte, int, error)
API
  • func postContainersCreate(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string)
    • func (srv *Server) ContainerCreate(config *Config) (string, error)
      • func (builder *Builder) Create(config *Config) (*Container, error)
  • func postContainersStart(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error
    • func (srv *Server) ContainerStart(name string) error
      • func (container *Container) Start() error

I can see two options for achieving the goal you outlined:

  1. Keep Binds in the Config, but try to exclude them during ContainerCommit
  2. Introduce a new "HostConfig" used during container start operations

The second option would involve:

  • Changing ParseRun to return a new HostConfig containing bind mounts and anything else
  • Changing CmdRun to POST to /containers/:id/start with the HostConfig body (instead of the current nil body)
  • Changing postContainersStart to deserialize the HostConfig and pass it to srv.ContainerStart
  • Make srv.ContainerStart and container.Start take a HostConfig
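
A rough sketch of the shape I have in mind (just to make the proposal concrete; nothing here is final):

// HostConfig carries host-specific settings that should not be committed with the image
type HostConfig struct {
    Binds []string // e.g. "/host:/container" or "/host:/container:ro"
}

// ParseRun would gain an extra return value:
//   func ParseRun(args []string, capabilities *Capabilities) (*Config, *HostConfig, *flag.FlagSet, error)
// and the start path would thread it through:
//   func (srv *Server) ContainerStart(name string, hostConfig *HostConfig) error
//   func (container *Container) Start(hostConfig *HostConfig) error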

Thoughts? Am I missing something?

@shykes shykes was assigned
@shykes
Owner

I agree with option 2. It makes sense to have a HostConfig that is separate from the image's config.
Later this can be used as a foundation for a configuration system. HostConfig values could be defined
in a cascading way, much like CSS: in a top-level configuration file, as command-line options to the daemon,
as command-line options to the run (eg. the subject of this present conversation).

Other things that I see applying to HostConfig:

* DNS overrides (currently in 'docker run -dns')
* Passing physical interfaces to a container (eg. eth0)
* Exposing device files to a container (eg. /dev/fuse)
* Middleware hooks

Two points to discuss:

  1. Should HostConfig fields have their own command-line flags, eg '-b' for bind-mount, '-dns' for dns config etc.? Or
    should we define a single flag for setting the field of your choice, eg. '-o mounts/pgdata=/var/lib/postgres:/mnt/postgres-data'
    or '-o dns=8.8.8.8'. I'm thinking we stick to regular flags for now, and figure out a generic system later, when it's needed.

  2. Host mounts should always have a corresponding data volume (as created by -v). We can do that in 2 ways:

    a) -b must be preceded by an explicit -v, or it fails with "No data volume declared at /var/lib/postgres, cannot mount".
    b) -b creates the missing data volume on the fly when needed. This way listing a container's volumes will always yield
    the expected result (which is to show all "special" persistent directories), and the next commit will inherit
    the new data volume (without inheriting the host-specific part).

Let me know if you'd like to get some help on this, this PR is high on the list so if we can parallelize, we should.

Thanks!

@gabrtv

Should HostConfig fields have their own command-line flags

To me, the distinction between what goes in HostConfig versus Config is an implementation detail that most users won't be concerned with. I lean toward keeping regular flags.

Host mounts should always have a corresponding data volume (as created by -v)

From a UX standpoint having -b create the volume seems friendlier, but is it too magical? To be honest, I'm not very familiar with the current volumes implementation. I'll defer to others on what makes the most sense here.

I understand the high priority and am happy to take a crack at the "MVPR", which to me excludes (for now):

  • Cascading configuration
  • Moving other Config items into HostConfig (e.g. -dns)

and includes:

  • Changing ParseRun to return a new HostConfig containing bind mounts
  • Changing CmdRun to POST to /containers/:id/start with the HostConfig body (instead of the current nil body)
  • Changing postContainersStart to deserialize the HostConfig and pass it to srv.ContainerStart
  • Make srv.ContainerStart and container.Start take a HostConfig

...as well as whatever we decide for the '-v' requirements, plus fixing the multitude of tests that will be broken by changing function signatures such as Container.Start's.

With regard to parallelization, it seems like the HostConfig struct, function signatures, and new Start implementations need to happen first.

Thoughts @solomonstre?

@shykes
Owner

Sounds good to me, I think we're on the same page. Go for it! I would recommend asking for feedback frequently to avoid wasting your time if we start diverging :)

Re: magical interactions between -v and -b, what do you guys think? @creack @vieux @jpetazzo @unclejack (and anybody else with actual experience playing with docker + volumes)

@unclejack
Collaborator

@shykes I wouldn't worry about it too much right now; Docker is not yet stable and production-ready, so we'll be able to make fixes and changes.

In my opinion, -v and -b should be able to work together for now. We can add some special checks later to prevent the nesting of bind mounts within volumes if that becomes a problem.

@gabrtv

Merged with current master. The relevant changes are in this commit: ffe794f

This turned out to be fairly straightforward. HostConfig is now passed through all container Start operations. That's where bind mount configuration is stored. Should be fairly extensible. Over to you @shykes!
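
Concretely, the client now sends the HostConfig as the body of the start call, e.g. (sketch; this matches the remote API doc changes in the diff):

POST /containers/(id)/start HTTP/1.1
Content-Type: application/json

{"Binds": ["/tmp:/tmp"]}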

@vieux
Owner

This will require a bump of the remote API version (and the docs that go with it).

I know this is expected behaviour, but it would be nice if -b could create the directory in the container
(i.e. have a working docker run -b /tmp:/missing -t -i lucid64 bash).

@gabrtv

@vieux that should handle creating missing directories. I added test coverage too.
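
The missing-directory handling boils down to creating the mountpoint inside the container's rootfs before lxc-start runs; roughly (this is what the container.go change does):

// create the mountpoint inside the rootfs so the bind target always exists
if err := os.MkdirAll(path.Join(container.RootfsPath(), volPath), 0755); err != nil {
    return err
}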

With regard to bumping API version and docs, is that something you'd like me to do in this PR?

@vieux
Owner

@shykes your call about the API version.
I say we merge this PR, merge your PR about build and version 1.3, and add the docs for this PR in version 1.3 (no need to have API 1.3 for only half a day and then jump to 1.4).

@vieux
Owner

@gabrtv could you bump to master and, regarding the docs:

Update the what's new section for the 1.3 api in docs/sources/api/docker_remote_api.rst
and update docs/sources/api/docker_remote_api_v1.3.rst

Thanks

@gabrtv

@vieux docs updated and latest master merged. Let me know if you need anything else on this PR.

@vieux
Owner

@gabrtv thanks, it looks very good. One thing: docker run -b /:. -i -t base bash fails (expected), but you can't quit the client. Is it possible for you to prevent this?

Otherwise LGTM

@vieux
Owner

awesome, thanks

api.go
@@ -551,11 +551,17 @@ func deleteImages(srv *Server, version float64, w http.ResponseWriter, r *http.R
}
func postContainersStart(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
+ hostConfig := &HostConfig{}
+
+ if err := json.NewDecoder(r.Body).Decode(hostConfig); err != nil {
@shykes Owner
shykes added a note

I think we should continue accepting an empty body, in which case we would pass an empty HostConfig. That way we preserve compatibility with older clients.

@gabrtv

@shykes good catch. fixed and updated docs.

@shykes
Owner

When I run 'docker help run' I see the following output:

$ docker help run
[...]
  -b=[]: Bind mount a volume from the host
[...]

It's difficult to guess what to pass exactly. Note that other flags suffer from the same problem - but still, it would be nice to give a user-friendly hint.

(edited to show the right flag :)
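
A more descriptive usage string might be enough, e.g. (this is roughly what the run docs in this PR use):

  -b=[]: Create a bind mount with: [host-dir]:[container-dir]:[rw|ro]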

@shykes
Owner

(I edited the previous comment to show the correct flag)

@gabrtv

@shykes just merged master again and double checked the test suite. let me know if there's anything else you need here.

@gabrtv

@shykes -b flags are now treated as -v flags early on in ParseRun. Seems to work on my end:

# docker run -t -i -b /tmp:/missing base sleep 2 
# docker inspect 3dac85f068de
[2013/06/24 23:51:09 GET /v1.3/containers/3dac85f068de/json
{
    "ID": "3dac85f068de8fa208f486134b008892c219f6f8c4a22f1491a8ee8cd99b4f7f",
    "Created": "2013-06-24T23:50:54.593031323Z",
    "Path": "sleep",
    "Args": [
        "2"
    ],
    "Config": {
        "Hostname": "3dac85f068de",
        "User": "",
        "Memory": 0,
        "MemorySwap": 0,
        "CpuShares": 0,
        "AttachStdin": true,
        "AttachStdout": true,
        "AttachStderr": true,
        "PortSpecs": null,
        "Tty": true,
        "OpenStdin": true,
        "StdinOnce": true,
        "Env": null,
        "Cmd": [
            "sleep",
            "2"
        ],
        "Dns": null,
        "Image": "base",
        "Volumes": {
            "/missing": {}
        },
        "VolumesFrom": ""
    },
    "State": {
        "Running": false,
        "Pid": 0,
        "ExitCode": 0,
        "StartedAt": "2013-06-24T23:50:54.596669666Z",
        "Ghost": false
    },
    "Image": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc",
    "NetworkSettings": {
        "IPAddress": "",
        "IPPrefixLen": 0,
        "Gateway": "",
        "Bridge": "",
        "PortMapping": null
    },
    "SysInitPath": "/root/go/bin/docker",
    "ResolvConfPath": "/etc/resolv.conf",
    "Volumes": {
        "/missing": "047de3ada91608514da74f1eb9d76853c636c51a5483ad2983f7b829a80195a7"
    },
    "Binds": [
        {
            "SrcPath": "/tmp",
            "DstPath": "/missing",
            "Mode": "rw"
        }
    ]
}]
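
For reference, the ParseRun change mentioned above amounts to folding each bind destination into the volume map, roughly (this mirrors the container.go hunk below):

// add any bind targets to the list of container volumes
for _, bind := range flBinds {
    arr := strings.Split(bind, ":")
    dstDir := arr[1]
    flVolumes[dstDir] = struct{}{}
}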
@shykes
Owner

Hey @gabrtv, I'm pushing some amendments to this branch: https://github.com/dotcloud/docker/tree/111-bind-mounts-AMENDMENTS

I don't have permission to update your pull request, would you mind pulling them in? I have another commit or two coming. These fix the problem we talked about, i.e. getting the binds and volumes to play nicely with each other.

I still need to push 2 commits to fix 2 things:

1) Your test doesn't pass, but the functionality it tests actually works (from another out-of-band test). Trying to figure out what's wrong.

2) I temporarily disabled ro mode. It's already coded, but I'm breaking down commits for clarity.

@gabrtv

@shykes I'm out of the office today. I'm happy to pick this up tonight or tomorrow. In the meantime I've granted you write access to my fork. Feel free to update the PR.

Btw, I thought I fixed that broken test in: 1cf32c1

Thanks for helping take this across the finish line!

@shykes
Owner

Thanks @gabrtv for giving me access. I pushed my amendments to your branch; I will delete my other branch.

@vieux yes, that test fails and I haven't found out why yet - the feature it is testing seems to work fine. I think this is the last point keeping us from merging.

@vieux
Owner

@shykes there are 2 issues.

1) The whole functionality is broken: docker run -b /tmp:/tmp base ls /tmp doesn't work.
2) You need to add volumes to the tests (it's done by ParseRun) at lines 1261 and 1293, like this:

 container, err := NewBuilder(runtime).Create(&Config{
                 Image:   GetTestImage(runtime).ID,
+                Volumes: map[string]struct{}{"/tmp": struct{}{}},
                 Cmd:     []string{"ls", "/tmp"},
         },
@solomonstre
@gabrtv

@shykes @vieux tests are now passing with a merge of current master. A couple notes:

  1. Some stray '+' characters ended up in the LXC template and were breaking the base functionality
  2. There was a dupe init of container.VolumesRW that was breaking RW (everything was RO)
  3. As @vieux noted, the test code needed to add Volumes to the container Config since ParseRun isn't being called

I'm not sure where @shykes' test refactoring code went -- it seemed like a better testing approach -- but this works.

@creack

I really don't like the unit-test refactoring. Now, when there is an issue, it always comes from utils_test.go, so we need to go looking for the failed test.
Otherwise, LGTM.

@shykes
Owner

@creack I understand what you mean. It's a tradeoff either way: either we have long and repetitive tests with exact information on which line failed - or we have very short and readable tests with less precise line information.

I think the reason you need exact line information right now is precisely because our tests are too long and too repetitive. With a clean table-driven test, line information is not as important, and not sufficient anyway since the same line might be used for multiple tests in a loop. See build_test.go for an example.
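
For illustration, a table-driven test looks roughly like this (a generic sketch, not code from this PR; parseBind is the hypothetical helper the FIXME in container.go mentions):

func TestParseBind(t *testing.T) {
    cases := []struct {
        input          string
        src, dst, mode string
        ok             bool
    }{
        {"/host:/container", "/host", "/container", "rw", true},
        {"/host:/container:ro", "/host", "/container", "ro", true},
        {"bogus", "", "", "", false},
    }
    for _, c := range cases {
        src, dst, mode, err := parseBind(c.input)
        if (err == nil) != c.ok || src != c.src || dst != c.dst || mode != c.mode {
            t.Errorf("parseBind(%q): got %q, %q, %q, %v", c.input, src, dst, mode, err)
        }
    }
}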

@ptone ptone referenced this pull request in ptone/jiffylab
Open

setup lab-wide shared folder #12

@shykes
Owner

I'm rebasing for cleanness... Brace yourselves :)

@shykes
Owner

LGTM

@shykes shykes merged commit 3e29695 into from
@gabrtv gabrtv deleted the branch
@denibertovic

any plan on when the new release is going to be?

@solomonstre
@denibertovic

pure awesomeness! :+1:

@stigi

:+1: you guys rock! epic pull request :100:

@mindreframer

:thumbsup: great one!

@elbaschid

You guys rock so hard :thumbsup:

@kencochrane
Owner

I wish we had added better docs before we pushed this live. There are no examples of how to use it, and the only references I could find were in the API docs and a small mention in the run command's flag list.

@jwmarshall

These binds/mounts are persisted after a restart of the container, but they seem to be broken. That is to say, I see them listed in the container's mount output, but the directory is empty.

Client version: 0.4.8
Server version: 0.4.8
Go version: go1.1

OS: Ubuntu 13.04

@vieux
Owner

@jwmarshall it'll be fixed by #1102

@jwmarshall

@vieux Thanks!

11 api.go
@@ -551,11 +551,20 @@ func deleteImages(srv *Server, version float64, w http.ResponseWriter, r *http.R
}
func postContainersStart(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
+ hostConfig := &HostConfig{}
+
+ // allow a nil body for backwards compatibility
+ if r.Body != nil {
+ if err := json.NewDecoder(r.Body).Decode(hostConfig); err != nil {
+ return err
+ }
+ }
+
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
- if err := srv.ContainerStart(name); err != nil {
+ if err := srv.ContainerStart(name, hostConfig); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
26 api_test.go
@@ -873,7 +873,8 @@ func TestPostContainersKill(t *testing.T) {
}
defer runtime.Destroy(container)
- if err := container.Start(); err != nil {
+ hostConfig := &HostConfig{}
+ if err := container.Start(hostConfig); err != nil {
t.Fatal(err)
}
@@ -917,7 +918,8 @@ func TestPostContainersRestart(t *testing.T) {
}
defer runtime.Destroy(container)
- if err := container.Start(); err != nil {
+ hostConfig := &HostConfig{}
+ if err := container.Start(hostConfig); err != nil {
t.Fatal(err)
}
@@ -973,8 +975,15 @@ func TestPostContainersStart(t *testing.T) {
}
defer runtime.Destroy(container)
+ hostConfigJSON, err := json.Marshal(&HostConfig{})
+
+ req, err := http.NewRequest("POST", "/containers/"+container.ID+"/start", bytes.NewReader(hostConfigJSON))
+ if err != nil {
+ t.Fatal(err)
+ }
+
r := httptest.NewRecorder()
- if err := postContainersStart(srv, APIVERSION, r, nil, map[string]string{"name": container.ID}); err != nil {
+ if err := postContainersStart(srv, APIVERSION, r, req, map[string]string{"name": container.ID}); err != nil {
t.Fatal(err)
}
if r.Code != http.StatusNoContent {
@@ -989,7 +998,7 @@ func TestPostContainersStart(t *testing.T) {
}
r = httptest.NewRecorder()
- if err = postContainersStart(srv, APIVERSION, r, nil, map[string]string{"name": container.ID}); err == nil {
+ if err = postContainersStart(srv, APIVERSION, r, req, map[string]string{"name": container.ID}); err == nil {
t.Fatalf("A running containter should be able to be started")
}
@@ -1019,7 +1028,8 @@ func TestPostContainersStop(t *testing.T) {
}
defer runtime.Destroy(container)
- if err := container.Start(); err != nil {
+ hostConfig := &HostConfig{}
+ if err := container.Start(hostConfig); err != nil {
t.Fatal(err)
}
@@ -1068,7 +1078,8 @@ func TestPostContainersWait(t *testing.T) {
}
defer runtime.Destroy(container)
- if err := container.Start(); err != nil {
+ hostConfig := &HostConfig{}
+ if err := container.Start(hostConfig); err != nil {
t.Fatal(err)
}
@@ -1113,7 +1124,8 @@ func TestPostContainersAttach(t *testing.T) {
defer runtime.Destroy(container)
// Start the process
- if err := container.Start(); err != nil {
+ hostConfig := &HostConfig{}
+ if err := container.Start(hostConfig); err != nil {
t.Fatal(err)
}
5 buildfile.go
@@ -87,7 +87,7 @@ func (b *buildFile) CmdRun(args string) error {
if b.image == "" {
return fmt.Errorf("Please provide a source image with `from` prior to run")
}
- config, _, err := ParseRun([]string{b.image, "/bin/sh", "-c", args}, nil)
+ config, _, _, err := ParseRun([]string{b.image, "/bin/sh", "-c", args}, nil)
if err != nil {
return err
}
@@ -263,7 +263,8 @@ func (b *buildFile) run() (string, error) {
fmt.Fprintf(b.out, " ---> Running in %s\n", utils.TruncateID(c.ID))
//start the container
- if err := c.Start(); err != nil {
+ hostConfig := &HostConfig{}
+ if err := c.Start(hostConfig); err != nil {
return "", err
}
4 commands.go
@@ -1235,7 +1235,7 @@ func (cli *DockerCli) CmdTag(args ...string) error {
}
func (cli *DockerCli) CmdRun(args ...string) error {
- config, cmd, err := ParseRun(args, nil)
+ config, hostConfig, cmd, err := ParseRun(args, nil)
if err != nil {
return err
}
@@ -1274,7 +1274,7 @@ func (cli *DockerCli) CmdRun(args ...string) error {
}
//start the container
- if _, _, err = cli.call("POST", "/containers/"+runResult.ID+"/start", nil); err != nil {
+ if _, _, err = cli.call("POST", "/containers/"+runResult.ID+"/start", hostConfig); err != nil {
return err
}
124 container.go
@@ -52,6 +52,9 @@ type Container struct {
waitLock chan struct{}
Volumes map[string]string
+ // Store rw/ro in a separate structure to preserve reserve-compatibility on-disk.
+ // Easier than migrating older container configs :)
+ VolumesRW map[string]bool
}
type Config struct {
@@ -75,7 +78,17 @@ type Config struct {
VolumesFrom string
}
-func ParseRun(args []string, capabilities *Capabilities) (*Config, *flag.FlagSet, error) {
+type HostConfig struct {
+ Binds []string
+}
+
+type BindMap struct {
+ SrcPath string
+ DstPath string
+ Mode string
+}
+
+func ParseRun(args []string, capabilities *Capabilities) (*Config, *HostConfig, *flag.FlagSet, error) {
cmd := Subcmd("run", "[OPTIONS] IMAGE [COMMAND] [ARG...]", "Run a command in a new container")
if len(args) > 0 && args[0] != "--help" {
cmd.SetOutput(ioutil.Discard)
@@ -111,11 +124,14 @@ func ParseRun(args []string, capabilities *Capabilities) (*Config, *flag.FlagSet
flVolumesFrom := cmd.String("volumes-from", "", "Mount volumes from the specified container")
+ var flBinds ListOpts
+ cmd.Var(&flBinds, "b", "Bind mount a volume from the host (e.g. -b /host:/container)")
+
if err := cmd.Parse(args); err != nil {
- return nil, cmd, err
+ return nil, nil, cmd, err
}
if *flDetach && len(flAttach) > 0 {
- return nil, cmd, fmt.Errorf("Conflicting options: -a and -d")
+ return nil, nil, cmd, fmt.Errorf("Conflicting options: -a and -d")
}
// If neither -d or -a are set, attach to everything by default
if len(flAttach) == 0 && !*flDetach {
@@ -127,6 +143,14 @@ func ParseRun(args []string, capabilities *Capabilities) (*Config, *flag.FlagSet
}
}
}
+
+ // add any bind targets to the list of container volumes
+ for _, bind := range flBinds {
+ arr := strings.Split(bind, ":")
+ dstDir := arr[1]
+ flVolumes[dstDir] = struct{}{}
+ }
+
parsedArgs := cmd.Args()
runCmd := []string{}
image := ""
@@ -154,6 +178,9 @@ func ParseRun(args []string, capabilities *Capabilities) (*Config, *flag.FlagSet
Volumes: flVolumes,
VolumesFrom: *flVolumesFrom,
}
+ hostConfig := &HostConfig{
+ Binds: flBinds,
+ }
if capabilities != nil && *flMemory > 0 && !capabilities.SwapLimit {
//fmt.Fprintf(stdout, "WARNING: Your kernel does not support swap limit capabilities. Limitation discarded.\n")
@@ -164,7 +191,7 @@ func ParseRun(args []string, capabilities *Capabilities) (*Config, *flag.FlagSet
if config.OpenStdin && config.AttachStdin {
config.StdinOnce = true
}
- return config, cmd, nil
+ return config, hostConfig, cmd, nil
}
type NetworkSettings struct {
@@ -430,7 +457,7 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
})
}
-func (container *Container) Start() error {
+func (container *Container) Start(hostConfig *HostConfig) error {
container.State.lock()
defer container.State.unlock()
@@ -454,17 +481,71 @@ func (container *Container) Start() error {
container.Config.MemorySwap = -1
}
container.Volumes = make(map[string]string)
+ container.VolumesRW = make(map[string]bool)
+
+ // Create the requested bind mounts
+ binds := make(map[string]BindMap)
+ // Define illegal container destinations
+ illegal_dsts := []string{"/", "."}
+
+ for _, bind := range hostConfig.Binds {
+ // FIXME: factorize bind parsing in parseBind
+ var src, dst, mode string
+ arr := strings.Split(bind, ":")
+ if len(arr) == 2 {
+ src = arr[0]
+ dst = arr[1]
+ mode = "rw"
+ } else if len(arr) == 3 {
+ src = arr[0]
+ dst = arr[1]
+ mode = arr[2]
+ } else {
+ return fmt.Errorf("Invalid bind specification: %s", bind)
+ }
+
+ // Bail if trying to mount to an illegal destination
+ for _, illegal := range illegal_dsts {
+ if dst == illegal {
+ return fmt.Errorf("Illegal bind destination: %s", dst)
+ }
+ }
+
+ bindMap := BindMap{
+ SrcPath: src,
+ DstPath: dst,
+ Mode: mode,
+ }
+ binds[path.Clean(dst)] = bindMap
+ }
+ // FIXME: evaluate volumes-from before individual volumes, so that the latter can override the former.
// Create the requested volumes volumes
for volPath := range container.Config.Volumes {
- c, err := container.runtime.volumes.Create(nil, container, "", "", nil)
- if err != nil {
- return err
+ volPath = path.Clean(volPath)
+ // If an external bind is defined for this volume, use that as a source
+ if bindMap, exists := binds[volPath]; exists {
+ container.Volumes[volPath] = bindMap.SrcPath
+ if strings.ToLower(bindMap.Mode) == "rw" {
+ container.VolumesRW[volPath] = true
+ }
+ // Otherwise create an directory in $ROOT/volumes/ and use that
+ } else {
+ c, err := container.runtime.volumes.Create(nil, container, "", "", nil)
+ if err != nil {
+ return err
+ }
+ srcPath, err := c.layer()
+ if err != nil {
+ return err
+ }
+ container.Volumes[volPath] = srcPath
+ container.VolumesRW[volPath] = true // RW by default
}
+ // Create the mountpoint
if err := os.MkdirAll(path.Join(container.RootfsPath(), volPath), 0755); err != nil {
return nil
}
- container.Volumes[volPath] = c.ID
}
if container.Config.VolumesFrom != "" {
@@ -552,7 +633,8 @@ func (container *Container) Start() error {
}
func (container *Container) Run() error {
- if err := container.Start(); err != nil {
+ hostConfig := &HostConfig{}
+ if err := container.Start(hostConfig); err != nil {
return err
}
container.Wait()
@@ -565,7 +647,8 @@ func (container *Container) Output() (output []byte, err error) {
return nil, err
}
defer pipe.Close()
- if err := container.Start(); err != nil {
+ hostConfig := &HostConfig{}
+ if err := container.Start(hostConfig); err != nil {
return nil, err
}
output, err = ioutil.ReadAll(pipe)
@@ -768,7 +851,8 @@ func (container *Container) Restart(seconds int) error {
if err := container.Stop(seconds); err != nil {
return err
}
- if err := container.Start(); err != nil {
+ hostConfig := &HostConfig{}
+ if err := container.Start(hostConfig); err != nil {
return err
}
return nil
@@ -891,22 +975,6 @@ func (container *Container) RootfsPath() string {
return path.Join(container.root, "rootfs")
}
-func (container *Container) GetVolumes() (map[string]string, error) {
- ret := make(map[string]string)
- for volPath, id := range container.Volumes {
- volume, err := container.runtime.volumes.Get(id)
- if err != nil {
- return nil, err
- }
- root, err := volume.root()
- if err != nil {
- return nil, err
- }
- ret[volPath] = path.Join(root, "layer")
- }
- return ret, nil
-}
-
func (container *Container) rwPath() string {
return path.Join(container.root, "rw")
}
181 container_test.go
@@ -7,6 +7,7 @@ import (
"io/ioutil"
"math/rand"
"os"
+ "path"
"regexp"
"sort"
"strings"
@@ -15,10 +16,7 @@ import (
)
func TestIDFormat(t *testing.T) {
- runtime, err := newTestRuntime()
- if err != nil {
- t.Fatal(err)
- }
+ runtime := mkRuntime(t)
defer nuke(runtime)
container1, err := NewBuilder(runtime).Create(
&Config{
@@ -39,10 +37,7 @@ func TestIDFormat(t *testing.T) {
}
func TestMultipleAttachRestart(t *testing.T) {
- runtime, err := newTestRuntime()
- if err != nil {
- t.Fatal(err)
- }
+ runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(
&Config{
@@ -70,7 +65,8 @@ func TestMultipleAttachRestart(t *testing.T) {
if err != nil {
t.Fatal(err)
}
- if err := container.Start(); err != nil {
+ hostConfig := &HostConfig{}
+ if err := container.Start(hostConfig); err != nil {
t.Fatal(err)
}
l1, err := bufio.NewReader(stdout1).ReadString('\n')
@@ -111,7 +107,7 @@ func TestMultipleAttachRestart(t *testing.T) {
if err != nil {
t.Fatal(err)
}
- if err := container.Start(); err != nil {
+ if err := container.Start(hostConfig); err != nil {
t.Fatal(err)
}
@@ -142,10 +138,7 @@ func TestMultipleAttachRestart(t *testing.T) {
}
func TestDiff(t *testing.T) {
- runtime, err := newTestRuntime()
- if err != nil {
- t.Fatal(err)
- }
+ runtime := mkRuntime(t)
defer nuke(runtime)
builder := NewBuilder(runtime)
@@ -251,10 +244,7 @@ func TestDiff(t *testing.T) {
}
func TestCommitAutoRun(t *testing.T) {
- runtime, err := newTestRuntime()
- if err != nil {
- t.Fatal(err)
- }
+ runtime := mkRuntime(t)
defer nuke(runtime)
builder := NewBuilder(runtime)
@@ -306,7 +296,8 @@ func TestCommitAutoRun(t *testing.T) {
if err != nil {
t.Fatal(err)
}
- if err := container2.Start(); err != nil {
+ hostConfig := &HostConfig{}
+ if err := container2.Start(hostConfig); err != nil {
t.Fatal(err)
}
container2.Wait()
@@ -330,10 +321,7 @@ func TestCommitAutoRun(t *testing.T) {
}
func TestCommitRun(t *testing.T) {
- runtime, err := newTestRuntime()
- if err != nil {
- t.Fatal(err)
- }
+ runtime := mkRuntime(t)
defer nuke(runtime)
builder := NewBuilder(runtime)
@@ -388,7 +376,8 @@ func TestCommitRun(t *testing.T) {
if err != nil {
t.Fatal(err)
}
- if err := container2.Start(); err != nil {
+ hostConfig := &HostConfig{}
+ if err := container2.Start(hostConfig); err != nil {
t.Fatal(err)
}
container2.Wait()
@@ -412,10 +401,7 @@ func TestCommitRun(t *testing.T) {
}
func TestStart(t *testing.T) {
- runtime, err := newTestRuntime()
- if err != nil {
- t.Fatal(err)
- }
+ runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(
&Config{
@@ -436,7 +422,8 @@ func TestStart(t *testing.T) {
t.Fatal(err)
}
- if err := container.Start(); err != nil {
+ hostConfig := &HostConfig{}
+ if err := container.Start(hostConfig); err != nil {
t.Fatal(err)
}
@@ -446,7 +433,7 @@ func TestStart(t *testing.T) {
if !container.State.Running {
t.Errorf("Container should be running")
}
- if err := container.Start(); err == nil {
+ if err := container.Start(hostConfig); err == nil {
t.Fatalf("A running containter should be able to be started")
}
@@ -456,10 +443,7 @@ func TestStart(t *testing.T) {
}
func TestRun(t *testing.T) {
- runtime, err := newTestRuntime()
- if err != nil {
- t.Fatal(err)
- }
+ runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(
&Config{
@@ -484,10 +468,7 @@ func TestRun(t *testing.T) {
}
func TestOutput(t *testing.T) {
- runtime, err := newTestRuntime()
- if err != nil {
- t.Fatal(err)
- }
+ runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(
&Config{
@@ -509,10 +490,7 @@ func TestOutput(t *testing.T) {
}
func TestKillDifferentUser(t *testing.T) {
- runtime, err := newTestRuntime()
- if err != nil {
- t.Fatal(err)
- }
+ runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).ID,
@@ -528,7 +506,8 @@ func TestKillDifferentUser(t *testing.T) {
if container.State.Running {
t.Errorf("Container shouldn't be running")
}
- if err := container.Start(); err != nil {
+ hostConfig := &HostConfig{}
+ if err := container.Start(hostConfig); err != nil {
t.Fatal(err)
}
@@ -558,13 +537,10 @@ func TestKillDifferentUser(t *testing.T) {
// Test that creating a container with a volume doesn't crash. Regression test for #995.
func TestCreateVolume(t *testing.T) {
- runtime, err := newTestRuntime()
- if err != nil {
- t.Fatal(err)
- }
+ runtime := mkRuntime(t)
defer nuke(runtime)
- config, _, err := ParseRun([]string{"-v", "/var/lib/data", GetTestImage(runtime).ID, "echo", "hello", "world"}, nil)
+ config, hc, _, err := ParseRun([]string{"-v", "/var/lib/data", GetTestImage(runtime).ID, "echo", "hello", "world"}, nil)
if err != nil {
t.Fatal(err)
}
@@ -573,7 +549,7 @@ func TestCreateVolume(t *testing.T) {
t.Fatal(err)
}
defer runtime.Destroy(c)
- if err := c.Start(); err != nil {
+ if err := c.Start(hc); err != nil {
t.Fatal(err)
}
c.WaitTimeout(500 * time.Millisecond)
@@ -581,10 +557,7 @@ func TestCreateVolume(t *testing.T) {
}
func TestKill(t *testing.T) {
- runtime, err := newTestRuntime()
- if err != nil {
- t.Fatal(err)
- }
+ runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).ID,
@@ -599,7 +572,8 @@ func TestKill(t *testing.T) {
if container.State.Running {
t.Errorf("Container shouldn't be running")
}
- if err := container.Start(); err != nil {
+ hostConfig := &HostConfig{}
+ if err := container.Start(hostConfig); err != nil {
t.Fatal(err)
}
@@ -626,10 +600,7 @@ func TestKill(t *testing.T) {
}
func TestExitCode(t *testing.T) {
- runtime, err := newTestRuntime()
- if err != nil {
- t.Fatal(err)
- }
+ runtime := mkRuntime(t)
defer nuke(runtime)
builder := NewBuilder(runtime)
@@ -666,10 +637,7 @@ func TestExitCode(t *testing.T) {
}
func TestRestart(t *testing.T) {
- runtime, err := newTestRuntime()
- if err != nil {
- t.Fatal(err)
- }
+ runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).ID,
@@ -699,10 +667,7 @@ func TestRestart(t *testing.T) {
}
func TestRestartStdin(t *testing.T) {
- runtime, err := newTestRuntime()
- if err != nil {
- t.Fatal(err)
- }
+ runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).ID,
@@ -724,7 +689,8 @@ func TestRestartStdin(t *testing.T) {
if err != nil {
t.Fatal(err)
}
- if err := container.Start(); err != nil {
+ hostConfig := &HostConfig{}
+ if err := container.Start(hostConfig); err != nil {
t.Fatal(err)
}
if _, err := io.WriteString(stdin, "hello world"); err != nil {
@@ -754,7 +720,7 @@ func TestRestartStdin(t *testing.T) {
if err != nil {
t.Fatal(err)
}
- if err := container.Start(); err != nil {
+ if err := container.Start(hostConfig); err != nil {
t.Fatal(err)
}
if _, err := io.WriteString(stdin, "hello world #2"); err != nil {
@@ -777,10 +743,7 @@ func TestRestartStdin(t *testing.T) {
}
func TestUser(t *testing.T) {
- runtime, err := newTestRuntime()
- if err != nil {
- t.Fatal(err)
- }
+ runtime := mkRuntime(t)
defer nuke(runtime)
builder := NewBuilder(runtime)
@@ -887,10 +850,7 @@ func TestUser(t *testing.T) {
}
func TestMultipleContainers(t *testing.T) {
- runtime, err := newTestRuntime()
- if err != nil {
- t.Fatal(err)
- }
+ runtime := mkRuntime(t)
defer nuke(runtime)
builder := NewBuilder(runtime)
@@ -916,10 +876,11 @@ func TestMultipleContainers(t *testing.T) {
defer runtime.Destroy(container2)
// Start both containers
- if err := container1.Start(); err != nil {
+ hostConfig := &HostConfig{}
+ if err := container1.Start(hostConfig); err != nil {
t.Fatal(err)
}
- if err := container2.Start(); err != nil {
+ if err := container2.Start(hostConfig); err != nil {
t.Fatal(err)
}
@@ -946,10 +907,7 @@ func TestMultipleContainers(t *testing.T) {
}
func TestStdin(t *testing.T) {
- runtime, err := newTestRuntime()
- if err != nil {
- t.Fatal(err)
- }
+ runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).ID,
@@ -971,7 +929,8 @@ func TestStdin(t *testing.T) {
if err != nil {
t.Fatal(err)
}
- if err := container.Start(); err != nil {
+ hostConfig := &HostConfig{}
+ if err := container.Start(hostConfig); err != nil {
t.Fatal(err)
}
defer stdin.Close()
@@ -993,10 +952,7 @@ func TestStdin(t *testing.T) {
}
func TestTty(t *testing.T) {
- runtime, err := newTestRuntime()
- if err != nil {
- t.Fatal(err)
- }
+ runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).ID,
@@ -1018,7 +974,8 @@ func TestTty(t *testing.T) {
if err != nil {
t.Fatal(err)
}
- if err := container.Start(); err != nil {
+ hostConfig := &HostConfig{}
+ if err := container.Start(hostConfig); err != nil {
t.Fatal(err)
}
defer stdin.Close()
@@ -1040,10 +997,7 @@ func TestTty(t *testing.T) {
}
func TestEnv(t *testing.T) {
- runtime, err := newTestRuntime()
- if err != nil {
- t.Fatal(err)
- }
+ runtime := mkRuntime(t)
defer nuke(runtime)
container, err := NewBuilder(runtime).Create(&Config{
Image: GetTestImage(runtime).ID,
@@ -1060,7 +1014,8 @@ func TestEnv(t *testing.T) {
t.Fatal(err)
}
defer stdout.Close()
- if err := container.Start(); err != nil {
+ hostConfig := &HostConfig{}
+ if err := container.Start(hostConfig); err != nil {
t.Fatal(err)
}
container.Wait()
@@ -1109,10 +1064,7 @@ func grepFile(t *testing.T, path string, pattern string) {
}
func TestLXCConfig(t *testing.T) {
- runtime, err := newTestRuntime()
- if err != nil {
- t.Fatal(err)
- }
+ runtime := mkRuntime(t)
defer nuke(runtime)
// Memory is allocated randomly for testing
rand.Seed(time.Now().UTC().UnixNano())
@@ -1196,7 +1148,8 @@ func BenchmarkRunParallel(b *testing.B) {
return
}
defer runtime.Destroy(container)
- if err := container.Start(); err != nil {
+ hostConfig := &HostConfig{}
+ if err := container.Start(hostConfig); err != nil {
complete <- err
return
}
@@ -1225,3 +1178,35 @@ func BenchmarkRunParallel(b *testing.B) {
b.Fatal(errors)
}
}
+
+func tempDir(t *testing.T) string {
+ tmpDir, err := ioutil.TempDir("", "docker-test")
+ if err != nil {
+ t.Fatal(err)
+ }
+ return tmpDir
+}
+
+func TestBindMounts(t *testing.T) {
+ r := mkRuntime(t)
+ defer nuke(r)
+ tmpDir := tempDir(t)
+ defer os.RemoveAll(tmpDir)
+ writeFile(path.Join(tmpDir, "touch-me"), "", t)
+
+ // Test reading from a read-only bind mount
+ stdout, _ := runContainer(r, []string{"-b", fmt.Sprintf("%s:/tmp:ro", tmpDir), "_", "ls", "/tmp"}, t)
+ if !strings.Contains(stdout, "touch-me") {
+ t.Fatal("Container failed to read from bind mount")
+ }
+
+ // test writing to bind mount
+ runContainer(r, []string{"-b", fmt.Sprintf("%s:/tmp:rw", tmpDir), "_", "touch", "/tmp/holla"}, t)
+ readFile(path.Join(tmpDir, "holla"), t) // Will fail if the file doesn't exist
+
+ // test mounting to an illegal destination directory
+ if _, err := runContainer(r, []string{"-b", fmt.Sprintf("%s:.", tmpDir), "ls", "."}, nil); err == nil {
+ t.Fatal("Container bind mounted illegal directory")
+
+ }
+}
3  docs/sources/api/docker_remote_api.rst
@@ -42,6 +42,9 @@ List containers (/containers/json):
- You can use size=1 to get the size of the containers
+Start containers (/containers/<id>/start):
+
+- You can now pass host-specific configuration (e.g. bind mounts) in the POST body for start calls
:doc:`docker_remote_api_v1.2`
*****************************
31 docs/sources/api/docker_remote_api_v1.3.rst
@@ -294,23 +294,30 @@ Start a container
.. http:post:: /containers/(id)/start
- Start the container ``id``
+ Start the container ``id``
- **Example request**:
+ **Example request**:
- .. sourcecode:: http
+ .. sourcecode:: http
- POST /containers/e90e34656806/start HTTP/1.1
-
- **Example response**:
+ POST /containers/(id)/start HTTP/1.1
+ Content-Type: application/json
- .. sourcecode:: http
+ {
+ "Binds":["/tmp:/tmp"]
+ }
- HTTP/1.1 200 OK
-
- :statuscode 200: no error
- :statuscode 404: no such container
- :statuscode 500: server error
+ **Example response**:
+
+ .. sourcecode:: http
+
+ HTTP/1.1 204 No Content
+ Content-Type: text/plain
+
+ :jsonparam hostConfig: the container's host configuration (optional)
+ :statuscode 200: no error
+ :statuscode 404: no such container
+ :statuscode 500: server error
Stop a contaier
1  docs/sources/commandline/command/run.rst
@@ -25,3 +25,4 @@
-d=[]: Set custom dns servers for the container
-v=[]: Creates a new volume and mounts it at the specified path.
-volumes-from="": Mount all volumes from the given container.
+ -b=[]: Create a bind mount with: [host-dir]:[container-dir]:[rw|ro]
5 lxc_template.go
@@ -84,8 +84,9 @@ lxc.mount.entry = {{.SysInitPath}} {{$ROOTFS}}/sbin/init none bind,ro 0 0
# In order to get a working DNS environment, mount bind (ro) the host's /etc/resolv.conf into the container
lxc.mount.entry = {{.ResolvConfPath}} {{$ROOTFS}}/etc/resolv.conf none bind,ro 0 0
{{if .Volumes}}
-{{range $virtualPath, $realPath := .GetVolumes}}
-lxc.mount.entry = {{$realPath}} {{$ROOTFS}}/{{$virtualPath}} none bind,rw 0 0
+{{ $rw := .VolumesRW }}
+{{range $virtualPath, $realPath := .Volumes}}
+lxc.mount.entry = {{$realPath}} {{$ROOTFS}}/{{$virtualPath}} none bind,{{ if index $rw $virtualPath }}rw{{else}}ro{{end}} 0 0
{{end}}
{{end}}
4 runtime.go
@@ -144,7 +144,9 @@ func (runtime *Runtime) Register(container *Container) error {
utils.Debugf("Restarting")
container.State.Ghost = false
container.State.setStopped(0)
- if err := container.Start(); err != nil {
+ // assume empty host config
+ hostConfig := &HostConfig{}
+ if err := container.Start(hostConfig); err != nil {
return err
}
nomonitor = true
6 runtime_test.go
@@ -327,7 +327,8 @@ func findAvailalblePort(runtime *Runtime, port int) (*Container, error) {
if err != nil {
return nil, err
}
- if err := container.Start(); err != nil {
+ hostConfig := &HostConfig{}
+ if err := container.Start(hostConfig); err != nil {
if strings.Contains(err.Error(), "address already in use") {
return nil, nil
}
@@ -437,7 +438,8 @@ func TestRestore(t *testing.T) {
defer runtime1.Destroy(container2)
// Start the container non blocking
- if err := container2.Start(); err != nil {
+ hostConfig := &HostConfig{}
+ if err := container2.Start(hostConfig); err != nil {
t.Fatal(err)
}
6 server.go
@@ -87,7 +87,7 @@ func (srv *Server) ImageInsert(name, url, path string, out io.Writer, sf *utils.
}
defer file.Body.Close()
- config, _, err := ParseRun([]string{img.ID, "echo", "insert", url, path}, srv.runtime.capabilities)
+ config, _, _, err := ParseRun([]string{img.ID, "echo", "insert", url, path}, srv.runtime.capabilities)
if err != nil {
return "", err
}
@@ -934,9 +934,9 @@ func (srv *Server) ImageGetCached(imgId string, config *Config) (*Image, error)
return nil, nil
}
-func (srv *Server) ContainerStart(name string) error {
+func (srv *Server) ContainerStart(name string, hostConfig *HostConfig) error {
if container := srv.runtime.Get(name); container != nil {
- if err := container.Start(); err != nil {
+ if err := container.Start(hostConfig); err != nil {
return fmt.Errorf("Error starting container %s: %s", name, err.Error())
}
} else {
8 server_test.go
@@ -65,7 +65,7 @@ func TestCreateRm(t *testing.T) {
srv := &Server{runtime: runtime}
- config, _, err := ParseRun([]string{GetTestImage(runtime).ID, "echo test"}, nil)
+ config, _, _, err := ParseRun([]string{GetTestImage(runtime).ID, "echo test"}, nil)
if err != nil {
t.Fatal(err)
}
@@ -98,7 +98,7 @@ func TestCreateStartRestartStopStartKillRm(t *testing.T) {
srv := &Server{runtime: runtime}
- config, _, err := ParseRun([]string{GetTestImage(runtime).ID, "/bin/cat"}, nil)
+ config, hostConfig, _, err := ParseRun([]string{GetTestImage(runtime).ID, "/bin/cat"}, nil)
if err != nil {
t.Fatal(err)
}
@@ -112,7 +112,7 @@ func TestCreateStartRestartStopStartKillRm(t *testing.T) {
t.Errorf("Expected 1 container, %v found", len(runtime.List()))
}
- err = srv.ContainerStart(id)
+ err = srv.ContainerStart(id, hostConfig)
if err != nil {
t.Fatal(err)
}
@@ -127,7 +127,7 @@ func TestCreateStartRestartStopStartKillRm(t *testing.T) {
t.Fatal(err)
}
- err = srv.ContainerStart(id)
+ err = srv.ContainerStart(id, hostConfig)
if err != nil {
t.Fatal(err)
}
103 utils_test.go
@@ -0,0 +1,103 @@
+package docker
+
+import (
+ "io"
+ "io/ioutil"
+ "os"
+ "path"
+ "strings"
+ "testing"
+)
+
+// This file contains utility functions for docker's unit test suite.
+// It has to be named XXX_test.go, apparently, in other to access private functions
+// from other XXX_test.go functions.
+
+// Create a temporary runtime suitable for unit testing.
+// Call t.Fatal() at the first error.
+func mkRuntime(t *testing.T) *Runtime {
+ runtime, err := newTestRuntime()
+ if err != nil {
+ t.Fatal(err)
+ }
+ return runtime
+}
+
+// Write `content` to the file at path `dst`, creating it if necessary,
+// as well as any missing directories.
+// The file is truncated if it already exists.
+// Call t.Fatal() at the first error.
+func writeFile(dst, content string, t *testing.T) {
+ // Create subdirectories if necessary
+ if err := os.MkdirAll(path.Dir(dst), 0700); err != nil && !os.IsExist(err) {
+ t.Fatal(err)
+ }
+ f, err := os.OpenFile(dst, os.O_CREATE|os.O_RDWR|os.O_TRUNC, 0700)
+ if err != nil {
+ t.Fatal(err)
+ }
+ // Write content (truncate if it exists)
+ if _, err := io.Copy(f, strings.NewReader(content)); err != nil {
+ t.Fatal(err)
+ }
+}
+
+// Return the contents of file at path `src`.
+// Call t.Fatal() at the first error (including if the file doesn't exist)
+func readFile(src string, t *testing.T) (content string) {
+ f, err := os.Open(src)
+ if err != nil {
+ t.Fatal(err)
+ }
+ data, err := ioutil.ReadAll(f)
+ if err != nil {
+ t.Fatal(err)
+ }
+ return string(data)
+}
+
+// Create a test container from the given runtime `r` and run arguments `args`.
+// The image name (eg. the XXX in []string{"-i", "-t", "XXX", "bash"}, is dynamically replaced by the current test image.
+// The caller is responsible for destroying the container.
+// Call t.Fatal() at the first error.
+func mkContainer(r *Runtime, args []string, t *testing.T) (*Container, *HostConfig) {
+ config, hostConfig, _, err := ParseRun(args, nil)
+ if err != nil {
+ t.Fatal(err)
+ }
+ config.Image = GetTestImage(r).ID
+ c, err := NewBuilder(r).Create(config)
+ if err != nil {
+ t.Fatal(err)
+ }
+ return c, hostConfig
+}
+
+// Create a test container, start it, wait for it to complete, destroy it,
+// and return its standard output as a string.
+// The image name (eg. the XXX in []string{"-i", "-t", "XXX", "bash"}, is dynamically replaced by the current test image.
+// If t is not nil, call t.Fatal() at the first error. Otherwise return errors normally.
+func runContainer(r *Runtime, args []string, t *testing.T) (output string, err error) {
+ defer func() {
+ if err != nil && t != nil {
+ t.Fatal(err)
+ }
+ }()
+ container, hostConfig := mkContainer(r, args, t)
+ defer r.Destroy(container)
+ stdout, err := container.StdoutPipe()
+ if err != nil {
+ return "", err
+ }
+ defer stdout.Close()
+ if err := container.Start(hostConfig); err != nil {
+ return "", err
+ }
+ container.Wait()
+ data, err := ioutil.ReadAll(stdout)
+ if err != nil {
+ return "", err
+ }
+ output = string(data)
+ return
+}