
Deploy to docker #114

Closed · crosbymichael opened this issue Feb 22, 2014 · 27 comments

@crosbymichael

So I'm creating an issue so that we can see if this is a good idea. I have already started working on this, but I want to make sure the config looks right.

So what I want out of a docker based CI system:

  1. Get the git repo and do docker build, produce image
  2. Run whatever test in a container from the image built
  3. If tests pass, push image to a registry or the public index
  4. Instruct a docker host running somewhere to pull down the image and run the container with a specified config.
  5. Sit back and watch drone auto deploy services

This is what I'm thinking for the config.

deploy:
  docker:
     tag: crosbymichael/skydock
     host: tcp://192.168.56.10:4243
     replace: skydock
     name: skydock
     volumes: ["/var/run/docker.sock:/docker.sock"]
     links: ["skydns:skydns"]
     cmd: ["-ttl", "30", "-environment", "dev", "-s" ,"/docker.sock", "-domain", "docker"]

A few problems that I see with drone are that it removes the images and containers after the build, and that deployment scripts are just bash scripts that execute commands without any information about the container and image produced by the build. I would also like to consume the Docker API via the client (in Go) rather than shelling out to the CLI.

@afex

afex commented Feb 22, 2014

one thing you may want to consider is that the environment and dependencies needed to build/test an application are different from those needed to run it in production.

i'm with you that automatically uploading docker images after a build succeeds is a good goal, but have you thought about the details needed to support shipping a different image? i'm also working towards this goal, with the assumption that the deployable image can be promoted from staging to production without being re-created for config changes.

@bradrydzewski

@crosbymichael thanks for the feedback. This is a use case I definitely want to handle:

  1. Get the git repo and do docker build, produce image
  2. Run whatever test in a container from the image built
  3. If tests pass, push image to a registry or the public index
  4. Instruct a docker host running somewhere to pull down the image and run the container with a specified config.
  5. Sit back and watch drone auto deploy services

Could we achieve this today with our existing design? Could we build and test the Docker container inside a build environment that is a running Docker container?

Here are use cases I'd like to consider as part of our design:

  • Do we need to support running arbitrary commands before building an image? For example, running jsmin or lessc to support the ADD command in the Dockerfile.
  • Do I want to install testing tools inside my production image if I don't need them to run my application? For example, Selenium and JVM alone could increase image size by 600MB. Or would I run my selenium tests outside the container, accessing the running application inside the container?
  • Do I care if tests leave artifacts inside my production container? For example, test files, test results, coverage reports, sqlite database files, etc
  • Do I want to execute tests in parallel to minimize execution time?
  • Do I want to execute tests against multiple environments? For example, I'm building a library and want to test multiple versions of the JVM.
  • Do I have multiple Dockerfiles in a single repository?

We would also need to scrub private data from containers. For example, your GitHub private ssh key is injected into the container in order to clone private repositories. You probably don't want this key getting shipped around inside a production image.

This also impacts things like database integration. Drone currently links your build container to service containers (mysql, redis, rabbitmq, etc) and provides environment variables with connection details (ie REDIS_PORT_6379_TCP_ADDR). It is likely that the environment variable names used by Drone would mismatch your production environment. Heroku, for example, uses branded service names (ie REDISTOGO_PORT).

What do you think? I definitely want to hash out a design for this and get this functionality built into Drone as soon as possible

@cggaurav

+1

A few things
[1] What do you mean by multiple Dockerfiles in a repo? Don't they lend themselves to different apps? Unless you are running standalone services, which we don't need CI for.
[2] If you do have a Dockerfile in the git repo, I think we can avoid pushing the image to another index, and instead build the image both on Drone and on the (same|other) docker instance.

@antonlindstrom

For my use cases, the best way would be to build a new container after the testing has been performed. Testing and production environments are often not the same. For instance, when using Go we only need to ship the binary artifact in a production container, and there's no need to have the Go development tools installed.

I think the build pipeline would look something like this:

  1. Bring up initial testing container, same as now
  2. Build a docker image for production environment
  3. Deploy the image built in step 2 to a registry or start it locally

For my purposes, the ideal approach would be a new step between the testing container and the deployment phase that builds Docker images, so that the deployment/publish step can treat the built image as an artifact.
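
Sketched as a hypothetical .drone.yml, that pipeline might look roughly like the snippet below. None of these keys existed in Drone at the time; the build block in particular is purely illustrative.

image: golang                    # 1. testing container, as today (image name illustrative)
script:
  - go test ./...
  - go build -o myapp            # produce the binary artifact
build:                           # 2. hypothetical step: build a slim production image
  dockerfile: Dockerfile.release # a Dockerfile that only ADDs the compiled binary
  image_name: myuser/myapp
publish:                         # 3. push the production image to a registry
  docker:
    image_name: myuser/myapp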

@xetorthio

I think that allowing any drone docker container to talk to the local host docker api is enough to cover all the use cases.
I could:

  1. Run a script to test my code
  2. Run a script to compile my code if 1 succeeded.
  3. Run docker build to create my image if 2 succeeded
  4. Run docker push with the recently created image to any docker registry that I like if 3 succeeded
  5. Use ssh deploy mechanism to actually connect to any host that I want to deploy and do docker run (specifying the recently created image).

I don't think there is a need to support special docker configuration in the yaml file, because it will only make things more complex. Docker is evolving, which means the yaml file options would have to be updated all the time.

So if drone allows mounting /var/run/docker.sock inside the container, we should be able to do all of these things.
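
To illustrate, a .drone.yml under this proposal might look something like the sketch below. This is only a sketch: it assumes the build image has the docker CLI installed and the host socket mounted at /var/run/docker.sock, and the deploy keys and environment variables are illustrative rather than an existing schema.

image: my/build-image                               # assumes the docker CLI is installed
script:
  - go test ./...                                   # 1. test the code
  - go build -o app                                 # 2. compile
  - docker build -t myuser/myapp:$DRONE_COMMIT .    # 3. build the image via the mounted socket
  - docker push myuser/myapp:$DRONE_COMMIT          # 4. push to any registry
deploy:
  ssh:                                              # 5. illustrative ssh deploy step
    host: deploy.example.com
    commands:
      - docker pull myuser/myapp:$DRONE_COMMIT
      - docker run -d myuser/myapp:$DRONE_COMMIT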

@bradrydzewski

This was from a private email exchange that I wanted to make sure was documented.

I'm not in a huge rush to implement this feature, but I would like something in place in time for DockerCon when (presumably) Docker hosting solutions might be announced. I'll probably start working on this in a month or two.

Option 1

Link the host Docker installation (tcp://127.0.0.1:4243) inside the build container, exposing the Docker daemon and allowing Docker commands as part of the build script.

Pros:

  • User has full control over which Docker commands are run and how
  • User can run commands outside of the Dockerfile, and use the ADD directive. See Deploy to docker #114 (comment)

Cons:

  • Build image must have Docker client installed (or must be downloaded ad-hoc, which we already do for S3)
  • Any concerns exposing the host machine's Docker client inside a container? What if docker login is invoked, allowing the next build to access the prior build's docker account?

Option 2

Currently the .drone.yml has a script section that executes build and test commands. We could split this into multiple steps, for example:

setup:
  - sudo apt-get install libsqlite3-dev
  - go get github.com/mattn/gosqlite3
  - go build
script:
  - go test -v

In the above example, we could snapshot an image after the setup step, before the test commands are executed.

Pros:

  • Guarantees a clean(er) image, with no testing artifacts
  • Guarantees the tests are executed in the same image that is pushed to the index
  • Docker client on host is not exposed to the build container

Cons:

  • Would break when used with our caching feature (see Implement proper Caching #147). If we bind mount a cache of libraries (for example, the .npm directory) this would get excluded from the Docker image
  • Would still require testing-related software to be installed in the image (java + selenium + firefox, for example) that would not be required by the run-time, and would increase the overall image size. I do think this might be an acceptable compromise if we can guarantee parity of CI and production containers.

@xetorthio

I think that option 2 is a bad approach. Languages like scala work the exact opposite way: you first run the tests, and when they are all ok you then create the jar or war that you'll be distributing. This means that during the build phase you need scala, sbt, etc., but once it is built you only need java.

Regarding option 1, how do you plan to expose the docker remote api from the host? And why do you want to implement dockerfile commands outside of the dockerfile? I don't understand how that would be used.

@bradrydzewski

I think that option 2 is a bad approach

Option 2 would allow you to push the exact same image that you ran your tests in. For some people this is very important. Option 1 has security issues as I mentioned above. Neither option is perfect.

Regarding option 1, how do you plan to expose docker remote api from the host?

Either by bind mounting /var/run/docker.sock (like you suggested) or by linking tcp://127.0.0.1:4243 using Docker's built-in linking functionality.

Either way, this poses a security issue that would need to be addressed, because all builds would be accessing the host machine's Docker installation, via either tcp or the mounted socket.

And why do you want to implement dockerfile commands outside of the docker file

I don't think I discussed the exact implementation in my prior comments. It could be part of the Dockerfile (as suggested in issue #43), or maybe the commands would be executed inside the container. They definitely won't be executed on the host machine.

@xetorthio

May I ask what the use cases are for pushing the exact same container that the tests ran in?


@bradrydzewski

The use case is knowing that your code will run in your Docker image. Consider this scenario:

  1. I run my unit tests and generate a Go binary (or a .war file or whatever)
  2. I build a Docker image from a Dockerfile and ADD the Go binary
  3. Oh no! I forgot to install libsqlite3 in the Docker image
  4. My unit tests passed, so I end up publishing a broken Docker image

And then consider the use case for testing inside the same image you are going to publish:

  1. I create a container that has my compiled code, dependencies, etc
  2. I snapshot that container as an image
  3. I start a new container, from the image in step 2, and run my tests
  4. Oh no! I forgot to install libsqlite3 in the Docker image
  5. My unit tests fail because libsqlite3 isn't installed, so I don't publish an image

Option #2 wasn't my idea, by the way, but I can certainly see the appeal.

@xetorthio

As far as I understand, drone generates an image before running the build, which means that if we follow this approach the final image will have lots of unnecessary stuff: the injected private key for the repo, the drone binary, and even an entrypoint. If I didn't miss anything, I don't think this approach will even work.

On the other hand, by only giving access to the docker remote api, you could create your image, run it and test it within the build. This supports both use cases.

@bradrydzewski

Yes, that is how Drone works today; however, the proposal for option 2 is to alter that behavior by splitting the .drone.yml commands into two sections.

The build process would work like this (a rough config sketch follows the list):

  1. create and start build container using the image from the .drone.yml as the base
  2. mount build script, ssh key and other data we don't want persisted in a layer
  3. clone code into container
  4. run setup, installation and compilation commands from the setup section of the .drone.yml
  5. container exits
  6. snapshot image (before running tests)
  7. create and start a build container using the image from step 6
  8. run test commands from the script section of the .drone.yml
  9. container exits
  10. publish image from step 6
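
As a rough sketch, the split .drone.yml for this flow might look like the snippet below. The setup/script split mirrors the earlier example; the publish keys are illustrative only, since the exact schema was still undecided at this point.

image: golang               # base build image (illustrative)
setup:                      # steps 3-4: install dependencies and compile; image snapshotted afterwards
  - sudo apt-get install -y libsqlite3-dev
  - go get github.com/mattn/gosqlite3
  - go build
script:                     # steps 7-8: run tests in a container started from the snapshot
  - go test -v
publish:                    # step 10: push the snapshotted image (hypothetical keys)
  docker:
    image_name: myuser/myapp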

There is not a lot of precedent for building and testing Docker containers, so we are going to experiment. We will probably implement multiple options in different branches and choose whichever works best.

If you have another proposal, other than the two previously mentioned, that you want considered, feel free to add it to this thread. Short of writing the actual code, I'm not sure there is much else to discuss here. I will implement both options and the community can choose which they like best.

@sionsmith

Following the progress of this: is there any update or current workaround to get this working? If so, could someone point me in the right direction for how I would get this set up, or post their .drone.yml file?

Thanks
Sion

@bradrydzewski

Once I have a branch to share I'll update the thread; however, I haven't started working on this yet.

@dansowter

@bradrydzewski and @xetorthio, here's a little more about the use-case we have in mind. I'm pretty new to Drone, but keen to get involved if I can help see this feature over the line.

Our ideal workflow

drone receives commit-hook.
drone executes setup block

  • builds new container from Dockerfile in git repo
  • tags the built image based upon info from the commit hook.
    • something like "repo:branch:sha"
  • executes several "docker run" commands which set up services around the new container.
    • service discovery is baked into our containers, via http://www.consul.io/ and DNS, which I believe can be configured via the docker run cli, so we don't really care about linking etc.
  • one of the setup commands will be to run the new container as a daemon, such that other containers can be booted up to test it from the outside. In our example, we'd be booting our app (a ruby JSON api), then booting up a node container to run http://frisbyjs.com/ specs against it from the outside.

drone executes testing block

  • some of these tests will be of the style 'docker run my-new-container /sbin/run_unit_tests'
  • some of these tests will be of the style 'docker run my-testing-container /sbin/run_acceptance_tests'

drone executes teardown block

  • stop all the services I no longer require
  • let me manually execute docker rm / rmi to manage disk space.

drone executes deploy block

  • here's where you choose to push (or not) the new tag to your (private) docker registry
  • then we'd want to boot some other container responsible for deploying, passing it relevant variables based upon the commit-hook and outcomes.

This way you can include enough to run your unit tests inside your production container (which in ruby we're likely to do anyway), but you can also avoid installing all the dependencies for your acceptance tests, because a separate container can run these against your new container, running in an "acceptance tests" mode of some kind.
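
A hypothetical .drone.yml expressing that workflow might read roughly as follows. Every block and key name here is illustrative (none of this existed in Drone at the time), and it assumes the host daemon is exposed to the build container as in option 1 above.

setup:
  - docker build -t myrepo/myapp:$BRANCH-$SHA .          # build and tag from the repo's Dockerfile
  - docker run -d --name app myrepo/myapp:$BRANCH-$SHA   # run the app as a daemon so other containers can test it
script:
  - docker run myrepo/myapp:$BRANCH-$SHA /sbin/run_unit_tests
  - docker run my-testing-container /sbin/run_acceptance_tests
teardown:
  - docker stop app && docker rm app                     # stop the services no longer required
deploy:
  - docker push myrepo/myapp:$BRANCH-$SHA                # optionally push the new tag to a (private) registry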

@bradrydzewski

Thanks @dansowter for the detailed feedback.

builds new container from Dockerfile in git repo

There is an important constraint. Drone will create a Docker container and clone your repository directly inside that container.

@dansowter

@bradrydzewski - If the host's docker daemon is exposed, as per "option 1" above, how much of a constraint would this really be?

Wouldn't the container booted by drone, into which the git repo is cloned, mostly act as an orchestrator? If you look at http://docs.docker.io/reference/api/docker_remote_api_v1.11/ you can POST to the /build endpoint as part of your existing scripts, for example.
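
For instance (a sketch only; the socket address and image tag are assumptions), a line in the existing script block could tar up the cloned repository and POST it as a build context to the daemon's /build endpoint:

script:
  # sketch: stream the build context to the host daemon's /build endpoint and tag the result
  - tar -cf - . | curl -s -X POST -H "Content-Type: application/tar" --data-binary @- "http://127.0.0.1:4243/build?t=myrepo/myapp"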

@bradrydzewski

yes, option 1 would do nothing more than expose Docker to the build container, allowing you to script the workflow described above.

@dansowter

@bradrydzewski - Anything new to share on this?

@daviddyball

This is something that I'd love to see too. TBH, when I first started looking at Drone, I assumed that, being based around Docker, pushing a completed image to a repository was default behaviour.

At the moment I just have an additional local email address on notify that triggers a build on my Drone box via bash scripts.

Would something like this work:

deploy:
  docker_image:
    repo: myrepo/app-release
    tag: [git_hash_here]
    dockerfile: Dockerfiles/release  # Optional override of Dockerfile

This would do the following on the Drone host machine

  1. Pull a copy of the codebase (perhaps in /var/cache/drone/deploy/....)
  2. cd to codebase directory
  3. Build image
    • If dockerfile: is defined in .drone.yml, ln -s {dockerfile} ./Dockerfile
    • docker build -t {repo}:{tag} .
  4. Push image docker push {repo}:{tag}

Anything fancy that needs to be done to make the image ready for release can be part of the Dockerfile (e.g. git archive to get rid of .git structures in code or remove build artifacts).

I don't see any specific benefit to running docker-in-docker to do the image builds, as a fair amount of trust has already been established between Drone<->Code-Repo (the exception being shared/hosted CI servers)

I've not written go before, but I'll give it a shot and see what I can come up with (might be out of my depth though).

@bradrydzewski

@daviddyball there is a pull request that enables pushing to docker. We are planning to merge it once it has been tested a bit more. You can try it out here:
#361

@davidwindell

I should think this issue can be closed now? #361 works a treat, even better when using the host's daemon to build the images, which provides caching between builds.

@bradrydzewski

@davidwindell I've kept this open because our support for building Docker images and publishing to the registry could be much better. The biggest issue is that you have to run your build in privileged mode. The good news is we've devised a solution and Docker and registry support should be seamless. Stay tuned.

@sunnysingh1985

Hey @bradrydzewski, I have a couple of questions related to this issue; it would be great if you could help me understand these:

  1. Does the current version of drone (I mean the official version) have a way to build a docker container without me worrying about installing docker in docker?
  2. I understand you are saying that the current design needs to be improved, but as of today I should at least be able to get a container built using drone. Is this correct? And once the new design is implemented, I would just need to upgrade drone and things would not change much for me.
  3. Is there a way currently to push the built docker images to dockerhub (given that you answer "yes" to my questions above :))

@davidwindell

You can run drone on docker using our image https://registry.hub.docker.com/u/outeredge/edge-docker-drone/ if you like. All you need to do then is mount /var/run/docker.sock (no need for privileged mode). Your .drone.yml file will look something like this:

image: outeredge/edge-docker-drone-base
docker:
  net: host
script:
  - git-timestamp composer.json
  - git-timestamp composer.lock
publish:
    docker:
        docker_host: tcp://127.0.0.1:2377
        registry_login_url: $REGISTRY_URL
        registry_login: true
        keep_build: true
        username: $REGISTRY_USER
        password: $REGISTRY_PASS
        image_name: $REGISTRY_URL/$IMAGE_NAME
        tags: [$DRONE_BRANCH, $(git rev-parse --short HEAD)]
        force_tags: true

@davidwindell

One thing that's missing in this workflow is the ability to "test" the build image before pushing it to the registry.

bradrydzewski added this to the v0.4.0 milestone on Aug 18, 2015
@bradrydzewski

Just merged the 0.4 branch into master (the first major Drone release in a year, with more to follow). This release includes a pretty awesome container-based plugin model. We have an official Docker plugin (see http://addons.drone.io/docker/) that will build and push your image to a public registry.

Enjoy!
