
Docker images versioning for Continuous Integration / Continuous Deployment #13928

Closed
vitalyisaev2 opened this issue Jun 13, 2015 · 19 comments

@vitalyisaev2

Hello! (I wasn't able to get feedback on SO, so I'd like to repeat my question here.)

I'm currently working on implementing Continuous Integration and Continuous Deployment processes for a small team. I would like to use two well-known concepts: Linux binary packages and Docker images.

Most of the work is already done: we take the code from a GitLab repo, compile it, and put the resulting binaries into deb packages that are stored in Aptly; then we create Docker images for every service we have and push the images to a private Docker Registry server. Afterwards these images are rolled out to the testing environment. Finally, we start the services and perform the acceptance testing. This is a continuous process that starts every time someone pushes commits to origin/master.
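
For illustration only, here is a minimal shell sketch of the pipeline described above; every repository name, path, and registry host is a placeholder:

# Hypothetical pipeline sketch; all names below are placeholders
git clone git@gitlab.example.com:team/service.git && cd service
make build                                   # compile the binaries
dpkg-deb --build pkgroot service_1.0.0.deb   # pack them into a .deb package
aptly repo add team-repo service_1.0.0.deb   # store the package in Aptly
docker build -t registry.example.com/service:latest .
docker push registry.example.com/service:latest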

[Image: our workflow]

What's still not clear is how to distinguish the stable images stored in the Docker Registry.

We have to track the state of every image because we need to perform periodic updates of the stable server. Obviously, some releases (i.e. versions of images) will not pass the acceptance tests and must be marked as unusable and filtered out on each subsequent iteration of Continuous Delivery.

I thought about implementing this workflow with several Docker features, but from my point of view each of them has its own drawbacks:

  1. The default image name (repo/image:tag) is a plain string that cannot hold a version number, a build date, and QA marks all at once.
  2. Labels (introduced in 1.6) could be a good starting point for a workaround, but we were not able to find a way to relabel existing images (note that we need to update an image's "metadata" to reflect the results of QA). It also seems there is no usable method of querying images by label values in an SQL-like way (a sketch follows below).
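
To make the limitation concrete, a minimal sketch, assuming labels are set via the Dockerfile at build time (the image name and label values are illustrative):

# Labels can only be set when an image is built, not afterwards
cat > Dockerfile <<'EOF'
FROM debian:jessie
LABEL version="1.0.0" qa_status="unknown"
EOF
docker build -t myservice:1.0.0 .
# Updating qa_status later would mean building a new image on top of this one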

So what is the proper way of assigning versions to Docker images? Where can the QA-related information be stored? How can we "highlight" the stable image builds? If somebody has faced a similar use case, please share your experience.

@thaJeztah
Member

Thanks for asking. The GitHub issue tracker isn't really intended for questions (it's for tracking bugs and feature requests), so this may have to be closed at some point.

However, it's a very well written question, and I'd feel like a "bully" if I closed it immediately :)

I'm no expert in this area, but let's ping @dnephin and @burke, perhaps they are willing to share some of their secrets.

Also feel free to discuss this in the #docker IRC channel or the "docker-users" Google Group.

Again, thanks for asking (and don't be offended if this gets closed because of the reasons mentioned earlier) 👍

@vitalyisaev2
Author

@thaJeztah, thanks for your patience. I feel like I should close it immediately myself :) but I would very much appreciate it if you let this question stay here for a while...

@thaJeztah
Member

I'll just "forget" to close it for now 😉 (people are still able to comment after it's closed, so don't panic yet :))

@burke
Contributor

burke commented Jun 14, 2015

For what it's worth, we (Shopify) tag our images just with the git SHA of the source repo that generated the image (e.g. our.registry/some-api-thing:1234deadbeefcafe). Any additional metadata is handled by tooling that we consider to be one level of abstraction above docker.
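
A minimal sketch of that tagging scheme, reusing the registry/image name from the example above:

# Tag the image with the exact commit that produced it
SHA=$(git rev-parse HEAD)
docker build -t our.registry/some-api-thing:${SHA} .
docker push our.registry/some-api-thing:${SHA}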

We solve this problem by having our container build server propagate build status results via webhooks to our deployment service. The deployment service lists all the commits on master and will only allow us to deploy when it's received success webhooks for container builds and CI.

Technically, we use the GitHub commit status API for this, but that's beside the point -- our solution has just been to have the deployment service communicate with the container build service out-of-band of the docker registry. It has served us well.
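
For reference, posting a commit status is a single API call; a hedged curl sketch (the token, ORG/REPO, and context name are placeholders):

# Report a container-build result via GitHub's commit status API
curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
  -d '{"state": "success", "context": "container-build", "description": "image pushed"}' \
  "https://api.github.com/repos/ORG/REPO/statuses/${SHA}"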

@thaJeztah
Member

❤️ thanks @burke, that's very informative. I think we should consider adding some example workflows to the site/docs at some point, as I doubt @vitalyisaev2 is the only user struggling to find the best approach to doing this.

@vitalyisaev2
Author

Thank you @burke. Tagging the image with a SHA is a nice approach. Unfortunately it doesn't suit us, because most of our Docker images contain binaries built from several distinct Git projects.

Please let me make sure I've got this right: you implemented your own deployment service that stores the metadata about every Docker image you build, and both build and CI servers post some info about each image to the deployment service? If it's not a secret, what are these (build / CI) servers?

@dnephin
Member

dnephin commented Jun 14, 2015

We (at Yelp) also use the git sha as a unique tag for images, but that's mostly for convenience (it's easy to figure out what code is running in the container). There are lots of other options for a unique tag.

Since you're using Jenkins, $BUILD_TAG is a good option. It should always be unique, and it lets you track the image back to the job that built it.
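
For reference, Jenkins builds $BUILD_TAG from the job name and build number, so it is unique per build (the value shown is illustrative):

# Jenkins sets BUILD_TAG to "jenkins-${JOB_NAME}-${BUILD_NUMBER}"
echo ${BUILD_TAG}    # e.g. jenkins-myservice-42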

We would use image name and tag to identify the state of each image. During the first docker build step:

# Build with a unique tag, then mark the newest build as "unstable"
docker build -t ${package}:${BUILD_TAG} .
docker tag ${package}:${BUILD_TAG} ${package}:unstable

Pass the ${BUILD_TAG} value along to the following jobs in the Jenkins pipeline, so they know which unique id to deploy and test. After the tests pass:

docker tag ${package}:${BUILD_TAG} ${package}:stable

After deployment succeeds:

docker tag ${package}:${BUILD_TAG} ${package}:live

That way you can operate on the unique id, and you also get labels for the "latest" image that has passed each phase of the pipeline. If you need more than the latest, I suppose you could use :${BUILD_TAG}-stable, :${BUILD_TAG}-live, etc., to keep track of state.
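
A hedged sketch of that per-build variant, so the registry keeps a state marker for each unique id rather than only a moving "latest" pointer:

# Record pipeline state per build id instead of only a moving tag
docker tag ${package}:${BUILD_TAG} ${package}:${BUILD_TAG}-stable
docker push ${package}:${BUILD_TAG}-stable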

@burke
Contributor

burke commented Jun 14, 2015

Yep, our pipeline looks sort of like this:

  1. A commit is pushed to GitHub.
  2. GitHub delivers a push webhook to CircleCI, which we use for CI.
  3. GitHub delivers a push webhook to Locutus, which is our own image builder service.
  4. Locutus and CircleCI report success/fail status to GitHub via the Commit Status API.
  5. GitHub delivers commit-status webhooks to ShipIt, our deployment service.
  6. When ShipIt sees that Circle and Locutus have gone green for a SHA, the "deploy" button unlocks.

If I had to extend our strategy to an image synthesized from multiple source repositories, I would probably have a git repo that represents the integration of the various components, specifying versions of those components by their source git SHAs -- the repo would be auto-committed to by some service each time a component .deb build completed. Circle and Locutus would run on this repo, posting back to the commit status API on GitHub, and we would "mark as stable" by "deploying" that synthesized repo with ShipIt -- which really just runs an arbitrary script with a SHA as input. In this case, that input would be the docker image tag (or git SHA) of the final image (or the synthesized repo).
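
A minimal sketch of what that auto-commit step might look like; the manifest file name, component names, and variables are all hypothetical:

# Hypothetical service step: pin component versions in an integration repo
cd integration-repo
echo "service-a=${SERVICE_A_SHA}" >  versions.env
echo "service-b=${SERVICE_B_SHA}" >> versions.env
git add versions.env
git commit -m "Pin service-a to ${SERVICE_A_SHA}"
git push origin master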

@thaJeztah
Member

Thank you too, @dnephin, really appreciated!

@vitalyisaev2
Author

Thank you @dnephin @burke @thaJeztah! I found a lot of very useful information here.

@thaJeztah
Member

You're more than welcome, @vitalyisaev2. Perhaps you can write up an article on this subject for the docker blog, or one to be integrated into the documentation? I'm sure other people are interested in this as well.

@vitalyisaev2
Author

@thaJeztah thank you very much! I will do it with great pleasure once I complete the described process and understand all the details.

@isubuz
Contributor

isubuz commented Sep 14, 2015

@vitalyisaev2 Did you finally manage to get the desired setup? If so, can you please share the details, as this is a very interesting use case.

@vitalyisaev2
Author

@isubuz - unfortunately, we could neither find nor implement a suitable tool for Docker image versioning. We realize that we need to track Docker images and store the metadata somewhere (it's a critical feature for rollbacks). Recently I had a conversation with a site reliability engineer from Badoo.com, and it appears that they are developing their own control tool (similar to what was described above) and will probably open source it this year.

@dmitrym0

Apologies for reviving an old thread. Is there anything out there off the shelf now, or something that I can base my efforts on? I'm looking for a similar versioning workflow.

@ysangkok

ysangkok commented Nov 7, 2016

@vitalyisaev2 Was the Badoo control tool open sourced?

@vitalyisaev2
Author

@ysangkok I've just checked; I'm afraid it wasn't.

@ehginanjar

Hope this gives anyone here a little help with docker versioning: https://medium.com/travis-on-docker/how-to-version-your-docker-images-1d5c577ebf54

@backtorod

Hi guys, nice comments! How do you handle this image tagging (either by hashing or using semantic versioning) through the whole lifecycle pipeline?

What I mean by that is: do you use the same image from dev > prod, or does a commit to dev build your image, and then, if all goes well, a PR > QA creates another image, and so on until we get to prod?

Did I make myself clear? Can someone share their own experience?
