Split dockerfile to build container and runtime container #561
Conversation
Smaller image size is always a big plus. I know our version of this image has gotten pretty large; I think it's almost up to 1 GB now. It's a little strange to me to have one container depend on the output of another, but as long as there's a script or makefile for generating the image, it should be fine.
This will break the docker build on the hub, because the target zip won't be compiled. How can this be changed to not break existing automated builds on dockerhub, and maintain the oneshot behavior of "docker build ." doing the needful?
```
support
vendor
project/target
project/project
target/collins
target/scala-2.11
```
Can we just glob this?
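If these entries are all build output, something like the following might work (a sketch only; whether `project/project` can safely be globbed depends on what sbt needs in the build context, so the exact patterns here are assumptions, not part of this PR):

```
# .dockerignore -- glob build output instead of listing each directory
support
vendor
**/target
project/project
```

`**/target` matches `target`, `project/target`, and everything under them (including `target/collins` and `target/scala-2.11`), so the individual entries wouldn't need to be listed.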
https://docs.docker.com/engine/userguide/eng-image/multistage-build/#use-multi-stage-builds seems to be the new hotness, but not sure how bleeding edge you want to go with a solution.
@michaeljs1990 if Docker Hub supports multi-stage builds, I'd rather do that! Keep in mind, we need to ensure we don't break other downstreams that may depend on tooling we pull into the image during build (e.g. runtime tools like curl) that may go away when you strip the build down to remove the SDK.
Good point about the automated builds! I'll check the multi-stage build stuff out. I haven't used it yet, but it looks pretty neat. I'm not quite sure how to handle downstream images that use tools that just happen to be in our current image, since we install a lot of stuff for building. If we have a more streamlined image for deployment, should we just include some of the tools that seem like nice-to-haves? Say, curl/wget, netcat, ipmitool and friends?

EDIT: Oh wow, that's newer than I thought :P

EDIT 2: Looks like multi-stage support on Docker Hub isn't that far away: docker/hub-feedback#1039
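A multi-stage Dockerfile along these lines would keep a single `docker build .` working on the hub while shrinking the final image. This is only a sketch: the base images, build command, zip path, and start script are assumptions for illustration, not taken from this PR.

```dockerfile
# Stage 1: full build toolchain (JDK + sbt); discarded after the build
FROM openjdk:8-jdk AS builder
WORKDIR /build
COPY . .
# Hypothetical build step producing the collins zip
RUN ./sbt dist

# Stage 2: slim runtime image containing only the JRE and the build output
FROM openjdk:8-jre
WORKDIR /opt/collins
# Copy just the zip out of the builder stage (path is an assumption)
COPY --from=builder /build/target/collins.zip /tmp/collins.zip
RUN unzip /tmp/collins.zip -d /opt/collins && rm /tmp/collins.zip
# Runtime nice-to-haves that downstreams may expect
RUN apt-get update \
    && apt-get install -y curl ipmitool \
    && rm -rf /var/lib/apt/lists/*
CMD ["/opt/collins/scripts/collins.sh", "start"]
```

Only the final stage ends up in the published image, so the SDK and build dependencies never reach downstream users; anything they rely on at runtime (like curl above) has to be installed explicitly in the second stage.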
Closing in favor of #562, which seems more promising :)
So, after rebuilding the container for the Nth time, I started toying with the idea of breaking the building parts of the process out into their own container, since we don't really need activator/sbt/etc. to run collins once the zip has been produced.
This creates a `Dockerfile.build` which sets up the basic build toolchain, and should in my mind make it easier and quicker to get started with building collins. For instance, once the build container exists, we can use it to build the zip, which should be a lot quicker than building from scratch since we already have the dependencies and toolchain. If we're making a lot of changes, we can also use it to keep a container re-compiling on changes.
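The invocations could look something like this (a sketch only; the original commands were lost from this transcript, so the image tag, mount point, and sbt targets here are assumptions):

```shell
# Build the toolchain image once from the proposed Dockerfile.build
docker build -f Dockerfile.build -t collins-build .

# One-shot build: mount the source tree into the container and produce the zip
docker run --rm -v "$PWD":/build -w /build collins-build ./sbt dist

# Continuous mode: re-compile whenever sources change (sbt's ~ watch prefix)
docker run --rm -it -v "$PWD":/build -w /build collins-build ./sbt ~compile
```

Because the dependencies and toolchain are baked into `collins-build`, only the source tree needs to be mounted, and repeated builds skip the expensive setup steps.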
The final "runtime" container becomes a lot simpler too.
I haven't really tried all the different use cases here, but I thought I'd ask for some feedback. Does this make sense at all?
@tumblr/collins @byxorna @michaeljs1990 Thoughts?