Dockerfile refresh 2021 #3
Conversation
Personally, I like locating Dockerfiles with the project they are packaging. I noticed there was an unfinished PR related to this and wouldn't mind taking a crack at picking up where it left off, possibly moving the Dockerfile over into in-toto. Cutting a release to Docker Hub with a tag for the release version is pretty common and might be a nice thing to have.
Damn nice! I think the merge/separation issue can be treated as an aside. For now this helps revitalize things :)
@trishankatdatadog @adityasaky I added you as reviewers as I believe you had some ideas on how to move forward with this back then. Thank you again, @kevholmes!
Email change aside, I like this. Thanks for working on this @kevholmes! However, I do think we should soon consider checkpointing the Dockerfiles alongside the implementation, so that we don't have any mismatch or confusion.
Co-authored-by: Aditya Sirish <8928778+adityasaky@users.noreply.github.com>
Updated with your change to the email address. Definitely. When a release is cut in GitHub, it's much clearer what was built if the Dockerfile and the source are tightly coupled in the same repo. That can help when you're troubleshooting issues and trying to figure out exactly what changed between releases beyond just the source code for the app itself.
# by doing this via '--user' we keep all libs/wheels in ~/.local
# which can be easily copied into our second stage of the image build
RUN wget https://github.com/in-toto/in-toto/raw/develop/requirements-pinned.txt
RUN pip install --user -r requirements-pinned.txt
why dis?
Hey @trishankatdatadog,
We could skip getting and using the pinned requirements in those two RUN steps you've highlighted and just let pip attempt to auto-resolve the dependencies on its own when we RUN pip install --user in-toto, but it seemed prudent to leverage the pinned Python requirements that are already being generated upstream in the in-toto project.
It's purely a way to ensure more visibility and control over exactly which dependencies are installed into the container build, and to keep us consistent. Otherwise we might pull down a newer dependency from pip that hasn't yet passed the CI process that generates that requirements-pinned.txt file, and a bug in a not-yet-vetted dependency version could sneak into the Docker build and cause some kind of breakage.
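Putting the two approaches together, the pinned-requirements install in a multi-stage build might look roughly like the sketch below. This is illustrative only: the base image tags, the apk build packages, and the ENTRYPOINT are assumptions, not taken from the actual PR diff.

```dockerfile
# Build stage: install in-toto and its pinned dependencies into ~/.local
# (image tag and build packages are illustrative assumptions)
FROM python:3.9-alpine3.14 AS builder
RUN apk add --no-cache build-base libffi-dev openssl-dev wget
# by doing this via '--user' we keep all libs/wheels in ~/.local,
# which can be easily copied into the second stage of the image build
RUN wget https://github.com/in-toto/in-toto/raw/develop/requirements-pinned.txt
RUN pip install --user -r requirements-pinned.txt
RUN pip install --user in-toto

# Final stage: copy only the installed libs/wheels, leaving the
# compilers and headers behind to keep the image small
FROM python:3.9-alpine3.14
COPY --from=builder /root/.local /root/.local
ENV PATH=/root/.local/bin:$PATH
ENTRYPOINT ["in-toto-run"]
```

Because only ~/.local is copied forward, nothing from the build toolchain ends up in the final image, which is what drives the size reduction discussed below.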
Thanks @kevholmes!
The base/Dockerfile has been refactored to use a multi-stage build as suggested by @SantiagoTorres. This has cut the size of the image approximately in half, down to 73.3 MB.
I have also updated the alpine image to the latest stable version available (3.14). To automate this in the future, I have included a GitHub Dependabot config that will scan the /base directory on a weekly basis, looking for updates to the alpine image used within this Dockerfile for building and packaging the final container image.
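A Dependabot config matching that description would be a small dependabot.yml fragment along these lines (a sketch, assuming the described /base directory and weekly schedule; the actual file in the PR may differ):

```yaml
# .github/dependabot.yml
version: 2
updates:
  # Watch the base image referenced in /base/Dockerfile
  - package-ecosystem: "docker"
    directory: "/base"
    schedule:
      interval: "weekly"
```

With this in place, Dependabot opens a PR whenever a newer alpine tag is published, rather than relying on someone remembering to bump it by hand.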