Automatically build and publish to Github Container Registry #213
Conversation
I think this is a great improvement and is the 'standard' I've seen across many repos these days. Builds can still be done locally for testing if needed.
This would be helpful for a project that has daily pushes.
This PR would affect teams using uptime-kuma behind corporate proxies.
True, although given hub.docker.com's policies on charging for downloads etc., I suspect that many organisations will start to migrate over to GHCR or similar in the near future. I'd also argue that, as a rule, getting new sites added to a whitelist, especially when the site is owned by a "trusted entity" such as GitHub, is probably less of a challenge than if we were hosting on "Dave's Docker Service", so this shouldn't prevent us from implementing this approach.
I'd say the workflow could publish the final image to both ghcr.io and docker.io 👍🏻 I like this PR because it also makes the release process open source; right now it isn't.
    context: ./
    file: ./dockerfile
    push: true
    platforms: linux/amd64, linux/arm/v7, linux/arm/v6, linux/arm64
Can we use a multiline array here (or a list, like for branches)?
Related issue: #440
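The multiline form being asked about might look something like this (a sketch, not the PR's actual diff; docker/build-push-action accepts a newline-separated list for `platforms`):

```yaml
platforms: |
  linux/amd64
  linux/arm/v7
  linux/arm/v6
  linux/arm64
```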
@proffalken Can you add support for pushing to both?
@gaby I'll see what I can do in the next week; work and life are reasonably busy right now!
My preference is still the current release process. Not saying that this PR is not good, but I just don't want to maintain more things in the future. For example, a few weeks ago I added Debian + Alpine based image support, and I believe this yaml file would have to be updated too. Also, the build time is very good on my local machine, while GitHub Actions is not always as fast.
But maybe we can just set up CI in GitHub for builds only (not publishing)? I bet you don't build the Docker image after each commit. If we had CI, we could start fixing build issues right after pushing a wrong commit/PR.
^ This! Also, if we build a
FWIW, the code in this PR automatically generates a number of images with each run, including the following:
I think there are a couple of other tags it builds to as well, so all of this is already in there. If you take a look at the MVentory setup that I took this from, you can see the expected output.
    with:
      context: ./
      file: ./dockerfile
      push: true
This should be `${{ github.event_name != 'pull_request' }}`, otherwise every pull request is going to try to push an image.
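In context, the suggested change might look like this (a sketch, assuming the docker/build-push-action step shown in the diff above):

```yaml
- uses: docker/build-push-action@v2
  with:
    context: ./
    file: ./dockerfile
    # only push from real branch builds; PR builds still compile the image
    push: ${{ github.event_name != 'pull_request' }}
```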
Yup, that's deliberate. It means we can test the code for each PR in a dedicated Docker image rather than waiting for the code to reach main/master before we know whether the container works properly.
The problem is that building and pushing an image per PR will create a lot of unused tags in the registry. The latest changes made by @louislam now test/lint the code using GitHub Actions, which should cover your use case.
OK, so let's say I'm working on a PR, I push that code up, it lints/tests fine via Github actions, and we merge to master.
We then find that there's an issue with the container setup rather than the code (libc version change or something equally obscure), but we only find that out after it's been released to the wider world and causes a slew of github issues to be created.
If we create a container on each PR (and note that this is each PR, not each commit; it is rebuilt with the same tags each time), then we can test that the container itself works as well.
I'm really not convinced that "too many tags in the registry" is a strong enough argument when the alternative is a failing deployment for users of the application.
Fair enough; we do this against a private registry instead of a public one. I do get your point.
@louislam whilst we continue to discuss this, is there any chance you can add a "latest" tag when you publish new images? My deployments are all automated and it gets frustrating when I forget that I've hard-coded the version for Uptime-Kuma when everything else is set to latest. It's a minor irritation, but it would be nice to have ;)
It should always point to the latest version of uptime-kuma. It is not recommended because of the breaking changes coming in version 2.x in the future.
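A sketch of the version-pinning approach this implies, assuming a docker-compose deployment (the service layout and major-version tag shown here are assumptions, not part of this PR):

```yaml
services:
  uptime-kuma:
    # pinning to a major version avoids a surprise jump to 2.x
    # while still receiving 1.x releases; `latest` would not
    image: louislam/uptime-kuma:1
    ports:
      - "3001:3001"
```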
I completely missed this! :D Thanks, and yeah, I understand the issues with v2; this is for my test network, so I'd expect stuff there to break every now and again with new releases of the various things running on it.
@proffalken With the recent changes of moving everything under
Yeah, just spotted that, should be good to go now :)
    push: true
    platforms: linux/amd64, linux/arm/v7, linux/arm/v6, linux/arm64
    tags: ${{ steps.meta.outputs.tags }}
    labels: ${{ steps.meta.outputs.labels }}
Maybe add Docker layer caching like this?

    cache-from: type=gha,scope=${{ github.workflow }}
    cache-to: type=gha,mode=max,scope=${{ github.workflow }}

This should save some time on pulls.
Reference: Cache eviction policy
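Applied to the build step, the suggested cache backend might look like this (a sketch; `type=gha` requires buildx and was still experimental at the time, as noted below):

```yaml
- uses: docker/build-push-action@v2
  with:
    context: ./
    push: true
    # reuse layers from previous workflow runs via the GitHub Actions cache
    cache-from: type=gha,scope=${{ github.workflow }}
    # mode=max also caches intermediate layers, not just the final image
    cache-to: type=gha,mode=max,scope=${{ github.workflow }}
```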
The cache policy is experimental though
@proffalken The build is having issues again, related to the base images not being found. This can be fixed by adding a step that builds the base image first. Ex.
The same would need to be duplicated for Alpine. Caching between jobs is covered here: https://github.com/docker/build-push-action/blob/master/docs/advanced/test-before-push.md
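Based on the linked test-before-push doc, the extra step might look like this (a sketch; the step name, tag, and dockerfile path are assumptions):

```yaml
# build the image locally first (load: true keeps it on the runner's
# Docker daemon instead of pushing) so later steps can test it
- name: Build and load for testing
  uses: docker/build-push-action@v2
  with:
    context: ./
    file: ./dockerfile
    load: true
    tags: uptime-kuma:test
```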
        - 'master'
      pull_request:
        branches:
          - master
For every commit on a branch that has a pull request open against master, a new image is built.
It shouldn't be like this; images should only be built when pushing commits to master (merge commits).
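The trigger restriction being proposed would look something like this (a sketch of the commenter's suggestion, not the PR as written):

```yaml
on:
  push:
    branches:
      - master   # build and publish only on (merge) commits to master
```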
IMO we should do it, because building on Windows is different from building in a Linux container.
Building the container only on specific branches (i.e. master/main/dev) means there is very exact control over when a new image gets built, and anyone attempting to slip in a devious image would have to get it past code review. It is much harder for someone to get a devious image in accidentally.
Images are deliberately built for each PR so that they can be tested before releasing.
Failure to do so could result in underlying issues with the container (a glibc / nodejs upgrade at the OS level, or something equally obscure) breaking the deployment, and unless we test at PR level we would only find this out after the "production" container has been released.
Building and testing on branches is an established pattern within the CI/CD community because it brings value without adding unnecessary complication.
To specifically address your point about "bad" images: all images built from a PR would be tagged with the PR number, in the format `uptime-kuma:PR-3`. This means that someone would have to deliberately update their configuration to pull these images, and would never see them otherwise, even if their setup was configured for `uptime-kuma:latest`.
As the code would not be merged until after a review, a "bad" container making it through to release is just as likely as "bad" code making it through, at which point it stops being an issue with how we package and publish and becomes an issue with how we review the code.
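A sketch of how such per-PR tags might be generated, assuming docker/metadata-action feeds the build step (the exact tag rules and image name are assumptions, not this PR's verbatim config):

```yaml
# emits e.g. a `pr-3` tag for pull requests, and `latest` only on master
- name: Extract Docker metadata
  id: meta
  uses: docker/metadata-action@v3
  with:
    images: ghcr.io/louislam/uptime-kuma
    tags: |
      type=ref,event=pr
      type=raw,value=latest,enable=${{ github.ref == 'refs/heads/master' }}
```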
I'm used to a PR -> dev/nightly -> master/main process. All steps are still done before prod, in dev/nightly. Everything is still fully tested, including the container builds. Some things, however, like steps accessing keys or pushing test images publicly, wait for code review and approval/merging to dev before running.
I don't like the possibility of ANY container potentially containing un-reviewed code being public. It's unlikely, but we shouldn't assume someone won't just sort by newest tag and use that. Imagine someone inexperienced, who doesn't know git[hub/lab], reading PR as Public Release instead of Pull Request. The more I think on it, the more I am against blindly pushing PRs to the registry.

> an established pattern within the ci/cd community

Can you link some other projects that are using this process, please? I will still be paranoid, but I will withdraw my comments if this is indeed a commonly accepted pattern.
Sorry if I'm overly security-paranoid, but it pays to be so nowadays, unfortunately.
This comment was marked as resolved.
Co-authored-by: Adam Stachowicz <saibamenppl@gmail.com>
This PR complements the sentiment behind #53, however it does the following:

It also creates tagged images for the following scenarios:

- `latest` for the most recent version

This is a "copy & paste" from https://github.com/MakeMonmouth/mmbot/blob/main/.github/workflows/container_build.yml and will require `packages` to be enabled for this repo; however, it already uses the new dynamic GITHUB_TOKEN setting to authenticate.
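A minimal sketch of the GITHUB_TOKEN-based registry login the description refers to, assuming docker/login-action is used (the step name and action version are assumptions):

```yaml
- name: Log in to the GitHub Container Registry
  uses: docker/login-action@v1
  with:
    registry: ghcr.io
    username: ${{ github.actor }}
    # GITHUB_TOKEN is issued dynamically per workflow run, so no
    # long-lived registry credentials need to be stored as secrets
    password: ${{ secrets.GITHUB_TOKEN }}
```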