
Automatically build and publish to Github Container Registry #213

Open

proffalken wants to merge 7 commits into master

Conversation

proffalken
Contributor

This PR complements the sentiment behind #53; however, it does the following:

  1. Build for multiple architectures using the GitHub Actions free workflow builders
  2. Release to the GitHub Container Registry instead of hub.docker.com, to avoid the container download restrictions

It also creates tagged images for the following scenarios:

  • Each new PR that is created, tagged with the PR number and SHA-1
  • Each merge to master, tagged with the SHA-1, auto-incremented version number, and latest for the most recent version

This is a "copy&paste" from https://github.com/MakeMonmouth/mmbot/blob/main/.github/workflows/container_build.yml and will require packages to be enabled for this repo, however it already uses the new dynamic GITHUB_TOKEN setting to authenticate.

@bsord

bsord commented Aug 15, 2021

I think this is a great improvement and is the 'standard' I've seen across many repos these days. Builds can still be done locally for testing if needed.

@louislam added the feature-request and priority:low labels Aug 17, 2021
@MichelBaie
Contributor

This would be helpful for a project that has daily pushes.

@gaby
Contributor

gaby commented Sep 4, 2021

This PR would affect teams using uptime-kuma behind corporate proxies, since ghcr.io is not usually added to those.

@proffalken
Contributor Author

This PR would affect teams using uptime-kuma behind corporate proxies, since ghcr.io is not usually added to those.

True, although given hub.docker.com's policies on charging for downloads etc, I suspect that many organisations will start to migrate over to GHCR or similar in the near future.

I'd also argue that, as a rule, getting a new site added to a whitelist is less of a challenge when that site is owned by a "trusted entity" such as GitHub than if we were hosting on "Dave's Docker Service", so this shouldn't prevent us from implementing this approach.

@gaby
Contributor

gaby commented Sep 12, 2021

This PR would affect teams using uptime-kuma behind corporate proxies, since ghcr.io is not usually added to those.

True, although given hub.docker.com's policies on charging for downloads etc, I suspect that many organisations will start to migrate over to GHCR or similar in the near future.

I'd also argue that, as a rule, getting a new site added to a whitelist is less of a challenge when that site is owned by a "trusted entity" such as GitHub than if we were hosting on "Dave's Docker Service", so this shouldn't prevent us from implementing this approach.

I'd say the workflow could publish the final image to both ghcr.io and docker.io 👍🏻

I like this PR because it makes the release process open source as well; right now it isn't.
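One hedged sketch of how dual publishing could look with docker/metadata-action, assuming the GHCR repository mirrors the Docker Hub name (the ghcr.io path below is an assumption, not something this PR currently configures):

      - name: Docker meta
        id: meta
        uses: docker/metadata-action@v3
        with:
          # listing both registries makes the later build-push-action step
          # push the same tags to docker.io and ghcr.io; each registry still
          # needs its own docker/login-action step earlier in the job
          images: |
            louislam/uptime-kuma
            ghcr.io/louislam/uptime-kuma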

context: ./
file: ./dockerfile
push: true
platforms: linux/amd64, linux/arm/v7, linux/arm/v6, linux/arm64
Contributor

Can we use a multiline array here (or a list, like for the branches)?
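For reference, docker/build-push-action also accepts newline-separated lists, so the platforms could be written as a YAML block scalar, roughly:

          platforms: |
            linux/amd64
            linux/arm/v7
            linux/arm/v6
            linux/arm64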

@Saibamen
Contributor

Related issue: #440

@gaby
Contributor

gaby commented Sep 18, 2021

@proffalken Can you add support for pushing to both ghcr.io and hub.docker.com?

@proffalken
Contributor Author

@gaby I'll see what I can do in the next week, work and life are reasonably busy right now!

@louislam
Owner

louislam commented Sep 20, 2021

My preference is still the npm run build-docker command on my local machine.

Not saying that this PR is not good, but I just don't want to maintain more things in the future. For example, a few weeks ago I added Debian + Alpine based image support, and I believe this YAML file would have to be updated too.

Also, the build time is very good on my local machine, while GitHub Actions is not always as good.

@Saibamen
Contributor

But maybe we can just set up CI in GitHub for build only (not publish)?

I bet you don't build the Docker image after each commit.

If we had CI, we could start fixing build issues right after pushing a broken commit/PR.
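A build-only step would essentially be the same build-push-action call with pushing disabled, for example (a sketch, not the configuration in this PR):

      - name: Build only (no publish)
        uses: docker/build-push-action@v2
        with:
          context: ./
          file: ./dockerfile
          # build the image to verify the Dockerfile still works,
          # but never push it anywhere
          push: false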

@gaby
Contributor

gaby commented Sep 20, 2021

But maybe we can just set up CI in GitHub for build only (not publish)?

I bet you don't build the Docker image after each commit.

If we had CI, we could start fixing build issues right after pushing a broken commit/PR.

^ This! Also, if we build a :dev image on each merge, etc., it's super easy to create a release, since that would only take retagging the image.

@proffalken
Contributor Author

^ This! Also, if we build a :dev image on each merge, etc., it's super easy to create a release, since that would only take retagging the image.

FWIW, the code in this PR automatically generates a number of images with each run, including the following:

  • Git Tag when building from master/main (so uptime-kuma:1.2.3 when a commit is tagged v1.2.3 etc.)
  • latest when building from master/main (uptime-kuma:latest)
  • PR ID (uptime-kuma:PR-213 for example)
  • Checksum (uptime-kuma:<SHA SUM>)

I think there are a couple of other tags it builds as well, so all of this is already in there.

If you take a look at the MVentory setup that I took this from, you can see the expected output.
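For context, tag sets like the ones above typically come from a docker/metadata-action step; a hedged sketch of the kind of rules involved (the exact rules in the workflow file in this PR may differ):

      - name: Docker meta
        id: meta
        uses: docker/metadata-action@v3
        with:
          images: ghcr.io/${{ github.repository }}
          # illustrative rules: PR number, semver from git tags, commit SHA;
          # by default the action also adds "latest" for semver tag builds
          tags: |
            type=ref,event=pr
            type=semver,pattern={{version}}
            type=sha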

with:
context: ./
file: ./dockerfile
push: true
Contributor

This should be:

${{ github.event_name != 'pull_request' }}

Otherwise every pull request is going to try to push an image.
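In context, that would look something like this (a sketch):

      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          context: ./
          file: ./dockerfile
          # only push when the workflow was not triggered by a pull request;
          # PR runs would still build the image as a smoke test
          push: ${{ github.event_name != 'pull_request' }}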

Contributor Author

Yup, that's deliberate; it means we can test the code for each PR in a dedicated Docker image rather than waiting for the code to reach main/master before we know whether the container works properly.

Contributor

The problem is that building and pushing an image per PR will create a lot of unused tags in the registry. The latest changes made by @louislam now test/lint the code using GitHub Actions, which should cover your use case.

Contributor Author

OK, so let's say I'm working on a PR: I push that code up, it lints/tests fine via GitHub Actions, and we merge to master.

We then find that there's an issue with the container setup rather than the code (a libc version change or something equally obscure), but we only find that out after it's been released to the wider world and causes a slew of GitHub issues to be created.

If we create a container on each PR (and note that this is each PR, not each commit; it is rebuilt with the same tags each time), then we can test that the container itself works as well.

I'm really not convinced that "too many tags in the registry" is a strong enough argument when the alternative is a failing deployment for users of the application.

Contributor

Fair enough, we do this against a private registry instead of a public one. I do get your point.

@proffalken
Contributor Author

@louislam whilst we continue to discuss this, is there any chance you can add a "latest" tag when you publish new images?

My deployments are all automated and it gets frustrating when I forget that I've hard-coded the version for Uptime-Kuma when everything else is set to :latest.

It's a minor irritation, but it would be nice to have ;)

@louislam
Owner

@louislam whilst we continue to discuss this, is there any chance you can add a "latest" tag when you publish new images?

My deployments are all automated and it gets frustrating when I forget that I've hard-coded the version for Uptime-Kuma when everything else is set to :latest.

It's a minor irritation, but it would be nice to have ;)

It should always point to the latest version of Uptime Kuma. It is just not recommended, because of the breaking changes coming in version 2.x.

https://hub.docker.com/layers/louislam/uptime-kuma/latest/images/sha256-d4947d0d9ed82b22b6364b55f52932c79ee4dbf22a330fb6b92bd09eda234cdc?context=explore

@proffalken
Contributor Author

It should always point to the latest version of Uptime Kuma. It is just not recommended, because of the breaking changes coming in version 2.x.

I completely missed this! :D

Thanks, and yeah, I understand the issues with v2; this is for my test network, so I'd expect stuff there to break every now and again with new releases of the various things running on it.

@gaby
Contributor

gaby commented Oct 13, 2021

@proffalken With the recent changes moving everything under docker/, all the checks on this PR are going to fail.

@proffalken
Contributor Author

@proffalken With the recent changes moving everything under docker/, all the checks on this PR are going to fail.

Yeah, just spotted that, should be good to go now :)

push: true
platforms: linux/amd64, linux/arm/v7, linux/arm/v6, linux/arm64
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}

@TonyRL Oct 16, 2021


Maybe add Docker layer caching like this?

cache-from: type=gha, scope=${{ github.workflow }}
cache-to: type=gha, mode=max, scope=${{ github.workflow }}

This should save some time on pulling.
Reference: Cache eviction policy
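Applied to the existing build step, that would look roughly like this (the gha cache backend was still experimental at the time):

      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          context: ./
          file: ./dockerfile
          push: true
          platforms: linux/amd64, linux/arm/v7, linux/arm/v6, linux/arm64
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          # reuse layers from previous workflow runs via the GitHub Actions cache
          cache-from: type=gha,scope=${{ github.workflow }}
          cache-to: type=gha,mode=max,scope=${{ github.workflow }}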

Contributor

@gaby Oct 17, 2021


The cache policy is experimental though

@gaby
Contributor

gaby commented Oct 18, 2021

@proffalken The build is having issues again, related to the base images not being found.

This can be fixed by adding a step that builds the base images before building the final image. The actions you are using support caching, so it will just use the cache from the previous stage.

Ex.

      - name: Build Base and push
        id: docker_build_base
        uses: docker/build-push-action@v2
        with:
          context: ./
          file: ./docker/debian-base.dockerfile

      - name: Build Final and push
        id: docker_build_final
        uses: docker/build-push-action@v2
        with:
          context: ./
          file: ./docker/dockerfile

The same would need to be duplicated for Alpine.

Caching between jobs is covered here: https://github.com/docker/build-push-action/blob/master/docs/advanced/test-before-push.md

- 'master'
pull_request:
branches:
- master


For every commit on a branch for which there is a pull request to master, a new image is built.
It shouldn't be like this; images should only be built when commits are pushed to master (merge commits).
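A trigger block restricted to that would look something like this (a sketch of the suggestion, not the configuration in this PR):

on:
  push:
    branches:
      - master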

Contributor

IMO we should do it, because building on Windows is different from building in a Linux container.

Contributor

Building the container only on specific branches (i.e. master/main/dev) means that there is very exact control over when a new image gets built, and anyone attempting to put up a devious image would have to get it to pass code review. It is much harder for someone to accidentally end up with a devious image.

Contributor Author

Images are deliberately built for each PR so that they can be tested before releasing.

Failure to do so could result in underlying issues with the container (a glibc / Node.js upgrade at the OS level, etc.) breaking the deployment, and unless we test at the PR level we would only find this out after the "production" container has been released.

Building and testing on branches is an established pattern within the CI/CD community because it brings value without adding unnecessary complication.

To specifically address your point about "bad" images, all images built from a PR would be tagged with the PR number, resulting in the format uptime-kuma:PR-3. This means that someone would have to deliberately update their configuration to pull these images, and would never see them otherwise, even if their setup was configured for uptime-kuma:latest.

As the code would not be merged until after a review, a "bad" container making it through to release is just as likely as "bad" code making it through, at which point it stops being an issue with how we package and publish and becomes an issue with how we review the code.

Contributor

I'm used to a PR -> dev/nightly -> master/main process. All steps are still done before prod, in dev/nightly. Everything is still fully tested, including the container builds. Some things, however, like steps accessing keys or pushing test images publicly, wait for code review and approval/merging to dev before running.

I don't like the possibility of ANY container potentially containing unreviewed code being public. It's unlikely, but we shouldn't assume someone won't just sort by newest tag and use that. Imagine someone inexperienced, who doesn't know git[hub/lab], reading PR as Public Release instead of Pull Request. The more I think on it, the more I am against blindly pushing PRs to the registry.

an established pattern within the CI/CD community

Can you link some other projects that are using this process, please? I will still be paranoid, but I will withdraw my comments if this is indeed a commonly accepted pattern.

Sorry if I'm overly security-paranoid, but it pays to be so nowadays, unfortunately.

@gaby

This comment was marked as resolved.

@louislam

This comment was marked as resolved.

@proffalken

This comment was marked as resolved.

Co-authored-by: Adam Stachowicz <saibamenppl@gmail.com>
@CommanderStorm added the area:core label Dec 8, 2023
@CommanderStorm added the needs:review label May 19, 2024
Labels

area:core (issues describing changes to the core of uptime kuma), feature-request (request for new features to be added), needs:review (this PR needs a review by maintainers or other community members), priority:low (low priority)

Projects

None yet

Development

Successfully merging this pull request may close these issues: none yet.