Patch for CI #2996
Conversation
Commits 965d2c8 to 17846b2
with:
  github-token: ${{ steps.app-token.outputs.token }}
  script: |
    const ghaAppId = 15368;
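For context, a minimal sketch of how an app ID like this is typically used inside an actions/github-script step: filtering check runs down to those created by a given GitHub App. The helper and data names here are hypothetical, not taken from this PR.

```javascript
// Hypothetical sketch (names are assumptions, not from the PR's workflow):
// keep only check runs created by a given GitHub App. 15368 is the public
// app ID of GitHub Actions itself.
const ghaAppId = 15368;

// In actions/github-script, checkRuns would come from the Checks API
// (e.g. github.rest.checks.listForRef); plain data here keeps the helper testable.
function filterByApp(checkRuns, appId) {
  return checkRuns.filter((run) => run.app && run.app.id === appId);
}

const runs = [
  { name: 'build', app: { id: 15368 } },
  { name: 'external-bot', app: { id: 99999 } },
];
console.log(filterByApp(runs, ghaAppId).map((r) => r.name)); // → [ 'build' ]
```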
Should this value be a secret? Also, the const names are not very clear. It would be good to rename them to something more descriptive (ghaAppId, ghaName, myAppId, myName).
I don't think so. And since we use this check in other repos, I think we should keep the naming consistent between them.
@jgiannuzzi do you know more about how to pick this ID?
oh this is the public app ID of GitHub Actions 😄
> I don't think so. And since we use this check in other repos, I think we should keep the naming consistent between them.
If this is the case, then the best thing would be for this workflow to live in a dedicated "reusable" repo and be called from there (if such a thing is acceptable for Armada). What happens when we have 10 repos all containing the same check-required.yml workflow, and something changes in one of them? Are we going to track down the 9 other repos that don't have the latest change and create a PR for each?
@jgiannuzzi @Sharpz7 feel free to tell me I'm nitpicking.
that's a great point @pavlovic-ivan
If you agree, I think we should create an action out of this.
Sure! You may want to first prototype it in some repo and have your forks of armada and fml use it. If it works well, we can probably create a GR/actions repo to collect all of our shared actions/pipelines, starting with this one.
You are reading my mind (with GR/actions)!
I don't quite understand the purpose of the checks-required workflow, together with the change made to the
@pavlovic-ivan This is something Jonathan created that is used in other GR-OSS repos. Basically, the echo you are referring to is not the one you use. You make your required job the one that prints "All required checks succeeded". To the best of my knowledge, this is a workaround for GitHub Actions treating skipped jobs in a way we do not want.
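A minimal sketch of that pattern, assuming job names that are not taken from this PR: a final job that depends on all required jobs, runs even when they are skipped, and fails unless every one of them succeeded, so branch protection only needs to require this single check.

```yaml
# Hypothetical check-required job (job names are illustrative):
# branch protection requires only this job, so skipped upstream jobs
# don't leave the PR stuck waiting on a missing check.
check-required:
  needs: [lint, test, integration-tests]
  if: always()   # run even if a needed job was skipped, failed, or cancelled
  runs-on: ubuntu-latest
  steps:
    - name: Fail if any required job did not succeed
      if: contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled')
      run: exit 1
    - run: echo "All required checks succeeded"
```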
# cache-primary-key - Cache primary key passed in the input to use in subsequent steps of the workflow.
# cache-matched-key - Key of the cache that was restored; it could either be the primary key on cache-hit or a partial/complete match of one of the restore keys.
- uses: actions/cache/save@v3
  if: steps.cache-restore.outputs.cache-hit == 'false' || steps.cache-restore.outputs.cache-primary-key != steps.cache-restore.outputs.cache-matched-key
This is to try to save the 20-90s we lose from the cache compressing and then checking whether the key exists; that should really happen the other way round.
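End to end, the split restore/save pattern being discussed might look roughly like this. The paths and key names below are illustrative assumptions, not copied from the workflow under review:

```yaml
# Hypothetical restore-first layout: restore early, run the build, and only
# pay the compression/upload cost at the end when the primary key was not
# an exact hit.
- uses: actions/cache/restore@v3
  id: cache-restore
  with:
    path: ~/go/pkg/mod
    key: go-mod-${{ hashFiles('go.sum') }}
    restore-keys: go-mod-
- run: go build ./...
- uses: actions/cache/save@v3
  if: steps.cache-restore.outputs.cache-primary-key != steps.cache-restore.outputs.cache-matched-key
  with:
    path: ~/go/pkg/mod
    key: ${{ steps.cache-restore.outputs.cache-primary-key }}
```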
Currently broken:
No idea how to fix any of them yet; I will investigate.
@Sharpz7 please add information in your PR description about what needs to happen before and after your PR is merged for
@@ -90,15 +90,27 @@ jobs:
      - uses: actions/checkout@v3.3.0
      - run: docker buildx create --name ${DOCKER_BUILDX_BUILDER} --driver docker-container --use
      - run: docker buildx install
      - uses: actions/setup-go@v4

      - name: Set up Go
In the test, lint, and release workflows, we use the magnetikonline/action-golang-cache plugin, which replaces the functionality of actions/setup-go and actions/cache/restore combined - can that work here?
Yes - now that I deleted the extra folder we cache, this can potentially change. But I do think having separate save and cache steps (which is not possible with magnetikonline/action-golang-cache) is better?
I don't see the advantage of "separate save and cache steps" over using actions/cache/restore - I believe the magnetikonline cache plugin automatically sets up the build and module cache paths as needed beforehand, and persists them when the job is done.
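For comparison, the single-step plugin usage under discussion would look roughly like this. The inputs shown are an assumption from memory of the action's README, so verify them against the upstream documentation before copying:

```yaml
# Hypothetical combined setup + cache step replacing actions/setup-go and
# actions/cache/restore; module and build cache paths are assumed to be
# handled internally by the action.
- name: Setup Golang with Cache
  uses: magnetikonline/action-golang-cache@v4
  with:
    go-version: '1.20'
```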
@@ -54,10 +54,23 @@ jobs:
      - name: Fetch Git tags
        run: git fetch --force --tags

      - name: Setup Golang with Cache
And here we have the inverse - does magnetikonline/action-golang-cache not work, is it buggy, or...?
See above comment
Have a look at the work done in G-Research/fasttrackml#552 (and its sibling PRs: G-Research/fasttrackml#553 and G-Research/fasttrackml#554). This should show you a rationale for what to save/restore from the cache in an optimised way.
FYI @dave-gantenbein @Sharpz7 @jgiannuzzi @richscott: I will be creating several PRs from now on for this entire thing, split up into smaller pieces, and will refer to co-authors in those PRs (depending, of course, on the change itself). So I guess this PR can be closed, as the others will contain the changes from this one, plus other necessary changes.
Closing. Good luck @pavlovic-ivan!
Strange Temp Note
Sometimes (like on commit ad47bf5), the cache will be hit but not used. I have no idea what causes this.
Partially mentioned in #2987
I tested it by merging it into my forks main and:
The Integration Tests on master create the cache, and that is shared everywhere, including with new PRs (i.e. no more long waits on your first push)
No Cache: https://photos.app.goo.gl/MGSWW51YY3BSwvPG9 (Integration Tests: 17m)
Old Cache: https://photos.app.goo.gl/CtuqvmgoKLfWNJpc6 (Integration Tests: 14m)
New Cache: https://photos.app.goo.gl/qyWcsKxUtN6RYdFk8 (Integration Tests: 7m) Half!!!
IMPORTANT (Taken from G-Research/fasttrackml#458 (comment))
This PR improves how we create the required check "All required checks succeeded" in the following ways:
This is needed so that the check is reset when a new run attempt is made or a PR is reopened.
The following tasks need to be completed before merging this PR:
The following tasks need to be completed after merging this PR:
update branch protection (or ruleset, if G-Research/fasttrackml#452 has been merged in the meantime) to enforce the required check to originate from the new GitHub app