Feature: managing Scorecard releases #651

Closed
azeemshaikh38 opened this issue Jul 3, 2021 · 9 comments · Fixed by #850
Labels: kind/enhancement (New feature or request)

Comments

@azeemshaikh38 (Contributor)

We need either an automated process or a manual guideline for releasing new versions of Scorecard. Basically, when users run the command below, we need a high degree of confidence that the code they are downloading and running is well tested:

GO111MODULE=on go get github.com/ossf/scorecard
GITHUB_AUTH_TOKEN="xyz" scorecard --repo=<repo-name> --show-details 
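
For illustration only: once tagged releases exist, users could pin a specific release with Go's module @version syntax instead of pulling whatever is at HEAD. The exact form depends on the module path declared in Scorecard's go.mod, so treat this as a sketch rather than the documented install command:

GO111MODULE=on go get github.com/ossf/scorecard@<release-tag>
GITHUB_AUTH_TOKEN="xyz" scorecard --repo=<repo-name> --show-details
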
@azeemshaikh38 added the kind/enhancement, help wanted, and good first issue labels on Jul 3, 2021
@azeemshaikh38 (Contributor, Author)

This will also help with cron job data consistency: we can make sure that the cron job only runs with the latest release binary and that a single cron job run uses only a single binary.

@azeemshaikh38 (Contributor, Author)

Now that we have a framework to run e2e tests on a smaller set of repos, here is my proposal for the release process:

  • The release tests will trigger every 24 hours and use the main:latest binaries.
  • The tests will run on a sample of the repos used during the weekly runs. There will be a separate BQ table, and a successful data transfer to this table will be considered a “passing” test. This will not test result quality, but it will signify (with some degree of confidence) that the weekly cron job can successfully run end-to-end with this release.
  • If the release tests are “successful”, a new Git tag is created for the commit that was checked out and tested. This will serve as our minor release. The production cron job uses main:tag binaries. After the Git tag is successfully created, a PR updating the k8s file needs to be submitted and pushed.

2 open questions I still need to work out are:

  • How do we find the corresponding commit SHA for the successful run and automate git tag creation? (a rough sketch of this tagging step follows below)
  • How easy would it be to automate PR creation for updating the cron job binaries?
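
A minimal shell sketch of the tagging step described above, assuming the release-test job can export the commit it checked out as COMMIT_SHA and that some external versioning scheme supplies NEW_TAG (both variable names are illustrative, not part of the current pipeline):

  # Run only after the release tests have passed.
  git fetch origin
  git tag "${NEW_TAG}" "${COMMIT_SHA}"     # tag exactly the commit that was tested
  git push origin "${NEW_TAG}"
  # A follow-up PR would then point the cron job's k8s manifests at the new tag.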

@inferno-chromium @oliverchang @naveensrinivasan for comments and feedback.

@naveensrinivasan (Member)

> Now that we have a framework to run e2e tests on a smaller set of repos, here is my proposal for the release process:
>
>   • The release tests will trigger every 24 hours and use the main:latest binaries.
>   • The tests will run on a sample of the repos used during the weekly runs. There will be a separate BQ table, and a successful data transfer to this table will be considered a “passing” test. This will not test result quality, but it will signify (with some degree of confidence) that the weekly cron job can successfully run end-to-end with this release.
>   • If the release tests are “successful”, a new Git tag is created for the commit that was checked out and tested. This will serve as our minor release. The production cron job uses main:tag binaries. After the Git tag is successfully created, a PR updating the k8s file needs to be submitted and pushed.

> 2 open questions I still need to work out are:
>
>   • How do we find the corresponding commit SHA for the successful run and automate git tag creation?

Getting the commit SHA into the container is easy: https://artsy.github.io/blog/2018/09/10/Dockerhub-Stamping-Commits/

Why are we creating a Git tag? Instead, on every container build, along with the latest tag, we could add a tag with the SHA and use that tag for the docker pull part of the deployment.
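
A hedged sketch of that approach (the registry path below is a placeholder, not the project's actual image location): each build pushes both a moving latest tag and an immutable commit-SHA tag, and the deployment pulls the SHA tag.

  docker build -t "${REGISTRY}/scorecard:latest" \
               -t "${REGISTRY}/scorecard:${COMMIT_SHA}" .
  docker push "${REGISTRY}/scorecard:latest"
  docker push "${REGISTRY}/scorecard:${COMMIT_SHA}"
  # The cron job deployment then references ${REGISTRY}/scorecard:${COMMIT_SHA}.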

>   • How easy would it be to automate PR creation for updating the cron job binaries?

I am guessing you are referring to the k8s YAML with the Docker image version to be pulled. We could probably use https://kustomize.io/ for updating the Docker image version, and for PR creation we could use something like https://github.com/peter-evans/create-pull-request.
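
For example, kustomize edit set image can bump the image reference in place, and the resulting diff is what a tool like peter-evans/create-pull-request would turn into a PR; the directory, image name, and variables below are placeholders, not the project's actual layout:

  cd cron/k8s   # wherever the kustomization.yaml lives (illustrative path)
  kustomize edit set image "scorecard-worker=${REGISTRY}/scorecard:${COMMIT_SHA}"
  git checkout -b "bump-cron-image-${COMMIT_SHA}"
  git commit -am "Bump cron job image to ${COMMIT_SHA}"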


@oliverchang (Contributor)

> Now that we have a framework to run e2e tests on a smaller set of repos, here is my proposal for the release process:
>
>   • The release tests will trigger every 24 hours and use the main:latest binaries.
>   • The tests will run on a sample of the repos used during the weekly runs. There will be a separate BQ table, and a successful data transfer to this table will be considered a “passing” test. This will not test result quality, but it will signify (with some degree of confidence) that the weekly cron job can successfully run end-to-end with this release.

Can we have some more basic sanity tests here, like making sure there are no runtime errors in any of the checks? I think this isn't exposed in any of the JSON/BQ fields today though, so we might need some other way to distinguish this.

>   • If the release tests are “successful”, a new Git tag is created for the commit that was checked out and tested. This will serve as our minor release. The production cron job uses main:tag binaries. After the Git tag is successfully created, a PR updating the k8s file needs to be submitted and pushed.

Do you mean a vX.X.<minor> that is updated every 24 hours? This is a bit too frequent to me and will generate a lot of noise. Perhaps we want a separate, less frequent release process, especially if the e2e test doesn't test result quality.

> How easy would it be to automate PR creation for updating the cron job binaries?

We could avoid this complexity by just having a ":stable" docker tag that always contains the latest passing image.
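
A minimal sketch of that, assuming the release-test job knows which SHA-tagged image it just validated (the registry path is again a placeholder): retag the validated image as :stable and push it, so the cron job can keep pulling :stable without any manifest changes.

  docker pull "${REGISTRY}/scorecard:${COMMIT_SHA}"
  docker tag  "${REGISTRY}/scorecard:${COMMIT_SHA}" "${REGISTRY}/scorecard:stable"
  docker push "${REGISTRY}/scorecard:stable"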

@azeemshaikh38 (Contributor, Author)

> Can we have some more basic sanity tests here, like making sure there are no runtime errors in any of the checks? I think this isn't exposed in any of the JSON/BQ fields today though, so we might need some other way to distinguish this.

+1, we should. I think this is doable with some minor changes to the worker code. Will look into it.

> Why are we creating a Git tag? Instead, on every container build, along with the latest tag, we could add a tag with the SHA and use that tag for the docker pull part of the deployment.
> Do you mean a vX.X.<minor> that is updated every 24 hours? This is a bit too frequent to me and will generate a lot of noise. Perhaps we want a separate, less frequent release process, especially if the e2e test doesn't test result quality.

These are both good points. I'm not a fan of the Git tags either. This was the best way I could think of to differentiate between regular commits and commits for a prod release.

I really like the idea of tagging Docker images with :stable and pulling those. Just to confirm: when multiple images have been pushed with the same tag, does Docker ensure that the latest image tagged :stable gets pulled?

@oliverchang (Contributor)

> Can we have some more basic sanity tests here, like making sure there are no runtime errors in any of the checks? I think this isn't exposed in any of the JSON/BQ fields today though, so we might need some other way to distinguish this.
>
> +1, we should. I think this is doable with some minor changes to the worker code. Will look into it.
>
> Why are we creating a Git tag? Instead, on every container build, along with the latest tag, we could add a tag with the SHA and use that tag for the docker pull part of the deployment.
> Do you mean a vX.X.<minor> that is updated every 24 hours? This is a bit too frequent to me and will generate a lot of noise. Perhaps we want a separate, less frequent release process, especially if the e2e test doesn't test result quality.
>
> These are both good points. I'm not a fan of the Git tags either. This was the best way I could think of to differentiate between regular commits and commits for a prod release.
>
> I really like the idea of tagging Docker images with :stable and pulling those. Just to confirm: when multiple images have been pushed with the same tag, does Docker ensure that the latest image tagged :stable gets pulled?

Yeah, when you push a Docker image with a tag, it overwrites the last one. This is how the current setup works too -- we use "latest", which isn't special and is just the default tag when nothing is specified.

@azeemshaikh38 (Contributor, Author)

Discussed with Oliver offline. Some updates:

  1. Cloud Build will tag container images with both COMMIT_SHA and latest tags.
  2. The .shard_num file will be replaced with .shard_metadata, which, along with num_shards, will also contain the COMMIT_SHA used to generate the shards.
  3. Successful completion of the release tests will trigger a webhook backed by a Cloud Run application. This will tag the Docker image corresponding to the COMMIT_SHA tag as stable (see the sketch after this list).
  4. The same webhook, on successful completion of the weekly cron job run, will update Git tag versions. Automating Git tag releases will be useful so that when users run go get on Scorecard, they get the updated code.
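
For step 3, one way to add the stable tag without pulling the image at all is gcloud's add-tag command; this is a sketch with a placeholder registry path, not necessarily what the Cloud Run webhook will actually run:

  gcloud container images add-tag --quiet \
      "${REGISTRY}/scorecard:${COMMIT_SHA}" \
      "${REGISTRY}/scorecard:stable"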

@azeemshaikh38 removed the needs discussion, help wanted, and good first issue labels on Aug 6, 2021
@06kellyjac (Contributor)

As a note (you're probably already aware), the 2.1.1 release for linux x86_64 doesn't have version info. I haven't checked other architectures or versions 🙂

@azeemshaikh38 (Contributor, Author)

> As a note (you're probably already aware), the 2.1.1 release for linux x86_64 doesn't have version info. I haven't checked other architectures or versions 🙂

Ah, interesting. I wasn't aware of this; thanks for bringing it up. Creating a new issue to track it.
