From 991b8f05d7c09357e32e49526bc5127f04d13311 Mon Sep 17 00:00:00 2001
From: nikimanoledaki
Date: Fri, 10 May 2024 13:20:45 +0200
Subject: [PATCH] Add Proposal 2: Run

Co-authored-by: locomundo
Signed-off-by: nikimanoledaki
---
 .github/workflows/run.yml              |   0
 .../docs/proposals/proposal-002-run.md | 223 ++++++++++++++++++
 2 files changed, 223 insertions(+)
 create mode 100644 .github/workflows/run.yml
 create mode 100644 website/content/docs/proposals/proposal-002-run.md

diff --git a/.github/workflows/run.yml b/.github/workflows/run.yml
new file mode 100644
index 0000000..e69de29
diff --git a/website/content/docs/proposals/proposal-002-run.md b/website/content/docs/proposals/proposal-002-run.md
new file mode 100644
index 0000000..d494292
--- /dev/null
+++ b/website/content/docs/proposals/proposal-002-run.md
@@ -0,0 +1,223 @@

# Run the benchmark tests as part of the automated pipeline

## Authors

- @locomundo
- @nikimanoledaki

## Status

Draft

## Table of Contents

- [Run the benchmark tests as part of the automated pipeline](#run-the-benchmark-tests-as-part-of-the-automated-pipeline)
  - [Authors](#authors)
  - [Status](#status)
  - [Table of Contents](#table-of-contents)
  - [Summary](#summary)
  - [Motivation](#motivation)
    - [Goals](#goals)
    - [Non-Goals](#non-goals)
    - [Linked Docs](#linked-docs)
  - [Proposal](#proposal)
    - [User Stories (Optional)](#user-stories-optional)
    - [Notes/Constraints/Caveats (Optional)](#notesconstraintscaveats-optional)
    - [Risks and Mitigations](#risks-and-mitigations)
  - [Design Details](#design-details)
    - [Graduation Criteria (Optional)](#graduation-criteria-optional)
  - [Drawbacks (Optional)](#drawbacks-optional)
  - [Alternatives](#alternatives)
  - [Infrastructure Needed (Optional)](#infrastructure-needed-optional)

## Summary

## Motivation

This proposal is part of the pipeline automation of the Green Review for Falco.
Currently, we use Flux to watch the upstream Falco repository and run the benchmark tests continuously. For example, [this benchmark test](https://github.com/falcosecurity/cncf-green-review-testing/blob/main/kustomize/falco-driver/ebpf/stress-ng.yaml#L27-L32) is set up as a Kubernetes Deployment that runs an endless loop of [`stress-ng`](https://wiki.ubuntu.com/Kernel/Reference/stress-ng), which applies stress to the kernel. Instead, this proposal aims to provide a solution for deploying the benchmark tests only when they are needed.

Secondly, automating how we run benchmark tests in this pipeline will make it easier and faster to add new benchmark tests. It will enable both the WG Green Reviews and CNCF project maintainers to come up with new benchmark tests and run them to get feedback faster.

### Goals

- Describe the actions to take immediately after the trigger from [Proposal 1](https://github.com/cncf-tags/green-reviews-tooling/issues/84).
- Describe how the pipeline should _fetch_ the benchmark tests, either from this repository (`cncf-tags/green-reviews-tooling`) or from an upstream repository (Falco's [`falcosecurity/cncf-green-review-testing`](https://github.com/falcosecurity/cncf-green-review-testing)).
- Describe how the pipeline should _run_ the benchmark tests through GitHub Actions for a specific project, e.g. Falco.
- Communicate the changes the Falco team needs to make to convert the benchmark test into a GitHub Actions workflow file.
- Provide _modularity_ for the benchmark tests.

### Non-Goals

* Defining or designing the content of the benchmark tests themselves, or assigning responsibility for who should write them.
### Linked Docs

* [Slack discussion on benchmark test framework](https://cloud-native.slack.com/archives/C060EDHN431/p1708416918423089?thread_ts=1708348336.259699&cid=C060EDHN431)

## Proposal

### User Stories (Optional)

* As a CNCF Project Maintainer,

* As a Green Review WG reviewer,

* As a Green Review WG cluster maintainer,

### Notes/Constraints/Caveats (Optional)

### Risks and Mitigations

## Design Details

Workflows can be fetched from other GitHub organizations and repositories using the `jobs.<job_id>.uses` syntax defined here: https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_iduses

```yaml
jobs:
  call-workflow-in-another-repo: # job_id
    uses: octo-org/another-repo/.github/workflows/workflow.yml@v1 # refer to this or another repo
```

This reference can be pinned to `@main` or `@another-branch`, which is useful for versioning and for testing specific releases.

This means there is no need to authenticate to any repository, as long as the external repositories are public.

How will the job run in the cluster? We assume that the workflow already contains a kubeconfig to authenticate with the test cluster, and that Falco has already been deployed to it.

The next steps describe how to deploy a benchmark test to the cluster, taking `stress-ng` as an example.

The workflow can use an existing job that contains a reference for how to deploy the benchmark test to the cluster: `octo-org/another-repo/.github/workflows/workflow.yml@v1`

For example, Falco has defined its `stress-ng` test in a Deployment manifest which is ready to be applied to a cluster. This Deployment manifest can be applied in a similar way to how Falco was deployed to the cluster, using infrastructure-as-code in an _ad hoc_ rather than _continuous_ way.
For example, using the Flux CLI:

```shell
flux create kustomization stress-ng-benchmark-test \
--target-namespace=falco \
--source=
```

Essentially, the command above would deploy the benchmark test ad hoc. The CLI arguments would be similar to how Falco is currently deployed in [clusters/projects/falco/falco.yaml](https://github.com/cncf-tags/green-reviews-tooling/blob/main/clusters/projects/falco/falco.yaml). The main difference is that, using the CLI, it is deployed _ad hoc_ rather than _continuously_.

For future benchmark tests, it is not necessary to deploy the test using Flux. This framework works for now, but other approaches could be considered if other CNCF project maintainers would like to suggest a different way to do this.

In addition, the cluster on which the test runs should be configurable. The workflow receives parameters that identify the target cluster and authenticate against it, e.g. to define which namespace to use and to pass the kubeconfig to the workflow.

### Graduation Criteria (Optional)

## Drawbacks (Optional)

## Alternatives

## Infrastructure Needed (Optional)
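
---

To make the Design Details concrete, the following is a minimal sketch of a caller workflow that ties the pieces together: a dispatch trigger, a cross-repository `jobs.<job_id>.uses` reference, and configurable cluster parameters. The file path, workflow file name, input names, and secret name are illustrative assumptions, not part of this proposal:

```yaml
# .github/workflows/run.yml (hypothetical contents)
name: run-benchmark-tests

on:
  workflow_dispatch:  # ad hoc trigger, e.g. invoked by the automation from Proposal 1
    inputs:
      namespace:
        description: Namespace where the benchmark test should run, e.g. falco
        required: true
        type: string

jobs:
  run-benchmarks:
    # Reuse a workflow from the project's upstream repository; no
    # authentication is needed as long as that repository is public.
    # Pinning to @main (or a branch/tag) selects the version under test.
    uses: falcosecurity/cncf-green-review-testing/.github/workflows/benchmark.yml@main
    with:
      namespace: ${{ inputs.namespace }}   # configurable target namespace
    secrets:
      kubeconfig: ${{ secrets.KUBECONFIG }}  # access to the test cluster
```

For this to work, the called workflow would need to declare a matching `workflow_call` trigger with the same `namespace` input and `kubeconfig` secret.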