
[KOGITO-7840] - Add validation on Workflow Metadata with admission webhooks #239

Open
wants to merge 8 commits into base: main

Conversation

@spolti (Member) commented Aug 22, 2023


Description of the change:

Motivation for the change:

Checklist

  • Add or Modify a unit test for your change
  • Have you verified that all the tests are passing?
How to backport a pull request to a different branch?

To automatically create a backporting pull request, add one or more labels with the format backport-<branch-name>, where <branch-name> is the name of the branch the pull request must be backported to (e.g., backport-7.67.x to backport the original PR to the 7.67.x branch).

NOTE: backporting is an action aiming to move a change (usually a commit) from a branch (usually the main one) to another one, generally a still-maintained release branch. Put simply, it moves a specific change, or a set of changes, from one branch to another.

Once the original pull request is successfully merged, the automated action will create one backporting pull request for each label (with the format above) that has been added.

If something goes wrong, the author will be notified, and at that point a manual backport is needed.

NOTE: this automated backporting is triggered whenever a pull request on the main branch is labeled or closed; both conditions must be satisfied for the new PR to be created.

@kie-ci commented Aug 22, 2023

Openshift tests job #429 was: FAILURE
Possible explanation: Pipeline failure or project build failure

Please look here: https://jenkins-kogito-tools.kogito-cluster-0ad6762cc85bcef5745bb684498c2436-0000.us-south.containers.appdomain.cloud/job/kogito/job/kogito-serverless-operator-PR-check/429/display/redirect
See console log:

Console Logs /home/jenkins/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.15.0/pkg/internal/testing/controlplane/plane.go:97 +0x49
sigs.k8s.io/controller-runtime/pkg/envtest.(*Environment).Stop(0xc0003c6000)
/home/jenkins/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.15.0/pkg/envtest/server.go:193 +0x114
github.com/kiegroup/kogito-serverless-operator/api/v1alpha08.glob..func2()
/home/jenkins/agent/workspace/kogito/kogito-serverless-operator-PR-check/api/v1alpha08/webhook_suite_test.go:132 +0x46
------------------------------

Summarizing 2 Failures:
[FAIL] [BeforeSuite]
/home/jenkins/agent/workspace/kogito/kogito-serverless-operator-PR-check/api/v1alpha08/webhook_suite_test.go:76
[PANICKED!] [AfterSuite]
/opt/tools/golang/1.19/go/src/runtime/panic.go:260

Ran 0 of 2 Specs in 0.006 seconds
FAIL! -- A BeforeSuite node failed so all tests were skipped.
--- FAIL: TestAPIs (0.01s)
FAIL
coverage: 11.6% of statements
FAIL github.com/kiegroup/kogito-serverless-operator/api/v1alpha08 0.048s
FAIL
make[1]: *** [Makefile:3: test] Error 1
make[1]: Leaving directory '/home/jenkins/agent/workspace/kogito/kogito-serverless-operator-PR-check/api'
make: *** [Makefile:122: test-api] Error 2
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Push image to Openshift Registry)
Stage "Push image to Openshift Registry" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Run e2e tests)
Stage "Run e2e tests" skipped due to earlier failure(s)
[Pipeline] stage
[Pipeline] { (Running e2e tests)
Stage "Run e2e tests" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] sh
+ wget --no-check-certificate -qO - https://jenkins-kogito-tools.kogito-cluster-0ad6762cc85bcef5745bb684498c2436-0000.us-south.containers.appdomain.cloud/job/kogito/job/kogito-serverless-operator-PR-check/429/api/json
[Pipeline] readJSON
[Pipeline] sh

@kie-ci commented Aug 22, 2023

Openshift tests job #432 was: FAILURE
Possible explanation: Pipeline failure or project build failure

Please look here: https://jenkins-kogito-tools.kogito-cluster-0ad6762cc85bcef5745bb684498c2436-0000.us-south.containers.appdomain.cloud/job/kogito/job/kogito-serverless-operator-PR-check/432/display/redirect
See console log:

Console Logs /home/jenkins/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.15.0/pkg/envtest/server.go:193 +0x114
github.com/kiegroup/kogito-serverless-operator/api/v1alpha08.glob..func2()
/home/jenkins/agent/workspace/kogito/kogito-serverless-operator-PR-check/api/v1alpha08/webhook_suite_test.go:132 +0x46
------------------------------

Summarizing 2 Failures:
[FAIL] [BeforeSuite]
/home/jenkins/agent/workspace/kogito/kogito-serverless-operator-PR-check/api/v1alpha08/webhook_suite_test.go:76
[PANICKED!] [AfterSuite]
/opt/tools/golang/1.19/go/src/runtime/panic.go:260

Ran 0 of 2 Specs in 0.096 seconds
FAIL! -- A BeforeSuite node failed so all tests were skipped.
--- FAIL: TestAPIs (0.10s)
FAIL
coverage: 11.6% of statements
FAIL github.com/kiegroup/kogito-serverless-operator/api/v1alpha08 0.257s
FAIL
make[1]: *** [Makefile:3: test] Error 1
make[1]: Leaving directory '/home/jenkins/agent/workspace/kogito/kogito-serverless-operator-PR-check/api'
make: *** [Makefile:122: test-api] Error 2
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Push image to Openshift Registry)
Stage "Push image to Openshift Registry" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Run e2e tests)
Stage "Run e2e tests" skipped due to earlier failure(s)
[Pipeline] stage
[Pipeline] { (Running e2e tests)
Stage "Run e2e tests" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] sh
+ wget --no-check-certificate -qO - https://jenkins-kogito-tools.kogito-cluster-0ad6762cc85bcef5745bb684498c2436-0000.us-south.containers.appdomain.cloud/job/kogito/job/kogito-serverless-operator-PR-check/432/api/json
[Pipeline] readJSON
[Pipeline] sh
+ tail -n 50
+ wget --no-check-certificate -qO - https://jenkins-kogito-tools.kogito-cluster-0ad6762cc85bcef5745bb684498c2436-0000.us-south.containers.appdomain.cloud/job/kogito/job/kogito-serverless-operator-PR-check/432/consoleText

@kie-ci commented Aug 22, 2023

Openshift tests job #436 was: UNSTABLE
Possible explanation: This should be test failures

Please look here: https://jenkins-kogito-tools.kogito-cluster-0ad6762cc85bcef5745bb684498c2436-0000.us-south.containers.appdomain.cloud/job/kogito/job/kogito-serverless-operator-PR-check/436/display/redirect
See console log:

Console Logs openshift-network-diagnostics network-check-target-5rn7j 1/1 Running 0 19d
openshift-network-operator network-operator-7f48cb8b6d-s957r 1/1 Running 0 20d
openshift-operator-lifecycle-manager catalog-operator-574b86796-tbgfc 1/1 Running 0 20d
openshift-operator-lifecycle-manager collect-profiles-28212300-gn9cm 0/1 Completed 0 40m
openshift-operator-lifecycle-manager collect-profiles-28212315-w6znp 0/1 Completed 0 25m
openshift-operator-lifecycle-manager collect-profiles-28212330-nflmw 0/1 Completed 0 10m
openshift-operator-lifecycle-manager olm-operator-779dfbf5b7-qlqdw 1/1 Running 0 20d
openshift-operator-lifecycle-manager package-server-manager-56c5446c7-72jvf 1/1 Running 0 20d
openshift-operator-lifecycle-manager packageserver-6c9788bdd5-knnb9 1/1 Running 0 19d
openshift-operator-lifecycle-manager packageserver-6c9788bdd5-snhx6 1/1 Running 0 19d
openshift-roks-metrics metrics-6467c65d7-mrm5p 1/1 Running 0 20d
openshift-roks-metrics push-gateway-6f9c7688c5-hhrtc 1/1 Running 0 20d
openshift-service-ca-operator service-ca-operator-599cdfbbc9-lrfb8 1/1 Running 0 20d
openshift-service-ca service-ca-7fbb6558f7-w4ctq 1/1 Running 0 20d
sonataflow-operator-system sonataflow-operator-controller-manager-dcdfdd888-jh8js 0/2 Terminating 0 13s
teste httpd-sample-7bdf74f5cd-b2rbf 1/1 Running 0 140d
tigera-operator tigera-operator-7dc6fddb9c-nnj9p 1/1 Running 0 54d
[Pipeline] }
[Pipeline] // script
Post stage
[Pipeline] archiveArtifacts
Archiving artifacts
‘test/logs/**/*.log’ doesn’t match anything: ‘test’ exists but not ‘test/logs/**/*.log’
No artifacts found that match the file pattern "test/logs/**/*.log". Configuration error?
[Pipeline] junit
Recording test results
None of the test reports contained any result
[Checks API] No suitable checks publisher found.
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
Lock released on resource [Sonataflow Operator OpenShift tests https://c107-e.us-south.containers.cloud.ibm.com:30353]
[Pipeline] // lock
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] sh
+ wget --no-check-certificate -qO - https://jenkins-kogito-tools.kogito-cluster-0ad6762cc85bcef5745bb684498c2436-0000.us-south.containers.appdomain.cloud/job/kogito/job/kogito-serverless-operator-PR-check/436/api/json
[Pipeline] readJSON
[Pipeline] sh
No test report files were found. Configuration error?

@kie-ci commented Aug 23, 2023

Openshift tests job #437 was: UNSTABLE
Possible explanation: This should be test failures

Please look here: https://jenkins-kogito-tools.kogito-cluster-0ad6762cc85bcef5745bb684498c2436-0000.us-south.containers.appdomain.cloud/job/kogito/job/kogito-serverless-operator-PR-check/437/display/redirect
See console log:

Console Logs openshift-network-diagnostics network-check-target-5rn7j 1/1 Running 0 20d
openshift-network-operator network-operator-7f48cb8b6d-s957r 1/1 Running 0 20d
openshift-operator-lifecycle-manager catalog-operator-574b86796-tbgfc 1/1 Running 0 20d
openshift-operator-lifecycle-manager collect-profiles-28213245-4s8k6 0/1 Completed 0 35m
openshift-operator-lifecycle-manager collect-profiles-28213260-cwjpz 0/1 Completed 0 20m
openshift-operator-lifecycle-manager collect-profiles-28213275-r88q5 0/1 Completed 0 5m2s
openshift-operator-lifecycle-manager olm-operator-779dfbf5b7-qlqdw 1/1 Running 0 20d
openshift-operator-lifecycle-manager package-server-manager-56c5446c7-72jvf 1/1 Running 0 20d
openshift-operator-lifecycle-manager packageserver-6c9788bdd5-knnb9 1/1 Running 0 20d
openshift-operator-lifecycle-manager packageserver-6c9788bdd5-snhx6 1/1 Running 0 20d
openshift-roks-metrics metrics-6467c65d7-mrm5p 1/1 Running 0 20d
openshift-roks-metrics push-gateway-6f9c7688c5-hhrtc 1/1 Running 0 20d
openshift-service-ca-operator service-ca-operator-599cdfbbc9-lrfb8 1/1 Running 0 20d
openshift-service-ca service-ca-7fbb6558f7-w4ctq 1/1 Running 0 20d
sonataflow-operator-system sonataflow-operator-controller-manager-5bd5cbd59-8zzb8 0/2 Terminating 0 12s
teste httpd-sample-7bdf74f5cd-b2rbf 1/1 Running 0 141d
tigera-operator tigera-operator-7dc6fddb9c-nnj9p 1/1 Running 0 54d
[Pipeline] }
[Pipeline] // script
Post stage
[Pipeline] archiveArtifacts
Archiving artifacts
‘test/logs/**/*.log’ doesn’t match anything: ‘test’ exists but not ‘test/logs/**/*.log’
No artifacts found that match the file pattern "test/logs/**/*.log". Configuration error?
[Pipeline] junit
Recording test results
None of the test reports contained any result
[Checks API] No suitable checks publisher found.
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
Lock released on resource [Sonataflow Operator OpenShift tests https://c107-e.us-south.containers.cloud.ibm.com:30353]
[Pipeline] // lock
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] sh
+ wget --no-check-certificate -qO - https://jenkins-kogito-tools.kogito-cluster-0ad6762cc85bcef5745bb684498c2436-0000.us-south.containers.appdomain.cloud/job/kogito/job/kogito-serverless-operator-PR-check/437/api/json
[Pipeline] readJSON
[Pipeline] sh
No test report files were found. Configuration error?

@kie-ci commented Aug 23, 2023

Openshift tests job #438 was: UNSTABLE
Possible explanation: This should be test failures

Please look here: https://jenkins-kogito-tools.kogito-cluster-0ad6762cc85bcef5745bb684498c2436-0000.us-south.containers.appdomain.cloud/job/kogito/job/kogito-serverless-operator-PR-check/438/display/redirect
See console log:

Console Logs openshift-network-diagnostics network-check-target-5rn7j 1/1 Running 0 20d
openshift-network-operator network-operator-7f48cb8b6d-s957r 1/1 Running 0 20d
openshift-operator-lifecycle-manager catalog-operator-574b86796-tbgfc 1/1 Running 0 20d
openshift-operator-lifecycle-manager collect-profiles-28213275-r88q5 0/1 Completed 0 31m
openshift-operator-lifecycle-manager collect-profiles-28213290-tr9kf 0/1 Completed 0 16m
openshift-operator-lifecycle-manager collect-profiles-28213305-84vn5 0/1 Completed 0 82s
openshift-operator-lifecycle-manager olm-operator-779dfbf5b7-qlqdw 1/1 Running 0 20d
openshift-operator-lifecycle-manager package-server-manager-56c5446c7-72jvf 1/1 Running 0 20d
openshift-operator-lifecycle-manager packageserver-6c9788bdd5-knnb9 1/1 Running 0 20d
openshift-operator-lifecycle-manager packageserver-6c9788bdd5-snhx6 1/1 Running 0 20d
openshift-roks-metrics metrics-6467c65d7-mrm5p 1/1 Running 0 20d
openshift-roks-metrics push-gateway-6f9c7688c5-hhrtc 1/1 Running 0 20d
openshift-service-ca-operator service-ca-operator-599cdfbbc9-lrfb8 1/1 Running 0 20d
openshift-service-ca service-ca-7fbb6558f7-w4ctq 1/1 Running 0 20d
sonataflow-operator-system sonataflow-operator-controller-manager-59769f4fb6-jb9j4 0/2 Terminating 0 14s
teste httpd-sample-7bdf74f5cd-b2rbf 1/1 Running 0 141d
tigera-operator tigera-operator-7dc6fddb9c-nnj9p 1/1 Running 0 54d
[Pipeline] }
[Pipeline] // script
Post stage
[Pipeline] archiveArtifacts
Archiving artifacts
‘test/logs/**/*.log’ doesn’t match anything: ‘test’ exists but not ‘test/logs/**/*.log’
No artifacts found that match the file pattern "test/logs/**/*.log". Configuration error?
[Pipeline] junit
Recording test results
None of the test reports contained any result
[Checks API] No suitable checks publisher found.
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
Lock released on resource [Sonataflow Operator OpenShift tests https://c107-e.us-south.containers.cloud.ibm.com:30353]
[Pipeline] // lock
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] sh
+ wget --no-check-certificate -qO - https://jenkins-kogito-tools.kogito-cluster-0ad6762cc85bcef5745bb684498c2436-0000.us-south.containers.appdomain.cloud/job/kogito/job/kogito-serverless-operator-PR-check/438/api/json
[Pipeline] readJSON
[Pipeline] sh
No test report files were found. Configuration error?

@ricardozanini (Member) left a comment

Thank you! Do we have any doc updates to make? Maybe a section under Operator with links to the cert-manager and OLM guides, plus a brief description of what we validate.

Three resolved (outdated) review threads on api/v1alpha08/sonataflow_webhook.go.
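For readers wondering what the webhook validates (per the docs question above), the PR wires metadata validation through controller-runtime's webhook.Validator interface. Below is a minimal sketch against controller-runtime v0.15 (the version visible in the CI logs); the validateMetadata helper and the annotation key it checks are illustrative assumptions, not code from this PR.

```go
package v1alpha08

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/webhook"
	"sigs.k8s.io/controller-runtime/pkg/webhook/admission"
)

// SetupWebhookWithManager registers the validating webhook for SonataFlow.
func (s *SonataFlow) SetupWebhookWithManager(mgr ctrl.Manager) error {
	return ctrl.NewWebhookManagedBy(mgr).
		For(s).
		Complete()
}

// Compile-time check that SonataFlow satisfies webhook.Validator.
var _ webhook.Validator = &SonataFlow{}

// ValidateCreate rejects workflows whose metadata is incomplete.
func (s *SonataFlow) ValidateCreate() (admission.Warnings, error) {
	return nil, s.validateMetadata()
}

// ValidateUpdate applies the same metadata checks on updates.
func (s *SonataFlow) ValidateUpdate(old runtime.Object) (admission.Warnings, error) {
	return nil, s.validateMetadata()
}

// ValidateDelete performs no checks; deletions are always admitted.
func (s *SonataFlow) ValidateDelete() (admission.Warnings, error) {
	return nil, nil
}

// validateMetadata is a hypothetical check: require a version annotation
// before admitting the object. The actual checks in the PR may differ.
func (s *SonataFlow) validateMetadata() error {
	if s.Annotations["sonataflow.org/version"] == "" {
		return fmt.Errorf("missing required annotation %q", "sonataflow.org/version")
	}
	return nil
}
```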
# See the License for the specific language governing permissions and
# limitations under the License.

CERT_MANAGER_INSTALLER="https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml"
Member

Is this version linked to the Operator SDK?

Member Author

The operator-sdk docs don't mention a specific version; it seems to be tied to the Kubernetes version.

@spolti (Member Author) commented Aug 24, 2023

Thank you! Do we have any doc updates to make? Maybe a section under Operator with links to the cert-manager and OLM guides, plus a brief description of what we validate.

There are some instructions here: https://github.com/kiegroup/kogito-docs/pull/361

@spolti (Member Author) commented Aug 25, 2023

@radtriste do you want to take a look before merging?

@radtriste (Contributor) left a comment

So the cert-manager becomes mandatory for the sonataflow operator?

@ricardozanini (Member) commented Aug 25, 2023

So the cert-manager becomes mandatory for the sonataflow operator?

Yes, OLM will manage it for us.

Locally we can disable it via an env var. @spolti, maybe add this env var by default in our hack/local scripts.

Also, please verify whether our manual installation guide is still relevant or needs updating.

@spolti (Member Author) commented Aug 25, 2023

So the cert-manager becomes mandatory for the sonataflow operator?

Yes, OLM will manage it for us.

Locally we can disable it via an env var. @spolti, maybe add this env var by default in our hack/local scripts.

It is disabled by default for local execution (make run);
it can be enabled with make run ENABLE_WEBHOOKS=true.
When deploying, make deploy takes care of the cert-manager operator installation.
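For context, this gate typically lives in the operator's main.go; a minimal sketch, assuming opt-in semantics for ENABLE_WEBHOOKS as described above (the stock kubebuilder scaffold uses != "false" instead, and the exact condition in the PR may differ):

```go
package main

import (
	"os"

	ctrl "sigs.k8s.io/controller-runtime"

	"github.com/kiegroup/kogito-serverless-operator/api/v1alpha08"
)

var setupLog = ctrl.Log.WithName("setup")

// setupWebhooks registers the SonataFlow validating webhook only when
// explicitly enabled, so a plain `make run` needs no serving certificate.
func setupWebhooks(mgr ctrl.Manager) {
	// Assumed opt-in semantics: unset, or anything other than "true",
	// leaves the webhook server unregistered.
	if os.Getenv("ENABLE_WEBHOOKS") == "true" {
		if err := (&v1alpha08.SonataFlow{}).SetupWebhookWithManager(mgr); err != nil {
			setupLog.Error(err, "unable to create webhook", "webhook", "SonataFlow")
			os.Exit(1)
		}
	}
}
```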

Also, please verify whether our manual installation guide is still relevant or needs updating.

I followed the docs to install the operator from there and didn't find anything outdated.

@radtriste (Contributor)

So the cert-manager becomes mandatory for the sonataflow operator?

Yes, OLM will manage it for us.
Locally we can disable it via an env var. @spolti, maybe add this env var by default in our hack/local scripts.

It is disabled by default for local execution (make run); it can be enabled with make run ENABLE_WEBHOOKS=true. When deploying, make deploy takes care of the cert-manager operator installation.

Also, please verify whether our manual installation guide is still relevant or needs updating.

I followed the docs to install the operator from there and didn't find anything outdated.

Some changes are needed because our documentation won't work: https://sonataflow.org/serverlessworkflow/latest/cloud/operator/install-serverless-operator.html#_install

On Minikube, my operator pod is stuck in ContainerCreating due to this:

$ kget events
LAST SEEN   TYPE      REASON              OBJECT                                                         MESSAGE
15m         Normal    Scheduled           pod/sonataflow-operator-controller-manager-7f66df8485-cstc5    Successfully assigned sonataflow-operator-system/sonataflow-operator-controller-manager-7f66df8485-cstc5 to minikube
73s         Warning   FailedMount         pod/sonataflow-operator-controller-manager-7f66df8485-cstc5    MountVolume.SetUp failed for volume "cert" : secret "webhook-server-cert" not found
2m10s       Warning   FailedMount         pod/sonataflow-operator-controller-manager-7f66df8485-cstc5    Unable to attach or mount volumes: unmounted volumes=[cert], unattached volumes=[cert kube-api-access-p8knw]: timed out waiting for the condition
4m27s       Warning   FailedMount         pod/sonataflow-operator-controller-manager-7f66df8485-cstc5    Unable to attach or mount volumes: unmounted volumes=[cert], unattached volumes=[kube-api-access-p8knw cert]: timed out waiting for the condition
15m         Normal    SuccessfulCreate    replicaset/sonataflow-operator-controller-manager-7f66df8485   Created pod: sonataflow-operator-controller-manager-7f66df8485-cstc5
15m         Normal    ScalingReplicaSet   deployment/sonataflow-operator-controller-manager              Scaled up replica set sonataflow-operator-controller-manager-7f66df8485 to 1

Reproducer:

minikube start --cpus 4 --memory 4096 --addons registry --addons metrics-server --insecure-registry "10.0.0.0/24" --insecure-registry "localhost:5000"

kubectl apply -f operator.yaml

@ricardozanini (Member)

@spolti the local installation must disable cert-manager by default. But users ought to have the option to enable it to mimic real cluster features. So our docs should reflect that. Thanks for verifying, @radtriste.

@spolti (Member Author) commented Aug 28, 2023

@spolti the local installation must disable cert-manager by default. But users ought to have the option to enable it to mimic real cluster features. So our docs should reflect that. Thanks for verifying, @radtriste.

I am now getting confused: should we have it enabled by default or not?
To have it disabled in the manual installation, we have two options: keep two versions of operator.yaml,

or, if users don't want to install cert-manager or install the certificates manually, instruct them to remove the webhook-related parts from operator.yaml before running kubectl create -f operator.yaml.

@ricardozanini (Member)

@spolti my takes are:

  1. Disable it by default for minikube/kind/local installations to avoid pressuring resources
  2. Enable it by default everywhere else

If users wish to add cert-manager locally to mimic production workloads, they could do so.

Isn't the env var you introduced in main.go doing precisely that, i.e., dropping the webhook if set? If so, we can instruct users to install operator.yaml via kustomize, where we'd have a base with and without cert-manager.

@radtriste (Contributor)

@radtriste did you see this: https://github.com/kiegroup/kogito-docs/pull/361/files#diff-905eed5405143ff0b5ef4025b84da595bbc0770b2c1353909979fb8204ac1d23R88

Well, I did not see it because you did not link it in the description :)
Let me review that.

@radtriste (Contributor)

@spolti my takes are:

  1. Disable it by default for minikube/kind/local installations to avoid pressuring resources
  2. Enable it by default everywhere else

If users wish to add cert-manager locally to mimic production workloads, they could do so.

Isn't the env var you introduced in main.go doing precisely that, i.e., dropping the webhook if set? If so, we can instruct users to install operator.yaml via kustomize, where we'd have a base with and without cert-manager.

I think we should keep it enabled by default in operator.yaml,
but we need to document how to deactivate it if needed. A simple sed/awk before the kubectl apply command should do it IMO :)

@spolti force-pushed the KOGITO-7840-1 branch 7 times, most recently from 099c2b7 to abc1d1a on September 22, 2023 21:03
Signed-off-by: Spolti <fspolti@redhat.com>
@spolti (Member Author) commented Sep 25, 2023

@ricardozanini @MarianMacik finally green :)
Feel free to merge.

Resolved (outdated) review threads on config/manager/kustomization.yaml and test/e2e/workflow_nowebhooks_test.go.

BeforeAll(func() {
Member

Shouldn't this be retained in a BeforeAll block, as it is just setup and not part of the test?

Member Author

No. As I changed the main e2e suite to be Serial, each e2e test will instead have its own environment preparation running in order.

Member

@spolti Well, the BeforeAll function would do the same: it runs as the first function in the Ordered container before all tests are run. Unless you wanted to have the setup as a dedicated test.

Signed-off-by: Spolti <fspolti@redhat.com>
MarianMacik pushed a commit to MarianMacik/kogito-serverless-operator that referenced this pull request Oct 3, 2023
Signed-off-by: Karel Suta <ksuta@redhat.com>
@MarianMacik (Member) left a comment

@spolti Looks good now. About the e2e tests, we could have the test setup in the BeforeAll function, unless your intention was to have applying the CRDs as a dedicated test.

@spolti (Member Author) commented Oct 4, 2023

@spolti Looks good now. About the e2e tests, we could have the test setup in the BeforeAll function, unless your intention was to have applying the CRDs as a dedicated test.

The way it was before caused a race condition between the main test and the no-webhooks test: if A started first, B would fail once A finished, and vice versa, because the namespace was not getting cleaned up after the first test finished. The BeforeAll in this case would only clean the env after everything finished, and that was not what was needed here.

@MarianMacik (Member)

The way it was before caused a race condition between the main test and the no-webhooks test: if A started first, B would fail once A finished, and vice versa, because the namespace was not getting cleaned up after the first test finished. The BeforeAll in this case would only clean the env after everything finished, and that was not what was needed here.

@spolti
Yes, before, everything was in one Ordered section; now each of those is in its own Serial section, so they are mutually exclusive. I was thinking about putting the setup inside the Ordered section now, so basically It("Prepare the environment", func() { would be changed to BeforeAll(func() {, nothing else. The Serial part now guarantees that the tests will not run concurrently, and BeforeAll would run just inside the context of the Ordered section, the same way AfterAll is executed now.
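To make the structure concrete, a minimal Ginkgo v2 sketch of what is being discussed; the spec text and the prepareEnvironment/cleanupEnvironment helpers are hypothetical stand-ins for the suite's real setup code:

```go
package e2e

import (
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// Serial: this container never runs concurrently with other specs.
// Ordered: its specs run in declaration order and may share state,
// which is what makes BeforeAll/AfterAll usable here.
var _ = Describe("workflow without webhooks", Serial, Ordered, func() {
	// The suggestion above: run environment preparation once, before the
	// first spec of this container, instead of modelling it as a
	// dedicated It("Prepare the environment", ...) spec.
	BeforeAll(func() {
		Expect(prepareEnvironment()).To(Succeed())
	})

	// Runs once after the last spec, so the namespace is cleaned up
	// before the next Serial container starts, avoiding the race
	// between the main and no-webhooks suites described above.
	AfterAll(func() {
		Expect(cleanupEnvironment()).To(Succeed())
	})

	It("deploys a workflow with the validating webhook disabled", func() {
		// ...assertions against the cluster would go here...
	})
})

// Hypothetical stubs standing in for the suite's real helpers.
func prepareEnvironment() error { return nil }
func cleanupEnvironment() error { return nil }
```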

@spolti (Member Author) commented Oct 4, 2023

@spolti Looks good now. About the e2e tests, we could have the test setup in the BeforeAll function, unless your intention was to have applying the CRDs as a dedicated test.

The way it was before caused a race condition between the main test and the no-webhooks test: if A started first, B would fail once A finished, and vice versa, because the namespace was not getting cleaned up after the first test finished. The BeforeAll in this case would only clean the env after everything finished, and that was not what was needed here.

I meant the AfterAll, not the BeforeAll :)
Sorry for the typo.

@MarianMacik (Member)

@spolti so do we want to change it or are we ok with the current state?

@spolti (Member Author) commented Oct 4, 2023

@spolti so do we want to change it or are we ok with the current state?

I am OK with the current state, but let's hear from others.

@ricardozanini (Member)

@MarianMacik @spolti I think we can merge it as is to unblock things; later, @MarianMacik, can you send the PR to replace this with BDD?

@MarianMacik (Member)

@ricardozanini @spolti Yes, let's do it like that.

@domhanak (Contributor)

@ricardozanini is there anything left to do with this PR?

@ricardozanini (Member)

Another case of many features/collaborators working on the code base. This introduces a fairly new and complex setup for local dev.
