
Add aggregationRule for spec compliance #25

Merged: 2 commits into servicebinding:main from aggregation-rule on Aug 25, 2021

Conversation

@baijum (Member) commented Aug 24, 2021

@baijum force-pushed the aggregation-rule branch 2 times, most recently from de681f1 to b42b9df on August 24, 2021 09:58
@baijum marked this pull request as draft on August 24, 2021 10:15
@scothis (Contributor) left a comment

The ClusterRole with the aggregationRule needs to be an independent resource. As currently implemented, it overwrites its own rules.

@baijum (Member, Author) commented Aug 24, 2021

> The ClusterRole with the aggregationRule needs to be an independent resource. As currently implemented, it overwrites its own rules.

Thanks for the suggestion. I will explore how to implement this.

@scothis (Contributor) commented Aug 24, 2021

> Thanks for the suggestion. I will explore how to implement this.

Create a new ClusterRole resource, and connect it to the ServiceAccount with a ClusterRoleBinding. For the most part, you can copy the pattern of the existing ClusterRole/Binding pair.
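
A rough sketch of that pattern follows; the resource names, namespace, ServiceAccount, and the selector label are purely illustrative assumptions, not the manifests from this PR. The aggregated ClusterRole carries the aggregationRule and leaves its rules for the controller-manager to fill in, and a ClusterRoleBinding connects it to the controller's ServiceAccount, mirroring the existing ClusterRole/Binding pair:

# Sketch only: names, namespace, and the selector label below are hypothetical.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: servicebinding-aggregated-role          # hypothetical name
aggregationRule:
  clusterRoleSelectors:
    - matchLabels:
        servicebinding.io/controller: "true"    # hypothetical selector label
rules: []  # left empty; the controller-manager overwrites this with the aggregated rules
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: servicebinding-aggregated-rolebinding   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: servicebinding-aggregated-role
subjects:
  - kind: ServiceAccount
    name: controller-manager                    # hypothetical ServiceAccount
    namespace: system                           # hypothetical namespace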

@baijum marked this pull request as ready for review on August 25, 2021 02:11
Makefile (outdated)

@@ -72,7 +72,7 @@ uninstall: manifests ## Uninstall CRDs from the K8s cluster specified in ~/.kube
 	$(KUSTOMIZE) build config/crd | $(KO) delete -f -

 deploy: manifests ## Deploy controller to the K8s cluster specified in ~/.kube/config.
-	$(KUSTOMIZE) build config/default | $(KO) apply -f -
+	$(KUSTOMIZE) build config/default | $(KO) apply --local -f -
@baijum (Member, Author) left a comment

The --local flag is required for building the image locally and deploying it. Otherwise, ko expects the KO_DOCKER_REPO variable to be set with credentials to push the image to that location. Since the deploy target is only used locally, I think it is safe to set the --local flag. Before using ko, the configuration was like this:

deploy: manifests kustomize ## Deploy controller to the K8s cluster specified in ~/.kube/config.
	cd config/manager && $(KUSTOMIZE) edit set image controller=${IMG}
	$(KUSTOMIZE) build config/default | kubectl apply -f -

With that configuration, the image was built using the local docker daemon but not pushed. So the equivalent of that configuration is using ko with the --local flag. Also, we have separate CI job steps to publish OCI images.

P.S.: This change could be a separate PR, but I noticed this issue while working on the current one.

@scothis (Contributor) commented Aug 25, 2021

Adding --local forces all callers to write to a docker daemon. The same behavior can be achieved by setting KO_DOCKER_REPO=ko.local, without impacting all users. Likewise, kind users can set the variable to kind.local and the built image will be sideloaded into kind before the resources are applied to the cluster.
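
For reference, that workflow might look like this from a local checkout (a sketch assuming the repository's deploy Makefile target; ko.local and kind.local are ko's built-in publishers for the local docker daemon and for kind, respectively):

# Build with the local docker daemon instead of pushing to a remote registry
KO_DOCKER_REPO=ko.local make deploy

# For kind clusters, sideload the built image into the kind node before applying
KO_DOCKER_REPO=kind.local make deploy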

@baijum (Member, Author) replied

My bad! Reverted this change.

Signed-off-by: Baiju Muthukadan <baiju.m.mail@gmail.com>
@scothis merged commit d187d83 into servicebinding:main on Aug 25, 2021
@baijum deleted the aggregation-rule branch on August 26, 2021 01:49