
Extend namespace filtering to all operations on namespaced resources #1668

Merged
4 commits merged on Mar 27, 2019

Conversation

@2opremio
Collaborator

2opremio commented Jan 17, 2019

Addresses part of #1471

  • Deprecate and rename flag --k8s-namespace-whitelist to --k8s-allow-namespace
  • Honor --k8s-allow-namespace for namespaced Kubernetes resources in all Flux operations.
  • Cluster-global resources (i.e. resources without an associated namespace) won't be filtered out by this change (they will keep being synced).
@2opremio 2opremio force-pushed the 2opremio:1471-extend-ns-filtering branch 2 times, most recently from f060e01 to 752d7b2 Jan 17, 2019
@2opremio 2opremio force-pushed the 2opremio:1471-extend-ns-filtering branch 2 times, most recently from e3f9e83 to dda60d7 Jan 17, 2019
@2opremio

Collaborator Author

2opremio commented Jan 17, 2019

I ran into a problem when filtering resources by the namespace in their identifier.

Namespace filtering cannot distinguish cluster-wide resources from resources in the default namespace, as mentioned in #1442 (comment).

When the namespace of a resource identifier comes from ...

  1. ... a manifest file, cluster-wide resources are not distinguishable from resources mapped (explicitly or implicitly) to the default namespace. This is because empty namespaces are replaced with default.
  2. ... a cluster resource, cluster-wide resources are not distinguishable from namespaced resources missing an explicit namespace (they are part of the default namespace).

(2) is slightly less problematic because, AFAIU, the only cluster resources we currently read are controllers, which are all namespaced.

My suggestion would be to create the following invariant for namespaces in resource identifiers:

  • "" indicates cluster-wide resource
  • "default" indicates mapped to the default namespace, either explicitly (the yaml in git or from the cluster contains it) or implicitly (the yaml in git or the cluster doesn't include a namespace)

More specifically, the changes needed are: when constructing resource identifiers ...

  1. ... of a manifest file in git. Stop using "default" as the namespace of cluster-wide resources
  2. ... of a cluster resource. Start using "default" as the namespace of resources in the default namespace which don't state it explicitly
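The proposed invariant could be sketched in Go like this (the function name and the namespaced parameter are illustrative, not flux's actual identifiers; note also squaremo's objection in the next comment that default is itself the name of a namespace):

```go
package main

import "fmt"

// A sketch of the proposed invariant for namespaces in resource
// identifiers: "" means cluster-wide, "default" means mapped to the
// default namespace whether declared explicitly or not. Whether a kind
// is namespaced would in practice have to come from the API server; here
// it is passed in as a plain bool.
func effectiveNamespace(declared string, namespaced bool) string {
	if !namespaced {
		return "" // cluster-wide resources keep an empty namespace
	}
	if declared == "" {
		return "default" // implicit mapping to the default namespace
	}
	return declared
}

func main() {
	fmt.Printf("%q\n", effectiveNamespace("", false))     // ""
	fmt.Printf("%q\n", effectiveNamespace("", true))      // "default"
	fmt.Printf("%q\n", effectiveNamespace("hello", true)) // "hello"
}
```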
@squaremo

Member

squaremo commented Jan 17, 2019

  • "default" indicates mapped to the default namespace, either explicitly (the yaml in git or from the cluster contains it) or implicitly (the yaml in git or the cluster doesn't include a namespace)

default is the name of a namespace, so you would not be able to tell whether this meant "implicitly mapped to the default namespace" (which might not be default) or "explicitly in default".

@squaremo

Member

squaremo commented Jan 17, 2019

For those two cases you've put above: in (2), I think the API server will always give you a namespace if the resource is namespaced, and never if it's not. So an empty namespace is a reliable guide to cluster-scoped resources.

But for (1), the only way to know is to check with the API server whether the particular GroupVersionKind is namespaced or not; something I was able to dodge in #1442, since I don't actually care what the namespace is, so long as I can use it to relate resources to manifests (though it would be cleaner if I could give things accurate IDs).

(CODA: #1442 did end up querying the API for resource namespacedness)
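A minimal sketch of that lookup, assuming the scope information has already been fetched from the API server's discovery endpoint (the scoper type and the "group/Kind" key format here are illustrative, not flux's actual cluster/kubernetes/scoper.go):

```go
package main

import "fmt"

// scoper answers "is this kind namespaced?" from a cached table. In the
// real code the table would be populated from the discovery API; here it
// is hard-coded for illustration.
type scoper struct {
	namespaced map[string]bool // keyed by "group/Kind" (hypothetical layout)
}

func (s scoper) isNamespaced(groupKind string) (bool, error) {
	ns, found := s.namespaced[groupKind]
	if !found {
		return false, fmt.Errorf("unknown resource kind %q", groupKind)
	}
	return ns, nil
}

func main() {
	s := scoper{namespaced: map[string]bool{
		"apps/Deployment": true,
		"/Namespace":      false,
	}}
	for _, gk := range []string{"apps/Deployment", "/Namespace"} {
		ns, err := s.isNamespaced(gk)
		fmt.Println(gk, ns, err)
	}
}
```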

@2opremio 2opremio mentioned this pull request Jan 21, 2019
@2opremio 2opremio force-pushed the 2opremio:1471-extend-ns-filtering branch 2 times, most recently from 00570c5 to 84a3850 Jan 22, 2019
http/client/client.go (outdated, resolved)
@2opremio 2opremio force-pushed the 2opremio:1471-extend-ns-filtering branch 2 times, most recently from 18cbe9f to a936989 Jan 22, 2019
@2opremio 2opremio changed the title [WIP] Extend namespace filtering to all operations on namespaced resources Extend namespace filtering to all operations on namespaced resources Jan 23, 2019
@2opremio

Collaborator Author

2opremio commented Jan 23, 2019

I am done with the code. I want to do some further high-level testing on a Kubernetes cluster, but it's ready to review.

cluster/kubernetes/scoper.go (outdated, resolved)
cluster/mock.go (outdated, resolved)
cluster/scoper.go (outdated, resolved)
@2opremio 2opremio mentioned this pull request Jan 28, 2019
@2opremio

Collaborator Author

2opremio commented Feb 8, 2019

This PR is blocked by #1442 (they have a lot in common and I agreed with @squaremo to get #1442 merged first)

@2opremio 2opremio added the blocked label Feb 8, 2019
@2opremio 2opremio removed the blocked label Feb 27, 2019
@2opremio 2opremio changed the title Extend namespace filtering to all operations on namespaced resources [WIP] Extend namespace filtering to all operations on namespaced resources Feb 27, 2019
@2opremio 2opremio requested review from hiddeco and removed request for hiddeco Feb 28, 2019
@2opremio 2opremio force-pushed the 2opremio:1471-extend-ns-filtering branch from 759d954 to e71b52c Mar 6, 2019
@2opremio 2opremio force-pushed the 2opremio:1471-extend-ns-filtering branch from c5706cc to cb4ad60 Mar 6, 2019
@2opremio

Collaborator Author

2opremio commented Mar 6, 2019

@squaremo this is ready again, PTAL

@2opremio 2opremio requested a review from squaremo Mar 6, 2019
Member

squaremo left a comment

I need to have a think about this, so only comments for now -- sorry 🌵

cluster/kubernetes/kubernetes.go (outdated, resolved)
cluster/kubernetes/kubernetes.go (resolved)
cluster/cluster.go (resolved)
daemon/daemon.go (outdated, resolved)
@2opremio 2opremio force-pushed the 2opremio:1471-extend-ns-filtering branch 2 times, most recently from 1ea6920 to 47d75cf Mar 7, 2019
@squaremo

Member

squaremo commented Mar 14, 2019

@2opremio That commit is what I had in mind (hopefully clearer in code than in comments!). See what you think ..

@2opremio

Collaborator Author

2opremio commented Mar 14, 2019

The "lower" we can filter resources by namespace, the better, since it will cover more code paths.

My initial proposal was very similar (to filter out disallowed namespaces when loading the manifests) but it was discarded, see #1668 (comment) .

I recall you eventually came up with a scenario in which it could be a problem, but now I don't recall which one and I can't find it; maybe you shared it privately.

At the very least, it will lead to confusing error messages when trying to edit policies of workloads belonging to disallowed namespaces. If we remove the manifests when loading, the error will be workload not found, whereas if we don't, we can provide a proper message such as workload belongs to a disallowed namespace.

PS: We should also merge the code of NamespaceAllowed and IsAllowedResource

@squaremo

Member

squaremo commented Mar 18, 2019

My initial proposal was very similar (to filter out disallowed namespaces when loading the manifests) but it was discarded, see #1668 (comment) .

I recall you ended up coming with a scenario in which it could be a problem. Now I don't recall and I can't find it, maybe you shared it privately.

Where we ended up on that (in #1442) was that it was an acceptable mid-way point to leave the parsing as generic (that's cluster/kubernetes/resource/load.go and friends), but do post-hoc filtering on the result (that's cluster/kubernetes/manifest.go). Mind you, the waters are muddy enough now that there may not be much of a distinction to make.

At the very least, it will lead to confusing error messages when trying to edit policies from workload belonging to disallowed namespaces. If we remove the manifests when loading, the error will be workload not found whereas if we don't we can provide a proper message such as workload belongs to disallowed namespace.

That's a fair objection. It's the usual trade-off of reliable enforcement versus being helpful to users. In this case, I don't think it's important to hide from a user the fact of namespaces being blocked, and I do think we can do better at closing down opportunities for accidentally side-stepping the check.

What about if the check against the allowed namespaces happened in cluster/kubernetes/Manifest.UpdateImage and cluster/kubernetes/Manifest.UpdatePolicy? Then it can return a more helpful error, while still being in the narrow bit of the interface.

I'll have another go at my amendment ..

@squaremo squaremo force-pushed the 2opremio:1471-extend-ns-filtering branch from 5bf61b4 to 92db72a Mar 20, 2019
@squaremo

Member

squaremo commented Mar 20, 2019

Rebased on master, and removed my extra commit. I've tried out a few scenarios:

  1. Start from scratch with repo and --k8s-allow-namespace=hello

    • only hello namespace and things in hello namespace get created, even though there are manifests for other things
  2. changed --k8s-allow-namespace=default and ran with --sync-garbage-collection

    • new resources are created, old resources are left, as they aren't considered for GC
  3. list-images and list-workloads both show just the namespaces allowed

  4. release --all acts on only the namespaces allowed, even if others use the image in question

So all good so far! Will think of some trickier tests next ..

@2opremio

Collaborator Author

2opremio commented Mar 20, 2019

I did some tests myself, but it's great that you are being thorough. Thanks!!

@2opremio

Collaborator Author

2opremio commented Mar 22, 2019

@squaremo I think I've dealt with the comments (except for #1668 (comment) which I don't know how to address).

PTAL

@2opremio

Collaborator Author

2opremio commented Mar 25, 2019

@squaremo I tested this PR against https://github.com/2opremio/locked-down-flux to see how it behaved with namespace-locking RBAC rules; everything worked as expected (no errors in the logs, and the example workload was created)

$ kubectl -n flux-system logs flux-58fd7cbd99-n5sjn 
Flag --k8s-namespace-whitelist has been deprecated, changed to --k8s-allow-namespace, use that instead
ts=2019-03-25T17:00:39.817983756Z caller=main.go:167 version=1471-extend-ns-filtering-cb4ad60c
ts=2019-03-25T17:00:39.882144008Z caller=main.go:297 component=cluster identity=/var/fluxd/keygen/identity
ts=2019-03-25T17:00:39.882313951Z caller=main.go:298 component=cluster identity.pub="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7OhlwhU0Hwp3kxyUe+FUWIQ3S127QIlQFnj2m9HUDfb1bpkRDK4gRVBM6JiYBTFj4/YAfGcWck4mPt88SgFIPLQzUYbJC1NoKjMVjH/VbAxP1XoJIZMlzHMd2MztcX2/q0QrCFAYhF60OhbTFu3tuZpqSMeZ0DK/KAmCVevXslaKztpayUqatBnmxhm3C5EM22jAwQZ/yxdN9+s8DZNiJS+2g5YwWVJqWB3WisjT9/p6MT/wjW+fiBbuQDpr3xAp7z6tAnr1/yChqmY7RAvTVsyXaNDtTEgXS9+FAgNmo3wf6m4qpT2H42/E6mY4rjDlPYxlf1bgkpHk8eR4I7qbJ"
ts=2019-03-25T17:00:39.882461841Z caller=main.go:299 component=cluster host=https://10.96.0.1:443 version=kubernetes-v1.13.0
ts=2019-03-25T17:00:39.88267732Z caller=main.go:311 component=cluster kubectl=/usr/local/bin/kubectl
ts=2019-03-25T17:00:39.883574849Z caller=main.go:322 component=cluster ping=true
ts=2019-03-25T17:00:45.150716883Z caller=aws.go:69 component=aws warn="no AWS region configured, or detected as cluster region" err="RequestError: send request failed\ncaused by: Get http://169.254.169.254/latest/meta-data/placement/availability-zone: dial tcp 169.254.169.254:80: connect: connection refused"
ts=2019-03-25T17:00:45.151020082Z caller=main.go:347 warning="AWS authorization not used; pre-flight check failed"
ts=2019-03-25T17:00:45.160827588Z caller=main.go:452 url=git@github.com:2opremio/locked-down-flux.git user="Weave Flux" email=support@weave.works signing-key= sync-tag=flux-sync notes-ref=flux set-author=false
ts=2019-03-25T17:00:45.160897014Z caller=main.go:509 upstream="no upstream URL given"
ts=2019-03-25T17:00:45.16101119Z caller=main.go:538 metrics-addr=:3031
ts=2019-03-25T17:00:45.162781302Z caller=loop.go:90 component=sync-loop err="git repo not ready: git repo has not been cloned yet"
ts=2019-03-25T17:00:45.16283077Z caller=images.go:17 component=sync-loop msg="polling images"
ts=2019-03-25T17:00:45.162848493Z caller=images.go:27 component=sync-loop msg="no automated services"
ts=2019-03-25T17:00:45.16403101Z caller=main.go:530 addr=:3030
ts=2019-03-25T17:00:45.728458176Z caller=checkpoint.go:24 component=checkpoint msg="up to date" latest=1.11.0
ts=2019-03-25T17:05:44.941893242Z caller=loop.go:90 component=sync-loop err="git repo not ready: git clone --mirror: fatal: Could not read from remote repository."
ts=2019-03-25T17:05:44.942947035Z caller=images.go:17 component=sync-loop msg="polling images"
ts=2019-03-25T17:05:44.943062658Z caller=images.go:27 component=sync-loop msg="no automated services"
ts=2019-03-25T17:09:42.250149846Z caller=loop.go:103 component=sync-loop event=refreshed url=git@github.com:2opremio/locked-down-flux.git branch=deploy HEAD=723b91549434575348e4b67affdc857478c5894f
ts=2019-03-25T17:09:42.494560317Z caller=sync.go:416 component=cluster method=Sync cmd=apply args= count=3
ts=2019-03-25T17:09:43.034708957Z caller=sync.go:482 component=cluster method=Sync cmd="kubectl apply -f -" took=539.883683ms err=null output="service/echoserver created\ndeployment.apps/echoserver created\nnetworkpolicy.networking.k8s.io/echoserver created"
ts=2019-03-25T17:09:45.680724239Z caller=warming.go:268 component=warmer info="refreshing image" image=gcr.io/google-samples/hello-app tag_count=2 to_update=2 of_which_refresh=0 of_which_missing=2
ts=2019-03-25T17:09:46.424009606Z caller=warming.go:364 component=warmer updated=gcr.io/google-samples/hello-app successful=2 attempted=2
ts=2019-03-25T17:09:46.424208427Z caller=images.go:17 component=sync-loop msg="polling images"
$ kubectl -n helloworld get all
NAME                             READY   STATUS    RESTARTS   AGE
pod/echoserver-bf95b6849-dp4kg   1/1     Running   0          10m

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/echoserver   ClusterIP   10.110.52.118   <none>        8080/TCP   10m

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/echoserver   1/1     1            1           10m

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/echoserver-bf95b6849   1         1         1       10m
@squaremo

Member

squaremo commented Mar 25, 2019

I tested this PR against https://github.com/2opremio/locked-down-flux

Lovely! What happens if you tell it to allow a namespace it doesn't have RBAC access to?

@2opremio

Collaborator Author

2opremio commented Mar 26, 2019

Good suggestion. After creating a namespace forbidden by RBAC but included in the whitelist, I noticed that Flux was creating everything in the namespace anyway.

After quite some digging, it turns out that the Kubernetes installation provided by Docker For Mac makes all service accounts cluster-admins, sigh:

$ kubectl get clusterrolebinding -o yaml docker-for-desktop-binding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: "2019-03-25T16:29:55Z"
  name: docker-for-desktop-binding
  resourceVersion: "436"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/docker-for-desktop-binding
  uid: 38fe7238-4f1b-11e9-b66a-025000000001
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts

I will retest tomorrow after removing that role binding.

PS: It's really frustrating that the RBAC groups are second-class citizens and are only documented through examples.

@2opremio

Collaborator Author

2opremio commented Mar 26, 2019

I bit the bullet and everything works as expected:

  1. The resources from an RBAC-allowed and whitelisted namespace (helloworld) are created just fine
  2. The resources from an RBAC-disallowed and whitelisted namespace (helloworld2) are not created, and the failure doesn't abort the sync

That said, although we give a sensible error about not being able to access the helloworld2 namespace, the subsequent kubectl apply errors are pretty horrible:

$ kubectl logs --namespace=flux-system flux-9b5fb66b7-526rz 
Flag --k8s-namespace-whitelist has been deprecated, changed to --k8s-allow-namespace, use that instead
ts=2019-03-26T01:16:01.725052284Z caller=main.go:169 version=1471-extend-ns-filtering-92db72a4
ts=2019-03-26T01:16:01.764536902Z caller=main.go:312 component=cluster identity=/etc/fluxd/ssh/identity
ts=2019-03-26T01:16:01.76731935Z caller=main.go:313 component=cluster identity.pub="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7OhlwhU0Hwp3kxyUe+FUWIQ3S127QIlQFnj2m9HUDfb1bpkRDK4gRVBM6JiYBTFj4/YAfGcWck4mPt88SgFIPLQzUYbJC1NoKjMVjH/VbAxP1XoJIZMlzHMd2MztcX2/q0QrCFAYhF60OhbTFu3tuZpqSMeZ0DK/KAmCVevXslaKztpayUqatBnmxhm3C5EM22jAwQZ/yxdN9+s8DZNiJS+2g5YwWVJqWB3WisjT9/p6MT/wjW+fiBbuQDpr3xAp7z6tAnr1/yChqmY7RAvTVsyXaNDtTEgXS9+FAgNmo3wf6m4qpT2H42/E6mY4rjDlPYxlf1bgkpHk8eR4I7qbJ"
ts=2019-03-26T01:16:01.767403888Z caller=main.go:314 component=cluster host=https://10.96.0.1:443 version=kubernetes-v1.13.0
ts=2019-03-26T01:16:01.76744863Z caller=main.go:326 component=cluster kubectl=/usr/local/bin/kubectl
ts=2019-03-26T01:16:01.769259121Z caller=main.go:337 component=cluster ping=true
ts=2019-03-26T01:16:07.132247759Z caller=aws.go:69 component=aws warn="no AWS region configured, or detected as cluster region" err="RequestError: send request failed\ncaused by: Get http://169.254.169.254/latest/meta-data/placement/availability-zone: dial tcp 169.254.169.254:80: connect: connection refused"
ts=2019-03-26T01:16:07.132330617Z caller=main.go:362 warning="AWS authorization not used; pre-flight check failed"
ts=2019-03-26T01:16:07.133919183Z caller=main.go:467 url=git@github.com:2opremio/locked-down-flux.git user="Weave Flux" email=support@weave.works signing-key= sync-tag=flux-sync notes-ref=flux set-author=false
ts=2019-03-26T01:16:07.14674934Z caller=main.go:524 upstream="no upstream URL given"
ts=2019-03-26T01:16:07.14715081Z caller=main.go:553 metrics-addr=:3031
ts=2019-03-26T01:16:07.154751956Z caller=images.go:17 component=sync-loop msg="polling images"
ts=2019-03-26T01:16:07.154995676Z caller=images.go:27 component=sync-loop msg="no automated workloads"
ts=2019-03-26T01:16:07.155227009Z caller=loop.go:90 component=sync-loop err="git repo not ready: git repo has not been cloned yet"
ts=2019-03-26T01:16:07.15584015Z caller=main.go:545 addr=:3030
ts=2019-03-26T01:16:07.203392013Z caller=kubernetes.go:284 component=cluster warning="cannot access allowed namespace" namespace=helloworld2 err="namespaces \"helloworld2\" is forbidden: User \"system:serviceaccount:flux-system:flux\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"helloworld2\""
ts=2019-03-26T01:16:07.601941976Z caller=checkpoint.go:24 component=checkpoint msg="up to date" latest=1.11.0
ts=2019-03-26T01:16:15.779525806Z caller=loop.go:103 component=sync-loop event=refreshed url=git@github.com:2opremio/locked-down-flux.git branch=master HEAD=c89a1b3362d904300f10a74c6daf3f2cdf900a18
ts=2019-03-26T01:16:15.871514063Z caller=sync.go:455 component=cluster method=Sync cmd=apply args= count=7
ts=2019-03-26T01:16:16.26625243Z caller=sync.go:521 component=cluster method=Sync cmd="kubectl apply -f -" took=394.659317ms err="running kubectl: Error from server (Forbidden): error when retrieving current configuration of:\nResource: \"/v1, Resource=namespaces\", GroupVersionKind: \"/v1, Kind=Namespace\"\nName: \"helloworld2\", Namespace: \"\"\nObject: &{map[\"apiVersion\":\"v1\" \"kind\":\"Namespace\" \"metadata\":map[\"annotations\":map[\"flux.weave.works/sync-checksum\":\"eff788baf5f8906b3879d666e60b5407f7b9f294\" \"kubectl.kubernetes.io/last-applied-configuration\":\"\"] \"labels\":map[\"flux.weave.works/sync-gc-mark\":\"sha256.cFRnOwOXCB_oNpi52QLlUkjvODSLBIo1ntBRfzoPqp0\"] \"name\":\"helloworld2\" \"namespace\":\"\"]]}\nfrom server for: \"STDIN\": namespaces \"helloworld2\" is forbidden: User \"system:serviceaccount:flux-system:flux\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"helloworld2\"\nError from server (Forbidden): error when retrieving current configuration of:\nResource: \"/v1, Resource=services\", GroupVersionKind: \"/v1, Kind=Service\"\nName: \"echoserver\", Namespace: \"helloworld2\"\nObject: &{map[\"apiVersion\":\"v1\" \"kind\":\"Service\" \"metadata\":map[\"labels\":map[\"app\":\"echoserver\" \"flux.weave.works/sync-gc-mark\":\"sha256.3KBCW83dRp2s10Zkls_vLADf8Wl0E9Agzz8sIT3wHkU\"] \"name\":\"echoserver\" \"namespace\":\"helloworld2\" \"annotations\":map[\"flux.weave.works/sync-checksum\":\"24d87352fb027d306ce7b03676b6d723dc59a799\" \"kubectl.kubernetes.io/last-applied-configuration\":\"\"]] \"spec\":map[\"ports\":[map[\"protocol\":\"TCP\" \"targetPort\":'\\u1f90' \"name\":\"http\" \"port\":'\\u1f90']] \"selector\":map[\"app\":\"echoserver\" \"component\":\"echoserver\"] \"type\":\"ClusterIP\"]]}\nfrom server for: \"STDIN\": services \"echoserver\" is forbidden: User \"system:serviceaccount:flux-system:flux\" cannot get resource \"services\" in API group \"\" in the namespace \"helloworld2\"\nError from server 
(Forbidden): error when retrieving current configuration of:\nResource: \"apps/v1, Resource=deployments\", GroupVersionKind: \"apps/v1, Kind=Deployment\"\nName: \"echoserver\", Namespace: \"helloworld2\"\nObject: &{map[\"apiVersion\":\"apps/v1\" \"kind\":\"Deployment\" \"metadata\":map[\"annotations\":map[\"flux.weave.works/sync-checksum\":\"191df8fec32a1deda61e32f995d8ffb7ea160781\" \"flux.weave.works/tag.echoserver\":\"semver:*\" \"kubectl.kubernetes.io/last-applied-configuration\":\"\"] \"labels\":map[\"app\":\"echoserver\" \"component\":\"echoserver\" \"flux.weave.works/sync-gc-mark\":\"sha256.b3ah4IFHE0JJUNAUMz8Q7ikY7f9dDREvkAYx40-0FTA\"] \"name\":\"echoserver\" \"namespace\":\"helloworld2\"] \"spec\":map[\"replicas\":'\\x01' \"selector\":map[\"matchLabels\":map[\"app\":\"echoserver\" \"component\":\"echoserver\"]] \"strategy\":map[\"type\":\"RollingUpdate\"] \"template\":map[\"metadata\":map[\"labels\":map[\"app\":\"echoserver\" \"component\":\"echoserver\"]] \"spec\":map[\"automountServiceAccountToken\":%!q(bool=false) \"containers\":[map[\"securityContext\":map[\"allowPrivilegeEscalation\":%!q(bool=false)] \"image\":\"gcr.io/google-samples/hello-app:1.0\" \"name\":\"echoserver\" \"ports\":[map[\"protocol\":\"TCP\" \"containerPort\":'\\u1f90' \"name\":\"web\"]]]]]]]]}\nfrom server for: \"STDIN\": deployments.apps \"echoserver\" is forbidden: User \"system:serviceaccount:flux-system:flux\" cannot get resource \"deployments\" in API group \"apps\" in the namespace \"helloworld2\"\nError from server (Forbidden): error when retrieving current configuration of:\nResource: \"networking.k8s.io/v1, Resource=networkpolicies\", GroupVersionKind: \"networking.k8s.io/v1, Kind=NetworkPolicy\"\nName: \"echoserver\", Namespace: \"helloworld2\"\nObject: &{map[\"apiVersion\":\"networking.k8s.io/v1\" \"kind\":\"NetworkPolicy\" \"metadata\":map[\"annotations\":map[\"kubectl.kubernetes.io/last-applied-configuration\":\"\" 
\"flux.weave.works/sync-checksum\":\"d98993cf8b5516b417533ebeeb0cfabe5c7ff6fa\"] \"labels\":map[\"app\":\"echoserver\" \"flux.weave.works/sync-gc-mark\":\"sha256.yxyBpR0YQllFHVEBWXu_LxyedQX59j9IYMah3mi6rL4\"] \"name\":\"echoserver\" \"namespace\":\"helloworld2\"] \"spec\":map[\"ingress\":[map[\"ports\":[map[\"port\":'\\u1f90' \"protocol\":\"TCP\"]]]] \"podSelector\":map[\"matchLabels\":map[\"app\":\"echoserver\" \"component\":\"echoserver\"]] \"policyTypes\":[\"Ingress\" \"Egress\"]]]}\nfrom server for: \"STDIN\": networkpolicies.networking.k8s.io \"echoserver\" is forbidden: User \"system:serviceaccount:flux-system:flux\" cannot get resource \"networkpolicies\" in API group \"networking.k8s.io\" in the namespace \"helloworld2\"" output="service/echoserver created\ndeployment.apps/echoserver created\nnetworkpolicy.networking.k8s.io/echoserver created"
ts=2019-03-26T01:16:16.523525929Z caller=sync.go:521 component=cluster method=Sync cmd="kubectl apply -f -" took=256.930326ms err="running kubectl: Error from server (Forbidden): error when retrieving current configuration of:\nResource: \"/v1, Resource=namespaces\", GroupVersionKind: \"/v1, Kind=Namespace\"\nName: \"helloworld2\", Namespace: \"\"\nObject: &{map[\"apiVersion\":\"v1\" \"kind\":\"Namespace\" \"metadata\":map[\"annotations\":map[\"flux.weave.works/sync-checksum\":\"eff788baf5f8906b3879d666e60b5407f7b9f294\" \"kubectl.kubernetes.io/last-applied-configuration\":\"\"] \"labels\":map[\"flux.weave.works/sync-gc-mark\":\"sha256.cFRnOwOXCB_oNpi52QLlUkjvODSLBIo1ntBRfzoPqp0\"] \"name\":\"helloworld2\" \"namespace\":\"\"]]}\nfrom server for: \"STDIN\": namespaces \"helloworld2\" is forbidden: User \"system:serviceaccount:flux-system:flux\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"helloworld2\"" output=
ts=2019-03-26T01:16:16.753540835Z caller=sync.go:521 component=cluster method=Sync cmd="kubectl apply -f -" took=229.928726ms err=null output="service/echoserver unchanged"
ts=2019-03-26T01:16:17.020017775Z caller=sync.go:521 component=cluster method=Sync cmd="kubectl apply -f -" took=266.312128ms err="running kubectl: Error from server (Forbidden): error when retrieving current configuration of:\nResource: \"/v1, Resource=services\", GroupVersionKind: \"/v1, Kind=Service\"\nName: \"echoserver\", Namespace: \"helloworld2\"\nObject: &{map[\"metadata\":map[\"annotations\":map[\"kubectl.kubernetes.io/last-applied-configuration\":\"\" \"flux.weave.works/sync-checksum\":\"24d87352fb027d306ce7b03676b6d723dc59a799\"] \"labels\":map[\"app\":\"echoserver\" \"flux.weave.works/sync-gc-mark\":\"sha256.3KBCW83dRp2s10Zkls_vLADf8Wl0E9Agzz8sIT3wHkU\"] \"name\":\"echoserver\" \"namespace\":\"helloworld2\"] \"spec\":map[\"ports\":[map[\"name\":\"http\" \"port\":'\\u1f90' \"protocol\":\"TCP\" \"targetPort\":'\\u1f90']] \"selector\":map[\"app\":\"echoserver\" \"component\":\"echoserver\"] \"type\":\"ClusterIP\"] \"apiVersion\":\"v1\" \"kind\":\"Service\"]}\nfrom server for: \"STDIN\": services \"echoserver\" is forbidden: User \"system:serviceaccount:flux-system:flux\" cannot get resource \"services\" in API group \"\" in the namespace \"helloworld2\"" output=
ts=2019-03-26T01:16:17.187479884Z caller=sync.go:521 component=cluster method=Sync cmd="kubectl apply -f -" took=167.164138ms err=null output="deployment.apps/echoserver configured"
ts=2019-03-26T01:16:17.414758158Z caller=sync.go:521 component=cluster method=Sync cmd="kubectl apply -f -" took=226.754625ms err="running kubectl: Error from server (Forbidden): error when retrieving current configuration of:\nResource: \"apps/v1, Resource=deployments\", GroupVersionKind: \"apps/v1, Kind=Deployment\"\nName: \"echoserver\", Namespace: \"helloworld2\"\nObject: &{map[\"spec\":map[\"selector\":map[\"matchLabels\":map[\"component\":\"echoserver\" \"app\":\"echoserver\"]] \"strategy\":map[\"type\":\"RollingUpdate\"] \"template\":map[\"metadata\":map[\"labels\":map[\"app\":\"echoserver\" \"component\":\"echoserver\"]] \"spec\":map[\"automountServiceAccountToken\":%!q(bool=false) \"containers\":[map[\"image\":\"gcr.io/google-samples/hello-app:1.0\" \"name\":\"echoserver\" \"ports\":[map[\"containerPort\":'\\u1f90' \"name\":\"web\" \"protocol\":\"TCP\"]] \"securityContext\":map[\"allowPrivilegeEscalation\":%!q(bool=false)]]]]] \"replicas\":'\\x01'] \"apiVersion\":\"apps/v1\" \"kind\":\"Deployment\" \"metadata\":map[\"name\":\"echoserver\" \"namespace\":\"helloworld2\" \"annotations\":map[\"flux.weave.works/sync-checksum\":\"191df8fec32a1deda61e32f995d8ffb7ea160781\" \"flux.weave.works/tag.echoserver\":\"semver:*\" \"kubectl.kubernetes.io/last-applied-configuration\":\"\"] \"labels\":map[\"app\":\"echoserver\" \"component\":\"echoserver\" \"flux.weave.works/sync-gc-mark\":\"sha256.b3ah4IFHE0JJUNAUMz8Q7ikY7f9dDREvkAYx40-0FTA\"]]]}\nfrom server for: \"STDIN\": deployments.apps \"echoserver\" is forbidden: User \"system:serviceaccount:flux-system:flux\" cannot get resource \"deployments\" in API group \"apps\" in the namespace \"helloworld2\"" output=
ts=2019-03-26T01:16:17.57742458Z caller=sync.go:521 component=cluster method=Sync cmd="kubectl apply -f -" took=184.507826ms err="running kubectl: Error from server (Forbidden): error when retrieving current configuration of:\nResource: \"networking.k8s.io/v1, Resource=networkpolicies\", GroupVersionKind: \"networking.k8s.io/v1, Kind=NetworkPolicy\"\nName: \"echoserver\", Namespace: \"helloworld2\"\nObject: &{map[\"kind\":\"NetworkPolicy\" \"metadata\":map[\"annotations\":map[\"flux.weave.works/sync-checksum\":\"d98993cf8b5516b417533ebeeb0cfabe5c7ff6fa\" \"kubectl.kubernetes.io/last-applied-configuration\":\"\"] \"labels\":map[\"app\":\"echoserver\" \"flux.weave.works/sync-gc-mark\":\"sha256.yxyBpR0YQllFHVEBWXu_LxyedQX59j9IYMah3mi6rL4\"] \"name\":\"echoserver\" \"namespace\":\"helloworld2\"] \"spec\":map[\"ingress\":[map[\"ports\":[map[\"port\":'\\u1f90' \"protocol\":\"TCP\"]]]] \"podSelector\":map[\"matchLabels\":map[\"app\":\"echoserver\" \"component\":\"echoserver\"]] \"policyTypes\":[\"Ingress\" \"Egress\"]] \"apiVersion\":\"networking.k8s.io/v1\"]}\nfrom server for: \"STDIN\": networkpolicies.networking.k8s.io \"echoserver\" is forbidden: User \"system:serviceaccount:flux-system:flux\" cannot get resource \"networkpolicies\" in API group \"networking.k8s.io\" in the namespace \"helloworld2\"" output=
ts=2019-03-26T01:16:17.764813618Z caller=sync.go:521 component=cluster method=Sync cmd="kubectl apply -f -" took=186.921197ms err=null output="networkpolicy.networking.k8s.io/echoserver unchanged"
ts=2019-03-26T01:16:17.765076931Z caller=loop.go:210 component=sync-loop err="<cluster>:namespace/helloworld2: running kubectl: Error from server (Forbidden): error when retrieving current configuration of:\nResource: \"/v1, Resource=namespaces\", GroupVersionKind: \"/v1, Kind=Namespace\"\nName: \"helloworld2\", Namespace: \"\"\nObject: &{map[\"apiVersion\":\"v1\" \"kind\":\"Namespace\" \"metadata\":map[\"annotations\":map[\"flux.weave.works/sync-checksum\":\"eff788baf5f8906b3879d666e60b5407f7b9f294\" \"kubectl.kubernetes.io/last-applied-configuration\":\"\"] \"labels\":map[\"flux.weave.works/sync-gc-mark\":\"sha256.cFRnOwOXCB_oNpi52QLlUkjvODSLBIo1ntBRfzoPqp0\"] \"name\":\"helloworld2\" \"namespace\":\"\"]]}\nfrom server for: \"STDIN\": namespaces \"helloworld2\" is forbidden: User \"system:serviceaccount:flux-system:flux\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"helloworld2\"; helloworld2:service/echoserver: running kubectl: Error from server (Forbidden): error when retrieving current configuration of:\nResource: \"/v1, Resource=services\", GroupVersionKind: \"/v1, Kind=Service\"\nName: \"echoserver\", Namespace: \"helloworld2\"\nObject: &{map[\"metadata\":map[\"annotations\":map[\"kubectl.kubernetes.io/last-applied-configuration\":\"\" \"flux.weave.works/sync-checksum\":\"24d87352fb027d306ce7b03676b6d723dc59a799\"] \"labels\":map[\"app\":\"echoserver\" \"flux.weave.works/sync-gc-mark\":\"sha256.3KBCW83dRp2s10Zkls_vLADf8Wl0E9Agzz8sIT3wHkU\"] \"name\":\"echoserver\" \"namespace\":\"helloworld2\"] \"spec\":map[\"ports\":[map[\"name\":\"http\" \"port\":'\\u1f90' \"protocol\":\"TCP\" \"targetPort\":'\\u1f90']] \"selector\":map[\"app\":\"echoserver\" \"component\":\"echoserver\"] \"type\":\"ClusterIP\"] \"apiVersion\":\"v1\" \"kind\":\"Service\"]}\nfrom server for: \"STDIN\": services \"echoserver\" is forbidden: User \"system:serviceaccount:flux-system:flux\" cannot get resource \"services\" in API group \"\" in the namespace 
\"helloworld2\"; helloworld2:deployment/echoserver: running kubectl: Error from server (Forbidden): error when retrieving current configuration of:\nResource: \"apps/v1, Resource=deployments\", GroupVersionKind: \"apps/v1, Kind=Deployment\"\nName: \"echoserver\", Namespace: \"helloworld2\"\nObject: &{map[\"spec\":map[\"selector\":map[\"matchLabels\":map[\"component\":\"echoserver\" \"app\":\"echoserver\"]] \"strategy\":map[\"type\":\"RollingUpdate\"] \"template\":map[\"metadata\":map[\"labels\":map[\"app\":\"echoserver\" \"component\":\"echoserver\"]] \"spec\":map[\"automountServiceAccountToken\":%!q(bool=false) \"containers\":[map[\"image\":\"gcr.io/google-samples/hello-app:1.0\" \"name\":\"echoserver\" \"ports\":[map[\"containerPort\":'\\u1f90' \"name\":\"web\" \"protocol\":\"TCP\"]] \"securityContext\":map[\"allowPrivilegeEscalation\":%!q(bool=false)]]]]] \"replicas\":'\\x01'] \"apiVersion\":\"apps/v1\" \"kind\":\"Deployment\" \"metadata\":map[\"name\":\"echoserver\" \"namespace\":\"helloworld2\" \"annotations\":map[\"flux.weave.works/sync-checksum\":\"191df8fec32a1deda61e32f995d8ffb7ea160781\" \"flux.weave.works/tag.echoserver\":\"semver:*\" \"kubectl.kubernetes.io/last-applied-configuration\":\"\"] \"labels\":map[\"app\":\"echoserver\" \"component\":\"echoserver\" \"flux.weave.works/sync-gc-mark\":\"sha256.b3ah4IFHE0JJUNAUMz8Q7ikY7f9dDREvkAYx40-0FTA\"]]]}\nfrom server for: \"STDIN\": deployments.apps \"echoserver\" is forbidden: User \"system:serviceaccount:flux-system:flux\" cannot get resource \"deployments\" in API group \"apps\" in the namespace \"helloworld2\"; helloworld2:networkpolicy/echoserver: running kubectl: Error from server (Forbidden): error when retrieving current configuration of:\nResource: \"networking.k8s.io/v1, Resource=networkpolicies\", GroupVersionKind: \"networking.k8s.io/v1, Kind=NetworkPolicy\"\nName: \"echoserver\", Namespace: \"helloworld2\"\nObject: &{map[\"kind\":\"NetworkPolicy\" 
\"metadata\":map[\"annotations\":map[\"flux.weave.works/sync-checksum\":\"d98993cf8b5516b417533ebeeb0cfabe5c7ff6fa\" \"kubectl.kubernetes.io/last-applied-configuration\":\"\"] \"labels\":map[\"app\":\"echoserver\" \"flux.weave.works/sync-gc-mark\":\"sha256.yxyBpR0YQllFHVEBWXu_LxyedQX59j9IYMah3mi6rL4\"] \"name\":\"echoserver\" \"namespace\":\"helloworld2\"] \"spec\":map[\"ingress\":[map[\"ports\":[map[\"port\":'\\u1f90' \"protocol\":\"TCP\"]]]] \"podSelector\":map[\"matchLabels\":map[\"app\":\"echoserver\" \"component\":\"echoserver\"]] \"policyTypes\":[\"Ingress\" \"Egress\"]] \"apiVersion\":\"networking.k8s.io/v1\"]}\nfrom server for: \"STDIN\": networkpolicies.networking.k8s.io \"echoserver\" is forbidden: User \"system:serviceaccount:flux-system:flux\" cannot get resource \"networkpolicies\" in API group \"networking.k8s.io\" in the namespace \"helloworld2\""
ts=2019-03-26T01:16:17.767684483Z caller=daemon.go:624 component=daemon event="Sync: c89a1b3, <cluster>:namespace/helloworld2, helloworld2:deployment/echoserver, helloworld2:networkpolicy/echoserver, helloworld2:service/echoserver, helloworld:deployment/echoserver, helloworld:networkpolicy/echoserver, helloworld:service/echoserver" logupstream=false
ts=2019-03-26T01:16:20.95148383Z caller=loop.go:441 component=sync-loop tag=flux-sync old= new=c89a1b3362d904300f10a74c6daf3f2cdf900a18
ts=2019-03-26T01:16:22.190745263Z caller=loop.go:103 component=sync-loop event=refreshed url=git@github.com:2opremio/locked-down-flux.git branch=master HEAD=c89a1b3362d904300f10a74c6daf3f2cdf900a18

That said, those RBAC errors are unrelated to this PR.

2opremio added 3 commits Jan 16, 2019
* Rename `getAllowedNamespaces()` to `getAllowedAndExistingNamespaces()`
* Remove redundant namespace check
* Check for namespace existence when syncing
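The allow-list semantics described in the PR (namespaced resources are filtered, cluster-global resources are always kept) can be sketched as the following helper. This is an illustrative sketch, not Flux's actual code — `allowedNamespace` is a hypothetical name, and it glosses over the default-namespace ambiguity discussed above:

```go
package main

import "fmt"

// allowedNamespace is a hypothetical sketch of the filtering rule this PR
// extends: with no allow-list every namespace is permitted; cluster-global
// resources (empty namespace) are never filtered out; otherwise the
// resource's namespace must appear in the allow-list.
func allowedNamespace(allowed []string, ns string) bool {
	if len(allowed) == 0 || ns == "" {
		return true
	}
	for _, a := range allowed {
		if a == ns {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(allowedNamespace([]string{"helloworld"}, "helloworld"))  // true
	fmt.Println(allowedNamespace([]string{"helloworld"}, "helloworld2")) // false
	fmt.Println(allowedNamespace(nil, "anything"))                       // true
}
```

Note that, as the discussion above points out, distinguishing a genuinely cluster-global resource from one implicitly mapped to the `default` namespace is not always possible from the identifier alone.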
@2opremio 2opremio force-pushed the 2opremio:1471-extend-ns-filtering branch from 0696022 to a61e228 Mar 26, 2019
@2opremio 2opremio force-pushed the 2opremio:1471-extend-ns-filtering branch from f2272f7 to 0aaca0b Mar 26, 2019
squaremo (Member) left a comment

This is looking good to me 🥇 💯 🍍

+234 thoroughly discussed lines!

@2opremio 2opremio merged commit 0e601a6 into fluxcd:master Mar 27, 2019
1 check passed
ci/circleci: build — Your tests passed on CircleCI!
@2opremio 2opremio deleted the 2opremio:1471-extend-ns-filtering branch Mar 27, 2019