
DelegatingReader can't be used to read unstructured.Unstructured from cache #615

Closed
alvaroaleman opened this issue Sep 30, 2019 · 8 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@alvaroaleman
Member

The delegatingReader always redirects read requests for unstructured.Unstructured to the API reader:

_, isUnstructured := obj.(*unstructured.Unstructured)

However, it seems the InformersMap is perfectly capable of establishing watches for unstructured.Unstructured:

_, isUnstructured := obj.(*unstructured.Unstructured)
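The routing both snippets hinge on can be sketched without controller-runtime as a reader that type-asserts on the object and delegates. All names below are hypothetical stand-ins for illustration, not the real controller-runtime types:

```go
package main

import "fmt"

// Object stands in for runtime.Object (hypothetical, for illustration only).
type Object interface{}

// Unstructured mimics unstructured.Unstructured.
type Unstructured struct{ Content map[string]interface{} }

// Reader mimics the read half of client.Client; it returns which
// backend served the request so the routing is visible.
type Reader interface {
	Get(name string, obj Object) string
}

// apiReader stands in for the uncached API reader.
type apiReader struct{}

func (apiReader) Get(name string, obj Object) string { return "api" }

// cacheReader stands in for the informer-backed cache.
type cacheReader struct{}

func (cacheReader) Get(name string, obj Object) string { return "cache" }

// delegatingReader routes Unstructured to the API reader and
// everything else to the cache, mirroring the quoted type assertion.
type delegatingReader struct {
	CacheReader  Reader
	ClientReader Reader
}

func (d delegatingReader) Get(name string, obj Object) string {
	if _, isUnstructured := obj.(*Unstructured); isUnstructured {
		return d.ClientReader.Get(name, obj)
	}
	return d.CacheReader.Get(name, obj)
}

func main() {
	r := delegatingReader{CacheReader: cacheReader{}, ClientReader: apiReader{}}
	type Pod struct{}
	fmt.Println(r.Get("foo", &Unstructured{})) // api
	fmt.Println(r.Get("bar", &Pod{}))          // cache
}
```

The issue is precisely that the `isUnstructured` branch always takes the API-reader path, even though the cache could serve it.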

In a quick test, I created a client that simply uses a Cache as its Reader and requested unstructured.Unstructured through it; everything seemed to work fine and I was able to get objects this way:

I0930 12:05:21.031942    2216 reflector.go:131] Starting reflector *unstructured.Unstructured (10h0m0s) from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196
I0930 12:05:21.031972    2216 reflector.go:169] Listing and watching *unstructured.Unstructured from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196

/kind bug
CC @thetechnick

@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Sep 30, 2019
@shawn-hurley

To get this behavior, you have to construct the delegating client yourself on startup. Here is an example of how to do it:

	import (
		"k8s.io/client-go/rest"

		"sigs.k8s.io/controller-runtime/pkg/cache"
		"sigs.k8s.io/controller-runtime/pkg/client"
		"sigs.k8s.io/controller-runtime/pkg/manager"
	)

	// cfg is a *rest.Config obtained elsewhere.
	mgr, err := manager.New(cfg, manager.Options{
		// NewClient replaces the default client with one whose Reader is the
		// cache itself, so cached reads work for unstructured.Unstructured too.
		NewClient: func(cache cache.Cache, config *rest.Config, options client.Options) (client.Client, error) {
			c, err := client.New(config, options)
			if err != nil {
				return nil, err
			}
			return &client.DelegatingClient{
				Reader:       cache, // all reads, including unstructured, hit the cache
				Writer:       c,     // writes go straight to the API server
				StatusClient: c,
			}, nil
		},
	})

We chose to make this the default behavior to make sure that folks were not caching the entire cluster by accident with unstructured.

@alvaroaleman
Member Author

I can work around the problem, but this issue is about avoiding the workaround :)

We chose to make this the default behavior to make sure that folks were not caching the entire cluster by accident with unstructured.

Generally I find this a very valid concern, but why do you think this is more likely and/or more problematic when people use unstructured.Unstructured?

I would have argued that the types that create a particularly high amount of traffic, like Pods or Endpoints, are mostly in the corev1 API, and people tend not to have very high numbers of CRs.

@alvaroaleman
Member Author

Also, I personally find the behavior of automatically creating watches pretty magic in itself, and IMHO it's something people need to be aware of. Adding a sub-condition there mainly makes it more complicated to understand, but is not very likely to prevent people from using it wrong.

@shawn-hurley

Generally I find this a very valid concern, but why do you think this is more likely and/or more problematic when people use unstructured.Unstructured?

It is easier to have a generic list of things that you go and get with unstructured than having to get the type for all of those things with the client.

@alvaroaleman
Member Author

It is easier to have a generic list of things that you go and get with unstructured than having to get the type for all of those things with the client.

I do not really follow; do you have an example of what you mean there? There is nothing preventing people from using a []runtime.Object, since the client resolves the GVK via the scheme after all.

@DirectXMan12
Contributor

So the initial logic was that the common case for unstructured is that you need to read basically arbitrary objects from the cluster from "untrusted" input (things like the garbage collector). In this case, you could potentially accidentally cache the entire cluster, which seemed like a major footgun, and we wanted to avoid it.
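The footgun can be sketched with a toy cache that, like the real one, lazily starts one informer per GVK it is asked about; each informer lists and watches every object of that kind. This is a self-contained mimic, not the real controller-runtime cache:

```go
package main

import "fmt"

// informerCache mimics the cache's lazy behavior: the first Get for a
// GVK starts an informer that lists and watches ALL objects of that kind.
type informerCache struct {
	informers map[string]bool
}

func (c *informerCache) get(gvk, name string) {
	if !c.informers[gvk] {
		c.informers[gvk] = true
		fmt.Printf("starting informer for %s (caches every %s in the cluster)\n", gvk, gvk)
	}
}

func main() {
	c := &informerCache{informers: map[string]bool{}}
	// A GC-like controller following arbitrary ownerReferences ends up
	// touching many kinds, and each one spawns a cluster-wide informer.
	for _, gvk := range []string{"v1/Pod", "v1/Secret", "apps/v1/Deployment"} {
		c.get(gvk, "some-object")
	}
	fmt.Println("informers started:", len(c.informers))
}
```

With unstructured driven by untrusted input, the set of requested GVKs is unbounded, so the cache's memory footprint can approach the whole cluster.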

@DirectXMan12
Contributor

Hopefully that helps provide some background.

@alvaroaleman
Member Author

Okay, thanks for the explanation.
