
cache-server: expose apiresourceschemas and apiexports #1815

Merged · 2 commits · Aug 25, 2022

Conversation

@p0lyn0mial (Contributor) commented Aug 23, 2022

Summary

Exposes schema-less apiresourceschemas and apiexports resources, which allows us to store any fields we need. Since this server will be used by the shards, schemas will be implicit.

I did the following manual test, which shows that storing "foo": "bar" on an APIExport object works.

POST:

curl -k -X POST 'https://localhost:6443/apis/apis.kcp.dev/v1alpha1/apiexports' -H 'Content-Type: application/json' -d '{"apiVersion":"apis.kcp.dev/v1alpha1","kind":"APIExport","metadata":{"name":"ab"},"foo":"bar"}'

GET:

curl -k 'https://localhost:6443/apis/apis.kcp.dev/v1alpha1/apiexports/ab'
{
  "apiVersion": "apis.kcp.dev/v1alpha1",
  "foo": "bar",
  "kind": "APIExport",
  "metadata": {
    "annotations": {
      "kcp.dev/cluster": "system:admin"
    },
    "creationTimestamp": "2022-08-24T08:50:09Z",
    "generation": 1,
    "managedFields": [
      {
        "apiVersion": "apis.kcp.dev/v1alpha1",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:foo": {}
        },
        "manager": "curl",
        "operation": "Update",
        "time": "2022-08-24T08:50:09Z"
      }
    ],
    "name": "ab",
    "resourceVersion": "11",
    "uid": "b4b9447e-5ea8-4b73-a996-eb4a98eae3e0"
  }
}

Note: for now, passing a logical cluster name is not required; in the future it might be required along with a shard name.

Related issue(s)

requires kcp-dev/kubernetes#93

@openshift-ci openshift-ci bot requested review from ncdc and sttts August 23, 2022 15:59
@p0lyn0mial p0lyn0mial removed the request for review from ncdc August 23, 2022 15:59
@p0lyn0mial (author):

/hold

I still need to test it manually.

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Aug 23, 2022

return wait.PollInfiniteWithContext(ctx, time.Second, func(ctx context.Context) (bool, error) {
for _, crd := range crds {
_, err := apiExtensionsClusterClient.Cluster(SystemCRDLogicalCluster).ApiextensionsV1().CustomResourceDefinitions().Create(ctx, crd, metav1.CreateOptions{})
p0lyn0mial (author):

Creating CRDs with empty schemas is cheap and doesn't require creating an in-memory storage implementation.

It can be reworked in the future if we don't like it.
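For illustration, an empty-schema CRD of this kind boils down to an openAPIV3Schema that accepts and preserves arbitrary fields. A hypothetical manifest sketch (names and scope are assumptions for illustration, not copied from this PR):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: apiexports.apis.kcp.dev
spec:
  group: apis.kcp.dev
  scope: Cluster          # assumption; check the actual CRD in config/crds
  names:
    plural: apiexports
    singular: apiexport
    kind: APIExport
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          # no properties listed: unknown fields (e.g. "foo": "bar") are
          # accepted and stored as-is instead of being pruned
          x-kubernetes-preserve-unknown-fields: true
```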

p0lyn0mial (author):

the storage is required for listing CRDs and for discovery (not implemented yet).

@p0lyn0mial force-pushed the cache-exports-schemas branch 2 times, most recently from 675859b to f162ca2 (August 24, 2022 09:19)
@p0lyn0mial p0lyn0mial mentioned this pull request Aug 24, 2022
37 tasks
@p0lyn0mial (author):

/hold cancel

@openshift-ci openshift-ci bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Aug 24, 2022
@stevekuznetsov (Contributor) left a comment

Mostly seems reasonable, some small comments

config/crds/bootstrap.go (outdated)
configcrds "github.com/kcp-dev/kcp/config/crds"
)

var SystemCRDLogicalCluster = logicalcluster.New("system:system-crds")
Contributor:

This is a constant somewhere already, could we import it?

p0lyn0mial (author):

I think we should treat it as a distinct value; I picked the same logical cluster name for symmetry.

Contributor:

A comment to explain that would be great! Makes sense.

pkg/cache/server/bootstrap/bootstrap.go (outdated)
pkg/cache/server/bootstrap/bootstrap.go
		Type:                   "object",
		XPreserveUnknownFields: pointer.BoolPtr(true),
	},
} // wipe the schema, we don't need validation
Contributor:

Is this the only diff from the other bootstrap functions? could we DRY this out?

p0lyn0mial (author):

we could call CreateSingle from this function and retry on any error.

@@ -125,17 +143,31 @@ func NewConfig(opts *cacheserveroptions.CompletedOptions) (*Config, error) {
serverConfig.LoopbackClientConfig.DisableCompression = true
clientutils.EnableMultiCluster(serverConfig.LoopbackClientConfig, &serverConfig.Config, "namespaces", "apiservices", "customresourcedefinitions", "clusterroles", "clusterrolebindings", "roles", "rolebindings", "serviceaccounts", "secrets")

var err error
c.ApiExtensionsClusterClient, err = apiextensionsclient.NewClusterForConfig(serverConfig.LoopbackClientConfig)
Contributor:

nit: we're moving to the context-based clients instead of NewCluster - can we do that here as well to save Varsha some work in the future? :)

p0lyn0mial (author):

np, mind sharing an example?

Contributor:

Look for any use of logicalcluster.WithCluster() e.g.:

	return c.kubeClusterClient.CoreV1().Namespaces().Patch(logicalcluster.WithCluster(ctx, clusterName), name, pt, data, opts, subresources...)

p0lyn0mial (author):

Okay, but I need a wildcard logical cluster for informers; it seems like informers don't support the new way of passing a cluster name.

Contributor:

Yes, that is the one case where this approach does not work

Contributor:

#1813 fixes that :)

p0lyn0mial (author):

do we need 2 clients until it merges?

Contributor:

Not sure I follow actually - you need a cluster-scoped client for the factory:

# github.com/kcp-dev/kcp/pkg/client/informers
func NewSharedInformerFactoryWithOptions( ... versioned.Interface, ... ) {}

Where you can get such a thing from .Cluster(logicalcluster.Wildcard) which should be identical to SetCluster() on the *rest.Config, no?

# github.com/kcp-dev/kcp/pkg/client/clientset/versioned
func (c *Cluster) Cluster(name v2.Name) Interface {}

So NewClusterForConfig(cfg).Cluster(logicalcluster.Wildcard) would become NewClientForConfig(kcpclienthelper.SetCluster(rest.CopyConfig(cfg), logicalcluster.Wildcard))?

p0lyn0mial (author):

Something is broken; I think I have to use it along with NewClusterRoundTripper, but then the path seems to be incorrect: /clusters/*/clusters/system:system-crds/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apiresourceschemas.apis.kcp.dev

I'd like to proceed with this PR if that's okay; I can revisit using the context-based clients in the future.

pkg/cache/server/server.go (outdated)
// run inside of a post-start-hook. The k8s APIServer wrote the post-start-hook context code before contexts
// were part of the Go stdlib.
-func goContext(parent genericapiserver.PostStartHookContext) context.Context {
+func GoContext(parent genericapiserver.PostStartHookContext) context.Context {
p0lyn0mial (author):

I don't like creating common/utils pkgs :P

Contributor:

You and @sttts both ... but I like copy-pasting something 100x even less than I like creating a utility package ;)

p0lyn0mial (author):

moved to a util pkg

Member:

util packages 😱

@ncdc (Member) commented Aug 25, 2022

@p0lyn0mial if you wouldn't mind, please prefix the PR titles with the area they're for (it makes it easier to know at a glance) - thanks!

/retitle cache-server: expose apiresourceschemas and apiexports

@openshift-ci openshift-ci bot changed the title expose apiresourceschemas and apiexports cache-server: expose apiresourceschemas and apiexports Aug 25, 2022
@stevekuznetsov (Contributor) left a comment

Other comments can/will be follow-ups.

import (
	"context"

	genericapiserver "k8s.io/apiserver/pkg/server"
Contributor:

nit: If I had a crystal ball, I think I'd make the prophecy that Stefan would want this helper to end up in this upstream package ... :^)

Member:

How about moving that into the upstream package?

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Aug 25, 2022
@p0lyn0mial p0lyn0mial added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Aug 25, 2022
openshift-ci bot commented Aug 25, 2022

[APPROVALNOTIFIER] This PR is APPROVED

Approval requirements bypassed by manually added approval.

This pull-request has been approved by: stevekuznetsov


@stevekuznetsov (Contributor):

#342
