
🌱 run replication tests on a shared kcp instance #2620

Conversation

@p0lyn0mial (Contributor) commented Jan 13, 2023:

Summary

Previously, all scenarios ran in a private environment. After this change, only the disruptive scenarios run in a sandbox; the rest run on a shared kcp instance (either a single- or multi-shard environment).

This change allows us to run non-disruptive scenarios on a multi-shard cluster.
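
For context, here is a minimal sketch of how a scenario can pick its fixture under this split, assuming e2e helpers along the lines of framework.SharedKcpServer and framework.PrivateKcpServer (the helper names and signatures are assumptions for illustration, not taken from this diff):

```go
package e2e

import (
	"testing"

	"github.com/kcp-dev/kcp/test/e2e/framework"
)

// TestReplicationNonDisruptive only reads and replicates data, so it can
// safely run against the shared kcp instance (single- or multi-shard).
func TestReplicationNonDisruptive(t *testing.T) {
	t.Parallel()

	server := framework.SharedKcpServer(t) // assumed shared-fixture helper
	_ = server                             // ... run assertions against the shared server ...
}

// TestReplicationDisruptive restarts components, so it must not touch the
// shared instance and gets its own sandboxed, test-managed kcp server.
func TestReplicationDisruptive(t *testing.T) {
	server := framework.PrivateKcpServer(t) // assumed private-fixture helper
	_ = server                              // ... run the disruptive scenario in isolation ...
}
```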

Related issue(s)

#2596
kcp-dev/contrib-tmc#84

@p0lyn0mial (Contributor, Author) commented:

/test e2e-sharded

```go
ctx, cancel := context.WithCancel(context.Background())
t.Cleanup(cancel)

// TODO (p0lyn0mial): detect the type of the env we are running on (single vs multi-shard)
```

@p0lyn0mial (Contributor, Author) commented on the TODO above:

well, ci/prow/e2e-sharded keeps failing so I need to fix it in this PR :)
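
As an aside on the TODO above, one hypothetical way to detect the environment type is to count the Shard objects visible to the admin rest config; this is an illustration only, not the approach taken in this PR. The isMultiShard name is invented here, the import path for corev1alpha1 is a guess, and the sketch assumes rootCfg points at the workspace where Shard objects live:

```go
package framework

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"

	corev1alpha1 "github.com/kcp-dev/kcp/pkg/apis/core/v1alpha1"
)

// isMultiShard reports whether the environment under test runs more than
// one shard by listing Shard objects with a dynamic client.
func isMultiShard(ctx context.Context, rootCfg *rest.Config) (bool, error) {
	client, err := dynamic.NewForConfig(rootCfg)
	if err != nil {
		return false, fmt.Errorf("failed to build dynamic client: %w", err)
	}
	shards, err := client.Resource(corev1alpha1.SchemeGroupVersion.WithResource("shards")).
		List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, fmt.Errorf("failed to list shards: %w", err)
	}
	return len(shards.Items) > 1, nil
}
```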

@p0lyn0mial force-pushed the run-replication-tests-on-shared-instance branch 2 times, most recently from 83f84c1 to a18bd4a on January 16, 2023 at 12:20
@p0lyn0mial (Contributor, Author) commented:

/retest

A standalone version of the cache server is tested in a multi-shard environment (in the TestReplication test).
Adds a helper function for creating a rest config for the cache server, depending on the underlying test environment.
@p0lyn0mial force-pushed the run-replication-tests-on-shared-instance branch from a18bd4a to 7702dde on January 16, 2023 at 13:04
The CI jobs wait for the admin kubeconfig before running the tests.

Creating the kubeconfig after the shards are ready makes sure that the shards are registered and that the environment is stable.
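
A rough sketch of the kind of rest-config helper described in the first commit message above: pick the standalone cache server when one is available, otherwise fall back to the shard itself. The CacheServerRestConfig name, the CACHE_SERVER_KUBECONFIG variable, and the fallback behaviour are assumptions made for illustration, not the PR's actual implementation:

```go
package framework

import (
	"fmt"
	"os"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// CacheServerRestConfig returns a rest.Config for the cache server, depending
// on the environment the tests run against: a standalone cache server if its
// kubeconfig is provided, otherwise the cache endpoints served by the shard.
func CacheServerRestConfig(shardCfg *rest.Config) (*rest.Config, error) {
	if path := os.Getenv("CACHE_SERVER_KUBECONFIG"); path != "" { // hypothetical variable
		cfg, err := clientcmd.BuildConfigFromFlags("", path)
		if err != nil {
			return nil, fmt.Errorf("failed to load cache server kubeconfig %q: %w", path, err)
		}
		return cfg, nil
	}
	// No standalone cache server: reuse the shard's config. Copy it so callers
	// can mutate the returned config freely.
	return rest.CopyConfig(shardCfg), nil
}
```
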
@stevekuznetsov (Contributor) commented:

@p0lyn0mial xref kcp-dev/contrib-tmc#84

"Shard",
corev1alpha1.SchemeGroupVersion.WithResource("shards"),
&corev1alpha1.Shard{
ObjectMeta: metav1.ObjectMeta{Name: "test-shard"},
ObjectMeta: metav1.ObjectMeta{Name: withPseudoRandomSuffix("test-shard")},
A reviewer (Contributor) commented:

nit: can all or most of these use GenerateName?

@p0lyn0mial (Contributor, Author) replied:

Yes, we could do that, but it would require bigger changes because the tests need stable names so that they can find resources for modification/deletion.

I can prepare a follow-up PR for that.
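
For illustration, a sketch of what a helper like withPseudoRandomSuffix (used in the diff above) could look like, and why it keeps the stable-name property that GenerateName would give up; the actual helper in the PR may differ:

```go
package replication

import (
	"fmt"
	"math/rand"
)

// withPseudoRandomSuffix appends a short random suffix so that scenarios
// sharing one kcp instance do not collide on resource names. Unlike
// ObjectMeta.GenerateName, the test knows the final name before the object
// is created, so later Get/Update/Delete calls can use it directly.
func withPseudoRandomSuffix(name string) string {
	return fmt.Sprintf("%s-%04x", name, rand.Intn(1<<16))
}
```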

```go
			err := <-terminatedCh
			shardsErrCh <- indexErrTuple{i, err}
		}(i, terminatedCh)
	}
```

A reviewer (Member) asked:

why is this necessary?

@p0lyn0mial (Contributor, Author) replied:

I explained it in the commit message, have a look at eb96d8b.

Another reviewer (Contributor) added:

FWIW @hardys also has it in #2407
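
To make the snippet above easier to read in isolation, here is a reconstruction of the fan-in pattern around it: each shard's termination channel is drained in its own goroutine and reported together with the shard's index, so the fixture can tell which shard exited and with what error. Everything except indexErrTuple, shardsErrCh, and terminatedCh is invented for this sketch:

```go
package framework

import "fmt"

// indexErrTuple pairs a shard index with the error (possibly nil) its
// process terminated with.
type indexErrTuple struct {
	index int
	err   error
}

// collectShardErrors fans the per-shard termination channels into a single
// channel so a caller can wait for any shard to exit.
func collectShardErrors(terminatedChs []<-chan error) <-chan indexErrTuple {
	shardsErrCh := make(chan indexErrTuple, len(terminatedChs))
	for i, terminatedCh := range terminatedChs {
		go func(i int, terminatedCh <-chan error) {
			// Block until this shard's process terminates, then report its
			// index together with the termination error.
			err := <-terminatedCh
			shardsErrCh <- indexErrTuple{i, err}
		}(i, terminatedCh)
	}
	return shardsErrCh
}

// firstShardError is an example consumer: it fails on the first shard that
// terminates with a non-nil error.
func firstShardError(terminatedChs []<-chan error) error {
	errCh := collectShardErrors(terminatedChs)
	for range terminatedChs {
		if tuple := <-errCh; tuple.err != nil {
			return fmt.Errorf("shard %d terminated with error: %w", tuple.index, tuple.err)
		}
	}
	return nil
}
```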

```diff
 // intended to be common between fixture for servers whose lifecycle
 // is test-managed and fixture for servers whose lifecycle is managed
 // separately from a test run.
-func loadKubeConfig(kubeconfigPath string) (clientcmd.ClientConfig, error) {
+func LoadKubeConfig(kubeconfigPath, contextName string) (clientcmd.ClientConfig, error) {
```

A reviewer (Member) asked:

why do we need this? We shouldn't build a helper library for kube.

@p0lyn0mial (Contributor, Author) replied:

I wanted to avoid copying the code that loads a kubeconfig.


Another reviewer (Contributor) replied:

@sttts k8s libraries have just a little too much power, every project I've ever seen has their own simplification to not copy-paste ...
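
For reference, a sketch of what the exported LoadKubeConfig(kubeconfigPath, contextName string) helper from the diff can look like on top of client-go's clientcmd package; this illustrates the signature shown above rather than the exact code merged here:

```go
package framework

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

// LoadKubeConfig loads the kubeconfig at kubeconfigPath and pins it to the
// given context, returning a client config for that context.
func LoadKubeConfig(kubeconfigPath, contextName string) (clientcmd.ClientConfig, error) {
	config, err := clientcmd.LoadFromFile(kubeconfigPath)
	if err != nil {
		return nil, fmt.Errorf("failed to load kubeconfig %q: %w", kubeconfigPath, err)
	}
	if _, ok := config.Contexts[contextName]; !ok {
		return nil, fmt.Errorf("context %q not found in kubeconfig %q", contextName, kubeconfigPath)
	}
	return clientcmd.NewNonInteractiveClientConfig(*config, contextName, &clientcmd.ConfigOverrides{}, nil), nil
}
```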

@stevekuznetsov (Contributor) commented:

/lgtm
/approve

openshift-ci bot added the lgtm label (Indicates that a PR is ready to be merged.) on Jan 16, 2023
openshift-ci bot commented Jan 16, 2023:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: stevekuznetsov

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

openshift-ci bot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files.) on Jan 16, 2023
openshift-merge-robot merged commit 33064b5 into kcp-dev:main on Jan 17, 2023
kcp-ci-bot added the size/L label (Denotes a PR that changes 100-499 lines, ignoring generated files.) on Nov 23, 2023
Labels

approved: Indicates a PR has been approved by an approver from all required OWNERS files.
lgtm: Indicates that a PR is ready to be merged.
size/L: Denotes a PR that changes 100-499 lines, ignoring generated files.