
🌱 End-to-end tests for APIExportEndpointSlice #2608

Merged: 1 commit merged into kcp-dev:main on Jan 30, 2023

Conversation

@fgiloux (Contributor) commented Jan 12, 2023

Summary

This PR introduces end-to-end tests covering APIExportEndpointSlice reconciliation.

Related issue(s)

Contributes to #2332

@fgiloux (Contributor, Author) commented Jan 12, 2023

/hold will be rebased on top of #2469 when it has merged

@openshift-ci bot added the do-not-merge/hold label (indicates that a PR should not merge because someone has issued a /hold command) on Jan 12, 2023
@openshift-ci bot added the kind/api-change label (categorizes issue or PR as related to adding, removing, or otherwise changing an API) on Jan 12, 2023
@fgiloux force-pushed the slice-e2e branch 4 times, most recently from 86a6d9e to c274fcf on January 12, 2023 15:40
@fgiloux (Contributor, Author) commented Jan 12, 2023

Shard creation introduces side effects in other end-to-end tests; this can be addressed in #2612.
This PR could either proceed without the check for the endpoint URLs (currently commented out) or wait for #2612 to be addressed.

@fgiloux force-pushed the slice-e2e branch 8 times, most recently from 49fb946 to 1e23c0c on January 16, 2023 19:29
@fgiloux (Contributor, Author) commented Jan 16, 2023

/unhold

@openshift-ci bot removed the do-not-merge/hold label on Jan 16, 2023
ObjectMeta: metav1.ObjectMeta{
Name: "my-export",
},
Spec: apisv1alpha1.APIExportSpec{},
Contributor:

is this required?

Contributor Author:

No, it is not. Removing it.

Contributor Author:

done

mergePatch, err := jsonpatch.CreateMergePatch(encodeJSON(t, slice), encodeJSON(t, patchedSlice))
require.NoError(t, err)

t.Logf("Try to patch the APIExportEndpointSlice to point at a Partition")
Contributor:

nit: would rename to "Patching ..." for consistency with the previous comments

}
}

func newPrivateEnvironment(t *testing.T) *environment {
Contributor:

I'd remove environment, addWorkspacesAndClient, addRootClient and I'd change the signature of createAndTestResources method to accept clients. I think it would be easier to read and extend in the future.

Contributor Author:

The paths to the two logical clusters would also need to be passed as parameters. The code of addWorkspacesAndClient would need to be replicated in both test cases.
I am not sure that the result would improve readability and extensibility. To me it is more of a style preference. Do you have any concrete point in mind where it would make a difference?


t.Logf("Retrying to create the APIExportEndpointSlice after the APIExport has been created")
require.Eventually(t, func() bool {
slice, err = sliceClient.Cluster(e.partitionClusterPath).Create(ctx, slice, metav1.CreateOptions{})
Contributor:

nit: since we are creating the slice here would it make sense to change the name of the cluster to serviceProviderPath?

Contributor Author:

I am not sure. Both workspaces, the one with the APIExport and the one with the Partition and APIExportEndpointSlice, are managed by the service provider.

t.Logf("Adding a Partition to the APIExportEndpointSlice")
patchedSlice := slice.DeepCopy()
patchedSlice.Spec.Partition = partition.Name
mergePatch, err := jsonpatch.CreateMergePatch(encodeJSON(t, slice), encodeJSON(t, patchedSlice))
Contributor:

nit: I'd just use an update; it seems we could get rid of the encodeJSON methods.

Contributor Author:

True

Contributor Author:

done

require.NoError(t, err)
return conditions.IsTrue(s, apisv1alpha1.APIExportValid)
}, wait.ForeverTestTimeout, 100*time.Millisecond, "expected valid APIExport")

Contributor:

Shouldn't we examine the APIExportEndpoints at this point?

Contributor Author:

For that we would need to make assumptions about the shards in a shared environment, which I think is better to avoid as long as we have not defined clear conventions.
Endpoint population is specifically tested in the private environment. If we could do it here we would not need the private cluster. I would prefer not to need it, but that comes back to having clear conventions consistently followed (possibly enforced) across the e2e tests, e.g.:

  • 3 shards with constant labels, never modified nor deleted
  • any shard created/modified/deleted during testing uses different labels
  • e2e tests should not be sensitive to shards created/modified in other test cases: nothing should get scheduled on them implicitly; without explicit scheduling, workspaces should get created on the original 3 shards.

Contributor:

In general we can assume that an environment created by the test binary is stable (either single- or multi-shard) and doesn't change. Tests that are destructive should set up their own environment.

Validating that an APIExportEndpointSlice without a Partition has all shards assigned seems worth checking.

Contributor:

Is it? The test case appears to be adding a shard that matches a partition assigned to an APIExportEndpointSlice.

This is not the same as "validating that an APIExportEndpointSlice without a Partition has all (default) shards assigned", or is it?

Contributor Author:

Added an end-to-end test for APIExportEndpointSlice without partition in the private environment.

require.Eventually(t, func() bool {
s, err := e.kcpClusterClient.Cluster(e.partitionClusterPath).ApisV1alpha1().APIExportEndpointSlices().Get(ctx, slice.Name, metav1.GetOptions{})
require.NoError(t, err)
return conditions.IsFalse(s, apisv1alpha1.PartitionValid) && conditions.GetReason(s, apisv1alpha1.PartitionValid) == apisv1alpha1.PartitionInvalidReferenceReason
Contributor:

Shouldn't we examine the APIExportEndpoints with an invalid partition?

Contributor Author:

I can check that there are no endpoints.

Contributor Author:

done

require.NoError(t, err)
return len(s.Status.APIExportEndpoints) == 0
}, wait.ForeverTestTimeout, 100*time.Millisecond, "not expecting any endpoint")

Contributor:

Would it make sense to add a scenario with a partition that selects some shards? It could be just 1 in a multi-shard environment.

Contributor Author:

As per the above, this is covered by the private server.

@fgiloux force-pushed the slice-e2e branch 2 times, most recently from 8b484a0 to 6e73137 on January 17, 2023 14:42
e.addWorkspacesAndClient(t)
e.addRootClient(t)

slice := e.createAndTestResources(ctx, t)
Contributor:

Hmm, are we already testing it in TestAPIExportEndpointSlice?

Contributor Author:

These set up the prerequisites for the tests that are run in the private environment.

@fgiloux force-pushed the slice-e2e branch 2 times, most recently from 71638de to 149bf32 on January 18, 2023 17:48
}, wait.ForeverTestTimeout, 100*time.Millisecond, "expecting a single endpoint for the root shard, got %d", len(sliceWithAll.Status.APIExportEndpoints))
}

func TestAPIExportEndpointSlice(t *testing.T) {
Member:

I have a hard time seeing what these tests actually do. How are TestAPIExportEndpointSlice and TestAPIExportEndpointSlicePrivate different?

Hiding the core test logic in helpers is an anti-pattern. Build helpers (DSL-like) to build up the test setup, but put the actual thing tested here, as high level as possible.

Member:

And this core test code should read like a story, i.e. not full of helpers again; rather, it should be a series of steps plus require.SomeThing lines.

Contributor Author:

TestAPIExportEndpointSlicePrivate manipulates Shards, hence the need for a private cluster. It requires, however, the APIExport, APIExportEndpointSlice and other resources. The building of these resources is what is tested in TestAPIExportEndpointSlice. Lukasz explicitly asked for the tests to be run on both a private and a shared cluster (for the part compatible with it). Without this ask, TestAPIExportEndpointSlice would not be needed.

"github.com/kcp-dev/kcp/test/e2e/framework"
)

type environment struct {
Member:

Most tests where we used this pattern turned out to be hard to read. As written below, don't give in to the temptation to put the actual tests (= do X + require statements) into helpers. This rule of thumb basically excludes this pattern of abstracting the environments.

Contributor Author:

Removed the struct and put everything in the test cases, no functions anymore.


// Create Organization and Workspaces
orgPath, _ := framework.NewOrganizationFixture(t, server)
var exportWs *tenancyv1alpha1.Workspace
Contributor:

nit: you could remove the declaration since the variable will be created on line 49

Contributor Author:

done

var exportWs *tenancyv1alpha1.Workspace
exportClusterPath, exportWs := framework.NewWorkspaceFixture(t, server, orgPath, framework.WithName("export-workspace"))
t.Logf("Export Workspace %v", exportWs)
var partitionWs *tenancyv1alpha1.Workspace
Contributor:

nit: you could remove the declaration since the variable will be created on line 49

Contributor Author:

done

orgPath, _ := framework.NewOrganizationFixture(t, server)
var exportWs *tenancyv1alpha1.Workspace
exportClusterPath, exportWs := framework.NewWorkspaceFixture(t, server, orgPath, framework.WithName("export-workspace"))
t.Logf("Export Workspace %v", exportWs)
Contributor:

NewWorkspaceFixture already logs into the log file, i.e. Created %s workspace %s as /clusters/%s

Contributor:

besides it looks like exportWs is unused :)

Contributor Author:

removed

t.Logf("Export Workspace %v", exportWs)
var partitionWs *tenancyv1alpha1.Workspace
partitionClusterPath, partitionWs := framework.NewWorkspaceFixture(t, server, orgPath, framework.WithName("partition"))
t.Logf("Partition Workspace %v", partitionWs)
Contributor:

NewWorkspaceFixture already logs into the log file, i.e. Created %s workspace %s as /clusters/%s

Contributor:

besides it looks like partitionWs is unused :)

Contributor Author:

removed

// Create Organization and Workspaces
orgPath, _ := framework.NewOrganizationFixture(t, server)
var exportWs *tenancyv1alpha1.Workspace
exportClusterPath, exportWs := framework.NewWorkspaceFixture(t, server, orgPath, framework.WithName("export-workspace"))
Contributor:

Could we remove framework.WithName("export-workspace")? I think that would allow running the test multiple times without side effects.

Contributor Author (@fgiloux, Jan 27, 2023):

ok.

require.Eventually(t, func() bool {
s, err := kcpClusterClient.Cluster(partitionClusterPath).ApisV1alpha1().APIExportEndpointSlices().Get(ctx, slice.Name, metav1.GetOptions{})
require.NoError(t, err)
return len(s.Status.APIExportEndpoints) == 0
Contributor:

What is the point of doing this call inside require.Eventually, since it will stop the test after the first iteration?

Contributor Author:

changed

"github.com/kcp-dev/kcp/test/e2e/framework"
)

func TestAPIExportEndpointSlice(t *testing.T) {
Contributor:

I would rename to TestAPIExportEndpointSliceWithPartition

Contributor Author:

changed

"github.com/kcp-dev/kcp/test/e2e/framework"
)

func TestAPIExportEndpointSlice(t *testing.T) {
Contributor:

It looks like we are missing a happy-case scenario: an SP creates a slice and it gets assigned all/default endpoints.

Contributor Author:

I think we discussed this point earlier. It is in the private test case, where shards are guaranteed not to be modified by any other test.

Contributor:

The destructive scenario is different. I think it is about checking whether adding a shard that matches the partition requirements results in adding endpoints to the slice.

I think we are missing a default case: when I add a slice, will it be filled with endpoints that correspond to the current number of shards?

The default case should work in single- and multi-shard environments and should not add, remove or modify shards.

Contributor Author:

The scenario is covered here: https://github.com/kcp-dev/kcp/pull/2608/files#diff-43025ba07d287623f79a84eaa3f3616ca31e4402bd6326683517094a172261efR289-R309
I don't see the point in making the shared cluster test fragile on purpose.

Contributor:

Oh, sorry, I missed it. Wouldn't it be better to have scenarios with a well-defined, limited scope :) ?


// Create Organization and Workspaces
orgPath, _ := framework.NewOrganizationFixture(t, server)
var exportWs *tenancyv1alpha1.Workspace
Contributor:

the same comments apply here

Contributor Author:

ack

},
}

t.Logf("Creating an APIExportEndpointSlice with reference to a nonexistent APIExport")
Contributor:

Could we simplify this test case and test only what is specific to it, i.e. that adding a shard that matches the partition requirements results in adding endpoints to the slice?

Contributor Author (@fgiloux, Jan 27, 2023):

I can remove a bit of it, but the prerequisites still need to be set up.

require.Eventually(t, func() bool {
s, err := kcpClusterClient.Cluster(partitionClusterPath).ApisV1alpha1().APIExportEndpointSlices().Get(ctx, slice.Name, metav1.GetOptions{})
require.NoError(t, err)
return len(s.Status.APIExportEndpoints) == 1
Contributor:

Shouldn't we check the content of s.Status.APIExportEndpoints? It should match shard.Spec.VirtualWorkspaceURL and the export name.

Contributor Author:

added

@p0lyn0mial (Contributor) left a comment:

Also, should we add a few negative scenarios (can be a new PR), like testing

?

@fgiloux (Contributor, Author) commented Jan 27, 2023

Also, should we add a few negative scenarios (can be a new PR), like testing

Do we want to replicate the unit tests in the e2e tests? I am a bit concerned that they already take a significant amount of time to run.

Signed-off-by: Frederic Giloux <fgiloux@redhat.com>
@p0lyn0mial (Contributor)

/lgtm

@sttts (Member) commented Jan 30, 2023

/approve

@openshift-ci bot added the lgtm label (indicates that a PR is ready to be merged) on Jan 30, 2023
@openshift-ci bot commented Jan 30, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: sttts

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci bot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Jan 30, 2023
@fgiloux (Contributor, Author) commented Jan 30, 2023

/retest

1 similar comment
@fgiloux (Contributor, Author) commented Jan 30, 2023

/retest

@openshift-merge-robot openshift-merge-robot merged commit 22d06c1 into kcp-dev:main Jan 30, 2023
@fgiloux deleted the slice-e2e branch on February 7, 2023 15:24
@fgiloux mentioned this pull request on Feb 10, 2023
Labels: approved · kind/api-change · lgtm

4 participants