
✨ WIP: Add Kubernetes Gomega extension to make testing controllers easier #1364

Closed
wants to merge 2 commits

Conversation

JoelSpeed
Contributor

This is an initial attempt at solving #933. I've revamped the design a little from the original to account for changes to the client since I first came up with this idea.

Essentially this is a Kubernetes wrapper on top of Gomega that makes it easy to write tests using envtest.

I've marked this as WIP for now because I would like feedback on the initial design (and naming), and I still need to add docs on how to use this, plus tests that both prove the matchers work and serve as examples of how to use them.

  • Add docs
  • Add tests

The outcome here is that inside a test, I should be able to write something like:

m.WithTimeout(timeout).Eventually(workerNode1).Should(WithField("Spec.Taints",
	ContainElement(SatisfyAll(
		WithField("Effect", Equal(corev1.TaintEffect("NoSchedule"))),
		WithField("Key", Equal("node.kubernetes.io/unschedulable")),
	)),
))

This checks that my controller eventually reconciles my node and adds the specified taint. Or, y'know, whatever you want to check. Internally, this polls until the Get returns no error and then returns the node; the WithField transform then lets me extract particular fields and assert on their values.
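
For illustration, here's a rough sketch of how that polling could work internally. The struct shape, WithTimeout, and the default timeout are illustrative only, not the final API of this PR:

package komega

import (
	"context"
	"time"

	"github.com/onsi/gomega"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// Matcher bundles a client with per-assertion settings. (Illustrative shape.)
type Matcher struct {
	Client  client.Client
	timeout time.Duration
}

// WithTimeout returns a copy, so the timeout applies only to this assertion chain.
func (m Matcher) WithTimeout(t time.Duration) Matcher {
	m.timeout = t
	return m
}

// Eventually returns an async assertion whose polled value is the freshly
// fetched object. Gomega retries the function until the match succeeds; a
// non-nil Get error simply fails that poll, which gives the "poll until the
// Get returns no error" behaviour described above.
func (m Matcher) Eventually(obj client.Object) gomega.AsyncAssertion {
	key := types.NamespacedName{Namespace: obj.GetNamespace(), Name: obj.GetName()}
	timeout := m.timeout
	if timeout == 0 {
		timeout = 10 * time.Second // illustrative default
	}
	return gomega.Eventually(func() (client.Object, error) {
		err := m.Client.Get(context.Background(), key, obj)
		return obj, err
	}, timeout)
}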

This adds a utility that is intended to be used with envtest to make it
easier for users to write tests.

The Matcher wraps common operations that you might perform with Gomega when
interacting with Kubernetes, allowing simpler test assertions. It also lets
people access fields within the objects fetched by the Matcher, so that
specific fields can be compared during tests.
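
To make the field-extraction idea concrete, here is a rough reflection-based sketch of what a WithField matcher could look like. This illustrates the technique only; the actual implementation in this PR may differ:

package komega

import (
	"fmt"
	"reflect"
	"strings"

	"github.com/onsi/gomega/types"
)

// WithField resolves a dot-separated path (e.g. "Spec.Taints") on the actual
// object and delegates matching of the extracted value to the inner matcher.
func WithField(path string, matcher types.GomegaMatcher) types.GomegaMatcher {
	return &withFieldMatcher{path: path, matcher: matcher}
}

type withFieldMatcher struct {
	path    string
	matcher types.GomegaMatcher
}

// extract walks the field path with reflection, dereferencing pointers as it goes.
func (m *withFieldMatcher) extract(actual interface{}) (interface{}, error) {
	v := reflect.ValueOf(actual)
	for _, field := range strings.Split(m.path, ".") {
		for v.Kind() == reflect.Ptr {
			v = v.Elem()
		}
		if v.Kind() != reflect.Struct {
			return nil, fmt.Errorf("%s: cannot access field %q on kind %s", m.path, field, v.Kind())
		}
		v = v.FieldByName(field)
		if !v.IsValid() {
			return nil, fmt.Errorf("%s: no field named %q", m.path, field)
		}
	}
	return v.Interface(), nil
}

func (m *withFieldMatcher) Match(actual interface{}) (bool, error) {
	value, err := m.extract(actual)
	if err != nil {
		return false, err
	}
	return m.matcher.Match(value)
}

func (m *withFieldMatcher) FailureMessage(actual interface{}) string {
	value, _ := m.extract(actual)
	return fmt.Sprintf("field %s: %s", m.path, m.matcher.FailureMessage(value))
}

func (m *withFieldMatcher) NegatedFailureMessage(actual interface{}) string {
	value, _ := m.extract(actual)
	return fmt.Sprintf("field %s: %s", m.path, m.matcher.NegatedFailureMessage(value))
}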
@k8s-ci-robot
Contributor

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Jan 29, 2021
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: JoelSpeed
To complete the pull request process, please assign vincepri after the PR has been reviewed.
You can assign the PR to them by writing /assign @vincepri in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Jan 29, 2021
@JoelSpeed
Contributor Author

/assign @vincepri since I know you commented on the original issue I proposed. I'm looking for initial feedback on whether this is something we still want to do and whether the interfaces seem reasonable.

)

// Matcher has Gomega Matchers that use the controller-runtime client.
type Matcher struct {
Member

I think Matcher should be unexported here, and the initialized struct should be returned as a non-pointer value from a builder method. Given the repetitive and parallel nature of tests, it would be good to always know you are working with a unique komega instance with an exact client, extras, or timeout.

Contributor Author

Hmm, that's a good point. At the moment we are using pointer receivers everywhere, which means builder calls mutate the parent. Perhaps we should stop doing that, so that calling one of the builder-style methods never affects the matcher it's called on; that way you could compose different settings for each call if need be. See the sketch below.

I expect people will create a new matcher for each test anyway (either in a BeforeEach in Ginkgo or at the start of the test in normal test suites), but a NewMatcher method might make this cleaner. Good suggestion.
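
Roughly what I mean, as a sketch (names and fields are illustrative): value receivers on the builder methods, so each call returns a modified copy and never mutates the matcher it was called on.

package komega

import (
	"time"

	"sigs.k8s.io/controller-runtime/pkg/client"
)

// matcher is unexported, per the suggestion above; NewMatcher is the only
// way to build one.
type matcher struct {
	client  client.Client
	timeout time.Duration
}

// NewMatcher returns the matcher by value.
func NewMatcher(c client.Client) matcher {
	return matcher{client: c}
}

// WithTimeout uses a value receiver: it modifies and returns a copy, so the
// matcher it was called on (the "parent") is never affected.
func (m matcher) WithTimeout(t time.Duration) matcher {
	m.timeout = t
	return m
}

// Usage: each chain composes its own settings without touching base.
//   base := NewMatcher(k8sClient)
//   short := base.WithTimeout(5 * time.Second) // base keeps its own timeout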


// Create creates the object on the API server.
func (m *Matcher) Create(obj client.Object, opts ...client.CreateOption) gomega.GomegaAssertion {
	err := m.Client.Create(m.context(), obj, opts...)
	return gomega.Expect(err, m.extras...)
}
Contributor Author

Currently these will only work with the default package-level Gomega assertions; we should add the option of passing in a custom gomega.WithT as well, so this can be used in normal Go testing tests.
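
Sketching what I mean, using recent Gomega names (gomega.Gomega, NewWithT); treat the exact fields and signatures here as illustrative:

package komega

import (
	"context"
	"testing"

	"github.com/onsi/gomega"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// Matcher carries an optional Gomega instance alongside the client.
type Matcher struct {
	Client client.Client
	g      gomega.Gomega
}

// WithT returns a copy bound to this test's own Gomega instance, so the
// matcher works in plain "testing" tests as well as Ginkgo suites.
func (m Matcher) WithT(t *testing.T) Matcher {
	m.g = gomega.NewWithT(t)
	return m
}

// Create creates the object and asserts via the bound Gomega instance when
// one was supplied, falling back to the package-level default otherwise.
func (m Matcher) Create(obj client.Object, opts ...client.CreateOption) gomega.Assertion {
	err := m.Client.Create(context.Background(), obj, opts...)
	if m.g != nil {
		return m.g.Expect(err)
	}
	return gomega.Expect(err)
}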

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 16, 2021
@JoelSpeed
Contributor Author

/remove-lifecycle stale

I would still like to contribute this at some point if it's useful. Additional feedback on the initial design would be appreciated.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 17, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 15, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 14, 2021
@joelanford
Member

/lifecycle frozen

This seems pretty useful, so I don't want this to be auto-closed by the ci-robot.

@JoelSpeed is this still a WIP? If you're not actively making changes, it might be good to remove the WIP designation to signal reviewers that it's ready for review.

@k8s-ci-robot
Contributor

@joelanford: The lifecycle/frozen label cannot be applied to Pull Requests.

In response to this:

/lifecycle frozen

This seems pretty useful, so I don't want this to be auto-closed by the ci-robot.

@JoelSpeed is this still a WIP? If you're not actively making changes, it might be good to remove the WIP designation to signal reviewers that it's ready for review.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@JoelSpeed
Contributor Author

There's some feedback I wanted to address about what's exported that probably shouldn't be. I'll try to get to that this week, then I'll remove the WIP tag.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closed this PR.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
