
Filter by labels and annotations #123

Merged

Conversation

@dlmiddlecote (Contributor) commented Jun 30, 2019

Issue : #45

Description

Add the ability to specify labels or annotations that an object should have to trigger the execution of a handler.
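The intended semantics can be sketched in plain Python (illustrative names only, not the actual kopf API): a handler fires only if every required label or annotation is present on the object with the expected value.

```python
def metadata_matches(required, metadata):
    """Illustrative sketch: True if every required key/value pair
    is present in the object's labels (or annotations)."""
    return all(metadata.get(key) == value for key, value in required.items())

labels = {'app': 'web', 'tier': 'frontend'}
print(metadata_matches({'app': 'web'}, labels))   # True: required label present
print(metadata_matches({'app': 'db'}, labels))    # False: value differs
print(metadata_matches({'env': 'prod'}, labels))  # False: label absent
```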

Types of Changes

  • New feature (non-breaking change which adds functionality)
  • Documentation / non-code

Tasks

List of tasks you will do to complete the PR

  • Documentation
  • Tests
  • Add Example

Review

List of tasks the reviewer must do to review the PR

  • Tests
  • Documentation

@nolar - Would you be able to comment on the direction of the handler-matching code? I'm not entirely happy with it, and I feel you may be able to shed some light on a nicer way to handle this. Thanks 👍

@dlmiddlecote dlmiddlecote requested review from nolar and samurang87 as code owners Jun 30, 2019
@zincr (bot) commented Jun 30, 2019

馃 zincr found 0 problems , 1 warning

鈩癸笍 Large Commits
鉁 Approvals
鉁 Specification
鉁 Dependency Licensing

Details on how to resolve are provided below


Large Commits

Checks all commits for large additions to a single file. Large commits should be reviewed more carefully for potential copyright and licensing issues

This file contains a substantial change; please review it to determine whether the change comes from an external source and whether there are any copyright or licensing issues to be aware of.

@dlmiddlecote dlmiddlecote force-pushed the dlmiddlecote:45-filter-by-labels-annotations branch from 60e21ad to 877fc7a Jul 1, 2019
nolar (Contributor) left a comment

@dlmiddlecote Yes, this looks like the correct direction. There are a few notes below, written while reading the code, but those are mostly cosmetic.

It would also be nice to add an example to ./examples/10-filters/, showing how to use the label & annotation filters. Such examples are also useful for manual testing and for simulating mini-runs for one specific purpose, and they are used as end-to-end tests too (related: #126).

With an example, I can try running it, and see how it "feels" from the operator-developer's point of view.

PS: And what specifically does make you not happy with this code? Any doubts?

@@ -0,0 +1,45 @@
"""

@nolar (Contributor) commented Jul 3, 2019

Perhaps something should be said here. Or the docstring could be removed, which is also fine.

@@ -0,0 +1,45 @@
"""

@nolar (Contributor) commented Jul 3, 2019

This file seems to belong to kopf.structs, e.g. kopf.structs.filters; all the dict-manipulation methods are gathered there, so as not to overload the asyncio/k8s logic (see also #124 for some unrelated cleanups of the kopf.reactor).

PS: Also note that all structs are nouns for entities, not for activities, and mostly plural, just so they read conveniently, e.g. filters.match() or something like that. Unlike the reactor & engines packages, which are activities (or tasks).

    'field': (lambda handler, _, changed_fields: _matches_field(handler, changed_fields)),
    'labels': (lambda handler, body, _: _matches_metadata(handler, body, metadata_type='labels')),
    'annotations': (lambda handler, body, _: _matches_metadata(handler, body, metadata_type='annotations')),
}

@nolar (Contributor) commented Jul 3, 2019

Maybe it would be easier if these 3 filters were written just as 3 ifs in matches_filter(), calling the 3 relevant functions. I'm not sure it is worth generalising so much and introducing all the lambdas and meta-parametrization.
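The three-ifs suggestion could look roughly like this. It is a sketch only: the function names follow the snippet above, but the helper bodies here are illustrative, not the PR's actual implementation.

```python
from types import SimpleNamespace

def _matches_metadata(handler, body, metadata_type):
    # Illustrative: every required key/value pair must be on the object.
    required = getattr(handler, metadata_type)
    metadata = body.get('metadata', {}).get(metadata_type, {})
    return all(metadata.get(k) == v for k, v in required.items())

def _matches_field(handler, changed_fields):
    # Illustrative: the watched field must be among the changed ones.
    return handler.field in changed_fields

def matches_filter(handler, body, changed_fields):
    # Three plain ifs instead of a dict of lambdas; absence of a
    # criterion counts as a match.
    if handler.field and not _matches_field(handler, changed_fields):
        return False
    if handler.labels and not _matches_metadata(handler, body, 'labels'):
        return False
    if handler.annotations and not _matches_metadata(handler, body, 'annotations'):
        return False
    return True

handler = SimpleNamespace(field=None, labels={'app': 'web'}, annotations=None)
body = {'metadata': {'labels': {'app': 'web'}}}
print(matches_filter(handler, body, set()))  # True
```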

"""

def has_filter(handler):
    return bool(handler.field or handler.labels or handler.annotations)

@nolar (Contributor) commented Jul 3, 2019

So, here I had a little confusion while reading: the separation of has_filter() and matches_filter() is not clear. Why is it needed?

Generally, a handler has to be assumed as matching, unless proven otherwise. Absence of any filtering criterion (one of 3) is the same as matching, i.e. it is a part of the matching criteria:

    if handler.field and not _matches_field(...):
        return False

Or, more English-language-like version ("if [it] does not have a criterion or matches the criterion"):

    return (
        (not handler.field or _matches_field(...)) and
        (not handler.labels or _matches_labels(...)) and
        (not handler.annotations or _matches_annotations(...))
    )

Or maybe inside of the matching functions, as it is already done for if not metadata: below. And then it becomes:

    return (
        _matches_field(...) and
        _matches_labels(...) and
        _matches_annotations(...)
    )

with mocker.patch('kopf.reactor.matching.matches_filter') as matches_filter:
    matches_filter.return_value = True
    handlers = registry.get_cause_handlers(cause)

@nolar (Contributor) commented Jul 3, 2019

A little clarification here: with is not needed.

mocker is a fixture of the pytest-mock plugin. It has all the magic to restore the previous state after each test.

So, we can easily mocker.patch() stuff all around and not worry about un-patching it, unless it is needed in the same test in its original state, which is a rare case and a sign of test overload.

Such an approach keeps the tests straightforward and linear, without special code blocks or if-branches.
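The linear style can be sketched with the stdlib, so it runs outside pytest: pytest-mock's mocker.patch(...) behaves like mock.patch(...).start(), with the stop() done automatically after each test.

```python
from unittest import mock
import os

# Hypothetical target: os.getcwd stands in for matches_filter here.
patcher = mock.patch('os.getcwd', return_value='/fake/dir')
patcher.start()
assert os.getcwd() == '/fake/dir'   # patched; no `with` block needed
patcher.stop()                      # mocker does this teardown for you
assert os.getcwd() != '/fake/dir'   # original behaviour restored
```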

def cause_labels_and_annotations(resource, request):
    body = {'metadata': request.param}
    return Mock(resource=resource, event='some-event', diff=None, body=body)

@nolar (Contributor) commented Jul 3, 2019

If those fixtures are not reused across multiple tests, it might be more convenient and easier to put them (i.e. the simulated Cause mocks) directly into the test functions below, and parametrize those tests with @pytest.mark.parametrize(...).

That would also make the tests easier to read, as initialisation, action, and assertion would be gathered in one place within a ~5-line span.
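A parametrized test in that style might look like this sketch (pytest assumed; the test name and the required-labels stand-in are illustrative, not the PR's actual tests):

```python
import pytest
from unittest.mock import Mock

@pytest.mark.parametrize('metadata, expected', [
    ({'labels': {'app': 'web'}}, True),   # required label present
    ({'labels': {'app': 'db'}}, False),   # wrong value
    ({}, False),                          # no labels at all
])
def test_handler_matches_labels(metadata, expected):
    # Cause mock built inline: setup, action, and assertion in one place.
    cause = Mock(resource='some-resource', event='some-event',
                 body={'metadata': metadata})
    required = {'app': 'web'}             # illustrative handler filter
    labels = cause.body['metadata'].get('labels', {})
    assert all(labels.get(k) == v for k, v in required.items()) is expected
```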

@dlmiddlecote dlmiddlecote force-pushed the dlmiddlecote:45-filter-by-labels-annotations branch from 877fc7a to 641a4d9 Jul 3, 2019
@dlmiddlecote (Contributor, Author) commented Jul 4, 2019

@nolar - thanks for all the feedback, I've implemented the changes you've suggested, and I like it much more now (I didn't really like matching.py before, and filters.py feels much nicer). Also thank you for explaining more about mocker and the test layouts.

I'll add an example like you suggested too, and some documentation.

I also don't understand why the e2e tests are failing in Travis; when I run them locally, they pass. Do you have any ideas?

@dlmiddlecote dlmiddlecote force-pushed the dlmiddlecote:45-filter-by-labels-annotations branch from 1a8f5e3 to 274a498 Jul 4, 2019
dlmiddlecote added 13 commits Jun 30, 2019
@dlmiddlecote dlmiddlecote force-pushed the dlmiddlecote:45-filter-by-labels-annotations branch from 274a498 to 16b4ae1 Jul 9, 2019
dlmiddlecote added 4 commits Jul 9, 2019
@dlmiddlecote (Contributor, Author) commented Jul 14, 2019

@nolar - I'm finding that the e2e tests are a bit flaky; I get different ones failing at different times. I don't know if you've noticed this at all?

The last build of this PR fails with:

error: unable to recognize "peering.yaml": no matches for kind "KopfPeering" in version "zalando.org/v1"

However, the build in my repo is ok.

I also have other failures that seem to come down to timing (i.e. making the sleep inside test_examples.py longer seems to help).

@nolar (Contributor) commented Jul 14, 2019

@dlmiddlecote Yes, I've noticed that too.

Usually, re-running the failed build helps with this specific error ("unable to recognize peering.yaml"): minikube's k8s is somewhat slow to recognise new CRDs in a fraction of a second, and needs a second or two of delay. There is a button to re-trigger one individual build.

But it also fails for multiple other reasons in my new branches/PRs; I'm looking into how to make them more stable in general.

@dlmiddlecote (Contributor, Author) commented Jul 16, 2019

@nolar - Thanks for this info. Unfortunately, I don't have the ability to re-run the failed build within the zalando-incubator project. Would you be able to kick that off again, if possible?

Also, thanks for continuing to work on the stability of the e2e tests.

I think that this PR is ready for re-review now, if you'd be so kind.

@nolar (Contributor) commented Jul 16, 2019

@dlmiddlecote I've re-triggered. It is now green.

PS: With #149 merged to master, some of the e2e instabilities should go away (especially this peering.yaml error, as peering is now disabled for e2e tests, for a reason).

@nolar (Contributor) approved these changes Jul 24, 2019 and left a comment

Awesome! @dlmiddlecote Thanks for all the improvements.

@nolar nolar merged commit 8da86a0 into zalando-incubator:master Jul 24, 2019
3 checks passed:
  • zincr-bot: Found 0 problems, 1 warning
  • continuous-integration/travis-ci/pr: The Travis CI build passed
  • coverage/coveralls: Coverage increased (+0.02%) to 83.425%
@nolar (Contributor) commented Jul 24, 2019

@nolar nolar added the enhancement label Oct 22, 2019
@nolar nolar mentioned this pull request Nov 27, 2019
0 of 8 tasks complete