This repository has been archived by the owner. It is now read-only.

Filter by labels and annotations #123

Merged

Conversation

dlmiddlecote
Contributor

@dlmiddlecote dlmiddlecote commented Jun 30, 2019

Issue : #45

Description

Add the ability to specify labels or annotations that an object should have to trigger the execution of a handler.
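In pure-Python terms, the matching this adds boils down to a subset check of the required labels/annotations against the object's metadata. A minimal sketch (function name and data shapes are illustrative, not the PR's actual code):

```python
def matches(required: dict, actual: dict) -> bool:
    """True if every required key is present in `actual` with the same value."""
    return all(actual.get(key) == value for key, value in required.items())

# An object's metadata, as found in a typical Kubernetes body:
metadata = {'labels': {'app': 'web', 'tier': 'frontend'}}

matches({'app': 'web'}, metadata['labels'])   # True: required subset matches
matches({'app': 'db'}, metadata['labels'])    # False: value differs
```

A handler declaring such a filter would then only be invoked for objects whose metadata passes this check.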

Types of Changes

  • New feature (non-breaking change which adds functionality)
  • Documentation / non-code

Tasks

List of tasks you will do to complete the PR

  • Documentation
  • Tests
  • Add Example

Review

List of tasks the reviewer must do to review the PR

  • Tests
  • Documentation

@nolar - Would you be able to comment on the direction of the handler matching code? I'm not entirely happy with it, and I feel you may be able to shed some light on a nicer way to handle this. Thanks 👍

@zincr

@zincr zincr bot commented Jun 30, 2019

馃 zincr found 0 problems , 1 warning

鈩癸笍 Large Commits
鉁 Approvals
鉁 Specification
鉁 Dependency Licensing

Details on how to resolve are provided below


Large Commits

Checks all commits for large additions to a single file. Large commits should be reviewed more carefully for potential copyright and licensing issues

This file contains a substantial change; please review to determine whether the change comes from an external source and whether there are any copyright or licensing issues to be aware of.

@dlmiddlecote dlmiddlecote force-pushed the 45-filter-by-labels-annotations branch from 60e21ad to 877fc7a on Jul 1, 2019
Contributor

@nolar nolar left a comment

@dlmiddlecote Yes, it looks like the correct direction. There are a few notes below, written while reading the code, but those are mostly cosmetic.

It would also be nice to add an example to ./examples/10-filters/, showing how to use the label & annotation filters. Examples are also useful for testing manually, for simulating mini-runs for one specific purpose, and as end-to-end tests (related: #126).

With an example, I can try running it, and see how it "feels" from the operator-developer's point of view.

PS: And what specifically does make you not happy with this code? Any doubts?

@@ -0,0 +1,45 @@
"""
Contributor

@nolar nolar Jul 3, 2019


Perhaps something should be said here; removing the docstring is also fine.

@@ -0,0 +1,45 @@
"""
Contributor

@nolar nolar Jul 3, 2019


This file seems to belong in kopf.structs, e.g. kopf.structs.filters: all the dict-manipulation methods are gathered there, so as not to overload the asyncio/k8s logic (see also #124 for some unrelated cleanups of kopf.reactor).

PS: Also note that all structs are nouns for entities, not activities, and mostly plural, just so they read conveniently, e.g. filters.match() or something like that. Unlike the reactor & engines packages, which are activities (or tasks).

'field': (lambda handler, _, changed_fields: _matches_field(handler, changed_fields)),
'labels': (lambda handler, body, _: _matches_metadata(handler, body, metadata_type='labels')),
'annotations': (lambda handler, body, _: _matches_metadata(handler, body, metadata_type='annotations')),
}
Contributor

@nolar nolar Jul 3, 2019


Maybe it would be easier if these 3 filters were written just as 3 ifs in matches_filter(), calling the 3 relevant functions. I'm not sure it is worth generalising so much and introducing all the lambdas and meta-parametrisation.
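The suggestion above could look roughly like this (the handler attributes and helper names are assumptions based on the snippet under review, not the PR's final code; the field check is deliberately simplified):

```python
from types import SimpleNamespace

def _matches_metadata(handler, body, metadata_type):
    # Subset check: every required label/annotation must be present with the same value.
    required = getattr(handler, metadata_type) or {}
    metadata = body.get('metadata', {}).get(metadata_type, {})
    return all(metadata.get(key) == value for key, value in required.items())

def matches_filter(handler, body, changed_fields):
    # Three plain ifs instead of a dict of lambdas: a handler matches
    # unless one of its declared criteria rules it out.
    if handler.field and handler.field not in changed_fields:  # simplified field check
        return False
    if handler.labels and not _matches_metadata(handler, body, 'labels'):
        return False
    if handler.annotations and not _matches_metadata(handler, body, 'annotations'):
        return False
    return True

handler = SimpleNamespace(field=None, labels={'app': 'web'}, annotations=None)
body = {'metadata': {'labels': {'app': 'web'}}}
matches_filter(handler, body, changed_fields=())   # True
```

With this layout, each criterion is one readable condition, and adding a new filter kind is a fourth if rather than another lambda in a mapping.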

"""

def has_filter(handler):
return bool(handler.field or handler.labels or handler.annotations)
Contributor

@nolar nolar Jul 3, 2019


So, here I had a little confusion while reading: the separation of has_filter() and matches_filter() is not clear. Why is it needed?

Generally, a handler has to be assumed to match unless proven otherwise. The absence of any filtering criterion (one of the 3) is the same as matching, i.e. it is part of the matching criteria:

    if handler.field and not _matches_field(...):
        return False

Or, more English-language-like version ("if [it] does not have a criterion or matches the criterion"):

    return (
        (not handler.field or _matches_field(...)) and
        (not handler.labels or _matches_labels(...)) and
        (not handler.annotations or _matches_annotations(...))
    )

Or maybe inside the matching functions, as is already done for if not metadata: below. Then it becomes:

    return (
        _matches_field(...) and
        _matches_labels(...) and
        _matches_annotations(...)
    )


with mocker.patch('kopf.reactor.matching.matches_filter') as matches_filter:
matches_filter.return_value = True
handlers = registry.get_cause_handlers(cause)
Contributor

@nolar nolar Jul 3, 2019


A little clarification here: with is not needed.

mocker is a fixture of the pytest-mock plugin. It has all the magic to restore the previous state after each test.

So we can easily mocker.patch() stuff all around and not worry about un-patching it, unless it is needed in the same test in its original state, which is a rare case and a sign of test overload.

Such an approach makes the tests straightforward and linear, without special code blocks or if-branches.
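Under the hood, mocker.patch() is roughly unittest.mock.patch() with the stop() bookkeeping done automatically at test teardown. A self-contained sketch of that manual bookkeeping (patching json.dumps purely for illustration):

```python
import json
from unittest import mock

patcher = mock.patch.object(json, 'dumps', return_value='PATCHED')
patcher.start()                 # this is what mocker.patch() does for you
print(json.dumps({'a': 1}))     # PATCHED
patcher.stop()                  # this is what mocker undoes automatically at teardown
print(json.dumps({'a': 1}))     # {"a": 1}
```

With the mocker fixture, the start/stop pair disappears from the test body, which is why the `with` block is unnecessary.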

tests/registries/test_handler_matching.py (resolved)
def cause_labels_and_annotations(resource, request):
body = {'metadata': request.param}
return Mock(resource=resource, event='some-event', diff=None, body=body)

Contributor

@nolar nolar Jul 3, 2019


If those fixtures are not reused across multiple tests, it might be more convenient and easier to put them (i.e. the simulated Cause mocks) directly into the test functions below, and parametrize those tests with @pytest.mark.parametrize(...).

That would also make the tests easier to read, as initialisation, action, and assertion would be gathered in one place within a ~5-line span.
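A sketch of that layout (the matcher and the mock's shape are hypothetical; only the parametrization pattern matters):

```python
from unittest.mock import Mock

import pytest

def label_matches(cause, required):
    # Hypothetical stand-in for the matcher under test.
    labels = cause.body.get('metadata', {}).get('labels', {})
    return all(labels.get(key) == value for key, value in required.items())

@pytest.mark.parametrize('metadata, expected', [
    ({'labels': {'somelabel': 'somevalue'}}, True),
    ({'labels': {'somelabel': 'othervalue'}}, False),
    ({}, False),
])
def test_label_matching(metadata, expected):
    # The simulated cause is built inline: initialisation, action,
    # and assertion all sit within a few lines of each other.
    cause = Mock(event='some-event', diff=None, body={'metadata': metadata})
    assert label_matches(cause, {'somelabel': 'somevalue'}) is expected
```

Each parameter tuple replaces one fixture variant, so the test file needs no shared cause fixtures at all.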

@dlmiddlecote dlmiddlecote force-pushed the 45-filter-by-labels-annotations branch from 877fc7a to 641a4d9 on Jul 3, 2019
@dlmiddlecote
Contributor Author

@dlmiddlecote dlmiddlecote commented Jul 4, 2019

@nolar - thanks for all the feedback, I've implemented the changes you've suggested, and I like it much more now (I didn't really like matching.py before, and filters.py feels much nicer). Also thank you for explaining more about mocker and the test layouts.

I'll add an example like you suggested too, and some documentation.

I also do not understand why the E2E tests are failing in Travis; when I run them locally, they pass. Do you have any ideas?

@dlmiddlecote dlmiddlecote force-pushed the 45-filter-by-labels-annotations branch from 1a8f5e3 to 274a498 on Jul 4, 2019
@dlmiddlecote dlmiddlecote force-pushed the 45-filter-by-labels-annotations branch from 274a498 to 16b4ae1 on Jul 9, 2019
@dlmiddlecote
Contributor Author

@dlmiddlecote dlmiddlecote commented Jul 14, 2019

@nolar - I'm finding that the e2e tests are a bit flaky; I get different ones failing at different times. I don't know if you've noticed this at all?

The last build of this PR fails with:

error: unable to recognize "peering.yaml": no matches for kind "KopfPeering" in version "zalando.org/v1"

However, the build in my repo is ok.

I also have other failures that seem to be down to timing (i.e. increasing the sleep inside test_examples.py seems to help).

@nolar
Contributor

@nolar nolar commented Jul 14, 2019

@dlmiddlecote Yes, I've noticed that too.

Usually, re-running the failed build helps with this specific error ("unable to recognize peering.yaml": minikube's k8s is somewhat slow to recognise new CRDs in a fraction of a second; it needs a second or two of delay). There is a button to re-trigger an individual build.
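One way to avoid fixed sleeps for this class of error is to block on the CRD's Established condition before creating its objects. A hypothetical sequence (the file and CRD names are inferred from the error message, not taken from the repo):

```shell
# Apply the CRD, then block until the API server actually serves it,
# instead of sleeping for a fixed interval before creating KopfPeering objects:
kubectl apply -f crd.yaml
kubectl wait --for=condition=Established crd/kopfpeerings.zalando.org --timeout=60s
kubectl apply -f peering.yaml
```

This makes the setup step deterministic regardless of how slowly the cluster registers new CRDs.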

But it also fails for multiple other reasons in my new branches/PRs; I'm looking into how to make the builds more stable in general.

@dlmiddlecote
Contributor Author

@dlmiddlecote dlmiddlecote commented Jul 16, 2019

@nolar - Thanks for this info, I don't have the ability to re-run the failed build within the zalando-incubator project unfortunately. Would you be able to kick that off again, if possible?

Also, thanks for continuing to work on the stability of the e2e tests.

I think that this PR is ready for re-review now, if you'd be so kind.

@nolar
Contributor

@nolar nolar commented Jul 16, 2019

@dlmiddlecote I've re-triggered. It is now green.

PS: With #149 merged to master, some of the e2e instabilities should go away (especially this peering.yaml error, as peering is now disabled for e2e tests, for a reason).

nolar
nolar approved these changes Jul 24, 2019
Contributor

@nolar nolar left a comment

Awesome! @dlmiddlecote Thanks for all the improvements.

@nolar nolar merged commit 8da86a0 into zalando-incubator:master Jul 24, 2019
3 checks passed
Labels: enhancement
Projects: none yet
3 participants