
Create an example of a python custom controller #334

Closed
dimberman opened this issue Aug 30, 2017 · 25 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@dimberman

I am attempting to create an example custom controller using the python-client, but I can't seem to find any Informer (https://github.com/kubernetes/client-go/tree/master/informers) endpoint in the Python client. Am I missing this endpoint, or has it not been added to the library yet?

If it does exist, I would very much like to contribute an example to encourage creation of custom controllers.

cc: @foxish @mbohlool

@mbohlool
Contributor

mbohlool commented Aug 31, 2017

Informer is a client-go concept, not a Kubernetes API concept (which is why it lives in the client-go repo). An informer is basically a watch (there is a watch example in the Python client). Actually, as far as I can tell there is no plain Informer but rather a SharedInformer, which is a more advanced concept that we do not have in the Python client; if anybody wants to implement that, I will spend dedicated review time on it :)
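
For reference, a minimal watch loop with the Python client looks roughly like this (a sketch only; the namespace and resource here are arbitrary):

    # Minimal watch loop with the Python client; assumes a kubeconfig is available.
    from kubernetes import client, config, watch

    config.load_kube_config()  # use config.load_incluster_config() when running in a pod
    v1 = client.CoreV1Api()

    for event in watch.Watch().stream(v1.list_namespaced_pod, namespace="default"):
        # Each event is a dict with "type" (ADDED/MODIFIED/DELETED) and "object".
        print(event["type"], event["object"].metadata.name)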

@karmab

karmab commented Oct 11, 2017

Is it enough to leverage the watch code? What is specific to the shared informer?

@mbohlool
Contributor

Shared informers add caching and connection sharing.

@mattmoor

FYI I am adding a little sample in this PR to bazelbuild/rules_k8s.

@mattmoor

Also I'm very open to feedback :)

@karmab

karmab commented Oct 11, 2017

@mattmoor thanks, this is exactly what I had in mind and was coming up with (I was testing with TPRs instead of CRDs, but CRDs are the way to go)!

@karmab

karmab commented Oct 12, 2017

@mbohlool OK, if I were to implement shared informers in this Python library, what would be the way to go?

@mbohlool
Contributor

It would basically be an extension to the Watch class that caches and shares watch calls. It would be amazing if the interface of the class did not change and it were smart enough to reuse an existing watch connection.
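
A very rough sketch of what such an extension might look like (purely illustrative; the SharedWatch class and its methods are hypothetical, not part of the library):

    import threading
    from kubernetes import watch

    class SharedWatch:
        """Hypothetical sketch: share one watch connection per list function and
        fan events out to every registered handler, keeping a simple cache of the
        last object seen per name (roughly what an informer store does)."""

        def __init__(self, list_func, **kwargs):
            self._list_func = list_func
            self._kwargs = kwargs
            self._handlers = []
            self.cache = {}
            self._started = False
            self._lock = threading.Lock()

        def add_handler(self, handler):
            with self._lock:
                self._handlers.append(handler)
                if not self._started:
                    # Start a single shared watch the first time a handler registers.
                    self._started = True
                    threading.Thread(target=self._run, daemon=True).start()

        def _run(self):
            for event in watch.Watch().stream(self._list_func, **self._kwargs):
                obj = event["object"]
                name = obj["metadata"]["name"] if isinstance(obj, dict) else obj.metadata.name
                self.cache[name] = obj
                for handler in list(self._handlers):
                    handler(event)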

@karmab

karmab commented Oct 16, 2017

@mattmoor @dimberman I've also created https://github.com/karmab/samplecontroller if you want to have a look.

@mattmoor

So my controller deployments aren't living terribly long. I was playing with a derivative of my example, and it is very slowly crash looping.

I'm not sure of the exact symptom, but I have seen messages about connection exhaustion (maybe a leak in the watch logic?).

Is there a more stable pattern than the one I'm using, or any useful debugging trick to better identify what's going on?

@karmab

karmab commented Oct 21, 2017 via email

@mattmoor

@karmab I assume you mean the combination of the outer while True and the resource_version logic to avoid reprocessing? Is resource_version from a single object a reasonable thing to rely upon?

@karmab

karmab commented Oct 21, 2017 via email

@mattmoor

@karmab I think that is more verbose than you need, try:

    resource_version = ''
    while True:
        stream = watch.Watch().stream(crds.list_cluster_custom_object,
                                      DOMAIN, VERSION, PLURAL,
                                      resource_version=resource_version)
        for event in stream:
            try:
                obj = event["object"]
                # Normal processing...

                # Configure where to resume streaming.
                metadata = obj.get("metadata")
                if metadata:
                    resource_version = metadata["resourceVersion"]
            except Exception:
                logging.exception("Error handling event")

I'm going to leave that running for a while and see if it does the trick :)

mattmoor added a commit to bazelbuild/rules_k8s that referenced this issue Oct 22, 2017
This updates the example to follow the pattern discussed here (kubernetes-client/python#334).
@karmab

karmab commented Oct 22, 2017

@mattmoor you are right, I changed my controller.

@marcellodesales

marcellodesales commented Mar 10, 2018

@karmab @mattmoor @mbohlool

My Requirement

I'm writing a controller that:

  • It watches for instances of itself (A)
  • When an (A) is created, it must create a related CRD instance for controllers (B), (C), and (D)
  • Beyond creation, (A) must watch the status changes of (B), (C), and (D)
  • When (B), (C), and (D) reach the desired state, (A) needs to report on that

Questions

  • So, should I create 4 different watchers?

    • I was brought here because of the informers API as well (suggested by @liggitt on sig-api-machinery)
  • Is there anything like the following?

        stream = watch.Watch().stream(crds.list_cluster_custom_object,
                                      [(DOMAIN_A, VERSION_A, PLURAL_A),
                                       (DOMAIN_B, VERSION_B, PLURAL_B),
                                       (DOMAIN_C, VERSION_C, PLURAL_C),
                                       (DOMAIN_D, VERSION_D, PLURAL_D)])

        for event in stream:
            try:
                obj = event["object"]
                kind = obj["kind"]
                if kind == "A":
                    A_handler(obj)
                elif kind == "B":
                    B_handler(obj)
                elif kind == "C":
                    C_handler(obj)
                elif kind == "D":
                    D_handler(obj)
            except Exception:
                logging.exception("Error handling event")

@mattmoor

mattmoor commented Mar 10, 2018

I'm not aware of a way to do that.

You can certainly use ~4 watches to accomplish this, but I'm also curious if there's a better way :)
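
One way to do it today is simply one thread per watch, all feeding a single queue; a rough sketch, where the domain/version/plural values and the dispatch are placeholders:

    import queue
    import threading
    from kubernetes import client, config, watch

    config.load_kube_config()
    crds = client.CustomObjectsApi()
    events = queue.Queue()

    def watch_crd(domain, version, plural):
        # One watch per resource; every event is funneled into the shared queue.
        for event in watch.Watch().stream(crds.list_cluster_custom_object,
                                          domain, version, plural):
            events.put((plural, event))

    for domain, version, plural in [("example.com", "v1", "acrds"),   # placeholders for A..D
                                    ("example.com", "v1", "bcrds"),
                                    ("example.com", "v1", "ccrds"),
                                    ("example.com", "v1", "dcrds")]:
        threading.Thread(target=watch_crd, args=(domain, version, plural),
                         daemon=True).start()

    while True:
        plural, event = events.get()
        # Dispatch to the right handler based on which watch produced the event.
        print(plural, event["type"], event["object"]["metadata"]["name"])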

@sebgoa
Contributor

sebgoa commented Mar 29, 2018

@mattmoor I went back to your example controller listed above; it does not seem to restart at the correct resourceVersion. It keeps restarting from the first CRD object created.

Have you seen this behavior before?

@mattmoor

No, and I've been predominantly working with Go-based controllers lately.

@mbohlool
Contributor

dup of #30?

@akaihola

akaihola commented Sep 28, 2018

Tip: the Metacontroller framework may help in some cases when writing custom controllers.

With Metacontroller, you can write your controller logic in any language.
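
For illustration, a Metacontroller sync hook is just a webhook that receives the parent object (plus its observed children) and returns the desired children and a parent status; a minimal sketch in Python, assuming Metacontroller's CompositeController request/response shape:

    # Minimal sketch of a Metacontroller CompositeController sync hook.
    # Assumes the standard contract: the request JSON carries "parent" (and "children"),
    # and the response JSON returns "status" plus the desired "children".
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class SyncHook(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers["Content-Length"]))
            parent = json.loads(body)["parent"]

            # Desired children: here, a single ConfigMap derived from the parent's spec.
            desired = [{
                "apiVersion": "v1",
                "kind": "ConfigMap",
                "metadata": {"name": parent["metadata"]["name"] + "-config"},
                "data": {"message": parent.get("spec", {}).get("message", "")},
            }]

            response = json.dumps({"status": {"children": len(desired)},
                                   "children": desired}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(response)

    HTTPServer(("", 8080), SyncHook).serve_forever()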

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 25, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 25, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
