Handle error events #16
Ok, so if I understood this correctly, this means generating error events for the pod resource that is running the operator, i.e. printing the errors that way. Is that even possible, lol? I've been checking this for a couple of hours and I can't make sense of it with all the wrangler stuff in the middle. @mudler any pointers that I can have a look at?
I'm not sure about the wrangler context, but with the standard k8s API that would involve using "k8s.io/client-go/tools/record" to stream events back from the controller. The event recorder can be retrieved from the manager. For now we could also think about status reporting, so we can at least propagate the error back into the object status, but reporting to events is something we should have in place as a general mechanism.
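For reference, this is roughly what wiring up an event recorder with plain client-go looks like when no controller-runtime manager is available. This is a sketch under the assumption that a standard clientset is reachable from the operator; the function name `newRecorder` and the component name `"my-operator"` are placeholders, and the wrangler context may need different plumbing:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/tools/record"
)

// newRecorder builds an EventRecorder without a manager: the broadcaster
// writes Events through the clientset, and the recorder stamps them with
// the operator's component name.
func newRecorder(clientset kubernetes.Interface) record.EventRecorder {
	broadcaster := record.NewBroadcaster()
	broadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{
		Interface: clientset.CoreV1().Events(""),
	})
	return broadcaster.NewRecorder(scheme.Scheme,
		corev1.EventSource{Component: "my-operator"})
}
```

The controllers would then call something like `recorder.Event(obj, corev1.EventTypeWarning, "SyncFailed", err.Error())` to attach the error to the affected object.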
Yeah, that much I could find about the "standard way" of getting the recorder from the manager, but I'm pretty lost on where to init that in our context with all the autogenerated stuff. As far as I can see, we have no manager around in the wrangler context? Anyway, I'll keep looking.
I'm trying to have a look at how to make the recorder generally available while working on #23, but I won't cover propagating errors back from all the involved parts, just the new component.
It is included here: #26, although I haven't added tests for it yet.
closed by #27 |
Currently, errors during syncing are printed in the operator logs, but ideally they should be streamed back as Kubernetes events on the corresponding resource. Besides, there are many `TODO`s in the code relating exactly to this, so we should pass the errors on to the event reporter.

Follow-up of #14