Watch hooks logs and log release-checking status in helm install/upgrade #3481
There are two issues related to log streaming that we are experiencing while using helm.
Watch hooks logs
Generally, we have a project that uses helm for deploys (https://github.com/flant/dapp). In our scheme, the deploy runs as follows:
It works OK, but we strongly believe that showing these logs is helm's job.
So there is an idea to use a hook-job annotation like
Log streaming could then occur directly in the install/upgrade release gRPC action using server-side streaming.
Check release state and show status during checking
There is a related logging issue about checking release state (
Release checking may also log some status info about what is being checked and what we are waiting for. It need not be pod logs; it could be container statuses, or the replica count that needs to be reached and the current replica count for a deployment.
Again, in our project we have done some work on Deployment status watching (https://github.com/flant/dapp/blob/master/lib/dapp/kube/kubernetes/manager/deployment.rb#L33). After helm install/upgrade is done and the thread that watches hook logs has finished, we run a deployment-watching procedure.
It is similar to what
The main purpose of this logging is to show the end user enough info during an install/upgrade operation that they can diagnose problems and fix templates without reaching for kubectl to dig into what is wrong.
And again, we strongly believe that helm in
There is also the related issue #2298, but a separate helm command to show logs would not make helm a more user-friendly deployment tool that works out of the box.
Here is an update to the related-PR list about logging:
Here are my team's experiments with a first implementation: https://github.com/kubernetes/helm/pull/3479/files.
It contains all the layers needed to pass information through the chain: kube-client - KubeClient interface - tiller ReleaseServer - helm-client. The implementation of job watching in kube-client is the most complex part and contains experimental code for now.
This feature aims to bring the following possibilities for helm users:
This feature is useful for CI/CD, where helm is used as an end-user deployment tool. Streaming all resources' logs and kubernetes system events could also be useful for some purposes, but that scope of release monitoring is not addressed in this PR.
The way it works is the same as in #3263 -- a streamed response to the install/upgrade request. The main problem with this way of streaming is backward compatibility between different versions of helm-client and tiller (https://github.com/kubernetes/helm/pull/3263/files#diff-e337e5a949100848dad15963f6fc8b02R64). I had actually begun experimenting with getting info from tiller in the same way, and then received a link to that issue from people who had already worked on the problem.
Here is my team's proposal for that problem:
We implemented the complex part related to job/pod/container watching. The result is the JobWatchMonitor and PodWatchMonitor primitives.
On the client side helm prints logs and errors the same way as
The overall job-watch procedure is separated from WatchUntilReady into WatchJobsUntilReady (https://github.com/kubernetes/helm/pull/3479/files#diff-73ee28a4d39ab9ab84ba5a6f9dee3867R365), because the JobWatchMonitor primitive makes the code more specific and focused on the end goal of sequentially watching hook jobs, and this primitive should watch job-related events by itself.
Passing logs and events from kube-client to the release server is implemented through a WatchFeed interface. It is actually just a set of callbacks, because there is really no need to create Go channels at this level.
For now this feature works only for install requests. Only "copy-paste" is needed to add streaming to upgrade and rollback, so that is not the main problem. Annotation
The main problem for now is API compatibility, and I think I need some feedback to integrate this work properly. What do you think about this feature generally speaking? It is about 80% done and needs some adjustments; maybe something is done wrong.
What about extending the install/upgrade/rollback gRPC procedures into server-side streamed versions and giving these procedures different names? The helm client would use the extended versions only if they are available on tiller. What is wrong with that idea?
I also think some parts of the log streaming can be reused for PR #2342 too.
@distorhead I just cherry-picked your patches onto the latest 2.9 release and added pre/post-upgrade hook support. It works like a charm! Really looking forward to seeing the feature incorporated into master.
Cool, thanks for trying it out!
Actually, the jobs watch is only the first step. The next step could be support for watching all other resources besides Job. This watch code should also be written so that it can be used not only for "Watch hooks logs" but also for "Check release state and show status during checking".
But as a first step, the current implementation can already be useful for users who want to watch job hooks only. The main problems to solve for this step:
Actually, my team is here and we are ready to solve all of that. For now, any work on this is frozen until feedback is received, because someone responsible among the helm maintainers should agree, disagree, or make corrections to this direction of work.
Apologies for the delay on a response. The core team (including myself) has been focusing solely on getting an alpha release of Helm 3 out the door, so it has been much harder to dedicate resources to peer review and design work for significant contributions to Helm 2 at this time. We highly suggest bringing proposals/discussions to the weekly public dev calls if you need feedback: https://github.com/helm/community/blob/master/communication.md
If this endpoint would break existing gRPC endpoints, what about holding off until an alpha release of Helm 3 is available and rebasing this work against Helm 3? That way we won't have to write an API negotiation layer between helm and tiller... especially since tiller is being removed.
We need this functionality, so we had to create a separate project and start using it. This code has since been debugged, tested, and improved a lot (for example, it now uses kubernetes informers, a reliable primitive from the kubernetes client library, instead of the raw kubernetes watch API).
The kubedog project is mainly a library that provides an API to enable resource tracking in an application. There is also a CLI interface to these tracking functions. The library can track Jobs, Deployments, StatefulSets, and DaemonSets.
So if someone needs these functions right now -- try it out and give some feedback ;)
As for the future of this issue: when Helm 3 is released, we can try to reintegrate these functions.