
helm status stopped showing containers #6896

Closed
mshakhmaykin opened this issue Nov 6, 2019 · 14 comments
Assignees
Labels
bug Categorizes issue or PR as related to a bug. v2.x Issues and Pull Requests related to the major version v2
Milestone

Comments

mshakhmaykin commented Nov 6, 2019

Couldn't find it in the latest changelog, but:

Before the upgrade to 2.16, it used to show more detailed information about running pods. Now I need to run kubectl -n <namespace> get pods -l app=<app-name> to find out whether my containers started and passed their liveness probes. Previously I could just run helm status <release>. Any chance of returning to the original behavior?

Output of helm version:
Client: &version.Version{SemVer:"v2.16.0", GitCommit:"e13bc94621d4ef666270cfbe734aaabf342a49bb", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.0", GitCommit:"e13bc94621d4ef666270cfbe734aaabf342a49bb", GitTreeState:"clean"}

Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T23:41:55Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.7-eks-e9b1d0", GitCommit:"e9b1d0551216e1e8ace5ee4ca50161df34325ec2", GitTreeState:"clean", BuildDate:"2019-09-21T08:33:01Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.):
EKS

@bacongobbler bacongobbler added bug Categorizes issue or PR as related to a bug. v2.x Issues and Pull Requests related to the major version v2 labels Nov 6, 2019
@bacongobbler bacongobbler added this to the 2.16.1 milestone Nov 6, 2019
hayorov commented Nov 7, 2019

After this change, helm status is a ridiculously useless command; it requires running kubectl get afterwards.

LAST DEPLOYED: Wed Nov  6 20:22:21 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                 AGE
hlf-ord--ord         15h
hlf-orderer-genesis  15h

==> v1/Deployment
NAME     AGE
hlf-ord  15h

==> v1/PersistentVolumeClaim
NAME     AGE
hlf-ord  15h

==> v1/Pod(related)
NAME                      AGE
hlf-ord-655f9d5856-tscnz  2m15s

==> v1/Secret
NAME                   AGE
hlf-orderer-admincert  15h
hlf-orderer-cacert     15h
hlf-orderer-cert       15h
hlf-orderer-key        15h

==> v1/Service
NAME     AGE
hlf-ord  15h

==> v1beta1/Ingress
NAME     AGE
hlf-ord  15h

@bacongobbler
Member

Fixed via #6897, which will be available in 2.16.1.


mshakhmaykin commented Nov 15, 2019

This didn't help, unfortunately.

$ helm version
Client: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
$ helm status service-app
LAST DEPLOYED: Mon Oct 14 17:40:40 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME         AGE
service-app  31d

==> v1/Pod(related)
NAME                          AGE
service-app-687549b68d-k4wk6  21m
service-app-687549b68d-kz5jp  30d

==> v1/Service
NAME         AGE
service-app  31d

bacongobbler commented Nov 15, 2019

weird; bbdfe5e is part of the 2.16.1 branch. Perhaps the printer on Kubernetes' end changed again. Did you deploy the app with 2.16.1?

Re-opening for further investigation.

@bacongobbler bacongobbler reopened this Nov 15, 2019
@bacongobbler bacongobbler modified the milestones: 2.16.1, 2.16.2 Nov 15, 2019
thuandt commented Nov 15, 2019

@bacongobbler I have the same issue with 2.16.1.

My application used helm v2.14.3; after I upgraded helm to v2.16.1, the pod status was gone.

cridam commented Nov 29, 2019

The Deployment and Service columns have also disappeared. Will 2.16.2 fix this?
i.e., before 2.16.x:
==> v1/Service
NAME            TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)   AGE
mymicroservice  ClusterIP  xxx         <none>       8443/TCP  25d
after:
==> v1/Service
NAME            AGE
mymicroservice  25d

liurd commented Dec 13, 2019

Are you going to build 2.16.2 to fix the issue?

thuandt commented Dec 18, 2019

It should be fixed by #7196.

@bacongobbler
Member

I don't believe #7196 was intended to fix this issue; however, if you can demonstrate that it does indeed fix the issue, we'd love to understand why.

@nickdgriffin

I'm anxiously awaiting 2.16.2 for #7319, is this issue still in scope for the release?

@bacongobbler
Member

I'm still seeing the issue present with the HEAD of dev-v2 with a Kubernetes 1.17 cluster.

I'm anxiously awaiting 2.16.2 for #7319, is this issue still in scope for the release?

#7319 is irrelevant to this ticket, but yes, it should be in the next patch release.

To get a better understanding of the issue here: helm status relies on the same APIs that kubectl get relies upon. k8s.io/kubernetes has broken backwards compatibility in the past, so it's quite possible it's happened again.

I'm currently looking into why it keeps breaking, and assessing whether we need to replace it with some other API.

bacongobbler commented Mar 6, 2020

So... kubernetes did indeed break backwards compatibility. You now have to generate a metav1.Table if you want the endpoint to render the same as kubectl get. This broke somewhere around kubernetes 1.16, right when this bug popped up.

If someone wants to take a closer look at this, I started diving deeper into how kubectl get prints objects now in #7728. I don't have time to follow through and fix the issue, but hopefully this helps someone looking at fixing this.
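To illustrate the mechanism described above: since Kubernetes 1.15, a client opts into server-side table rendering by negotiating content types in the Accept header, asking for the resource list as a meta.k8s.io/v1 Table. A minimal Go sketch of building that header follows; the helper name is made up for illustration and is not Helm's or kubectl's actual code, though the media-type parameters are the real ones kubectl sends.

```go
package main

import (
	"fmt"
	"strings"
)

// tableAcceptHeader builds the Accept header a client sends to ask the
// API server to render a resource list as a metav1.Table instead of a
// plain object list. Falling back to "application/json" keeps older
// servers, which lack Table support, working.
func tableAcceptHeader(version string) string {
	table := fmt.Sprintf("application/json;as=Table;v=%s;g=meta.k8s.io", version)
	return strings.Join([]string{table, "application/json"}, ", ")
}

func main() {
	// Since Kubernetes 1.15 the stable group/version is meta.k8s.io/v1.
	fmt.Println(tableAcceptHeader("v1"))
}
```

A server that understands the header responds with a Table object containing column definitions and pre-rendered cells; a server that doesn't simply returns the plain list, which is why the fallback matters.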

@mattfarina mattfarina self-assigned this Mar 19, 2020
mattfarina commented Mar 19, 2020

I have a PR coming with a fix for this. It works. Just need to try and clean up the code some more and figure out how to explain the change.

mattfarina added a commit to mattfarina/helm that referenced this issue Mar 19, 2020
Changes to the Kubernetes API server and kubectl libraries caused
the status to no longer display when helm status was run for a
release. This change restores the status display.

Generation of the tables for display was optionally moved server
side. A request for the data as a table is made and a kubectl
printer for tables can display this data. Kubectl uses this setup and
the structure here closely resembles kubectl. kubectl is still
able to display objects as tables from prior to server side
printing. Server side printing is able to handle non-core
resources like CRDs.

Note, an extra request is made because table responses cannot be
easily transformed into Go objects for Kubernetes types to work
with. There is one request to get the resources for display in
a table and a second request to get the resources to lookup the
related pods. The related pods are now requested as a table as
well for display purposes.

This is likely part of the larger trend to move features like
this server side so that more libraries in more languages can
get to the feature.

The fix only works for Kubernetes 1.15 and newer. This is when
the table functionality was added to the api server. Kubernetes
supports the latest 3 minor versions meaning kubectl does the
same. It only needs to support back to 1.15 now for table
display. In Kubernetes 1.14 kubectl can still display the table
information using different printing methods.

Closes helm#6896

Signed-off-by: Matt Farina <matt@mattfarina.com>
@mattfarina
Collaborator

PR just landed that fixes this. Should be in the next 2.16 release.

mattfarina added a commit that referenced this issue Mar 23, 2020
Changes to the Kubernetes API server and kubectl libraries caused
the status to no longer display when helm status was run for a
release. This change restores the status display.

Generation of the tables for display was moved server
side. A request for the data as a table is made and a kubectl
printer for tables can display this data. Kubectl uses this setup and
the structure here closely resembles kubectl. kubectl is still
able to display objects as tables from prior to server side
printing but only prints limited information.

Note, an extra request is made because table responses cannot be
easily transformed into Go objects for Kubernetes types to work
with. There is one request to get the resources for display in
a table and a second request to get the resources to lookup the
related pods. The related pods are now requested as a table as
well for display purposes.

This is likely part of the larger trend to move features like
this server side so that more libraries in more languages can
get to the feature.

Closes #6896

Signed-off-by: Matt Farina <matt@mattfarina.com>
(cherry picked from commit e8396c9)