
SpinApp CRD Active Replicas Status Column is unclear #87

Closed
bacongobbler opened this issue Feb 20, 2024 · 4 comments · Fixed by #88
Labels: bug (Something isn't working)

Comments

@bacongobbler
Contributor

Given the following manifest:

apiVersion: core.spinoperator.dev/v1
kind: SpinApp
metadata:
  name: hello-rust
spec:
  image: "bacongobbler/hello-rust:latest"
  replicas: 2
  executor: containerd-shim-spin

When I apply it to the cluster and run kubectl get spinapps, no replicas are shown.

><> k get spinapps
NAME         READY REPLICAS   EXECUTOR
hello-rust   2                containerd-shim-spin

However, the number of replicas is shown if I run k get spinapps hello-rust -o yaml:

><> k get spinapps hello-rust -o yaml
apiVersion: core.spinoperator.dev/v1
kind: SpinApp
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"core.spinoperator.dev/v1","kind":"SpinApp","metadata":{"annotations":{},"name":"hello-rust","namespace":"default"},"spec":{"executor":"containerd-shim-spin","image":"bacongobbler/hello-rust:latest","replicas":2}}
  creationTimestamp: "2024-02-20T19:07:23Z"
  generation: 1
  name: hello-rust
  namespace: default
  resourceVersion: "1663"
  uid: 41e1456e-6284-43ff-b37b-cbc302730b15
spec:
  checks: {}
  enableAutoscaling: false
  executor: containerd-shim-spin
  image: bacongobbler/hello-rust:latest
  replicas: 2
  resources: {}
  runtimeConfig: {}
status:
  activeScheduler: containerd-shim-spin
  conditions:
  - lastTransitionTime: "2024-02-20T19:07:25Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2024-02-20T19:07:23Z"
    message: ReplicaSet "hello-rust-77496795b6" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  readyReplicas: 2

Possibly related to #86

@endocrimes
Contributor

READY REPLICAS is a single column (the gap is just bad whitespace in the table output) - and it correctly shows 2

@bacongobbler
Contributor Author

bacongobbler commented Feb 20, 2024

Ah! That makes more sense now. Thank you.

Is there a way to change the output to match that of kubectl get deployments?

><> k get deployments
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
hello-rust   2/2     2            2           92m

In this case, I'm proposing we change the output of kubectl get spinapps to this:

NAME         READY  EXECUTOR
hello-rust   2/2    containerd-shim-spin

Or perhaps by hyphenating the column header:

NAME         READY-REPLICAS   EXECUTOR
hello-rust   2                containerd-shim-spin

To me, the current output reads as two different columns in the table: one showing the number of "ready" replicas (0), and one showing the number of "expected" replicas (unknown).

@endocrimes
Contributor

Yep - you can update api/spinapp_types.go and change the comment annotation (then make manifests install will regenerate all the Kubernetes bits)
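
For illustration, those comment annotations are kubebuilder printcolumn markers on the SpinApp type. A minimal sketch of the shape (the exact markers, fields, and types in api/spinapp_types.go may differ; the names below are illustrative, not the real file contents):

package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Trimmed, illustrative versions of the spec and status types; the real file
// carries more fields than shown here.
type SpinAppSpec struct {
	Executor string `json:"executor"`
	Image    string `json:"image"`
	Replicas int32  `json:"replicas"`
}

type SpinAppStatus struct {
	ReadyReplicas int32 `json:"readyReplicas,omitempty"`
}

// The printcolumn markers below drive the columns kubectl shows for the CRD;
// renaming one (e.g. "Ready Replicas" -> "Ready-Replicas") changes the header
// once the manifests are regenerated.
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:name="Ready Replicas",type=integer,JSONPath=".status.readyReplicas"
// +kubebuilder:printcolumn:name="Executor",type=string,JSONPath=".spec.executor"
type SpinApp struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   SpinAppSpec   `json:"spec,omitempty"`
	Status SpinAppStatus `json:"status,omitempty"`
}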

@endocrimes endocrimes changed the title No replicas shown in kubectl get spinapps SpinApp CRD Active Replicas Status Column is unclear Feb 20, 2024
@bacongobbler
Contributor Author

Looks like you can't combine spec.replicas and status.readyReplicas into one "Ready" column the way kubectl get deployments does. There's an upstream ticket tracking this, but it has gone inactive:

kubernetes/kubernetes#67268
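
For context on why that is: a CRD's additional printer columns each resolve a single JSONPath expression (see CustomResourceColumnDefinition in the apiextensions API), so there is no supported way to render spec.replicas and status.readyReplicas together as a "2/2" value. A paraphrased, trimmed sketch of that shape, not the verbatim upstream type:

package sketch

// Paraphrased shape of a CRD printer column (the real type is
// CustomResourceColumnDefinition in apiextensions/v1). Each column maps to
// exactly one JSONPath, which is why a composite READY value like "2/2"
// cannot be expressed for a custom resource today.
type PrinterColumn struct {
	Name     string // column header shown by kubectl, e.g. "Ready Replicas"
	Type     string // "integer", "string", "date", ...
	JSONPath string // a single path, e.g. ".status.readyReplicas"
}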
