What happened: `kubectl get pods -l heritage=kudo` returned no resources.
What you expected to happen: 3 pods should have been returned.
How to reproduce it (as minimally and precisely as possible): Here is a terminal log from a clean Kubernetes & Kudo install:
```
KUDO Version: version.Info{GitVersion:"0.10.0", GitCommit:"d2310b12", BuildDate:"2020-01-10T18:54:59Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}

[centos@ip-10-0-0-32 ~]$ kubectl kudo init
$KUDO_HOME has been configured at /home/centos/.kudo
✅ installed crds
✅ installed service accounts and other requirements for controller to run
✅ installed kudo controller

[centos@ip-10-0-0-32 ~]$ kubectl kudo install zookeeper
operator.kudo.dev/v1beta1/zookeeper created
operatorversion.kudo.dev/v1beta1/zookeeper-0.3.0 created
instance.kudo.dev/v1beta1/zookeeper-instance created

[centos@ip-10-0-0-32 ~]$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
zookeeper-instance-validation-qdjvr   1/1     Running   0          15s
zookeeper-instance-zookeeper-0        1/1     Running   0          42s
zookeeper-instance-zookeeper-1        1/1     Running   0          42s
zookeeper-instance-zookeeper-2        1/1     Running   0          42s

[centos@ip-10-0-0-32 ~]$ kubectl get pods -l heritage=kudo
No resources found in default namespace.

[centos@ip-10-0-0-32 ~]$ kubectl describe pod zookeeper-instance-zookeeper-0
Name:           zookeeper-instance-zookeeper-0
Namespace:      default
Priority:       0
Node:           ip-10-0-0-63.ec2.internal/10.0.0.63
Start Time:     Thu, 23 Jan 2020 14:47:15 +0000
Labels:         app=zookeeper
                controller-revision-hash=zookeeper-instance-zookeeper-55cfc75788
                instance=zookeeper-instance
                statefulset.kubernetes.io/pod-name=zookeeper-instance-zookeeper-0
                zookeeper=zookeeper
Annotations:    cni.projectcalico.org/podIP: 192.168.26.65/32
Status:         Running
IP:             192.168.26.65
IPs:
  IP:           192.168.26.65
Controlled By:  StatefulSet/zookeeper-instance-zookeeper
Containers:
  kubernetes-zookeeper:
    Container ID:  docker://ede3e8ee495fe679dcdcf1b9a75a62501fdfd354a0dadd54e7bf581e0ba30a17
    Image:         zookeeper:3.4.14
    Image ID:      docker-pullable://docker.io/zookeeper@sha256:491427fc9f788c168e096422afe620fdb269a5d604efd11f953682919101c658
    Ports:         2181/TCP, 2888/TCP, 3888/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Command:
      sh
      -c
      ZOOKEEPERPATH=`pwd` /etc/zookeeper/bootstrap.sh --servers=3 --data_dir=/var/lib/zookeeper/data --data_log_dir=/logs --conf_dir=/conf --client_port=2181 --election_port=3888 --server_port=2888 --tick_time=2000 --init_limit=10 --sync_limit=5 --heap=512M --max_client_cnxns=60 --snap_retain_count=3 --purge_interval=12 --max_session_timeout=40000 --min_session_timeout=4000 --log_level=INFO
    State:          Running
      Started:      Thu, 23 Jan 2020 14:47:21 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  2Gi
    Requests:
      cpu:     250m
      memory:  1Gi
    Liveness:   exec [sh -c /etc/healthcheck/healthcheck.sh 2181] delay=10s timeout=5s period=30s #success=1 #failure=3
    Readiness:  exec [sh -c /etc/healthcheck/healthcheck.sh 2181] delay=10s timeout=5s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/healthcheck from zookeeper-instance-healthcheck (rw)
      /etc/zookeeper from zookeeper-instance-bootstrap (rw)
      /var/lib/zookeeper from zookeeper-instance-datadir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-czhwf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  zookeeper-instance-datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  zookeeper-instance-datadir-zookeeper-instance-zookeeper-0
    ReadOnly:   false
  zookeeper-instance-bootstrap:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      zookeeper-instance-bootstrap
    Optional:  false
  zookeeper-instance-healthcheck:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      zookeeper-instance-healthcheck
    Optional:  false
  default-token-czhwf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-czhwf
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From                                Message
  ----     ------            ----               ----                                -------
  Warning  FailedScheduling  64s (x2 over 64s)  default-scheduler                   error while running "VolumeBinding" filter plugin for pod "zookeeper-instance-zookeeper-0": pod has unbound immediate PersistentVolumeClaims
  Normal   Scheduled         63s                default-scheduler                   Successfully assigned default/zookeeper-instance-zookeeper-0 to ip-10-0-0-63.ec2.internal
  Normal   Pulling           61s                kubelet, ip-10-0-0-63.ec2.internal  Pulling image "zookeeper:3.4.14"
  Normal   Pulled            57s                kubelet, ip-10-0-0-63.ec2.internal  Successfully pulled image "zookeeper:3.4.14"
  Normal   Created           57s                kubelet, ip-10-0-0-63.ec2.internal  Created container kubernetes-zookeeper
  Normal   Started           57s                kubelet, ip-10-0-0-63.ec2.internal  Started container kubernetes-zookeeper
```
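The describe output above shows the pod only carries the labels set by the operator template (app, instance, zookeeper, ...), with no heritage=kudo label at all. As a sketch of how to double-check this, the commands below list the labels on the pods and on their owning StatefulSet; the StatefulSet name comes from the Controlled By line above, and the jsonpath queries are just one way to inspect it:

```
# Show every pod together with its full label set, so a missing heritage=kudo is obvious
kubectl get pods --show-labels

# Compare the labels on the StatefulSet object with the labels in its pod template;
# pods only inherit labels that are present in the template
kubectl get statefulset zookeeper-instance-zookeeper \
  -o jsonpath='{.metadata.labels}{"\n"}{.spec.template.metadata.labels}{"\n"}'
```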
Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`): 1.17.2
- KUDO version (use `kubectl kudo version`): 0.10.0
- OS (e.g. from /etc/os-release):
```
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
```
- Kernel (e.g. `uname -a`): Linux ip-10-0-0-32.ec2.internal 3.10.0-862.3.2.el7.x86_64 #1 SMP Mon May 21 23:36:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
- Install tools:
- Others:
Caused by #1302
Referenced commit 1d0bf6b: "Apply metadata recursivly as Kustomize did"
Fixes #1302 #1303
Signed-off-by: Andreas Neumann <aneumann@mesosphere.com>
Fixed in 0.10.1
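For anyone hitting this, a minimal way to check whether a given cluster has picked up the fix, assuming the KUDO manager and the operator instance were (re)installed with 0.10.1 or later; it only reuses commands already shown in this issue:

```
# Confirm the KUDO CLI and manager versions
kubectl kudo version

# Re-run the original selector; with the fix, the KUDO-managed pods should now carry heritage=kudo
kubectl get pods -l heritage=kudo
```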