Helm Chart catalog fails to be installed - validators.kubedb.com/v1alpha1: the server is currently unable to handle the request #671

cmoulliard opened this issue Oct 22, 2019 · 11 comments

@cmoulliard commented Oct 22, 2019

Issue

The following installation of the kubedb chart

KUBEDB_VERSION=0.12.0

helm init
until kubectl get pods -n kube-system -l name=tiller | grep 1/1; do sleep 1; done
kubectl create clusterrolebinding tiller-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default

helm repo add appscode https://charts.appscode.com/stable/
helm repo update
helm install appscode/kubedb \
   --name kubedb-operator \
   --version ${KUBEDB_VERSION} \
   --namespace kubedb \
   --set apiserver.enableValidatingWebhook=false,apiserver.enableMutatingWebhook=false

TIMER=0
until kubectl get crd elasticsearchversions.catalog.kubedb.com memcachedversions.catalog.kubedb.com mongodbversions.catalog.kubedb.com mysqlversions.catalog.kubedb.com postgresversions.catalog.kubedb.com redisversions.catalog.kubedb.com || [[ ${TIMER} -eq 60 ]]; do
  sleep 10
  TIMER=$((TIMER + 1))
done

helm install appscode/kubedb-catalog \
  --name kubedb-catalog \
  --version ${KUBEDB_VERSION} \
  --namespace kubedb \
  --set catalog.postgres=true,catalog.elasticsearch=false,catalog.etcd=false,catalog.memcached=false,catalog.mongo=false,catalog.mysql=false,catalog.redis=false

fails on k8s or OCP, reporting the error validators.kubedb.com/v1alpha1: the server is currently unable to handle the request.

Helm version used: v2.15.0
KubeDB version: 0.12.0

REMARK: We don't have this issue with Helm v2.14.3!

Error log

TASK [halkyon : Install KubeDB catalog] ******************************************************************************************************************************************************************************************************************************
fatal: [159.69.209.188]: FAILED! => {"changed": true, "cmd": "helm --kubeconfig=$HOME/.kube/config install appscode/kubedb-catalog --name kubedb-catalog --version 0.12.0 --namespace kubedb", "delta": "0:00:00.364934", "end": "2019-10-22 12:51:01.471582", "msg": "non-zero return code", "rc": 1, "start": "2019-10-22 12:51:01.106648", "stderr":
"Error: Could not get apiVersions from Kubernetes: 
unable to retrieve the complete list of server APIs: mutators.kubedb.com/v1alpha1:
the server is currently unable to handle the request, validators.kubedb.com/v1alpha1:
the server is currently unable to handle the request", "stderr_lines": ["Error: Could not get 
apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: 
mutators.kubedb.com/v1alpha1: the server is currently unable to handle the request, 
validators.kubedb.com/v1alpha1: the server is currently unable to handle the request"]
, "stdout": "", "stdout_lines": []}
        to retry, use: --limit @/Users/dabou/Code/snowdrop/openshift-infra/ansible/playbook/post_installation.retry
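
For context, the two failing groups (mutators.kubedb.com and validators.kubedb.com) are aggregated APIs served by the kubedb operator, so a quick check of their registrations shows why Helm's API discovery fails (a sketch, assuming cluster-admin kubectl access):

# An APIService whose AVAILABLE column shows False blocks API discovery for the whole cluster
kubectl get apiservice v1alpha1.validators.kubedb.com v1alpha1.mutators.kubedb.com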
@AXington commented Nov 4, 2019

also running into this issue

@donaldguy commented Nov 7, 2019

I am seeing an issue like this on Helm v3.0.0-rc.1 as well (k8s 1.14.7 on minikube).

@donaldguy commented Nov 7, 2019

Digging slightly further and comparing to #574, I am seeing

Name:         v1alpha1.validators.kubedb.com
Namespace:    
Labels:       app=kubedb
              chart=kubedb-v0.13.0-rc.0
              heritage=Helm
              release=kubedb-operator
Annotations:  <none>
API Version:  apiregistration.k8s.io/v1
Kind:         APIService
Metadata:
  Creation Timestamp:  2019-11-07T21:28:54Z
  Resource Version:    714
  Self Link:           /apis/apiregistration.k8s.io/v1/apiservices/v1alpha1.validators.kubedb.com
  UID:                 99a689df-01a5-11ea-8936-080027354d92
Spec:
  Ca Bundle:             [ snip ] 
  Group:                   validators.kubedb.com
  Group Priority Minimum:  10000
  Service:
    Name:            kubedb-operator
    Namespace:       kube-system
  Version:           v1alpha1
  Version Priority:  15
Status:
  Conditions:
    Last Transition Time:  2019-11-07T21:28:54Z
    Message:               endpoints for service/kubedb-operator in "kube-system" have no addresses
    Reason:                MissingEndpoints
    Status:                False
    Type:                  Available
Events:                    <none>

and indeed

❯ kubectl get endpoints -n kube-system          
NAME                      ENDPOINTS                                               AGE
kube-controller-manager   <none>                                                  12m
kube-dns                  172.17.0.2:53,172.17.0.4:53,172.17.0.2:53 + 3 more...   12m
kube-scheduler            <none>                                                  12m
kubedb-operator                                                                   8m54s

Though of course that just implies the pod is never becoming ready, right?

And there is nothing really interesting there, log-wise. And indeed, the probes are failing with TLS errors :(

Warning  Unhealthy  11m (x11 over 13m)    kubelet, minikube  Readiness probe failed: Get https://172.17.0.3:8443/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning  Unhealthy  3m36s (x22 over 12m)  kubelet, minikube  Liveness probe failed: Get https://172.17.0.3:8443/healthz: net/http: TLS handshake timeout
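
For reference, the commands behind the diagnosis above, roughly in order (a sketch; the kube-system namespace and the app=kubedb label are taken from the APIService spec and chart labels shown earlier):

# Why is the aggregated API unavailable?
kubectl describe apiservice v1alpha1.validators.kubedb.com
# Does the backing service have any endpoints?
kubectl get endpoints -n kube-system kubedb-operator
# Are the operator pod's readiness/liveness probes failing?
kubectl describe pod -n kube-system -l app=kubedb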
@donaldguy commented Nov 7, 2019

Indeed, for me adding --set apiserver.healthcheck.enabled=false is enough to make the helm install succeed, and pretty quickly, --wait and all.
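
For completeness, the full operator install with that workaround might look like this (a sketch reusing the release name, namespace, version, and webhook flags from the original report; apiserver.healthcheck.enabled=false is the only addition):

helm install appscode/kubedb \
  --name kubedb-operator \
  --version ${KUBEDB_VERSION} \
  --namespace kubedb \
  --set apiserver.enableValidatingWebhook=false,apiserver.enableMutatingWebhook=false \
  --set apiserver.healthcheck.enabled=false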

@donaldguy commented Nov 7, 2019

So yeah, I can't speak for the OP, but for my part this seems to be downstream of #504 and #655.

@GramozKrasniqi commented Dec 11, 2019

Still there with:

Helm : version.BuildInfo{Version:"v3.0.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6", GitTreeState:"clean", GoVersion:"go1.13.4"}

Kubectl: Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

Kubernetes on AKS: version: 1.13.12

The only solution for now is to kill the cluster and create a new one.

@donaldguy commented Dec 11, 2019

They should definitely figure out how to make the healthchecks reliable; but for the record, they do provide a way to scrub stuff without resorting to a cluster recreate:

curl -fsSL https://github.com/kubedb/installer/raw/v0.13.0-rc.0/deploy/kubedb.sh \
    | bash -s -- --uninstall --purge

This will leave the Helm releases behind (but the resources gone), so tack on helm delete --purge --no-hooks kubedb-operator; helm delete --purge --no-hooks kubedb-catalog (drop the --purge and add a --namespace if you are on Helm 3).
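
Spelled out, the release cleanup looks roughly like this (a sketch; the kubedb namespace for Helm 3 is an assumption carried over from the original install):

# Helm 2:
helm delete --purge --no-hooks kubedb-operator
helm delete --purge --no-hooks kubedb-catalog

# Helm 3 (no --purge flag; pass the namespace the releases were installed into):
helm delete --no-hooks kubedb-operator --namespace kubedb
helm delete --no-hooks kubedb-catalog --namespace kubedb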

@donaldguy commented Dec 11, 2019

Be warned that if you are also trying to use kubevault (or maybe searchlight too?), this will scrub the AppBinding apiservice registration and/or CRD definition :(

@GramozKrasniqi commented Dec 11, 2019

@donaldguy thanks for the tip :) really helpful

@MatthiasLohr commented Jan 10, 2020

Any progress here? Running into the same problem...

@Doca commented Jan 10, 2020

We are having the same problem, running K8s 1.16.2 and KubeDB 0.12.0, when trying to change the deployment for Redis.

"invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference"

The logs of the failing operator show the following:
I0110 09:12:36.951715 1 run.go:24] Starting kubedb-server...
I0110 09:12:37.148577 1 lib.go:112] Kubernetes version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
I0110 09:12:37.161792 1 controller.go:72] Ensuring CustomResourceDefinition...
I0110 09:12:43.307350 1 run.go:36] Starting KubeDB controller
I0110 09:12:43.315670 1 secure_serving.go:116] Serving securely on [::]:8443
I0110 09:12:43.576320 1 xray.go:232] testing ValidatingWebhook using an object with GVR = kubedb.com/v1alpha1, Resource=redises
I0110 09:12:44.707723 1 statefulset.go:59] Patching StatefulSet redis/redis-shard0 with {"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"redis"},{"name":"exporter"}],"containers":[{"$setElementOrder/env":[{"name":"POD_IP"}],"$setElementOrder/ports":[{"containerPort":6379},{"containerPort":16379}],"env":[{"name":"POD_IP","valueFrom":{"fieldRef":{"apiVersion":null}}}],"name":"redis","ports":[{"containerPort":16379,"protocol":null}]}],"securityContext":null}},"volumeClaimTemplates":[{"metadata":{"annotations":{"volume.beta.kubernetes.io/storage-class":"rook-ceph-block-ssd"},"creationTimestamp":null,"name":"data"},"spec":{"accessModes":["ReadWriteOnce"],"dataSource":null,"resources":{"requests":{"storage":"100Gi"}},"storageClassName":"rook-ceph-block-ssd"},"status":{}}]}}.
I0110 09:12:44.924397 1 statefulset.go:59] Patching StatefulSet redis/redis-shard1 with {"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"redis"},{"name":"exporter"}],"containers":[{"$setElementOrder/env":[{"name":"POD_IP"}],"$setElementOrder/ports":[{"containerPort":6379},{"containerPort":16379}],"env":[{"name":"POD_IP","valueFrom":{"fieldRef":{"apiVersion":null}}}],"name":"redis","ports":[{"containerPort":16379,"protocol":null}]}],"securityContext":null}},"volumeClaimTemplates":[{"metadata":{"annotations":{"volume.beta.kubernetes.io/storage-class":"rook-ceph-block-ssd"},"creationTimestamp":null,"name":"data"},"spec":{"accessModes":["ReadWriteOnce"],"dataSource":null,"resources":{"requests":{"storage":"100Gi"}},"storageClassName":"rook-ceph-block-ssd"},"status":{}}]}}.
I0110 09:12:45.039978 1 statefulset.go:59] Patching StatefulSet redis/redis-shard2 with {"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"redis"},{"name":"exporter"}],"containers":[{"$setElementOrder/env":[{"name":"POD_IP"}],"$setElementOrder/ports":[{"containerPort":6379},{"containerPort":16379}],"env":[{"name":"POD_IP","valueFrom":{"fieldRef":{"apiVersion":null}}}],"name":"redis","ports":[{"containerPort":16379,"protocol":null}]}],"securityContext":null}},"volumeClaimTemplates":[{"metadata":{"annotations":{"volume.beta.kubernetes.io/storage-class":"rook-ceph-block-ssd"},"creationTimestamp":null,"name":"data"},"spec":{"accessModes":["ReadWriteOnce"],"dataSource":null,"resources":{"requests":{"storage":"100Gi"}},"storageClassName":"rook-ceph-block-ssd"},"status":{}}]}}.
I0110 09:12:47.431764 1 cluster.go:71] All redis servers are ready
I0110 09:12:47.431820 1 cluster.go:630] Ensuring new cluster...
I0110 09:12:47.431844 1 cluster.go:158] Ensuring 1st pod as master in each statefulSet...
I0110 09:12:58.449140 1 cluster.go:296] Ensuring extra slaves be removed...
I0110 09:12:58.449178 1 cluster.go:158] Ensuring 1st pod as master in each statefulSet...
I0110 09:13:09.866547 1 cluster.go:338] Ensuring extra masters be removed...
I0110 09:13:09.866590 1 cluster.go:158] Ensuring 1st pod as master in each statefulSet...
I0110 09:13:20.735378 1 cluster.go:457] Ensuring new masters be added...
I0110 09:13:20.735421 1 cluster.go:158] Ensuring 1st pod as master in each statefulSet...
I0110 09:13:32.360077 1 cluster.go:504] Ensuring slots are rebalanced...
I0110 09:13:32.360131 1 cluster.go:158] Ensuring 1st pod as master in each statefulSet...
I0110 09:13:43.409007 1 cluster.go:584] Ensuring new slaves be added...
I0110 09:13:43.409048 1 cluster.go:158] Ensuring 1st pod as master in each statefulSet...
I0110 09:13:54.334257 1 cluster.go:158] Ensuring 1st pod as master in each statefulSet...
I0110 09:14:05.482135 1 statefulset.go:111] Cluster configured
I0110 09:14:05.482175 1 statefulset.go:112] Checking for removing master(s)...
I0110 09:14:05.541659 1 statefulset.go:138] Checking for removing slave(s)...
E0110 09:14:05.869703 1 runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/go/src/github.com/kubedb/operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:76
/go/src/github.com/kubedb/operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/go/src/github.com/kubedb/operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/panic.go:522
/usr/local/go/src/runtime/panic.go:82
/usr/local/go/src/runtime/signal_unix.go:390
/go/src/github.com/kubedb/operator/vendor/github.com/kubedb/redis/pkg/controller/monitor.go:36
/go/src/github.com/kubedb/operator/vendor/github.com/kubedb/redis/pkg/controller/monitor.go:79
/go/src/github.com/kubedb/operator/vendor/github.com/kubedb/redis/pkg/controller/redis.go:129
/go/src/github.com/kubedb/operator/vendor/github.com/kubedb/redis/pkg/controller/workqueue.go:53
/go/src/github.com/kubedb/operator/vendor/kmodules.xyz/client-go/tools/queue/worker.go:68
/go/src/github.com/kubedb/operator/vendor/kmodules.xyz/client-go/tools/queue/worker.go:51
/go/src/github.com/kubedb/operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/go/src/github.com/kubedb/operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/go/src/github.com/kubedb/operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/usr/local/go/src/runtime/asm_amd64.s:1337
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x1e8b7db]

goroutine 1268 [running]:
github.com/kubedb/operator/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/src/github.com/kubedb/operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x105
panic(0x2143e80, 0x65be860)
/usr/local/go/src/runtime/panic.go:522 +0x1b5
github.com/kubedb/operator/vendor/github.com/kubedb/redis/pkg/controller.(*Controller).addOrUpdateMonitor(0xc0004d5a20, 0xc000235c00, 0xc00030b8f8, 0x0, 0x0, 0x0)
/go/src/github.com/kubedb/operator/vendor/github.com/kubedb/redis/pkg/controller/monitor.go:36 +0xdb
github.com/kubedb/operator/vendor/github.com/kubedb/redis/pkg/controller.(*Controller).manageMonitor(0xc0004d5a20, 0xc000235c00, 0x0, 0x0)
/go/src/github.com/kubedb/operator/vendor/github.com/kubedb/redis/pkg/controller/monitor.go:79 +0xa0
github.com/kubedb/operator/vendor/github.com/kubedb/redis/pkg/controller.(*Controller).create(0xc0004d5a20, 0xc000235c00, 0xc000235c00, 0x2517008)
/go/src/github.com/kubedb/operator/vendor/github.com/kubedb/redis/pkg/controller/redis.go:129 +0x718
github.com/kubedb/operator/vendor/github.com/kubedb/redis/pkg/controller.(*Controller).runRedis(0xc0004d5a20, 0xc000ee2c70, 0xb, 0x18, 0xc00137ddb0)
/go/src/github.com/kubedb/operator/vendor/github.com/kubedb/redis/pkg/controller/workqueue.go:53 +0x3c3
github.com/kubedb/operator/vendor/kmodules.xyz/client-go/tools/queue.(*Worker).processNextEntry(0xc000367b80, 0xc00091ff00)
/go/src/github.com/kubedb/operator/vendor/kmodules.xyz/client-go/tools/queue/worker.go:68 +0xe9
github.com/kubedb/operator/vendor/kmodules.xyz/client-go/tools/queue.(*Worker).processQueue(0xc000367b80)
/go/src/github.com/kubedb/operator/vendor/kmodules.xyz/client-go/tools/queue/worker.go:51 +0x2b
github.com/kubedb/operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc001d523b0)
/go/src/github.com/kubedb/operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x54
github.com/kubedb/operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001d523b0, 0x3b9aca00, 0x0, 0x1, 0xc0002ba660)
/go/src/github.com/kubedb/operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xf8
github.com/kubedb/operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc001d523b0, 0x3b9aca00, 0xc0002ba660)
/go/src/github.com/kubedb/operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by github.com/kubedb/operator/vendor/kmodules.xyz/client-go/tools/queue.(*Worker).Run
/go/src/github.com/kubedb/operator/vendor/kmodules.xyz/client-go/tools/queue/worker.go:37 +0x86
