
Error log in pod kubeapps-apprepository-controller #577

Closed
obeyler opened this issue Aug 31, 2018 · 43 comments

@obeyler commented Aug 31, 2018

kubernetes 1.11.2

Hello,
Do you know what could be causing this trouble?

E0831 16:04:40.671935 1 reflector.go:205] github.com/kubeapps/kubeapps/cmd/apprepository-controller/pkg/client/informers/externalversions/factory.go:74: Failed to list *v1alpha1.AppRepository: the server could not find the requested resource (get apprepositories.kubeapps.com)

The UI doesn't show any charts anymore and only "Loading" is shown on the home page.
[screenshot]

@prydonius (Member) commented Aug 31, 2018

@obeyler are you able to get apprepositories using the CLI (kubectl get apprepos)? Also what is the output of kubectl get crds?

@obeyler (Author) commented Aug 31, 2018

@prydonius
Both apprepos and crds are unknown resource types:
kubectl get apprepos
error: the server doesn't have a resource type "apprepos"

kubectl get crds
error: the server doesn't have a resource type "crds"

@obeyler (Author) commented Aug 31, 2018

$ kubectl get all -n kubeapps
NAME READY STATUS RESTARTS AGE
pod/apprepo-cleanup-bitnami-vdwtq-qg8ml 1/1 Running 0 5h
pod/apprepo-cleanup-incubator-qr848-7bhns 1/1 Running 0 5h
pod/apprepo-cleanup-stable-vl8r4-wcdf8 1/1 Running 0 5h
pod/apprepo-cleanup-svc-cat-d6z47-4fmfw 1/1 Running 0 5h
pod/kubeapps-667545b66c-4hq95 1/1 Running 0 5h
pod/kubeapps-apprepository-controller-f56d8c55d-j8jp8 1/1 Running 0 5h
pod/kubeapps-chartsvc-64f9879fb9-xhd8l 1/1 Running 0 5h
pod/kubeapps-dashboard-579787f774-f9fgh 1/1 Running 0 5h
pod/kubeapps-tiller-proxy-66bd48f7ff-xwlck 1/1 Running 0 5h

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubeapps ClusterIP 10.100.200.16 80/TCP 5h
service/kubeapps-chartsvc ClusterIP 10.100.200.238 8080/TCP 5h
service/kubeapps-dashboard ClusterIP 10.100.200.236 8080/TCP 5h
service/kubeapps-mongodb ClusterIP 10.100.200.74 27017/TCP 5h
service/kubeapps-tiller-proxy ClusterIP 10.100.200.136 8080/TCP 5h

NAME DESIRED SUCCESSFUL AGE
job.batch/apprepo-cleanup-bitnami-vdwtq 1 0 5h
job.batch/apprepo-cleanup-incubator-qr848 1 0 5h
job.batch/apprepo-cleanup-stable-vl8r4 1 0 5h
job.batch/apprepo-cleanup-svc-cat-d6z47 1 0 5h

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.extensions/kubeapps 1 1 1 1 5h
deployment.extensions/kubeapps-apprepository-controller 1 1 1 1 5h
deployment.extensions/kubeapps-chartsvc 1 1 1 1 5h
deployment.extensions/kubeapps-dashboard 1 1 1 1 5h
deployment.extensions/kubeapps-mongodb 1 0 0 0 5h
deployment.extensions/kubeapps-tiller-proxy 1 1 1 1 5h

NAME DESIRED CURRENT READY AGE
replicaset.extensions/kubeapps-667545b66c 1 1 1 5h
replicaset.extensions/kubeapps-apprepository-controller-f56d8c55d 1 1 1 5h
replicaset.extensions/kubeapps-chartsvc-64f9879fb9 1 1 1 5h
replicaset.extensions/kubeapps-dashboard-579787f774 1 1 1 5h
replicaset.extensions/kubeapps-mongodb-686865cdf4 1 0 0 5h
replicaset.extensions/kubeapps-tiller-proxy-66bd48f7ff 1 1 1

@obeyler (Author) commented Aug 31, 2018

I've activated OIDC on the kube-apiserver; maybe that is the root cause of this. I'll remove it to check whether the trouble persists without it, and keep you posted.

@prydonius (Member) commented Aug 31, 2018

It looks like the CustomResourceDefinitions feature got disabled in your cluster. What about kubectl get customresourcedefinitions (just in case the shorthand didn't work)?

@obeyler (Author) commented Sep 1, 2018

Here is the result:

kubectl get customresourcedefinitions
NAME                          AGE
dormantdatabases.kubedb.com   16h
elasticsearches.kubedb.com    16h
functions.projectriff.io      13h
invokers.projectriff.io       13h
memcacheds.kubedb.com         16h
mongodbs.kubedb.com           16h
mysqls.kubedb.com             16h
postgreses.kubedb.com         16h
redises.kubedb.com            16h
snapshots.kubedb.com          16h
topics.projectriff.io         13h
@obeyler (Author) commented Sep 1, 2018

I deactivated OIDC, without any success :-(

@obeyler (Author) commented Sep 1, 2018

I see that the kubeapps deployment for mongodb fails to scale up the pod:
Error creating: pods "kubeapps-mongodb-686865cdf4-crm68" is forbidden: pod.Spec.SecurityContext.RunAsUser is forbidden

@andresmgot (Contributor) commented Sep 3, 2018

It seems that there are two issues here:

  • When deleting the previous version of Kubeapps, the cleanup didn't finish: those pod/apprepo-cleanup-* pods should have completed before installing the chart again. This error should go away if you uninstall the chart and completely delete the kubeapps namespace. Once that is clean, reinstalling the chart should regenerate the apprepositories custom resource.
  • About the mongodb issue: apparently your cluster doesn't allow the RunAsUser key. To disable it, you can use the flag --set mongodb.securityContext.enabled=false when installing Kubeapps.
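The same override can be kept in a values file instead of a --set flag, which is easier to reuse across reinstalls. A minimal sketch (the file name is hypothetical; the key path mirrors the flag quoted in this thread, applied under the mongodb subchart):

```yaml
# values-override.yaml -- hypothetical override file, used as
# `helm install -f values-override.yaml ...`.
mongodb:
  securityContext:
    enabled: false   # skip RunAsUser on clusters that forbid it
```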
@obeyler (Author) commented Sep 3, 2018

@andresmgot Do you know which property I should use to enable RunAsUser?

@andresmgot (Contributor) commented Sep 3, 2018

@obeyler (Author) commented Sep 4, 2018

In fact, the cluster needs PodSecurityPolicy added to the enable-admission-plugins option of the kube-apiserver to use the securityContext.
Unluckily, I cannot set that yet.

@andresmgot (Contributor) commented Sep 4, 2018

Anyway, the securityContext is not necessary, since the MongoDB image already runs as an unprivileged user by default; setting it is just a generic good practice. Does it work for you if you set the flag --set mongodb.securityContext.enabled=false?

@obeyler (Author) commented Sep 4, 2018

With --set mongodb.securityContext.enabled=false I get this kind of error:

mongodb INFO   Installation parameters for mongodb:
mongodb INFO     Persisted data and properties have been restored.
mongodb INFO     Any input specified will not take effect.
mongodb INFO   This installation requires no credentials.
mongodb INFO  ########################################################################
mongodb INFO 
nami    INFO  mongodb successfully initialized
INFO  ==> Starting mongodb... 
INFO  ==> Starting mongod...
2018-09-04T12:33:16.320+0000 I CONTROL  [initandlisten] MongoDB starting : pid=28 port=27017 dbpath=/opt/bitnami/mongodb/data/db 64-bit host=kubeapps-mongodb-6bb66d4f6d-h56tp
2018-09-04T12:33:16.320+0000 I CONTROL  [initandlisten] db version v3.6.6
2018-09-04T12:33:16.320+0000 I CONTROL  [initandlisten] git version: 6405d65b1d6432e138b44c13085d0c2fe235d6bd
2018-09-04T12:33:16.320+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.1.0f  25 May 2017
2018-09-04T12:33:16.320+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2018-09-04T12:33:16.320+0000 I CONTROL  [initandlisten] modules: none
2018-09-04T12:33:16.320+0000 I CONTROL  [initandlisten] build environment:
2018-09-04T12:33:16.320+0000 I CONTROL  [initandlisten]     distmod: debian92
2018-09-04T12:33:16.321+0000 I CONTROL  [initandlisten]     distarch: x86_64
2018-09-04T12:33:16.321+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2018-09-04T12:33:16.321+0000 I CONTROL  [initandlisten] options: { config: "/opt/bitnami/mongodb/conf/mongodb.conf", net: { bindIpAll: true, ipv6: true, port: 27017, unixDomainSocket: { enabled: true, pathPrefix: "/opt/bitnami/mongodb/tmp" } }, processManagement: { fork: false, pidFilePath: "/opt/bitnami/mongodb/tmp/mongodb.pid" }, security: { authorization: "disabled" }, setParameter: { enableLocalhostAuthBypass: "true" }, storage: { dbPath: "/opt/bitnami/mongodb/data/db", journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, path: true } }
2018-09-04T12:33:16.322+0000 I STORAGE  [initandlisten] exception in initAndListen std::exception: open: Address family not supported by protocol, terminating
2018-09-04T12:33:16.322+0000 I CONTROL  [initandlisten] now exiting
2018-09-04T12:33:16.322+0000 I CONTROL  [initandlisten] shutting down with code:100 

@andresmgot (Contributor) commented Sep 4, 2018

I see, so that seems to be this issue: bitnami/bitnami-docker-mongodb#108

IPv6 is enabled by default in the mongodb image; if your cluster doesn't support IPv6, you need to disable it. Again, that should be doable with another flag: --set mongodb.mongodbEnableIPv6=false
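For context, the "Address family not supported by protocol" line in the mongod log above is the generic failure you get when a process tries to open an IPv6 socket on a host whose kernel has IPv6 disabled. A small self-contained probe illustrating that failure mode (not part of Kubeapps or MongoDB; just a sketch):

```python
import socket

def ipv6_available() -> bool:
    """Try to open an AF_INET6 socket, mirroring what mongod does
    when started with net.ipv6 enabled. On an IPv4-only host the
    socket() call raises OSError with errno EAFNOSUPPORT
    ("Address family not supported by protocol")."""
    try:
        sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    except OSError:
        return False
    sock.close()
    return True

if __name__ == "__main__":
    print("IPv6 available:", ipv6_available())
```

Running this inside the mongodb pod (e.g. via kubectl exec, if Python is available there) would tell you whether the container's network namespace supports IPv6 at all.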

@obeyler (Author) commented Sep 4, 2018

--set mongodb.mongodbEnableIPv6=false has no effect.
I think it is ignored: when I look into https://github.com/helm/charts/tree/60fdbcb3820ee6bcb23d36b5c06ff1935da7ccd5/stable/mongodb, the mongodbEnableIPv6 value doesn't exist.

@obeyler (Author) commented Sep 4, 2018

In fact this flag exists in some versions, but as no version is specified for the dependency, the Kubeapps deployment takes the first mongodb chart version found.

When I run this command in kubeapps/chart/kubeapps:
helm dependency list
NAME VERSION REPOSITORY STATUS
mongodb >= 0 https://kubernetes-charts.storage.googleapis.com missing

Even if the Kubeapps version is fixed, the deployment can work or fail depending on the evolution of mongodb. You should pin the dependency.
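Pinning the dependency would look something like this in the chart's requirements.yaml (a sketch; the version shown is illustrative, not necessarily the one Kubeapps shipped):

```yaml
# kubeapps/chart/kubeapps/requirements.yaml -- sketch of a pinned
# dependency; the exact version below is a hypothetical example.
dependencies:
  - name: mongodb
    version: 4.2.3   # pin instead of ">= 0" so upstream releases can't break the chart
    repository: https://kubernetes-charts.storage.googleapis.com
```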

@prydonius (Member) commented Sep 4, 2018

@obeyler if you're installing the chart from this git repo and not using the Bitnami charts repo, you will need to run helm dep build in kubeapps/chart/kubeapps.

@obeyler (Author) commented Sep 5, 2018

@prydonius @andresmgot
From the Bitnami charts repo, the chart for mongodb is 4.0.4.
[screenshot]

The IPv6 deactivation flag was only added to the mongodb chart in 4.2.2.
[screenshot]
Do you think you can update the mongodb chart in Kubeapps?

@andresmgot (Contributor) commented Sep 5, 2018

Hi @obeyler, it's true that the flag --set mongodb.mongodbEnableIPv6=false is present in the chart for version 4.0.4, but it's ignored when replication is not enabled. I have asked @juan131 (the mongodb chart maintainer) to change that; he'll add the flag for the standalone setup as well.

@obeyler (Author) commented Sep 5, 2018

Thanks for everything, @andresmgot

@juan131 (Contributor) commented Sep 5, 2018

Hi @obeyler

We just updated the MongoDB chart. Could you please try using the latest version of the MongoDB chart (4.2.3)?

helm repo update
helm upgrade 'your-release-name' stable/mongodb --set mongodbEnableIPv6=false
@obeyler (Author) commented Sep 5, 2018

@juan131 are you sure it is --set mongodb.mongodbEnableIPv6=false?
When I look into your code, it seems to be --set mongodb.mongodbEnableIPv6=no, isn't it?

@juan131 (Contributor) commented Sep 5, 2018

Hi @obeyler

It's "false" for sure. Please note the boolean is transformed into a string.

Regarding the commands I suggested, if you're using KubeApps, you should use:

helm repo update
helm upgrade 'your-release-name' bitnami/kubeapps --set mongodb.mongodbEnableIPv6=false
@obeyler (Author) commented Sep 5, 2018

OK, thanks, I'll try.

@obeyler (Author) commented Sep 5, 2018

@juan131 Can I do the same for Monocular?

@obeyler (Author) commented Sep 5, 2018

@juan131 sorry, I still have the same result.
@prydonius maybe the bitnami/kubeapps chart repo hasn't updated the dependency yet?

I ran:

helm delete --purge kubeapps
helm repo update
helm install  -n kubeapps bitnami/kubeapps --set mongodb.mongodbEnableIPv6=false --set mongodb.securityContext.enabled=false

log from pods :

 mongodb INFO 
nami    INFO  mongodb successfully initialized
INFO  ==> Starting mongodb... 
INFO  ==> Starting mongod...
2018-09-05T10:01:47.225+0000 I CONTROL  [initandlisten] MongoDB starting : pid=28 port=27017 dbpath=/opt/bitnami/mongodb/data/db 64-bit host=kubeapps-mongodb-6bb66d4f6d-sl8g5
2018-09-05T10:01:47.225+0000 I CONTROL  [initandlisten] db version v3.6.6
2018-09-05T10:01:47.225+0000 I CONTROL  [initandlisten] git version: 6405d65b1d6432e138b44c13085d0c2fe235d6bd
2018-09-05T10:01:47.225+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.1.0f  25 May 2017
2018-09-05T10:01:47.225+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2018-09-05T10:01:47.225+0000 I CONTROL  [initandlisten] modules: none
2018-09-05T10:01:47.225+0000 I CONTROL  [initandlisten] build environment:
2018-09-05T10:01:47.225+0000 I CONTROL  [initandlisten]     distmod: debian92
2018-09-05T10:01:47.225+0000 I CONTROL  [initandlisten]     distarch: x86_64
2018-09-05T10:01:47.225+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2018-09-05T10:01:47.225+0000 I CONTROL  [initandlisten] options: { config: "/opt/bitnami/mongodb/conf/mongodb.conf", net: { bindIpAll: true, ipv6: true, port: 27017, unixDomainSocket: { enabled: true, pathPrefix: "/opt/bitnami/mongodb/tmp" } }, processManagement: { fork: false, pidFilePath: "/opt/bitnami/mongodb/tmp/mongodb.pid" }, security: { authorization: "disabled" }, setParameter: { enableLocalhostAuthBypass: "true" }, storage: { dbPath: "/opt/bitnami/mongodb/data/db", journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, path: true } }
2018-09-05T10:01:47.227+0000 I STORAGE  [initandlisten] exception in initAndListen std::exception: open: Address family not supported by protocol, terminating
2018-09-05T10:01:47.227+0000 I CONTROL  [initandlisten] now exiting
2018-09-05T10:01:47.227+0000 I CONTROL  [initandlisten] shutting down with code:100 

@andresmgot (Contributor) commented Sep 5, 2018

@obeyler as far as I can see, Monocular pins a specific version of MongoDB in its requirements.yaml: https://github.com/helm/monocular/blob/master/deployment/monocular/requirements.yaml#L3, so that should be updated

@andresmgot (Contributor) commented Sep 5, 2018

@obeyler you are right, we should update the chart in bitnami/kubeapps to use the latest version. I will let you know once that is ready. Sorry for the inconvenience.

@obeyler (Author) commented Sep 5, 2018

No problem, @andresmgot. I'm happy to test it and find issues :-)

@andresmgot (Contributor) commented Sep 6, 2018

@obeyler you should be able to install the new chart now. You can specify the flag --version 0.3.4 to ensure you are installing the latest version.

@obeyler (Author) commented Sep 6, 2018

@andresmgot
The MongoDB problem is solved with 0.3.4; mongodb now starts fine.
But I still have "Loading" on the first page :-(
Do you have any idea which pod's logs I should look at to find the root cause of that?

@obeyler (Author) commented Sep 6, 2018

All these pods show:

time="2018-09-06T10:00:41Z" level=fatal msg="Can't connect to mongoDB: unable to connect to MongoDB"

[screenshot]

@obeyler (Author) commented Sep 6, 2018

[screenshot]
The most surprising thing is that the first pod of the sync job ran well, but after that first run of the job, they all failed.


time="2018-09-06T09:47:22Z" level=info msg="icon not found" name=crypto
time="2018-09-06T09:47:22Z" level=info msg="icon not found" name=parse
time="2018-09-06T09:47:23Z" level=info msg="icon not found" name=apache
time="2018-09-06T09:47:23Z" level=info msg="icon not found" name=tensorflow-inception
time="2018-09-06T09:47:23Z" level=info msg="icon not found" name=bitnami-common
time="2018-09-06T09:47:23Z" level=info msg="icon not found" name=nginx
time="2018-09-06T09:47:23Z" level=info msg="icon not found" name=kubewatch
time="2018-09-06T09:47:23Z" level=info msg="values.yaml not found" name=crypto version=0.0.2
time="2018-09-06T09:47:26Z" level=info msg="values.yaml not found" name=crypto version=0.0.1
time="2018-09-06T09:47:36Z" level=info msg="values.yaml not found" name=bitnami-common version=0.0.2
time="2018-09-06T09:47:36Z" level=info msg="values.yaml not found" name=bitnami-common version=0.0.1
time="2018-09-06T09:47:40Z" level=info msg="Successfully added the chart repository bitnami to database"

@andresmgot (Contributor) commented Sep 6, 2018

@obeyler it's not critical that all the sync jobs succeed; once one works, the MongoDB database gets populated. If you see some of them failing, it's probably because MongoDB was not ready yet. You can check the cause of the failure in the logs of any of them. If that works properly, you will see the available charts in the Charts view.

In any case, are you seeing the Loading page when listing the applications (the first issue you reported)? If that's the case, please check the Chrome console to get more info. If that view is failing, it's probably tiller-proxy returning an error. Can you check its logs as well? (kubectl logs -n kubeapps -l app=kubeapps-tiller-proxy)

@obeyler (Author) commented Sep 6, 2018

Nothing relevant in the tiller-proxy logs:

 kubectl logs -n kubeapps -l app=kubeapps-tiller-proxy
time="2018-09-06T10:12:52Z" level=info msg="Using tiller host: tiller-deploy.kube-system:44134"
time="2018-09-06T10:12:52Z" level=info msg="Started Tiller Proxy" addr=":8080"
@obeyler (Author) commented Sep 6, 2018

@prydonius @andresmgot
I see the first page as described at the top of this issue: Loading...
I also see that the memory consumed by one pod keeps growing.
[screenshot]
The mongodb pod's log also shows heavy activity.

@prydonius (Member) commented Sep 6, 2018

This is strange. @obeyler, are there any errors in the JS console, and can you show us the response of any failing calls in the Network tab of your browser inspector?

@obeyler (Author) commented Sep 7, 2018

@prydonius I've also installed the new chart 0.4.0 from scratch, with the same result.
Note that the browser cache is deactivated.

[screenshot]

@obeyler (Author) commented Sep 7, 2018

@prydonius @andresmgot If you want, I can set up a Zoom conference to share my screen with you and talk about it.

@andresmgot (Contributor) commented Sep 7, 2018

That's weird. From that view, GET releases is returning a 200, but the tiller-proxy logs you sent earlier show no requests at all (and the call is returning an empty response, which is why the "Loading" message is shown). It seems that you are accessing Kubeapps through an Ingress: how are you accessing it? If that's the issue, this may help you: #382

Apart from that, does the Console in your browser show any error?

@obeyler (Author) commented Sep 7, 2018

@prydonius @andresmgot
Yes!!!
The trouble was that the ingress rules were targeting the kubeapps-dashboard service instead of kubeapps.
I also note from issue #382 that the service port is not 8080 (as for the dashboard) but 80.
Thank you for your help! :-D I'm very, very happy!!! If you go to the CF Summit in Basel, contact me; I'd be very happy to buy you some beers and congratulate you on your work.
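For reference, the fix described above amounts to pointing the Ingress backend at the kubeapps Service on port 80. A minimal sketch (the Ingress name and hostname are placeholders, and the extensions/v1beta1 API version shown matches Kubernetes 1.11-era clusters like the one in this thread):

```yaml
# Hypothetical Ingress matching the fix described above: the
# backend is the `kubeapps` Service on port 80, not
# `kubeapps-dashboard` on 8080.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubeapps               # placeholder name
  namespace: kubeapps
spec:
  rules:
    - host: kubeapps.example.com   # placeholder host
      http:
        paths:
          - path: /
            backend:
              serviceName: kubeapps
              servicePort: 80
```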

@prydonius (Member) commented Sep 8, 2018

@obeyler I'm sorry that the Service names are confusing, I've created #602 to try and make this less confusing!
