monocular-api pod crash too many. #444

YunSangJun opened this Issue May 2, 2018 · 4 comments



YunSangJun commented May 2, 2018

The monocular-api pod crashes too many times.

$ kubectl get pods -n monocular
NAME                                            READY     STATUS             RESTARTS   AGE
monocular-mongodb-779c69c4b8-ldd8w              1/1       Running            0          41m
monocular-monocular-api-5566949b68-62p4r        0/1       CrashLoopBackOff   3          9m
monocular-monocular-prerender-cdc9449bf-bvf4f   1/1       Running            0          41m
monocular-monocular-ui-59948df86d-hphcw         1/1       Running            0          9m

I set api.livenessProbe.initialDelaySeconds to 1800 in values.yaml:

api:
  replicaCount: 1
  livenessProbe:
    initialDelaySeconds: 1800
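(For context, a change like this would typically be rolled out with helm upgrade; the release name and chart reference below are assumptions, not taken from the thread:)

```shell
# Sketch only: re-applies values.yaml to an existing Helm 2 release.
# "monocular" (release) and "monocular/monocular" (chart) are assumed names.
helm upgrade monocular monocular/monocular -f values.yaml
```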
$ kubectl get deploy monocular-monocular-api -n monocular -o yaml
apiVersion: extensions/v1beta1
kind: Deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: monocular-monocular-api
    spec:
      containers:
      - env:
        - name: MONOCULAR_HOME
          value: /monocular
        image: bitnami/monocular-api:v0.6.2
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 1800
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 10
        name: monocular

After the fourth restart, the monocular-api pod changes to Running status.

$ kubectl get pods -n monocular
NAME                                            READY     STATUS    RESTARTS   AGE
monocular-mongodb-779c69c4b8-ldd8w              1/1       Running   0          1h
monocular-monocular-api-5566949b68-62p4r        1/1       Running   4          33m
monocular-monocular-prerender-cdc9449bf-bvf4f   1/1       Running   0          1h
monocular-monocular-ui-59948df86d-hphcw         1/1       Running   0          33m

But a 404 error occurs when I access the Monocular dashboard.
The following are the logs of the monocular-api pod.

$ kubectl logs -f monocular-monocular-api-5566949b68-62p4r -n monocular
time="2018-05-02T09:25:38Z" level=info msg="Processing icon" dest="/monocular/repo-data/incubator/fluentd-cloudwatch/0.2.1/logo-160x160-fit.png" source="/monocular/repo-data/incubator/fluentd-cloudwatch/0.2.1/logo-original.png" 
time="2018-05-02T09:25:39Z" level=info msg="Processing icon" dest="/monocular/repo-data/incubator/fluentd-cloudwatch/0.1.1/logo-160x160-fit.png" source="/monocular/repo-data/incubator/fluentd-cloudwatch/0.1.1/logo-original.png" 
time="2018-05-02T09:25:39Z" level=info msg="Processing icon" dest="/monocular/repo-data/incubator/etcd/0.1.3/logo-160x160-fit.png" source="/monocular/repo-data/incubator/etcd/0.1.3/logo-original.png" 
time="2018-05-02T09:25:39Z" level=error msg="Error on DownloadAndProcessChartIcon" chart=consul error="Error downloading, 404
time="2018-05-02T09:25:40Z" level=warning msg="authentication is disabled" error="no signing key, ensure MONOCULAR_AUTH_SIGNING_KEY is set" 
time="2018-05-02T09:25:40Z" level=info msg="Started Monocular API server" addr=":8081" 
[negroni] 2018-05-02T09:25:43Z | 200 | 	 100.689µs | | GET /healthz
[negroni] 2018-05-02T09:44:01Z | 404 | 	 142.446µs | xxx | GET /api/v1/charts 
[negroni] 2018-05-02T09:44:01Z | 404 | 	 73.937µs | xxx | GET /api/auth/verify 

@YunSangJun YunSangJun changed the title from monocular-api crash too many. to monocular-api pod crash too many. May 2, 2018


prydonius commented May 2, 2018

The restarts are expected; unfortunately, the API takes a long time to download the chart repositories.
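If the default probe settings are too tight for a given cluster, one workaround is to raise the liveness probe's failureThreshold alongside the initial delay. This is a sketch against the probe fields shown in the Deployment output above; the exact value keys depend on the chart version:

```yaml
# Sketch only: assumes the chart exposes these probe fields under `api`,
# mirroring the values.yaml excerpt earlier in the thread.
api:
  livenessProbe:
    initialDelaySeconds: 1800
    failureThreshold: 10   # tolerate more probe failures during the slow initial sync
    periodSeconds: 10
    timeoutSeconds: 10
```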

As for the 404 error, can you verify which version of the Nginx Ingress Controller you are running, and also what annotations the ingress resource has (kubectl get ingress -o yaml monocular)?
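(For reference, the controller version is usually read off the controller pod's image tag rather than from nginx itself; a sketch, where the ingress-nginx namespace is an assumption that varies by install:)

```shell
# Sketch: lists the container images of the ingress controller pods.
# The namespace "ingress-nginx" is an assumption; adjust for your cluster.
kubectl get pods -n ingress-nginx \
  -o jsonpath='{.items[*].spec.containers[*].image}{"\n"}'
```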


YunSangJun commented May 2, 2018

The following is the ingress resource.

$ kubectl get ing -n monocular -o yaml
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
      ingress.kubernetes.io/rewrite-target: /
    creationTimestamp: 2018-05-02T10:10:34Z
    generation: 1
    labels:
      app: monocular-monocular
      chart: monocular-0.5.1
      heritage: Tiller
      release: monocular
    name: monocular-monocular
    namespace: monocular
    resourceVersion: "881358"
    selfLink: /apis/extensions/v1beta1/namespaces/monocular/ingresses/monocular-monocular
    uid: 0d81dde9-4df1-11e8-915d-82c09a00d8d2
  spec:
    rules:
    - host: xxx
      http:
        paths:
        - backend:
            serviceName: monocular-monocular-ui
            servicePort: 80
          path: /
        - backend:
            serviceName: monocular-monocular-api
            servicePort: 80
          path: /api/
    tls:
    - hosts:
      - xxx
      secretName: monocular-tls
kind: List

Nginx Ingress Controller version:

# nginx -v
nginx version: nginx/1.13.5

prydonius commented May 2, 2018

@YunSangJun sorry, I meant the image tag your ingress controller is running. One thing you can try is editing the monocular ingress and changing:

ingress.kubernetes.io/rewrite-target: /

to

nginx.ingress.kubernetes.io/rewrite-target: /
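(A minimal sketch of applying that edit, assuming the annotation-prefix change is what is needed here; newer nginx-ingress controller releases moved their annotations to the nginx.ingress.kubernetes.io prefix, so older-prefix annotations are silently ignored:)

```shell
# Sketch: adds the newer-prefix rewrite annotation to the existing ingress.
# Resource name and namespace are taken from the thread above.
kubectl annotate ingress monocular-monocular -n monocular \
  nginx.ingress.kubernetes.io/rewrite-target=/ --overwrite
```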

YunSangJun commented May 3, 2018

@prydonius Thanks. This seems to be an issue with my ingress configuration.

@YunSangJun YunSangJun closed this May 3, 2018
