
CrashLoopBackOff in Rancher #376

Closed
Dgadavin opened this issue May 4, 2018 · 8 comments

Labels
triage/needs-information Indicates an issue needs more information in order to work on it.

@Dgadavin commented May 4, 2018

Hi. I have an issue running the alb-ingress-controller on Rancher.

I've done everything as described in the README, but my pod is unhealthy. Here is what I see in the logs:

I0504 14:34:08.809359       1 launch.go:112] &{ALB Ingress Controller 1.0-alpha.9 git-a01b40ac git://github.com/coreos/alb-ingress-controller}
I0504 14:34:08.809587       1 launch.go:282] Creating API client for https://10.43.0.1:443
I0504 14:34:08.820374       1 launch.go:295] Running in Kubernetes Cluster version v1.10 (v1.10.1) - git (clean) commit d4ab47518836c750f9949b9e0d387f20fb92260b - platform linux/amd64

After that, the pod errors out.
Here is the output of kubectl get po --all-namespaces:

NAMESPACE       NAME                                      READY     STATUS             RESTARTS   AGE
cattle-system   cattle-cluster-agent-7bd646566f-p5wgw     1/1       Running            0          1d
cattle-system   cattle-node-agent-cmb8n                   1/1       Running            0          1d
default         2048-deployment-5664c74dcc-8phsh          1/1       Running            0          6m
default         2048-deployment-5664c74dcc-tg8q6          1/1       Running            0          6m
ingress-nginx   nginx-ingress-controller-kb57t            1/1       Running            0          2h
kube-system     alb-ingress-controller-7f6fb954bb-9r4gp   0/1       CrashLoopBackOff   4          2m
kube-system     canal-fqpmq                               3/3       Running            0          1d
kube-system     default-http-backend-7ff44df5b7-r9f7v     1/1       Running            0          2h
kube-system     kube-dns-7dfdc4897f-nbvk7                 3/3       Running            0          1d
kube-system     kube-dns-autoscaler-6c4b786f5-5ks4t       1/1       Running            0          1d

Also, here is the output of kubectl get svc --all-namespaces:

NAMESPACE       NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default         kubernetes             ClusterIP   10.43.0.1       <none>        443/TCP         2h
default         service-2048           NodePort    10.43.248.183   <none>        80:32329/TCP    8m
ingress-nginx   default-http-backend   ClusterIP   10.43.9.98      <none>        80/TCP          1d
kube-system     default-http-backend   ClusterIP   10.43.190.161   <none>        80/TCP          2h
kube-system     kube-dns               ClusterIP   10.43.0.10      <none>        53/UDP,53/TCP   1d

Please help me understand what's wrong with my setup.
Thanks!

@lareeth commented Jun 14, 2018

If you run the following command, it will tell you why it's in CrashLoopBackOff:

 kubectl describe pod alb-ingress-controller-7f6fb954bb-9r4gp

There are newer images available; try using either the 82b0003 or latest tag.

@bigkraig

You can also add -p to kubectl logs to get the logs from the previous execution of the pod, but without more information it's not possible to determine why it's in a CrashLoopBackOff state.
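For reference, the two suggestions above sketched as commands, assuming the pod name from the kubectl get po output earlier in the thread (substitute your own):

```shell
# Show recent events for the pod (image pull errors, failed probes, OOM kills, ...)
kubectl describe pod alb-ingress-controller-7f6fb954bb-9r4gp -n kube-system

# -p / --previous prints the logs of the last terminated container,
# which is usually where the actual crash reason appears
kubectl logs -p alb-ingress-controller-7f6fb954bb-9r4gp -n kube-system
```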

@bigkraig bigkraig added the triage/needs-information Indicates an issue needs more information in order to work on it. label Jun 18, 2018
@Dgadavin (Author)

It happened because my Kubernetes cluster has RBAC enabled; once I updated my manifest with the RBAC configuration, everything started working. I think it would be great if you added an example configuration of the alb-ingress-controller with an RBAC setup to the examples.
Everything works well now, so we can close the issue.
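Roughly what I added, as a minimal sketch. The exact resource list and verbs below are an approximation of what an ingress controller typically needs (watch ingresses/services/endpoints, write events and ingress status); the authoritative set is whatever lands in the repo's examples, so adjust for your cluster:

```yaml
# Minimal RBAC sketch for the alb-ingress-controller.
# Verbs/resources are an approximation; tighten or extend as needed.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: alb-ingress-controller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: alb-ingress-controller
rules:
  - apiGroups: [""]
    resources: [services, endpoints, pods, nodes, events, configmaps, secrets]
    verbs: [get, list, watch, create, patch, update]
  - apiGroups: [extensions]
    resources: [ingresses]
    verbs: [get, list, watch]
  - apiGroups: [extensions]
    resources: [ingresses/status]
    verbs: [update]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: alb-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: alb-ingress-controller
subjects:
  - kind: ServiceAccount
    name: alb-ingress-controller
    namespace: kube-system
```

The controller's Deployment pod spec also needs serviceAccountName: alb-ingress-controller, otherwise it keeps running as the default service account and crashes on the same authorization errors.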

@bigkraig

Would you mind either submitting a PR with those required RBAC changes or dumping them in here so I can make sure a working set is added?

@Dgadavin (Author)

Ok. Will do.

@Dgadavin (Author) commented Jun 25, 2018

I've created a PR: #414

@bigkraig

Thanks @Dgadavin, can you sign the CLA?

@Dgadavin (Author)

@bigkraig Done.
