This repository has been archived by the owner on Apr 17, 2019. It is now read-only.

GLBC ingress controller troubleshooting information is out of date, and unclear after v1.3 changes #1839

Closed
miend opened this issue Oct 5, 2016 · 4 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@miend

miend commented Oct 5, 2016

The troubleshooting section of the README for the GLBC ingress controller says to look at the controller pod's logs for troubleshooting. According to the information in #1733, as of v1.3 the L7 controller runs on the master, not as a regular pod, and Google, of course, does not allow access to the master on GKE clusters.

First, this means the GLBC ingress controller documentation should be updated to reflect its current status. Second, how am I supposed to get logs from the ingress controller to troubleshoot potential issues with it if I cannot access it? Or is there some other recommended method for troubleshooting it? I'm currently receiving a 502 Bad Gateway error when trying to access services/pods via ingress that, as far as I can tell, conform exactly to the standards set in the documentation. I thought getting these logs would help me rule out any issues on the controller's end, but I've found I can't get to them at all.
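For reference, a minimal sketch of what can still be inspected from kubectl when the controller itself is unreachable; the resource name and namespace below are placeholders, not taken from this thread:

```sh
# Events recorded against the ingress object are the main client-side signal
# when the controller's own logs are out of reach.
kubectl describe ingress my-ingress --namespace default

# Cluster events and the backing service/endpoints can also rule out obvious
# misconfiguration (missing endpoints are a common cause of 502s).
kubectl get events --namespace default
kubectl get svc,endpoints --namespace default
```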

@miend miend changed the title GCE ingress controller troubleshooting information is out of date, and unclear after v1.3 changes GLBC ingress controller troubleshooting information is out of date, and unclear after v1.3 changes Oct 5, 2016
@bprashanth

We should be surfacing more of this through events (#1369), but in the meantime, can you run kubectl describe on the ingress? Can you give me the ingress/service etc. for a repro? Your service needs to be type=NodePort and you need to serve a health check on / or define a readiness probe, e.g.:
https://github.com/kubernetes/contrib/tree/master/ingress/controllers/gce#health-checks
https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/examples/health_checks/README.md
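For illustration, a minimal sketch of a backend that satisfies both requirements above (a NodePort service and a readiness probe on /); the names, image, and ports here are placeholders, not taken from this thread:

```yaml
# Sketch only -- every name here is a placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: echo
  labels:
    app: echo
spec:
  containers:
  - name: echo
    image: gcr.io/google_containers/echoserver:1.4
    ports:
    - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /        # GLBC derives the GCE health check from this probe
        port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echo-svc
spec:
  type: NodePort       # GLBC only accepts NodePort backends
  selector:
    app: echo
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-ing
spec:
  backend:
    serviceName: echo-svc
    servicePort: 80
```

With a setup along these lines, kubectl describe on the ingress should show the GCE backends reporting HEALTHY once the probe passes; a persistent 502 usually points at the health check or the service type.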

@fejta-bot

Issues go stale after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 18, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 17, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
