Example guestbook all-in-one fails #417

Closed
hect1995 opened this issue Apr 27, 2021 · 4 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

hect1995 commented Apr 27, 2021

I am deploying the guestbook all-in-one example into OpenShift 4 as follows:

argocd app create guestbook --repo https://github.com/kubernetes/examples.git --path guestbook/all-in-one --dest-server https://kubernetes.default.svc --dest-namespace argocd

argocd app sync guestbook

$ oc get pods
NAME                                  READY   STATUS             RESTARTS   AGE
argocd-application-controller-0       1/1     Running            0          150m
argocd-dex-server-5dd657bd9-65lg7     1/1     Running            2          150m
argocd-operator-df9b47968-8xshg       1/1     Running            0          83m
argocd-redis-759b6bc7f4-4749g         1/1     Running            0          150m
argocd-repo-server-6c495f858f-qp9l5   1/1     Running            0          150m
argocd-server-859b4b5578-s8n29        1/1     Running            0          150m
frontend-85595f5bf9-2hcst             0/1     CrashLoopBackOff   7          12m
frontend-85595f5bf9-gh4ss             0/1     CrashLoopBackOff   7          12m
frontend-85595f5bf9-tpbt9             0/1     CrashLoopBackOff   7          12m
redis-follower-dddfbdcc9-lz8v5        1/1     Running            0          12m
redis-follower-dddfbdcc9-vnp9n        1/1     Running            0          12m
redis-leader-fb76b4755-6nn2n          1/1     Running            0          12m

In the logs of all frontend pods I get:

AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.129.2.226. Set the 'ServerName' directive globally to suppress this message

(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80

(13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80

no listening sockets available, shutting down

AH00015: Unable to open logs
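
Presumably this is because OpenShift's default restricted SCC runs containers as an arbitrary non-root UID, so httpd cannot bind the privileged port 80 and the pods crash-loop. One possible workaround (untested sketch, assuming a cluster admin is willing to relax security for the default service account in the argocd namespace used above):

# Untested sketch: allow pods using the default service account in the
# argocd namespace to run with any UID, so httpd can bind port 80
oc adm policy add-scc-to-user anyuid -z default -n argocd

# Restart the frontend deployment so new pods pick up the relaxed constraint
oc rollout restart deployment/frontend -n argocd

The cleaner fix would be a frontend image or configuration that listens on an unprivileged port such as 8080, but that requires changing the example manifests rather than the cluster policy.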
@k8s-triage-robot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on Jul 27, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label (Denotes an issue or PR that has aged beyond stale and will be auto-closed.) and removed the lifecycle/stale label on Aug 26, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
