
voyager pod replica is changed #940

Closed
steveum0105 opened this issue Mar 16, 2018 · 8 comments

@steveum0105 commented Mar 16, 2018

Voyager version : 5.0.0-rc.11
K8s version : 1.8.1

I tried to add a TLS setup in a namespaced environment.

$ kubectl get deployment

NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
voyager-test-ingress   2         3         1            2           1h


$ kubectl get pods

NAME                                      READY     STATUS    RESTARTS   AGE
voyager-test-ingress-655f6d464b-zw2ph   0/1       Pending   0          10m
voyager-test-ingress-c76d98959-ngvtf    1/1       Running   0          32m
voyager-test-ingress-c76d98959-xt5rp    1/1       Running   0          32m
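The mixed pod names above can be decoded: the segment before the final random suffix is the ReplicaSet's pod-template-hash, so seeing two different hashes at once means the Deployment rolled out a new pod template and is stuck mid-update. A small sketch over the listing quoted above (local only, no cluster needed):

```shell
# Extract the pod-template-hash (second-to-last dash-separated segment)
# from the pod names in the listing above. Two distinct hashes imply two
# ReplicaSets, i.e. a rolling update was triggered.
cat <<'EOF' | awk '{n = split($1, a, "-"); print a[n-1]}' | sort -u
voyager-test-ingress-655f6d464b-zw2ph
voyager-test-ingress-c76d98959-ngvtf
voyager-test-ingress-c76d98959-xt5rp
EOF
```

This prints the two hashes `655f6d464b` and `c76d98959`, matching the DESIRED 2 / CURRENT 3 state of the Deployment.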

The operator log is below:

I0316 08:05:03.880363       1 ingress_crds.go:79] [baf29c23-0c17-4d3a-9db8-6c544f216a6c] voyager.appscode.com/v1beta1 test-ingress@test has changed. Diff: {*v1beta1.Ingress}.ObjectMeta.ResourceVersion:
        -: "4522212"
        +: "4522761"
{*v1beta1.Ingress}.Spec.TLS[0].Hosts:
        -: []string(nil)
        +: []string{"demo-galrae-https.testdomain.com"}
{*v1beta1.Ingress}.Spec.Rules[?->1]:
        -: <non-existent>
        +: v1beta1.IngressRule{Host: "demo-galrae-https.testdomain.com", IngressRuleValue: v1beta1.IngressRuleValue{HTTP: &v1beta1.HTTPIngressRuleValue{Paths: []v1beta1.HTTPIngressPath{{Backend: v1beta1.HTTPIngressBackend{IngressBackend: v1beta1.IngressBackend{ServiceName: "test-102fccfdb8-b447-44d2-8847-efaba459a6e4", ServicePort: intstr.IntOrString{IntVal: 80}, BackendRule: []string{"balance roundrobin", "option httpchk GET /hello", "http-check expect status 200"}}}}}}}}

This is what I applied:

{'apiVersion': 'voyager.appscode.com/v1beta1',
 'kind': 'Ingress',
 'metadata': {'annotations': {'ingress.appscode.com/node-selector': '{"nodetype": "ing-node"}',
                              'ingress.appscode.com/replicas': '2',
                              'ingress.appscode.com/type': 'HostPort',
                              'kubectl.kubernetes.io/last-applied-configuration': '{"apiVersion":"voyager.appscode.com/v1beta1","kind":"Ingress","metadata":{"annotations":{"ingress.appscode.com/node-selector":"{\\"nodetype\\": \\"ing-node\\"}","ingress.appscode.com/replicas":"2","ingress.appscode.com/type":"HostPort"},"name":"test-ingress","namespace":"test"},"spec":{"tls":[{"secretName":"testdomain2.com"},{"secretName":"testdomain.com"}],"tolerations":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/ing-node","operator":"Exists"}]}}\n'},
              'clusterName': '',
              'creationTimestamp': '2018-03-16T06:42:26Z',
              'deletionGracePeriodSeconds': None,
              'deletionTimestamp': None,
              'generation': 0,
              'initializers': None,
              'name': 'test-ingress',
              'namespace': 'test',
              'resourceVersion': '4522761',
              'selfLink': '/apis/voyager.appscode.com/v1beta1/namespaces/test/ingresses/test-ingress',
              'uid': '308540df-28e5-11e8-97f1-fa163e1f5ebe'},
 'spec': {'rules': [{'host': 'demo-galrae-http.testdomain.com',
                     'http': {'paths': [{'backend': {'backendRule': ['balance roundrobin',
                                                                     'option httpchk GET /hello',
                                                                     'http-check expect status 200'],
                                                     'serviceName': 'test-202fccfdb8-b447-44d2-8847-efaba459a6e4',
                                                     'servicePort': 80}}]}},
                    {'host': 'demo-galrae-https.testdomain.com',
                     'http': {'paths': [{'backend': {'backendRule': ['balance roundrobin',
                                                                     'option httpchk GET /hello',
                                                                     'http-check expect status 200'],
                                                     'serviceName': 'test-102fccfdb8-b447-44d2-8847-efaba459a6e4',
                                                     'servicePort': 80}}]}}],
          'tls': [{'hosts': ['demo-galrae-https.testdomain.com'],
                   'secretName': 'testdomain.com'},
                  {'secretName': 'testdomain2.com'}],
          'tolerations': [{'effect': 'NoSchedule',
                           'key': 'node-role.kubernetes.io/ing-node',
                           'operator': 'Exists'}]}}

The HAProxy config and the Ingress are OK,
but the voyager pod has tried rebooting.

@steveum0105 (Author) commented Mar 16, 2018

I see this whenever I test a TLS setup in any namespace except kube-system.

@tamalsaha (Member) commented Mar 16, 2018

Can you show the command you used to install Voyager?

Can you show the logs for the rebooted pods?

@steveum0105 (Author) commented Mar 16, 2018

curl -fsSL https://raw.githubusercontent.com/appscode/voyager/5.0.0-rc.11/hack/deploy/voyager.sh | bash -s -- --provider=minikube --rbac --namespace=test


daemon.info: Mar 16 08:31:14 tlsmounter: Starting TLS mounter ...
daemon.info: Mar 16 08:31:14 tlsmounter: exec voyager tls-mounter --ingress-api-version=voyager.appscode.com/v1beta1 --ingress-name=test-ingress --cloud-provider=minikube --v=3 --qps=1e+06
daemon.info: Mar 16 08:31:14 reloader: Starting HAProxy configuration watcher and reloader ...
daemon.info: Mar 16 08:31:14 reloader: exec voyager kloader run --v=3 --qps=1e+06 --burst=1000000 --boot-cmd=/etc/sv/haproxy/reload --configmap=voyager-test-ingress --mount-location=/etc/haproxy
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:14.987565      30 logs.go:19] FLAG: --alsologtostderr="false"
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:14.987911      30 logs.go:19] FLAG: --analytics="true"
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:14.988244      30 logs.go:19] FLAG: --boot-cmd="/etc/sv/haproxy/reload"
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:14.988361      30 logs.go:19] FLAG: --burst="1000000"
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:14.988444      30 logs.go:19] FLAG: --configmap="voyager-test-ingress"
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:14.988536      30 logs.go:19] FLAG: --help="false"
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:14.988615      30 logs.go:19] FLAG: --kubeconfig=""
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:14.988698      30 logs.go:19] FLAG: --log.format="\"logger:stderr\""
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:14.988788      30 logs.go:19] FLAG: --log.level="\"info\""
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:14.988872      30 logs.go:19] FLAG: --log_backtrace_at=":0"
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:14.989007      30 logs.go:19] FLAG: --log_dir=""
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:14.989130      30 logs.go:19] FLAG: --logtostderr="true"
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:14.989344      30 logs.go:19] FLAG: --master=""
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:14.989453      30 logs.go:19] FLAG: --mount-location="/etc/haproxy"
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:14.989567      30 logs.go:19] FLAG: --qps="1e+06"
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:14.989682      30 logs.go:19] FLAG: --resync-period="5m0s"
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:14.989782      30 logs.go:19] FLAG: --secret=""
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:14.989875      30 logs.go:19] FLAG: --stderrthreshold="2"
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:14.989972      30 logs.go:19] FLAG: --v="3"
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:14.990059      30 logs.go:19] FLAG: --vmodule=""
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:14.995845      35 logs.go:19] FLAG: --alsologtostderr="false"
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:14.996089      35 logs.go:19] FLAG: --analytics="true"
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:14.996318      35 logs.go:19] FLAG: --boot-cmd=""
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:14.996636      35 logs.go:19] FLAG: --burst="1000000"
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:14.996756      35 logs.go:19] FLAG: --cloud-provider="minikube"
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:14.996902      35 logs.go:19] FLAG: --help="false"
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:14.997014      35 logs.go:19] FLAG: --ingress-api-version="voyager.appscode.com/v1beta1"
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:14.997182      35 logs.go:19] FLAG: --ingress-name="test-ingress"
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:14.997563      35 logs.go:19] FLAG: --init-only="false"
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:14.997673      35 logs.go:19] FLAG: --kubeconfig=""
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:14.997774      35 logs.go:19] FLAG: --log.format="\"logger:stderr\""
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:14.997881      35 logs.go:19] FLAG: --log.level="\"info\""
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:14.997981      35 logs.go:19] FLAG: --log_backtrace_at=":0"
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:14.998081      35 logs.go:19] FLAG: --log_dir=""
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:14.998196      35 logs.go:19] FLAG: --logtostderr="true"
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:14.998309      35 logs.go:19] FLAG: --master=""
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:14.998403      35 logs.go:19] FLAG: --mount="/etc/ssl/private/haproxy"
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:14.998525      35 logs.go:19] FLAG: --qps="1e+06"
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:14.998624      35 logs.go:19] FLAG: --resync-period="5m0s"
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:14.998723      35 logs.go:19] FLAG: --stderrthreshold="2"
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:14.998820      35 logs.go:19] FLAG: --v="3"
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:14.999154      35 logs.go:19] FLAG: --vmodule=""
daemon.err: Mar 16 08:31:14 reloader: W0316 08:31:15.036764      30 client_config.go:529] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:15.039702      30 reflector.go:202] Starting reflector *v1.ConfigMap (5m0s) from github.com/appscode/voyager/vendor/github.com/appscode/kloader/controller/mount_configmap.go:111
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:15.039783      30 reflector.go:240] Listing and watching *v1.ConfigMap from github.com/appscode/voyager/vendor/github.com/appscode/kloader/controller/mount_configmap.go:111
daemon.err: Mar 16 08:31:14 tlsmounter: W0316 08:31:15.043797      35 client_config.go:529] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:15.053075      30 util.go:19] Update Received: 1
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:15.053199      30 mount_configmap.go:56] Queued Add event
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:15.053325      30 mount_configmap.go:143] Processing change to ConfigMap test/voyager-test-ingress
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:15.053562      30 util.go:40] calling boot file to execute
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:15.055161      30 util.go:19] Update Received: 2
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:15.062504      30 util.go:43] Output:
daemon.err: Mar 16 08:31:14 reloader:  Configuration file is valid
daemon.err: Mar 16 08:31:14 reloader: ok: run: haproxy: (pid 31) 1s
daemon.err: Mar 16 08:31:14 reloader: I0316 08:31:15.062515      30 util.go:48] boot file executed
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:15.075079      35 controller.go:237] Starting tls-mounter
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:15.075543      35 reflector.go:202] Starting reflector *v1beta1.Certificate (5m0s) from github.com/appscode/voyager/pkg/tlsmounter/controller.go:245
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:15.075571      35 reflector.go:240] Listing and watching *v1beta1.Certificate from github.com/appscode/voyager/pkg/tlsmounter/controller.go:245
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:15.075949      35 reflector.go:202] Starting reflector *v1.Secret (5m0s) from github.com/appscode/voyager/pkg/tlsmounter/controller.go:239
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:15.075959      35 reflector.go:240] Listing and watching *v1.Secret from github.com/appscode/voyager/pkg/tlsmounter/controller.go:239
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:15.076522      35 reflector.go:202] Starting reflector *v1beta1.Ingress (5m0s) from github.com/appscode/voyager/pkg/tlsmounter/controller.go:241
daemon.err: Mar 16 08:31:14 tlsmounter: I0316 08:31:15.076607      35 reflector.go:240] Listing and watching *v1beta1.Ingress from github.com/appscode/voyager/pkg/tlsmounter/controller.go:241
daemon.info: Mar 16 08:31:14 tlsmounter: Sync/Add/Update for Ingress test-ingress
daemon.info: Mar 16 08:31:14 tlsmounter: Sync/Add/Update for Ingress test-ingress
daemon.info: Mar 16 08:31:14 tlsmounter: Sync/Add/Update for Ingress test-ingress
daemon.info: Mar 16 08:31:14 tlsmounter: Sync/Add/Update for Ingress test-ingress
daemon.err: Mar 16 08:31:14 reloader: I0316 08:36:15.053613      30 util.go:19] Update Received: 3

This is the current state for the log above.
I have deleted all my rules; in the end, both voyager pods were rebooted again (the deployment's replica count is now 2).

apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.appscode.com/node-selector: '{"nodetype": "ing-node"}'
    ingress.appscode.com/replicas: "2"
    ingress.appscode.com/type: HostPort
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"voyager.appscode.com/v1beta1","kind":"Ingress","metadata":{"annotations":{"ingress.appscode.com/node-selector":"{\"nodetype\": \"ing-node\"}","ingress.appscode.com/replicas":"2","ingress.appscode.com/type":"HostPort"},"name":"test-ingress","namespace":"test"},"spec":{"tls":[{"secretName":"testdomain.com"},{"secretName":"testdomain2.com"}],"tolerations":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/ing-node","operator":"Exists"}]}}
  clusterName: ""
  creationTimestamp: 2018-03-16T06:42:26Z
  deletionGracePeriodSeconds: null
  deletionTimestamp: null
  generation: 0
  initializers: null
  name: test-ingress
  namespace: test
  resourceVersion: "4526201"
  selfLink: /apis/voyager.appscode.com/v1beta1/namespaces/galrae/ingresses/galrae-ingress
  uid: 308540df-28e5-11e8-97f1-fa163e1f5ebe
spec:
  rules: []
  tls:
  - secretName: testdomain.com
  - secretName: testdomain2.com
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/ing-node
    operator: Exists
@tamalsaha (Member) commented Mar 16, 2018

@steveum0105, can you please try with the new 6.0.0 release and report back?

A few changes are necessary for this:

  • nodeSelector is now part of the spec.
  • backendRule is now backendRules (plural).
apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.appscode.com/replicas: '2'
    ingress.appscode.com/type: HostPort
  name: test-ingress
  namespace: test
spec:
  nodeSelector:
    nodetype: ing-node
  rules:
  - host: demo-galrae-http.testdomain.com
    http:
      paths:
      - backend:
          backendRules:
          - balance roundrobin
          - option httpchk GET /hello
          - http-check expect status 200
          serviceName: test-202fccfdb8-b447-44d2-8847-efaba459a6e4
          servicePort: 80
  - host: demo-galrae-https.testdomain.com
    http:
      paths:
      - backend:
          backendRules:
          - balance roundrobin
          - option httpchk GET /hello
          - http-check expect status 200
          serviceName: test-102fccfdb8-b447-44d2-8847-efaba459a6e4
          servicePort: 80
  tls:
  - hosts:
    - demo-galrae-https.testdomain.com
    secretName: testdomain.com
  - secretName: testdomain2.com
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/ing-node
    operator: Exists
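The field rename above is easy to miss when migrating. A quick local sanity check (no cluster needed; `/tmp/ing.yaml` is a hypothetical file standing in for your edited manifest) that the old singular spelling is gone:

```shell
# Write a fragment of the migrated manifest (stand-in for the real file),
# then verify it uses the 6.0.0 plural key `backendRules:` and no longer
# contains the old singular key `backendRule:`.
cat > /tmp/ing.yaml <<'EOF'
backendRules:
- balance roundrobin
- option httpchk GET /hello
- http-check expect status 200
EOF

if grep -q 'backendRule:' /tmp/ing.yaml; then
  echo "still uses singular backendRule"
else
  echo "ok: uses backendRules"
fi
```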
@steveum0105 (Author) commented Mar 19, 2018

OK, I'm going to test with 6.0 and review again.

@steveum0105 (Author) commented Mar 23, 2018

I upgraded my environment. k8s is now 1.9.5, and voyager is 6.0.0.

The problem happened again and I tested more cases.

Initial pods

kubectl get pods

voyager-test-ingress-8fb675b88-84mg5    1/1       Running   0          18m
voyager-test-ingress-8fb675b88-zwwf4    1/1       Running   0          18m
voyager-operator-79dfc8c9d7-4nkxd         1/1       Running   0          4h

Initial ingress

apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.appscode.com/replicas: "2"
    ingress.appscode.com/type: HostPort
  name: test-ingress
  namespace: test
spec:
  nodeSelector:
    nodetype: ing-node
  tls:
  - secretName: demo-https.test.com
  - secretName: testtest.com
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/ing-node
    operator: Exists

First, I tried the HTTP host. In this case, everything is OK.

http tried pods

voyager-test-ingress-8fb675b88-84mg5    1/1       Running   0          18m
voyager-test-ingress-8fb675b88-zwwf4    1/1       Running   0          18m
voyager-operator-79dfc8c9d7-4nkxd         1/1       Running   0          4h

http tried ingress

apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.appscode.com/replicas: "2"
    ingress.appscode.com/type: HostPort
  name: test-ingress
  namespace: test
spec:
  nodeSelector:
    nodetype: ing-node
  rules:
  - host: demo-http.test.com
    http:
      paths:
      - backend:
          backendRules:
          - balance roundrobin
          - option httpchk GET /hello
          - http-check expect status 200
          serviceName: 202fccfdb8-b447-44d2-8847-efaba459a6e4
          servicePort: 80
  tls:
  - secretName: demo-https.test.com
  - secretName: testtest.com
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/ing-node
    operator: Exists

Second, I tried the HTTPS host. There is a problem:
I can see a pending ingress pod.

https tried pods

voyager-test-ingress-5c695d8bb7-gq7qm   0/1       Pending   0          17m
voyager-test-ingress-8fb675b88-84mg5    1/1       Running   0          35m
voyager-test-ingress-8fb675b88-zwwf4    1/1       Running   0          35m
voyager-operator-79dfc8c9d7-4nkxd         1/1       Running   0          4h

https tried deployments

NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
voyager-test-ingress   2         3         1            2           36m
voyager-operator         1         1         1            1           4h

https tried ingress

apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.appscode.com/replicas: "2"
    ingress.appscode.com/type: HostPort
  name: test-ingress
  namespace: test
spec:
  nodeSelector:
    nodetype: ing-node
  rules:
  - host: demo-http.test.com
    http:
      paths:
      - backend:
          backendRules:
          - balance roundrobin
          - option httpchk GET /hello
          - http-check expect status 200
          serviceName: 202fccfdb8-b447-44d2-8847-efaba459a6e4
          servicePort: 80
  - host: demo-https.test.com
    http:
      paths:
      - backend:
          backendRules:
          - balance roundrobin
          - option httpchk GET /
          - http-check expect status 200
          serviceName: 102fccfdb8-b447-44d2-8847-efaba459a6e4
          servicePort: 80
  tls:
  - hosts:
    - demo-https.test.com
    secretName: demo-https.test.com
  - secretName: testtest.com
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/ing-node
    operator: Exists
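A plausible explanation (my assumption, not confirmed anywhere in this thread): with `ingress.appscode.com/type: HostPort`, each HAProxy pod binds the host's ports, so at most one voyager pod can run per matching node. A rolling update's surge pod then has no node with free host ports and stays Pending. Sketched as arithmetic with the hypothetical numbers implied by the listings above:

```shell
# Hypothetical numbers: two nodes labeled nodetype=ing-node, both already
# running a HostPort HAProxy pod. A rolling update wants one extra (surge)
# pod, but no matching node has free host ports, so it stays Pending.
nodes=2; running=2; surge=1
free=$((nodes - running))
if [ "$free" -lt "$surge" ]; then
  echo "surge pod stays Pending (no node with free host ports)"
fi
```

If this is the cause, the pending pod's `kubectl describe pod` Events section would name the scheduling conflict explicitly.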
@steveum0105 (Author) commented Mar 23, 2018

Additionally, the deletion order matters.

When I delete the HTTP host first and the HTTPS host second, I can still see a pending ingress pod.
However, when I delete the HTTPS host first, the pending ingress pod disappears.


I think it's related to the TLS setup.
The HAProxy config is completely OK in every case I tested, but the ingress pod is not.

@steveum0105 (Author) commented Apr 4, 2018

This no longer happens after updating to the latest k8s and Voyager.

steveum0105 closed this Apr 4, 2018
