Pod rolling update causes errors in request flow #76580

Open

luweiv9988 opened this issue Apr 15, 2019 · 1 comment
luweiv9988 commented Apr 15, 2019

What happened:
When a Kubernetes rolling update kills off an old pod, a small amount of traffic is still routed to that pod, which causes HTTP 502 errors (PHP application).

What you expected to happen:

Rolling updates should complete without any 502s; once a pod starts terminating, no new requests should be routed to it.

Here is my configuration:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-chongzhi
  namespace: juhe-test
  labels: 
    app: php-chongzhi
spec:
  selector:
    matchLabels:
      app: php-chongzhi
  replicas: 8
  minReadySeconds: 10
  strategy:
    # which strategy to use when replacing old pods
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  # keep at most 5 old revisions for rollback
  revisionHistoryLimit: 5
  template:
    metadata:
      labels:
        app: php-chongzhi
    spec:
      initContainers:
      - name: copywebdata
        image: registry.cn-hangzhou.aliyuncs.com/kubernetes_hub/chongzhi.juhe.cn:f705bedb 
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Copy web data into the shared volume mounted at /mnt/
          cp -rf /data/www/nginx/chongzhi.juhe.cn/ /mnt/
        volumeMounts:
        - name: chongzhi-data
          mountPath: /mnt/
        - name: localtime
          mountPath: /etc/localtime

      imagePullSecrets:
      - name: pullpass

      containers:
      - name: phpfpm
        image: registry.cn-hangzhou.aliyuncs.com/kubernetes_hub/chongzhi.juhe.cn:f705bedb 
        ports:
        - name: php
          containerPort: 9000
        resources:
          requests:
            cpu: 200m
            memory: 256Mi
          limits:
            cpu: "500m"
            memory: "512Mi"
        readinessProbe:
          periodSeconds: 1
          timeoutSeconds: 1
          tcpSocket: 
            port: 9000
        env:
        - name: aliyun_logs_php-chongzhi
          value: "stdout"
        - name: aliyun_logs_php-chongzhi_tag
          value: app=php-chongzhi

      volumes:
      - name: localtime
        hostPath: 
          path: /etc/localtime
      - name: chongzhi-data
        persistentVolumeClaim: 
          claimName: website-data

---
apiVersion: v1
kind: Service
metadata:
  name: php-chongzhi
  namespace: juhe-test
  labels:
    app: php-chongzhi
spec:
  ports:
  - name: php
    port: 9000
  selector:
    app: php-chongzhi
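
For reference, the pods this Service actually routes to can be watched while a rollout is in progress (names taken from the manifests above); a terminating pod should disappear from the endpoint list before it stops serving:

$ kubectl -n juhe-test get endpoints php-chongzhi -o wide -w
$ kubectl -n juhe-test get pods -l app=php-chongzhi -w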

How to reproduce it (as minimally and precisely as possible):

$ kubectl apply -f deployment.yaml --record
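
To actually see the 502s, trigger a new rollout and poll the site while it progresses. A rough sketch (the image tag is a placeholder for whatever new revision is being deployed, and the URL stands in for the nginx front end that proxies to this php-fpm Service):

$ kubectl -n juhe-test set image deployment/php-chongzhi \
    phpfpm=registry.cn-hangzhou.aliyuncs.com/kubernetes_hub/chongzhi.juhe.cn:<new-tag>
$ kubectl -n juhe-test rollout status deployment/php-chongzhi

# in a second terminal, print any non-200 responses while the rollout runs
$ while true; do curl -s -o /dev/null -w '%{http_code}\n' https://chongzhi.juhe.cn/; sleep 0.2; done | grep -v '^200'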

Anything else we need to know?:

  • PHP application
  • HTTP 502

The old pod is terminating, so Kubernetes should treat it as dead, but user traffic can still reach the
terminating pod and causes application errors :(
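
I have not tried it yet, but a commonly suggested workaround for this race is a preStop hook that keeps the container alive for a few seconds after termination starts, so the endpoints controller and kube-proxy have time to stop routing to the pod before php-fpm shuts down. A minimal sketch against the phpfpm container above, assuming the image ships a sleep binary; the 10-second sleep and the explicit grace period are guesses, not values from my current config:

    spec:
      terminationGracePeriodSeconds: 30   # must cover the preStop sleep plus php-fpm shutdown
      containers:
      - name: phpfpm
        lifecycle:
          preStop:
            exec:
              # keep serving while endpoint and kube-proxy rule updates propagate
              command: ["sleep", "10"]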

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-30T21:39:16Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.5", GitCommit:"753b2dbc622f5cc417845f0ff8a77f539a4213ea", GitTreeState:"clean", BuildDate:"2018-11-26T14:31:35Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: Alicloud
  • OS (e.g: cat /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

luweiv9988 commented Apr 16, 2019

/sig scheduling
