Describe the bug
When configuring a Canary object to use session affinity with a Kubernetes API Gateway, as described in Session Affinity, I ran a K6 test to verify that users stayed assigned to a version and weren't shifted back during a successful deploy.
I noticed that within 1 second, all the users were assigned to the next version.
I believe this is happening because the HTTPRoute being created doesn't pin the user to the primary version.
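For illustration only (not the exact route Flagger generates): a cookie-based header match of the following shape is what pins returning users to a backend. The reported problem is that the generated HTTPRoute only applies this kind of pinning toward the canary, so users who should stay on the primary are not held there. The rule below is a minimal sketch using the cookieName from the Canary spec; the backend names are assumptions.

---
# Sketch: a cookie match rule that would pin cookie-holders to one backend.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: charmander
  namespace: charmander
spec:
  rules:
    - matches:
        - headers:
            - type: RegularExpression
              name: Cookie
              value: ".*flagger-cookie.*"
      backendRefs:
        - name: charmander-canary   # hypothetical backend name
          port: 9898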
Note: charmander is a deployment of ghcr.io/stefanprodan/podinfo.
To Reproduce
K8s Yaml and K6 script
---
# Source: charmander/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: charmander
  namespace: charmander
  labels:
    app.kubernetes.io/name: charmander
    app.kubernetes.io/component: "web"
spec:
  minReadySeconds: 5
  replicas: 3
  revisionHistoryLimit: 5
  progressDeadlineSeconds: 60
  strategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  selector:
    matchLabels:
      app.kubernetes.io/name: charmander
      app.kubernetes.io/component: "web"
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9797"
        unique-title: 'greetings from deploy v1'
      labels:
        app.kubernetes.io/name: charmander
        app.kubernetes.io/component: "web"
    spec:
      containers:
        - name: podinfod
          image: ghcr.io/stefanprodan/podinfo:6.5.0
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 9898
              protocol: TCP
            - name: http-metrics
              containerPort: 9797
              protocol: TCP
            - name: grpc
              containerPort: 9999
              protocol: TCP
          command:
            - ./podinfo
            - --port=9898
            - --port-metrics=9797
            - --grpc-port=9999
            - --grpc-service-name=podinfo
            - --level=info
            - --random-delay=false
            - --random-error=true
          env:
            - name: PODINFO_UI_COLOR
              value: "#34577c"
            - name: PODINFO_UI_MESSAGE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.annotations['unique-title']
          startupProbe:
            exec:
              command:
                - podcli
                - check
                - http
                - localhost:9898/healthz
            initialDelaySeconds: 30
            timeoutSeconds: 5
          resources:
            limits:
              cpu: 2000m
              memory: 512Mi
            requests:
              cpu: 100m
              memory: 64Mi
---
# Source: charmander/templates/canary.yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: charmander-canary
  namespace: charmander
spec:
  # when set to true, deploy will auto succeed; only use during an emergency.
  skipAnalysis: false
  # deployment reference
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: charmander
  # the maximum time in seconds for the canary deployment
  # to make progress before it is rolled back (default 600s)
  progressDeadlineSeconds: 120
  service:
    gatewayRefs:
      - group: gateway.networking.k8s.io
        kind: Gateway
        name: default-gateway
        namespace: istio-ingress
    hosts:
      - 'charmander.example.com'
    port: 9898
    targetPort: 9898
  analysis:
    interval: 1m
    maxWeight: 50
    metrics: []
    sessionAffinity:
      cookieName: flagger-cookie
      maxAge: 3600
    stepWeight: 10
    threshold: 5
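As an aside on the mechanism being tested: with sessionAffinity configured, a user becomes "pinned" by receiving a cookie (cookieName: flagger-cookie above) in a response, and the gateway is expected to route subsequent requests carrying that cookie back to the same version. A minimal client-side sketch of that handshake, using Python's standard cookie parser (the cookie value here is made up for illustration):

```python
# Sketch: parsing the Set-Cookie header a pinned client would receive.
# The value "vxLAoiPAdM" is hypothetical; Max-Age matches maxAge: 3600 above.
from http.cookies import SimpleCookie

set_cookie = "flagger-cookie=vxLAoiPAdM; Max-Age=3600"
jar = SimpleCookie()
jar.load(set_cookie)

# Subsequent requests should send this cookie back, keeping the user
# on the version that issued it.
assert jar["flagger-cookie"].value == "vxLAoiPAdM"
assert jar["flagger-cookie"]["max-age"] == "3600"
```

This is also why the k6 options below set noCookiesReset: true — each VU must keep its cookie jar across iterations for the affinity to be observable.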
Then run the k6 script:
import http from 'k6/http';
import { check, sleep } from 'k6';

export const URL = "https://charmander.example.com/"

export const options = {
  // A number specifying the number of VUs to run concurrently.
  vus: 6,
  // A string specifying the total duration of the test run.
  duration: '600s',
  // Disable clearing cookies
  noCookiesReset: true
};

function parseRevision(resp) {
  try {
    return resp.json().message;
  } catch (e) {
    return null
  }
}

export function setup() {
  return { revision: null, changeCount: 0 };
}

export default function (data) {
  var resp = http.get(URL);
  var revision = parseRevision(resp);
  if (data.revision == null) {
    console.log(`VU initial version ${revision}`)
    data.revision = revision;
  }
  if (revision && revision !== data.revision) {
    data.changeCount++;
    console.log(data.revision + " : " + revision)
    data.revision = revision;
  }
  check(resp, {
    'changeCount < 2': () => data.changeCount < 2
  });
}

export function teardown(data) {
  console.log(data);
}
The output looks like:
scenarios: (100.00%) 1 scenario, 6 max VUs, 10m30s max duration (incl. graceful stop):
* default: 6 looping VUs for 10m0s (gracefulStop: 30s)
INFO[0000] VU initial version greetings from deploy v2 source=console
INFO[0000] VU initial version greetings from deploy v1 source=console
INFO[0000] VU initial version greetings from deploy v1 source=console
INFO[0000] VU initial version greetings from deploy v2 source=console
INFO[0000] VU initial version greetings from deploy v1 source=console
INFO[0000] VU initial version greetings from deploy v1 source=console
INFO[0000] greetings from deploy v1 : greetings from deploy v2 source=console
INFO[0000] greetings from deploy v1 : greetings from deploy v2 source=console
INFO[0000] greetings from deploy v1 : greetings from deploy v2 source=console
INFO[0001] greetings from deploy v1 : greetings from deploy v2 source=console
INFO[0600] {"changeCount":0,"revision":null} source=console
✓ changeCount < 2
█ setup
█ teardown
checks.........................: 100.00% ✓ 63985 ✗ 0
data_received..................: 27 MB 46 kB/s
data_sent......................: 3.0 MB 4.9 kB/s
http_req_blocked...............: avg=50.85µs min=0s med=1µs max=695.65ms p(90)=1µs p(95)=1µs
http_req_connecting............: avg=11.94µs min=0s med=0s max=86.31ms p(90)=0s p(95)=0s
http_req_duration..............: avg=55.93ms min=33.96ms med=53.5ms max=461.31ms p(90)=64.63ms p(95)=78.13ms
{ expected_response:true }...: avg=56.53ms min=33.96ms med=53.33ms max=461.31ms p(90)=66.94ms p(95)=87.43ms
http_req_failed................: 35.18% ✓ 22515 ✗ 41470
http_req_receiving.............: avg=1.57ms min=6µs med=46µs max=308.44ms p(90)=122µs p(95)=413.79µs
http_req_sending...............: avg=80.69µs min=8µs med=43µs max=26.45ms p(90)=85µs p(95)=130µs
http_req_tls_handshaking.......: avg=32.48µs min=0s med=0s max=301.47ms p(90)=0s p(95)=0s
http_req_waiting...............: avg=54.28ms min=33.81ms med=53.06ms max=461.21ms p(90)=61.73ms p(95)=65.97ms
http_reqs......................: 63985 106.637746/s
iteration_duration.............: avg=56.24ms min=1.79µs med=53.74ms max=772.84ms p(90)=64.99ms p(95)=78.55ms
iterations.....................: 63985 106.637746/s
vus............................: 6 min=6 max=6
vus_max........................: 6 min=6 max=6
running (10m00.0s), 0/6 VUs, 63985 complete and 0 interrupted iterations
default ✓ [======================================] 6 VUs 10m0s
Expected behavior
While the canary analysis is running, each user stays pinned to the version they were first assigned, and the split of newly assigned users roughly tracks the configured canary weight.
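To make "the correct percent" concrete: with the analysis settings above (stepWeight: 10, maxWeight: 50, interval: 1m), the canary weight should climb in steps, and only newly assigned users should be split at the current weight while pinned users stay put. A small sketch of that expected progression (the promotion behavior after maxWeight is an assumption based on the configured values, not something this test observed):

```python
# Sketch: expected canary traffic weight at each analysis interval,
# derived from stepWeight=10 and maxWeight=50 in the Canary spec above.
def weight_schedule(step_weight: int, max_weight: int) -> list[int]:
    """Canary percentage for new (unpinned) users at each interval."""
    return list(range(step_weight, max_weight + 1, step_weight))

print(weight_schedule(10, 50))  # [10, 20, 30, 40, 50]
```

The observed behavior instead had all six VUs (pinned and unpinned alike) on the new version within one second, which is what suggests the primary side of the HTTPRoute is not honoring the affinity cookie.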
Additional context
Flagger version: 1.36.1
Kubernetes version: 1.25
Service Mesh provider: GatewayAPI + Istio 1.20.3
Ingress provider: GatewayAPI