
Autoscaler creates fewer pods than requests #15131

Closed
huasiy opened this issue Apr 17, 2024 · 1 comment
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@huasiy

huasiy commented Apr 17, 2024

What version of Knative?

Kubernetes v1.29, Knative v1.13


Expected Behavior

When I send five requests to an existing Knative service that has seven running pods, five pods should be retained when scaling is triggered.
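
For context, with the concurrency metric the autoscaler's desired replica count is roughly the number of in-flight requests divided by the per-pod target. Under the annotations used in the script below (target 1, utilization 100%), the expected count works out like this (a rough sketch of the arithmetic, not taken from the autoscaler source):

# desired pods ≈ ceil(in-flight requests / (target * utilization))
#              = ceil(5 / (1 * 1.0))
#              = 5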

Actual Behavior

Only four pods are retained. Sometimes I even see four of the seven pods deleted and one new pod created immediately, which again leaves four pods.

Steps to Reproduce the Problem

Run this script.

#! /bin/bash

set -ex

echo "Create the app"
cat > /tmp/service <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: delete 
  namespace: default
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/target: "1"
        autoscaling.knative.dev/target-utilization-percentage: "100"
        autoscaling.knative.dev/target-burst-capacity: "1"
        autoscaling.knative.dev/metric: "concurrency"
    spec:
      timeoutSeconds: 180 
      containers:
        - image: docker.io/hisy/delete:latest 
          imagePullPolicy: IfNotPresent
      terminationGracePeriodSeconds: 300
EOF

kn service apply -f /tmp/service 
sleep 5

export APP=$(kubectl get service.serving.knative.dev/delete | grep http | awk '{print $2}')

echo "Wait for pods to be terminated"
while [ $(kubectl get pods 2>/dev/null | wc -l) -ne 0 ];
do
  sleep 5;
done

echo "hit the autoscaler with burst of requests"
for i in `seq 7`; do
    curl -s "$APP?wait=10" 1>/dev/null &
done

echo "wait for the autoscaler to kick in and the bursty requests to finish"
sleep 30

echo "send longer requets"
for i in `seq 5`; do
    curl "$APP?wait=120"&
    sleep 1;
done

Seven pods are created at first, but then only four pods are retained.
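
To make the scale-down visible, watching the service's pods while the 120-second requests are in flight shows the count drop from seven to four (a hypothetical check, assuming the default namespace and the standard serving.knative.dev/service pod label):

# watch pods belonging to the "delete" service as the autoscaler scales it
kubectl get pods -l serving.knative.dev/service=delete -w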

@huasiy huasiy added the kind/bug Categorizes issue or PR as related to a bug. label Apr 17, 2024

This issue is stale because it has been open for 90 days with no
activity. It will automatically close after 30 more days of
inactivity. Reopen the issue with /reopen. Mark the issue as
fresh by adding the comment /remove-lifecycle stale.

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 17, 2024
@github-actions github-actions bot closed this as not planned Won't fix, can't repro, duplicate, stale Aug 16, 2024