Companion project for the post *What Breaks First at 10k Concurrent Connections in ASP.NET Core*.
It contains a single ASP.NET Core 10 minimal API with both the fragile and the fixed version of every endpoint, plus three k6 scripts that reproduce each failure mode and verify each fix.
```
concurrency-lab/
├── ConcurrencyLab.sln
├── global.json                  # pins to the .NET 10 SDK
├── src/Concurrency.Api/
│   ├── Concurrency.Api.csproj
│   ├── Program.cs               # all bad/* and good/* endpoints
│   └── appsettings.json
└── k6/
    ├── storm.js                 # thread pool starvation
    ├── http-storm.js            # HttpClient socket exhaustion
    └── io-storm.js              # backpressure / rate limiting
```
| Route | Purpose |
|---|---|
| `GET /health` | Trivial liveness check. |
| `GET /metrics` | Live thread-pool counters and GC memory. |
| `GET /bad/blocking` | Sync `Thread.Sleep` - starves the thread pool. |
| `GET /bad/http` | `new HttpClient()` per request - exhausts ephemeral ports. |
| `GET /bad/io` | No `CancellationToken`, no limit - unbounded queue. |
| `GET /good/blocking` | `await Task.Delay`, accepts a `CancellationToken`. |
| `GET /good/http` | `IHttpClientFactory` + tuned `SocketsHttpHandler`. |
| `GET /good/io` | Wrapped in a `ConcurrencyLimiter` (`PermitLimit=500`). |
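As a rough sketch of what the first bad/good pair in the table might look like in `Program.cs` (the one-second delay and the response bodies are illustrative assumptions, not the repo's exact code):

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// BAD: blocks a thread-pool thread for the full delay.
app.MapGet("/bad/blocking", () =>
{
    Thread.Sleep(1000);          // thread is held hostage while "waiting"
    return Results.Ok("done");
});

// GOOD: yields the thread back to the pool and honours cancellation
// (minimal APIs bind CancellationToken from the request automatically).
app.MapGet("/good/blocking", async (CancellationToken ct) =>
{
    await Task.Delay(1000, ct);  // no thread is blocked during the wait
    return Results.Ok("done");
});

app.Run();
```

The difference only shows under load: a few hundred concurrent `Thread.Sleep` calls pin that many pool threads, while `Task.Delay` costs no thread at all.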
```bash
cd src/Concurrency.Api
dotnet run -c Release
# Now listening on: http://0.0.0.0:5080
```

Always use `-c Release`. Debug builds disable JIT optimisations and produce misleading latency numbers.
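The `/metrics` endpoint you will poll during the load tests can be as simple as this sketch (the property names are assumptions, not the repo's exact payload):

```csharp
// Live thread-pool counters plus GC memory, cheap enough to poll every second.
app.MapGet("/metrics", () => Results.Ok(new
{
    threadCount        = ThreadPool.ThreadCount,            // live pool threads
    pendingWorkItems   = ThreadPool.PendingWorkItemCount,   // queued, not yet running
    completedWorkItems = ThreadPool.CompletedWorkItemCount, // total processed
    gcTotalMemoryBytes = GC.GetTotalMemory(forceFullCollection: false)
}));
```

Watching `pendingWorkItems` climb while `threadCount` grows slowly is the signature of thread-pool starvation.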
```bash
# Reproduce the failure
k6 run k6/storm.js

# Verify the fix
k6 run -e TARGET=good k6/storm.js
```

While the test is running, watch the thread pool drain:
```powershell
# Windows PowerShell
while ($true) { Invoke-RestMethod http://localhost:5080/metrics; Start-Sleep -Seconds 1 }
```

```bash
# Linux / macOS
watch -n 1 "curl -s http://localhost:5080/metrics | jq"
```

```bash
k6 run k6/http-storm.js                 # exhausts ephemeral ports
k6 run -e TARGET=good k6/http-storm.js  # stable, pooled
```

Verify ephemeral port usage on Windows:

```powershell
(netstat -an | Select-String TIME_WAIT).Count
```

On Linux:

```bash
ss -s
```

```bash
k6 run -e TARGET=bad k6/io-storm.js     # latency snowball
k6 run k6/io-storm.js                   # mix of 200 / 429, zero failures
```

A 429 response means the rate limiter shed the request on purpose. That is the correct behaviour, and the k6 check counts it as a success.
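One plausible way to wire such a limiter with ASP.NET Core's rate-limiting middleware (the policy name `io` is an assumption; note that `RejectionStatusCode` must be set explicitly, because the middleware rejects with 503 by default):

```csharp
using System.Threading.RateLimiting;

builder.Services.AddRateLimiter(limiter =>
{
    limiter.RejectionStatusCode = StatusCodes.Status429TooManyRequests;
    limiter.AddConcurrencyLimiter("io", options =>
    {
        options.PermitLimit = 500;   // requests allowed in flight at once
        options.QueueLimit  = 100;   // requests allowed to wait before shedding
        options.QueueProcessingOrder = QueueProcessingOrder.OldestFirst;
    });
});

var app = builder.Build();
app.UseRateLimiter();

app.MapGet("/good/io", async (CancellationToken ct) =>
{
    await Task.Delay(50, ct);        // stand-in for the real I/O work
    return Results.Ok("done");
}).RequireRateLimiting("io");
```

Request 601 (500 in flight + 100 queued) is rejected immediately instead of joining an ever-growing queue, which is exactly the behaviour the k6 check asserts.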
- The lab listens on `0.0.0.0:5080` so a load generator on a second machine can reach it. If you only ever run k6 on the same box, change it to `127.0.0.1:5080` in `Program.cs`.
- The `IHttpClientFactory` configuration deliberately calls back into `http://localhost:5080/health`. In a real service this would be your downstream API.
- The rate limiter uses a `ConcurrencyLimiter` with `PermitLimit=500` and `QueueLimit=100`. See Step 7 in the post for how to choose those numbers for your own workload.
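A sketch of how the pooled-client registration described in the second note might look (the client name `downstream` and the handler tuning values are assumptions, not the repo's exact settings):

```csharp
builder.Services.AddHttpClient("downstream", client =>
{
    client.BaseAddress = new Uri("http://localhost:5080");
})
.ConfigurePrimaryHttpMessageHandler(() => new SocketsHttpHandler
{
    PooledConnectionLifetime = TimeSpan.FromMinutes(2), // recycle so DNS changes are picked up
    MaxConnectionsPerServer  = 100                      // cap sockets per endpoint
});

// The good endpoint borrows a pooled connection instead of opening a new socket.
app.MapGet("/good/http", async (IHttpClientFactory factory, CancellationToken ct) =>
{
    var client = factory.CreateClient("downstream");
    var body = await client.GetStringAsync("/health", ct);
    return Results.Ok(body);
});
```

Because the handler is shared, connections sit in a pool instead of piling up in `TIME_WAIT` the way per-request `new HttpClient()` instances do.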