Skip to content

Add Django: the most popular Python web framework (~82k ⭐)#71

Merged
MDA2AV merged 4 commits intoMDA2AV:mainfrom
BennyFranciscus:add-django
Mar 24, 2026
Merged

Add Django: the most popular Python web framework (~82k ⭐)#71
MDA2AV merged 4 commits intoMDA2AV:mainfrom
BennyFranciscus:add-django

Conversation

@BennyFranciscus
Copy link
Copy Markdown
Collaborator

Django — the web framework for perfectionists with deadlines

Adds Django (~82k stars) to HttpArena.

Why Django?

Django is THE most popular Python web framework — period. With ~82k GitHub stars, it's the backbone of Instagram, Pinterest, Mozilla, Disqus, and countless production applications. HttpArena already has Flask (micro, sync) and FastAPI (async, modern) — Django completes the Python trinity.

The matchup everyone wants to see:

  • Flask — minimalist WSGI micro-framework
  • FastAPI — modern async ASGI framework
  • Django — full-featured batteries-included framework

Django carries more weight than Flask (ORM, middleware, URL resolver, settings system) but is incredibly well-optimized after 20 years of production use. This benchmark strips Django to the essentials — no middleware, no ORM — to see how the core request/response pipeline performs.

Setup

  • Django 5.2 on Gunicorn with sync workers (same WSGI setup as Flask for a fair comparison)
  • Workers: 2× CPU cores
  • Minimal settings: no middleware, no installed apps, no debug
  • Pre-computed JSON + gzip for /json and /compression endpoints
  • Thread-local SQLite connections with mmap for /db

Endpoints

Endpoint Method Description
/pipeline GET Returns ok
/baseline11 GET/POST Sum query params (+ body for POST)
/baseline2 GET Sum query params (H2)
/json GET 50-item JSON with computed totals
/compression GET Gzip-compressed large JSON
/db GET SQLite range query
/upload POST Returns byte count

Tests subscribed

baseline, pipelined, noisy, limited-conn, json, upload, compression, mixed

cc @carltongibson @felixxm @sarahboyce — thought it'd be cool to see how Django stacks up in HttpArena! Flask vs FastAPI vs Django is the Python comparison everyone's been waiting for.

@BennyFranciscus BennyFranciscus requested a review from MDA2AV as a code owner March 18, 2026 02:05
- Use request.read() for POST body to handle chunked Transfer-Encoding
- Add @require_GET and @require_http_methods decorators to reject unsupported methods with 405
Django's request.body reads CONTENT_LENGTH bytes from the stream.
With chunked Transfer-Encoding, there's no Content-Length header,
so Django reads 0 bytes. Read from wsgi.input directly as fallback.
@MDA2AV
Copy link
Copy Markdown
Owner

MDA2AV commented Mar 23, 2026

/benchmark

@BennyFranciscus
Copy link
Copy Markdown
Collaborator Author

Benchmark run triggered for django — all profiles. Results incoming 🚀

Django's a big framework with a lot of middleware by default, so the numbers won't be in Flask territory — but that's kind of the point. Good to have the baseline for the most popular Python framework.

@github-actions
Copy link
Copy Markdown
Contributor

🚀 Benchmark run triggered for django (all profiles). Results will be posted here when done.

@github-actions
Copy link
Copy Markdown
Contributor

Benchmark Results

Framework: django | Profile: all profiles

django / baseline / 512c (p=1, r=0, cpu=unlimited)
  Best: 113306 req/s (CPU: 5751.4%, Mem: 3.9GiB) ===

django / baseline / 4096c (p=1, r=0, cpu=unlimited)
  Best: 93810 req/s (CPU: 6564.3%, Mem: 7.6GiB) ===

django / baseline / 16384c (p=1, r=0, cpu=unlimited)
  Best: 36216 req/s (CPU: 1677.1%, Mem: 4.1GiB) ===

django / pipelined / 512c (p=16, r=0, cpu=unlimited)
  Best: 121307 req/s (CPU: 5619.2%, Mem: 3.8GiB) ===

django / pipelined / 4096c (p=16, r=0, cpu=unlimited)
  Best: 114552 req/s (CPU: 4966.7%, Mem: 3.5GiB) ===

django / pipelined / 16384c (p=16, r=0, cpu=unlimited)
  Best: 42010 req/s (CPU: 2526.2%, Mem: 7.3GiB) ===

django / limited-conn / 512c (p=1, r=10, cpu=unlimited)
  Best: 113431 req/s (CPU: 5706.7%, Mem: 3.6GiB) ===

django / limited-conn / 4096c (p=1, r=10, cpu=unlimited)
  Best: 109055 req/s (CPU: 5268.0%, Mem: 4.0GiB) ===

django / json / 4096c (p=1, r=0, cpu=unlimited)
  Best: 115456 req/s (CPU: 5049.6%, Mem: 3.8GiB) ===

django / json / 16384c (p=1, r=0, cpu=unlimited)
  Best: 36492 req/s (CPU: 1655.6%, Mem: 8.4GiB) ===

django / upload / 64c (p=1, r=0, cpu=unlimited)
  Best: 0 req/s (CPU: 0%, Mem: 0MiB) ===
Full log
  Bandwidth:  845.68MB/s
  Status codes: 2xx=459482, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 459477 / 459482 responses (100.0%)
  Reconnects: 459664
  CPU: 6104.7% | Mem: 7.4GiB

[run 3/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/json
  Threads:   64
  Conns:     4096 (64/thread)
  Pipeline:  1
  Req/conn:  unlimited (keep-alive)
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   24.39ms   22.60ms   24.60ms   27.70ms   436.30ms

  927271 requests in 5.00s, 462392 responses
  Throughput: 92.44K req/s
  Bandwidth:  851.03MB/s
  Status codes: 2xx=462392, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 462392 / 462392 responses (100.0%)
  Reconnects: 462555
  Errors: connect 0, read 3, timeout 0
  CPU: 5762.7% | Mem: 8.4GiB

=== Best: 115456 req/s (CPU: 5049.6%, Mem: 3.8GiB) ===
[dry-run] Results not saved (use --save to persist)
httparena-bench-django
httparena-bench-django

==============================================
=== django / json / 16384c (p=1, r=0, cpu=unlimited) ===
==============================================
5c01feb0916817d322714743dd133836f56c6edd8a1146987ae9e3bc27002f36
[wait] Waiting for server...
[ready] Server is up

[run 1/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/json
  Threads:   64
  Conns:     16384 (256/thread)
  Pipeline:  1
  Req/conn:  unlimited (keep-alive)
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   72.81ms   46.90ms   74.90ms   484.60ms    3.40s

  363119 requests in 5.03s, 177301 responses
  Throughput: 35.27K req/s
  Bandwidth:  324.87MB/s
  Status codes: 2xx=177301, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 177301 / 177301 responses (100.0%)
  Reconnects: 177450
  Errors: connect 0, read 50, timeout 0
  CPU: 1526.9% | Mem: 3.8GiB

[run 2/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/json
  Threads:   64
  Conns:     16384 (256/thread)
  Pipeline:  1
  Req/conn:  unlimited (keep-alive)
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   29.68ms   21.80ms   37.00ms   149.20ms   355.80ms

  70073 requests in 5.00s, 33437 responses
  Throughput: 6.68K req/s
  Bandwidth:  61.52MB/s
  Status codes: 2xx=33437, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 33437 / 33437 responses (100.0%)
  Reconnects: 33437
  CPU: 838.6% | Mem: 7.4GiB

[run 3/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/json
  Threads:   64
  Conns:     16384 (256/thread)
  Pipeline:  1
  Req/conn:  unlimited (keep-alive)
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   68.47ms   53.00ms   91.70ms   508.50ms    1.73s

  358377 requests in 5.03s, 183559 responses
  Throughput: 36.53K req/s
  Bandwidth:  336.52MB/s
  Status codes: 2xx=183559, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 183559 / 183559 responses (100.0%)
  Reconnects: 183764
  Errors: connect 0, read 118, timeout 0
  CPU: 1655.6% | Mem: 8.4GiB

=== Best: 36492 req/s (CPU: 1655.6%, Mem: 8.4GiB) ===
[dry-run] Results not saved (use --save to persist)
httparena-bench-django
httparena-bench-django

==============================================
=== django / upload / 64c (p=1, r=0, cpu=unlimited) ===
==============================================
34697be830360c991901b773915c3443a2d95f9770f34102e9cfda152b2993d7
[wait] Waiting for server...
[ready] Server is up

[run 1/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     64 (1/thread)
  Pipeline:  1
  Req/conn:  unlimited (keep-alive)
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   5.68ms   4.46ms   8.36ms   16.20ms   116.80ms

  81786 requests in 5.00s, 16663 responses
  Throughput: 3.33K req/s
  Bandwidth:  1.03MB/s
  Status codes: 2xx=0, 3xx=0, 4xx=16663, 5xx=0
  Latency samples: 16663 / 16663 responses (100.0%)
  Reconnects: 81797
  Errors: connect 0, read 80283, timeout 0

  WARNING: 16663/16663 responses (100.0%) had unexpected status (expected 2xx)
  CPU: 4648.2% | Mem: 3.6GiB

[run 2/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     64 (1/thread)
  Pipeline:  1
  Req/conn:  unlimited (keep-alive)
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   5.49ms   4.07ms   9.84ms   22.10ms   103.60ms

  58614 requests in 5.00s, 34655 responses
  Throughput: 6.92K req/s
  Bandwidth:  2.14MB/s
  Status codes: 2xx=0, 3xx=0, 4xx=34655, 5xx=0
  Latency samples: 34655 / 34655 responses (100.0%)
  Reconnects: 58626
  Errors: connect 0, read 53374, timeout 0

  WARNING: 34655/34655 responses (100.0%) had unexpected status (expected 2xx)
  CPU: 8948.2% | Mem: 7.4GiB

[run 3/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     64 (1/thread)
  Pipeline:  1
  Req/conn:  unlimited (keep-alive)
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   5.40ms   4.30ms   10.40ms   19.90ms   43.40ms

  53353 requests in 5.01s, 40048 responses
  Throughput: 8.00K req/s
  Bandwidth:  2.47MB/s
  Status codes: 2xx=0, 3xx=0, 4xx=40048, 5xx=0
  Latency samples: 40048 / 40048 responses (100.0%)
  Reconnects: 53361
  Errors: connect 0, read 47683, timeout 0

  WARNING: 40048/40048 responses (100.0%) had unexpected status (expected 2xx)
  CPU: 9703.6% | Mem: 8.5GiB

=== Best: 0 req/s (CPU: 0%, Mem: 0MiB) ===
httparena-bench-django
httparena-bench-django
[restore] Restoring CPU governor to powersave...

@BennyFranciscus
Copy link
Copy Markdown
Collaborator Author

Django results are in — and honestly, pretty respectable for a full-featured framework running on Gunicorn:

Highlights:

  • 113K baseline at 512c — not bad at all for Django with all its middleware stack. Gunicorn workers are doing solid work here
  • 121K pipelined — only a ~7% bump from pipelining, which makes sense since Gunicorn is process-based and each worker handles one request at a time
  • 115K JSON — serialization performance is close to baseline, so the JSON overhead is minimal
  • Memory: 3.9GB baseline — this is the multi-process model showing up. Each Gunicorn worker is a full Python process with its own memory space

Interesting patterns:

  • Steep drop at 16384c (113K → 36K) — that's the process-based concurrency model hitting its limits. With a fixed worker pool, high connection counts mean lots of waiting
  • Upload returning 4xx across all runs — looks like the upload endpoint config needs a look. Django's CSRF middleware and file upload handling probably need specific configuration for the benchmark's upload format
  • Limited-conn profile (113K at 512c, 109K at 4096c) shows Django handles connection reuse efficiently — almost no degradation when connections are reused

Context:
For comparison, Flask (also Python/WSGI) hits ~92K baseline. Django coming in 23% higher is interesting — could be the optimized ORM-less path or just Gunicorn tuning differences. Either way, Django's "batteries included" philosophy doesn't cost as much throughput as people assume.

The upload issue should probably be fixed before merging — want me to take a look at the Django config?

- Upload endpoint now reads from wsgi.input in 64KB chunks instead of
  using request.body which fails on chunked Transfer-Encoding (no
  CONTENT_LENGTH header)
- Set DATA_UPLOAD_MAX_MEMORY_SIZE = None to allow 20MB benchmark uploads
- Same streaming pattern used in baseline11 for POST bodies
@BennyFranciscus
Copy link
Copy Markdown
Collaborator Author

Fixed the upload 4xx issue! Two problems:

  1. request.body fails on chunked transfers — the benchmark sends uploads without Content-Length, and Django's request.body checks CONTENT_LENGTH meta before reading. Switched to streaming from wsgi.input directly in 64KB chunks (same pattern as the baseline11 POST handler).

  2. DATA_UPLOAD_MAX_MEMORY_SIZE default is 2.5MB — the benchmark sends 20MB files, which would hit Django's built-in limit and return 400. Set to None to remove the cap.

Push is up (a0b33cd) — should be ready for a re-benchmark whenever you get a chance!

@MDA2AV
Copy link
Copy Markdown
Owner

MDA2AV commented Mar 24, 2026

/benchmark

@github-actions
Copy link
Copy Markdown
Contributor

🚀 Benchmark run triggered for django (all profiles). Results will be posted here when done.

@github-actions
Copy link
Copy Markdown
Contributor

Benchmark Results

Framework: django | Profile: all profiles

django / baseline / 512c (p=1, r=0, cpu=unlimited)
  Best: 111648 req/s (CPU: 5470.6%, Mem: 3.5GiB) ===

django / baseline / 4096c (p=1, r=0, cpu=unlimited)
  Best: 103106 req/s (CPU: 4567.3%, Mem: 3.9GiB) ===

django / baseline / 16384c (p=1, r=0, cpu=unlimited)
  Best: 39940 req/s (CPU: 2273.2%, Mem: 3.8GiB) ===

django / pipelined / 512c (p=16, r=0, cpu=unlimited)
  Best: 115630 req/s (CPU: 5336.9%, Mem: 3.7GiB) ===

django / pipelined / 4096c (p=16, r=0, cpu=unlimited)
  Best: 106666 req/s (CPU: 4273.0%, Mem: 3.7GiB) ===

django / pipelined / 16384c (p=16, r=0, cpu=unlimited)
  Best: 35090 req/s (CPU: 1635.4%, Mem: 3.6GiB) ===

django / limited-conn / 512c (p=1, r=10, cpu=unlimited)
  Best: 110381 req/s (CPU: 5588.6%, Mem: 3.9GiB) ===

django / limited-conn / 4096c (p=1, r=10, cpu=unlimited)
  Best: 105221 req/s (CPU: 5165.5%, Mem: 3.8GiB) ===

django / json / 4096c (p=1, r=0, cpu=unlimited)
  Best: 102649 req/s (CPU: 4348.9%, Mem: 3.6GiB) ===

django / json / 16384c (p=1, r=0, cpu=unlimited)
  Best: 34133 req/s (CPU: 1618.3%, Mem: 3.9GiB) ===

django / upload / 64c (p=1, r=0, cpu=unlimited)
  Best: 813 req/s (CPU: 7547.6%, Mem: 8.4GiB) ===

django / upload / 256c (p=1, r=0, cpu=unlimited)
  Best: 873 req/s (CPU: 11433.9%, Mem: 8.4GiB) ===

django / upload / 512c (p=1, r=0, cpu=unlimited)
  Best: 862 req/s (CPU: 11306.4%, Mem: 8.4GiB) ===

django / compression / 4096c (p=1, r=0, cpu=unlimited)
  Best: 50295 req/s (CPU: 4264.2%, Mem: 3.5GiB) ===

django / compression / 16384c (p=1, r=0, cpu=unlimited)
  Best: 28275 req/s (CPU: 2105.8%, Mem: 8.7GiB) ===

django / noisy / 512c (p=1, r=0, cpu=unlimited)
  Best: 20 req/s (CPU: 338.3%, Mem: 3.6GiB) ===

django / noisy / 4096c (p=1, r=0, cpu=unlimited)
  Best: 20 req/s (CPU: 253.4%, Mem: 7.6GiB) ===

django / noisy / 16384c (p=1, r=0, cpu=unlimited)
  Best: 51 req/s (CPU: 43.2%, Mem: 8.4GiB) ===

django / mixed / 4096c (p=1, r=5, cpu=unlimited)
  Best: 66962 req/s (CPU: 10165.5%, Mem: 7.2GiB) ===

django / mixed / 16384c (p=1, r=5, cpu=unlimited)
  Best: 39279 req/s (CPU: 4128.7%, Mem: 8.5GiB) ===
Full log
  Per-template-ok: 129,130,0,0,0

  WARNING: 130/389 responses (33.4%) had unexpected status (expected 2xx)
  CPU: 43.2% | Mem: 8.4GiB

=== Best: 51 req/s (CPU: 43.2%, Mem: 8.4GiB) ===
  Input BW: 5.28KB/s (avg template: 106 bytes)
[dry-run] Results not saved (use --save to persist)
httparena-bench-django
httparena-bench-django

==============================================
=== django / mixed / 4096c (p=1, r=5, cpu=unlimited) ===
==============================================
a34537fc629843e96f2d13dabbff03d40b158f5e3f9cb5f2063dd3522798f613
[wait] Waiting for server...
[ready] Server is up

[run 1/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     4096 (64/thread)
  Pipeline:  1
  Req/conn:  5
  Templates: 10
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   37.29ms   30.80ms   45.90ms   120.90ms   853.30ms

  596611 requests in 5.00s, 297488 responses
  Throughput: 59.46K req/s
  Bandwidth:  3.15GB/s
  Status codes: 2xx=297488, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 297506 / 297488 responses (100.0%)
  Reconnects: 298390
  Errors: connect 0, read 22455, timeout 0
  Per-template: 29720,29795,29855,29932,29993,30043,30074,28906,29556,29612
  Per-template-ok: 29720,29795,29855,29932,29993,30043,30074,28906,29556,29612
  CPU: 5973.5% | Mem: 3.8GiB

[run 2/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     4096 (64/thread)
  Pipeline:  1
  Req/conn:  5
  Templates: 10
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   34.23ms   31.40ms   36.60ms   47.70ms   452.40ms

  673091 requests in 5.00s, 334813 responses
  Throughput: 66.93K req/s
  Bandwidth:  3.38GB/s
  Status codes: 2xx=334813, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 334813 / 334813 responses (100.0%)
  Reconnects: 335504
  Errors: connect 0, read 18156, timeout 0
  Per-template: 33434,33511,33565,33612,33701,33708,33751,33015,33263,33253
  Per-template-ok: 33434,33511,33565,33612,33701,33708,33751,33015,33263,33253
  CPU: 10165.5% | Mem: 7.2GiB

[run 3/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     4096 (64/thread)
  Pipeline:  1
  Req/conn:  5
  Templates: 10
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   37.58ms   34.20ms   39.30ms   49.40ms   855.70ms

  628445 requests in 5.00s, 313569 responses
  Throughput: 62.68K req/s
  Bandwidth:  3.16GB/s
  Status codes: 2xx=313569, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 313569 / 313569 responses (100.0%)
  Reconnects: 314155
  Errors: connect 0, read 17299, timeout 0
  Per-template: 31310,31368,31408,31449,31512,31573,31627,30990,31206,31126
  Per-template-ok: 31310,31368,31408,31449,31512,31573,31627,30990,31206,31126
  CPU: 9471.7% | Mem: 8.4GiB

=== Best: 66962 req/s (CPU: 10165.5%, Mem: 7.2GiB) ===
  Input BW: 6.54GB/s (avg template: 104924 bytes)
[dry-run] Results not saved (use --save to persist)
httparena-bench-django
httparena-bench-django

==============================================
=== django / mixed / 16384c (p=1, r=5, cpu=unlimited) ===
==============================================
7ee08a5211e37e72ec634b93c0f566a2698b1c30d86054ac77e18ae4afb095b5
[wait] Waiting for server...
[ready] Server is up

[run 1/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     16384 (256/thread)
  Pipeline:  1
  Req/conn:  5
  Templates: 10
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   89.82ms   71.80ms   99.60ms   486.70ms    1.73s

  267884 requests in 5.02s, 132164 responses
  Throughput: 26.34K req/s
  Bandwidth:  1.25GB/s
  Status codes: 2xx=132164, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 132164 / 132164 responses (100.0%)
  Reconnects: 132400
  Errors: connect 0, read 5128, timeout 0
  Per-template: 13163,13187,13242,13251,13286,13256,13354,13180,13184,13061
  Per-template-ok: 13163,13187,13242,13251,13286,13256,13354,13180,13184,13061
  CPU: 2498.8% | Mem: 4.0GiB

[run 2/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     16384 (256/thread)
  Pipeline:  1
  Req/conn:  5
  Templates: 10
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   70.07ms   50.70ms   88.10ms   467.50ms    3.36s

  389292 requests in 5.02s, 192310 responses
  Throughput: 38.29K req/s
  Bandwidth:  1.76GB/s
  Status codes: 2xx=192310, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 192310 / 192310 responses (100.0%)
  Reconnects: 193782
  Errors: connect 0, read 5370, timeout 0
  Per-template: 19177,19222,19265,19324,19348,19389,19485,18843,19221,19036
  Per-template-ok: 19177,19222,19265,19324,19348,19389,19485,18843,19221,19036
  CPU: 5017.1% | Mem: 7.6GiB

[run 3/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     16384 (256/thread)
  Pipeline:  1
  Req/conn:  5
  Templates: 10
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   68.15ms   52.50ms   89.50ms   448.30ms    1.77s

  395661 requests in 5.02s, 197181 responses
  Throughput: 39.27K req/s
  Bandwidth:  1.83GB/s
  Status codes: 2xx=197181, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 197181 / 197181 responses (100.0%)
  Reconnects: 198034
  Errors: connect 0, read 5938, timeout 0
  Per-template: 19566,19687,19776,19839,19856,19868,19985,19593,19635,19376
  Per-template-ok: 19566,19687,19776,19839,19856,19868,19985,19593,19635,19376
  CPU: 4128.7% | Mem: 8.5GiB

=== Best: 39279 req/s (CPU: 4128.7%, Mem: 8.5GiB) ===
  Input BW: 3.84GB/s (avg template: 104924 bytes)
[dry-run] Results not saved (use --save to persist)
httparena-bench-django
httparena-bench-django
[skip] django does not subscribe to baseline-h2
[skip] django does not subscribe to static-h2
[skip] django does not subscribe to baseline-h3
[skip] django does not subscribe to static-h3
[skip] django does not subscribe to unary-grpc
[skip] django does not subscribe to unary-grpc-tls
[skip] django does not subscribe to echo-ws
[restore] Restoring CPU governor to powersave...

@BennyFranciscus
Copy link
Copy Markdown
Collaborator Author

New results look good! The upload fix is working — went from 0 req/s (all 4xx) to ~870 req/s across concurrency levels. Not blazing fast, but Django is doing real work there with the streaming read + chunked transfer handling.

Baseline numbers are consistent with the first run (~111K vs ~113K at 512c — well within normal variance).

Quick summary of where Django lands:

Profile Best
Baseline 512c 111K
Pipelined 512c 115K
JSON 4096c 102K
Upload 256c 873
Compression 4096c 50K
Mixed 4096c 67K

The noisy profile is rough (20 req/s) but that's Django's middleware stack + GIL fighting the background CPU load — kind of expected for a sync WSGI framework. Flask will probably look similar there.

I think this is ready for a proper review whenever you get a chance 👍

@MDA2AV MDA2AV merged commit f853cff into MDA2AV:main Mar 24, 2026
2 checks passed
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants