Skip to content

Add Gin: Go web framework with httprouter (~80k⭐)#29

Merged
MDA2AV merged 3 commits intoMDA2AV:mainfrom
BennyFranciscus:add-gin
Mar 16, 2026
Merged

Add Gin: Go web framework with httprouter (~80k⭐)#29
MDA2AV merged 3 commits intoMDA2AV:mainfrom
BennyFranciscus:add-gin

Conversation

@BennyFranciscus
Copy link
Copy Markdown
Collaborator

Gin — the most popular Go web framework

Gin (~80k stars) is the Go framework most people reach for. Built on httprouter with zero-allocation routing, martini-like API, and a huge ecosystem.

HttpArena already has go-fasthttp (raw fasthttp), but Gin is what most Go developers actually use in production. It'd be really interesting to see how a framework built on net/http with nice ergonomics compares to raw fasthttp.

Implementation details

  • Gin v1.10.0 in release mode, no middleware
  • Pure Go SQLite via modernc.org/sqlite (no CGO)
  • Manual deflate/gzip compression for /compression
  • Static files pre-loaded into memory at startup
  • All endpoints implemented: /pipeline, /baseline11, /baseline2, /json, /compression, /upload, /db, /static/:filename

Why Gin?

It's the #1 most popular Go web framework. When people say "Go web framework" they usually mean Gin. Having it alongside fasthttp shows the real-world trade-off between raw performance and developer ergonomics.

cc @appleboy @javisolis123 — thought it'd be cool to see how Gin stacks up in HttpArena! 🚀

…ork!)

Gin is the most popular Go web framework (~80k stars) with a
martini-like API and zero-allocation routing via httprouter.

Implementation:
- Gin v1.10.0 in release mode, no middleware
- Pure Go SQLite via modernc.org/sqlite
- Manual deflate/gzip compression
- Static files pre-loaded into memory
- All endpoints: pipeline, baseline11, baseline2, json,
  compression, upload, db, static
@MDA2AV
Copy link
Copy Markdown
Owner

MDA2AV commented Mar 15, 2026

huh 7M loc? do you need to pull all the source code for the vendors?

@BennyFranciscus
Copy link
Copy Markdown
Collaborator Author

Yeah that's Go vendoring 😅 — go mod vendor pulls all dependency source into the repo. It's the Go convention for reproducible builds but it bloats the diff like crazy.

I can switch to just using go mod download in the Dockerfile instead, which fetches deps at build time from the module cache. The vendor dir wouldn't be committed. Would you prefer that approach?

Most of the other Go frameworks in the arena probably have the same trade-off — fasthttp's deps are just way smaller since it has fewer transitive dependencies.

Removes the vendor/ directory (~7M LOC) and switches to go mod download
at build time. The Dockerfile now copies go.mod/go.sum first for layer
caching, then builds with the module cache.

Much cleaner diff — same binary output.
@BennyFranciscus
Copy link
Copy Markdown
Collaborator Author

Done — removed the vendor directory and switched to go mod download in the Dockerfile. Much cleaner now — just go.mod, go.sum, main.go, and meta.json.

The Dockerfile caches the module download layer separately so rebuilds after code changes are fast. CI should pick up the new commit automatically.

Gin's c.JSON() appends '; charset=utf-8' to Content-Type which fails
the validator's exact match check. Switched to manual json.Marshal +
c.Data() with explicit Content-Type header.
@BennyFranciscus
Copy link
Copy Markdown
Collaborator Author

Fixed the Content-Type issue — Gin's c.JSON() appends ; charset=utf-8 to the Content-Type header, which the validator rejects (expects exact application/json).

Switched to manual json.Marshal + c.Data() with explicit Content-Type: application/json. Should be a clean pass now 🤞

@github-actions
Copy link
Copy Markdown
Contributor

Benchmark Results

Framework: gin | Profile: all profiles

gin / baseline / 512c (p=1, r=0, cpu=unlimited)
  Best: 297863 req/s (CPU: 4577.3%, Mem: 117.2MiB) ===

gin / baseline / 4096c (p=1, r=0, cpu=unlimited)
  Best: 443459 req/s (CPU: 6407.4%, Mem: 374.2MiB) ===

gin / baseline / 16384c (p=1, r=0, cpu=unlimited)
  Best: 546830 req/s (CPU: 6985.7%, Mem: 1007.0MiB) ===

gin / pipelined / 512c (p=16, r=0, cpu=unlimited)
  Best: 812971 req/s (CPU: 4856.1%, Mem: 203.9MiB) ===

gin / pipelined / 4096c (p=16, r=0, cpu=unlimited)
  Best: 962623 req/s (CPU: 6001.0%, Mem: 920.6MiB) ===

gin / pipelined / 16384c (p=16, r=0, cpu=unlimited)
  Best: 1068612 req/s (CPU: 6933.7%, Mem: 995.1MiB) ===

gin / limited-conn / 512c (p=1, r=10, cpu=unlimited)
  Best: 147306 req/s (CPU: 3176.1%, Mem: 95.8MiB) ===

gin / limited-conn / 4096c (p=1, r=10, cpu=unlimited)
  Best: 154580 req/s (CPU: 3332.0%, Mem: 96.5MiB) ===

gin / json / 4096c (p=1, r=0, cpu=unlimited)
  Best: 160061 req/s (CPU: 7556.8%, Mem: 384.7MiB) ===

gin / json / 16384c (p=1, r=0, cpu=unlimited)
  Best: 185544 req/s (CPU: 8337.5%, Mem: 771.1MiB) ===

gin / upload / 64c (p=1, r=0, cpu=unlimited)
  Best: 305 req/s (CPU: 5017.1%, Mem: 7.7GiB) ===

gin / upload / 256c (p=1, r=0, cpu=unlimited)
  Best: 297 req/s (CPU: 7596.7%, Mem: 18.8GiB) ===

gin / upload / 512c (p=1, r=0, cpu=unlimited)
  Best: 261 req/s (CPU: 8162.3%, Mem: 28.6GiB) ===

gin / compression / 4096c (p=1, r=0, cpu=unlimited)
  Best: 7654 req/s (CPU: 9787.3%, Mem: 3.3GiB) ===

gin / compression / 16384c (p=1, r=0, cpu=unlimited)
  Best: 7411 req/s (CPU: 10232.6%, Mem: 4.7GiB) ===

gin / noisy / 512c (p=1, r=0, cpu=unlimited)
  Best: 258509 req/s (CPU: 4820.2%, Mem: 113.8MiB) ===

gin / noisy / 4096c (p=1, r=0, cpu=unlimited)
  Best: 377070 req/s (CPU: 6729.9%, Mem: 440.8MiB) ===

gin / noisy / 16384c (p=1, r=0, cpu=unlimited)
  Best: 430210 req/s (CPU: 7523.3%, Mem: 903.7MiB) ===

gin / mixed / 4096c (p=1, r=5, cpu=unlimited)
  Best: 20779 req/s (CPU: 7702.2%, Mem: 482.3MiB) ===

gin / mixed / 16384c (p=1, r=5, cpu=unlimited)
  Best: 17236 req/s (CPU: 6982.2%, Mem: 1.5GiB) ===
Full log
  Reconnects: 3515
  Errors: connect 0, read 9, timeout 0
  Per-template: 1290803,838303,822816,0,3276
  Per-template-ok: 1290531,838268,0,0,0

  WARNING: 826399/2955198 responses (28.0%) had unexpected status (expected 2xx)
  CPU: 7199.6% | Mem: 859.4MiB

=== Best: 430210 req/s (CPU: 7523.3%, Mem: 903.7MiB) ===
  Input BW: 43.49MB/s (avg template: 106 bytes)
[dry-run] Results not saved (use --save to persist)
httparena-bench-gin
httparena-bench-gin

==============================================
=== gin / mixed / 4096c (p=1, r=5, cpu=unlimited) ===
==============================================
d2ca610bfa50265cb874b9910827e05d73ca8a265bd558622658ce8e5f1ae44a
[wait] Waiting for server...
[ready] Server is up

[run 1/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     4096 (64/thread)
  Pipeline:  1
  Req/conn:  5
  Templates: 10
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   174.66ms   10.20ms   640.30ms    1.36s    2.54s

  100997 requests in 5.00s, 94193 responses
  Throughput: 18.83K req/s
  Bandwidth:  595.90MB/s
  Status codes: 2xx=94193, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 94193 / 94193 responses (100.0%)
  Reconnects: 19497
  Per-template: 9203,10510,11668,12743,14078,12160,5733,5566,5734,6798
  Per-template-ok: 9203,10510,11668,12743,14078,12160,5733,5566,5734,6798
  CPU: 7208.9% | Mem: 2.0GiB

[run 2/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     4096 (64/thread)
  Pipeline:  1
  Req/conn:  5
  Templates: 10
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   145.78ms   4.54ms   518.00ms    1.32s    2.30s

  109714 requests in 5.00s, 102730 responses
  Throughput: 20.54K req/s
  Bandwidth:  524.00MB/s
  Status codes: 2xx=102730, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 102730 / 102730 responses (100.0%)
  Reconnects: 21203
  Per-template: 9508,11467,13413,15400,17350,15497,5279,3995,4656,6165
  Per-template-ok: 9508,11467,13413,15400,17350,15497,5279,3995,4656,6165
  CPU: 7269.1% | Mem: 781.7MiB

[run 3/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     4096 (64/thread)
  Pipeline:  1
  Req/conn:  5
  Templates: 10
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   144.18ms   3.55ms   518.20ms    1.68s    3.01s

  110812 requests in 5.00s, 103895 responses
  Throughput: 20.77K req/s
  Bandwidth:  518.40MB/s
  Status codes: 2xx=103895, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 103893 / 103895 responses (100.0%)
  Reconnects: 21152
  Per-template: 9497,11510,13459,15484,17431,15613,6547,3705,4542,6105
  Per-template-ok: 9497,11510,13459,15484,17431,15613,6547,3705,4542,6105
  CPU: 7702.2% | Mem: 482.3MiB

=== Best: 20779 req/s (CPU: 7702.2%, Mem: 482.3MiB) ===
  Input BW: 2.03GB/s (avg template: 104924 bytes)
[dry-run] Results not saved (use --save to persist)
httparena-bench-gin
httparena-bench-gin

==============================================
=== gin / mixed / 16384c (p=1, r=5, cpu=unlimited) ===
==============================================
d5106beb566d651782168b293679a1367e03577bdba863e85b34027a1e75f0c3
[wait] Waiting for server...
[ready] Server is up

[run 1/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     16384 (256/thread)
  Pipeline:  1
  Req/conn:  5
  Templates: 10
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   332.81ms   26.00ms    1.39s    2.95s    4.09s

  99095 requests in 5.00s, 81446 responses
  Throughput: 16.27K req/s
  Bandwidth:  636.92MB/s
  Status codes: 2xx=81446, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 81446 / 81446 responses (100.0%)
  Reconnects: 16955
  Errors: connect 0, read 506, timeout 0
  Per-template: 8709,9683,10503,10894,10796,8490,3137,5645,7088,6501
  Per-template-ok: 8709,9683,10503,10894,10796,8490,3137,5645,7088,6501
  CPU: 6893.2% | Mem: 2.5GiB

[run 2/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     16384 (256/thread)
  Pipeline:  1
  Req/conn:  5
  Templates: 10
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   311.75ms   21.90ms    1.21s    2.93s    3.18s

  93166 requests in 5.00s, 76052 responses
  Throughput: 15.20K req/s
  Bandwidth:  627.63MB/s
  Status codes: 2xx=76052, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 76052 / 76052 responses (100.0%)
  Reconnects: 16202
  Errors: connect 0, read 394, timeout 0
  Per-template: 8126,9125,9837,9976,9756,7916,2661,5151,7197,6307
  Per-template-ok: 8126,9125,9837,9976,9756,7916,2661,5151,7197,6307
  CPU: 7209.8% | Mem: 829.7MiB

[run 3/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     16384 (256/thread)
  Pipeline:  1
  Req/conn:  5
  Templates: 10
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   310.62ms   26.80ms    1.18s    2.47s    3.74s

  105848 requests in 5.01s, 86355 responses
  Throughput: 17.25K req/s
  Bandwidth:  618.23MB/s
  Status codes: 2xx=86355, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 86355 / 86355 responses (100.0%)
  Reconnects: 17452
  Errors: connect 0, read 504, timeout 0
  Per-template: 9157,10288,11307,11624,11601,9197,4000,6092,6441,6648
  Per-template-ok: 9157,10288,11307,11624,11601,9197,4000,6092,6441,6648
  CPU: 6982.2% | Mem: 1.5GiB

=== Best: 17236 req/s (CPU: 6982.2%, Mem: 1.5GiB) ===
  Input BW: 1.68GB/s (avg template: 104924 bytes)
[dry-run] Results not saved (use --save to persist)
httparena-bench-gin
httparena-bench-gin
[skip] gin does not subscribe to baseline-h2
[skip] gin does not subscribe to static-h2
[skip] gin does not subscribe to baseline-h3
[skip] gin does not subscribe to static-h3
[skip] gin does not subscribe to unary-grpc
[skip] gin does not subscribe to unary-grpc-tls
[skip] gin does not subscribe to echo-ws
[restore] Restoring CPU governor to powersave...

@BennyFranciscus
Copy link
Copy Markdown
Collaborator Author

Benchmarks are in! 🎉 Solid numbers for Go + Gin:

🚀 Pipelined: 1.07M req/s at 16384c — Go's goroutine scheduler scaling well
Baseline: 547K at 16384c — respectable for a full-featured framework
Limited-conn: 155K at 4096c — connection churn handled cleanly
📊 JSON: 186K at 16384c
🗜️ Compression: 7.7K — typical for Go's stdlib gzip
📤 Upload: 305 req/s at 64c — working correctly
🔀 Mixed: 21K at 4096c
🔊 Noisy: 430K at 16384c

For context, Go's net/http + Gin's router overhead puts this in a nice middle ground — not as raw as fasthttp but way more ergonomic. The memory usage is very reasonable too (117MB baseline at 512c).

Ready for merge whenever! 🚀

@MDA2AV
Copy link
Copy Markdown
Owner

MDA2AV commented Mar 15, 2026

@BennyFranciscus check TechEmpower results for gin

@BennyFranciscus
Copy link
Copy Markdown
Collaborator Author

Good call — checked TechEmpower Round 22 results for Gin.

In TFB, Gin places mid-range for Go frameworks on the JSON test (~500-600K on physical hardware). Their plaintext test shows higher throughput since it's minimal overhead.

Our numbers line up pretty well:

  • Baseline: 547K at 16384c — comparable to TFB's JSON/plaintext range
  • Pipelined: 1.07M — higher because pipeline depth 16 amortizes per-request overhead
  • JSON serialization: 186K — lower because HttpArena's JSON test involves actual DB queries + complex response building, not just {"message":"Hello, World!"}

The key difference is that TFB runs on bare metal with a separate load generator machine, while HttpArena runs everything on the same box (load gen + server). That explains some variance, but the relative positioning should be similar.

One thing worth noting: in TFB, gnet and fasthttp-based Go frameworks significantly outperform Gin because they bypass Go's net/http entirely. Our numbers show the same pattern — fasthttp in HttpArena crushes Gin on raw throughput, which is expected since Gin sits on top of net/http + httprouter.

tl;dr: numbers look consistent with TFB. Gin is a great ergonomic framework that trades some raw perf for developer experience — exactly where it should land. 👍

@MDA2AV MDA2AV self-requested a review March 16, 2026 15:02
@MDA2AV MDA2AV merged commit 953e503 into MDA2AV:main Mar 16, 2026
2 checks passed
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants