name: 🐛 Bug Report
about: Create a report to help us improve FireForm.
title: "[BUG]: No Rate Limiting - API Accepts 500+ Requests/Second (DoS Risk)"
labels: bug, security
assignees: ''
⚡️ Describe the Bug
Hey team! Found something pretty serious while testing the API. There's no rate limiting anywhere, which means anyone can spam the endpoints as hard as they want. I managed to hit 506 requests/second on the /docs endpoint without the server even blinking.
This is a major problem because someone could:
- Spam /forms/fill with 20+ concurrent requests and crash Ollama
- Flood /templates/create with massive PDFs and fill up the disk
- Hammer DB queries until the connection pool is exhausted
I know this is a first responder tool, so availability matters a lot. A bad actor (or even just a buggy client) could take down the whole service.
👣 Steps to Reproduce
I wrote a test script to check this (a sketch of it follows the steps). Here's what happens:
- Start the API: uvicorn api.main:app --reload --host 0.0.0.0 --port 8000
- Run a concurrent load test with 100 parallel requests
- Watch the server accept every single one without throttling
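For reference, here's a minimal sketch of the kind of load test I ran. It assumes httpx is installed; the endpoint and request count are the values from my runs, and the counters are simplified compared to my actual script.

```python
import asyncio
import time

import httpx  # assumed available: pip install httpx

BASE_URL = "http://localhost:8000"  # where uvicorn is serving the API
TOTAL_REQUESTS = 100                # parallel requests per test


async def hit(client: httpx.AsyncClient, path: str) -> int:
    """Send one GET and return the status code (0 on connection failure)."""
    try:
        response = await client.get(f"{BASE_URL}{path}")
        return response.status_code
    except httpx.HTTPError:
        return 0


async def load_test(path: str) -> None:
    async with httpx.AsyncClient(timeout=30.0) as client:
        start = time.perf_counter()
        statuses = await asyncio.gather(
            *(hit(client, path) for _ in range(TOTAL_REQUESTS))
        )
        elapsed = time.perf_counter() - start

    ok = sum(1 for s in statuses if s == 200)
    throttled = sum(1 for s in statuses if s == 429)
    failed = TOTAL_REQUESTS - ok - throttled
    print(f"Test: GET {path}")
    print(f"  Successful: {ok}  Failed: {failed}  Rate limited (HTTP 429): {throttled}")
    print(f"  Total time: {elapsed:.2f}s  Requests/second: {TOTAL_REQUESTS / elapsed:.2f}")


if __name__ == "__main__":
    asyncio.run(load_test("/docs"))
```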
Actual test results:

Test 1: GET /docs
  Total requests: 100
  Successful: 100
  Failed: 0
  Rate limited (HTTP 429): 0
  Total time: 0.20s
  Requests/second: 506.88

Test 2: GET /forms/batch/test-batch-123
  Total requests: 100
  Rate limited (HTTP 429): 0
  Requests/second: 13.14

Zero HTTP 429 responses. The server just keeps accepting everything.
📉 Expected Behavior
A production API should have rate limiting that:
- Returns HTTP 429 Too Many Requests when limits are exceeded
- Protects expensive operations (LLM calls, file uploads)
- Prevents resource exhaustion attacks
Looking at similar FastAPI projects, reasonable limits would be something like this (see the test sketch after the list):
- GET endpoints: 60/minute per IP
- POST /forms/fill: 5/minute (expensive LLM operation!)
- POST /templates/create: 10/hour (disk space)
- POST /forms/fill-batch: 2/minute
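To make the expected behavior concrete, here's a hedged pytest-style sketch. The import path, app object, and request payload are assumptions about FireForm's code, and the 5/minute limit is just the proposal above, not anything that exists yet.

```python
from fastapi.testclient import TestClient

from api.main import app  # assumes the FastAPI app object is exposed here

client = TestClient(app)


def test_forms_fill_is_rate_limited():
    """With a 5/minute limit, the 6th rapid request should get HTTP 429."""
    payload = {"template_id": "example", "fields": {}}  # hypothetical body
    statuses = [
        client.post("/forms/fill", json=payload).status_code for _ in range(6)
    ]
    # The first five may succeed (or fail for unrelated reasons),
    # but at minimum the sixth should be throttled.
    assert statuses[-1] == 429
```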
🖥️ Environment Information
- OS: Windows 11
- Python: 3.13
- FastAPI: Latest from requirements.txt
- Ollama Model: Mistral (not relevant for this test; I didn't need to hit the LLM endpoints)
📸 Screenshots/Logs
Server logs showing it processed all 100 concurrent requests to /docs without any throttling:
(All returned HTTP 200)
When I tested /forms/batch/... it gave 500 errors because the table doesn't exist yet, but that's beside the point: there were still zero HTTP 429s, which means no rate-limiting layer exists at all.
🕵️ Possible Fix
Found a library called slowapi that works great with FastAPI.
We'd need to:
- Add slowapi to requirements.txt
- Set up the limiter in api/main.py (tracks requests by IP, returns HTTP 429 when exceeded)
- Add decorators to routes in api/routes/forms.py and api/routes/templates.py, e.g. @limiter.limit("5/minute") for the expensive Ollama endpoints (a sketch follows this list)
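Here's a minimal sketch of the wiring, following slowapi's documented pattern. The route path and handler are assumptions about FireForm's code; the key points are that slowapi keys request counts by client IP and that decorated handlers must accept the Request parameter.

```python
# api/main.py (sketch)
from fastapi import FastAPI, Request
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address

# Track request counts per client IP; apply a baseline limit everywhere.
limiter = Limiter(key_func=get_remote_address, default_limits=["60/minute"])

app = FastAPI()
app.state.limiter = limiter
# Converts RateLimitExceeded into an HTTP 429 response.
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)


# api/routes/forms.py (sketch) -- the route and body are hypothetical.
@app.post("/forms/fill")
@limiter.limit("5/minute")  # expensive LLM call, keep this tight
async def fill_form(request: Request):  # slowapi requires the Request argument
    ...
```

In a real PR the routes would stay on their existing APIRouter and share the limiter instance from api/main.py; this just shows the moving parts in one place.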
I can submit a PR with the fix + tests if you want. Already wrote the test script that proves this vulnerability exists.