A FastAPI server that provides secure, pooled database connections to multiple cloud providers. Supports both JSON and CSV response formats with built-in rate limiting, connection pooling, and comprehensive security features.
- Multi-Database Support: Connect to Aiven (PostgreSQL/MySQL), Supabase, and Neon databases
- Connection Pooling: Efficient connection management with configurable pool sizes
- Rate Limiting: Built-in request throttling (100/hour default, configurable)
- API Key Authentication: Secure Bearer token authentication
- Dual Response Formats: JSON (10k rows) and CSV (1M rows) with configurable limits
- Query Timeouts: 3-minute query timeout protection
- Header Logging: Comprehensive request monitoring and security logging
- Read-Only Mode: PostgreSQL connections enforce read-only transactions
- CORS Support: Configurable cross-origin resource sharing
- File Downloads: CSV export with proper headers and cleanup
For additional data analysis capabilities, visit my AI Analyst Platform at app.tigzig.com. For any questions, reach out to me at amar@harolikar.com
```bash
git clone <repository-url>
cd FASTAPI_FIXED_DATABASES
pip install -r requirements.txt
```

Copy the example environment file and configure your settings:

```bash
cp .env.example .env
```

Edit `.env` with your actual values:
```
# Required
API_KEY=your-secure-api-key-here

# Database connections (at least one required)
AIVEN_POSTGRES=postgresql://user:pass@host:port/db?sslmode=require
AIVEN_MYSQL=mysql://user:pass@host:port/db?ssl-mode=REQUIRED
NEON_POSTGRES=postgresql://user:pass@host:port/db?sslmode=require
SUPABASE_POSTGRES=postgresql://user:pass@host:port/db?sslmode=require

# Optional configuration
RATE_LIMIT=100/hour
MAX_JSON_ROWS=10000
MAX_CSV_ROWS=1000000
```

Start the server:

```bash
uvicorn app:app --host 0.0.0.0 --port 8000
```

Your API will be available at: http://localhost:8000
```
GET /sqlquery/
```

Include the Bearer token in the Authorization header:

```
Authorization: Bearer your-api-key
```

Parameters:

- `sqlquery` (required): SQL query to execute
- `cloud` (required): Database provider (`aiven_postgres`, `aiven_mysql`, `neon_postgres`, `supabase_postgres`)
- `format` (optional): Response format (`json`, the default, or `csv`)
JSON Response:

```bash
curl -H "Authorization: Bearer your-api-key" \
  "http://localhost:8000/sqlquery/?sqlquery=SELECT * FROM users LIMIT 10&cloud=supabase_postgres"
```

CSV Download:

```bash
curl -H "Authorization: Bearer your-api-key" \
  "http://localhost:8000/sqlquery/?sqlquery=SELECT * FROM users&cloud=supabase_postgres&format=csv" \
  --output data.csv
```

JSON Response:

```json
{
  "rows": [
    {"id": 1, "name": "John", "email": "john@example.com"},
    {"id": 2, "name": "Jane", "email": "jane@example.com"}
  ],
  "truncated": false
}
```

CSV Response:
- Downloadable file with proper headers
- Content-Type: `text/csv`
- Filename: `results.csv` (configurable)
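For large CSV exports, a client can stream the response to disk instead of buffering it in memory. A minimal `requests` sketch (the URL, key, and query are placeholders):

```python
import requests

url = "http://localhost:8000/sqlquery/"
params = {
    "sqlquery": "SELECT * FROM users",
    "cloud": "supabase_postgres",
    "format": "csv",
}
headers = {"Authorization": "Bearer your-api-key"}

# Stream the body to disk so large exports never have to fit in client memory.
with requests.get(url, params=params, headers=headers, stream=True) as resp:
    resp.raise_for_status()
    with open("results.csv", "wb") as f:
        for chunk in resp.iter_content(chunk_size=8192):
            f.write(chunk)
```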
JavaScript (Fetch API):

```javascript
// URLSearchParams handles URL-encoding of the query string
const params = new URLSearchParams({
  sqlquery: 'SELECT * FROM users',
  cloud: 'supabase_postgres'
});
const response = await fetch(`/sqlquery/?${params}`, {
  headers: { 'Authorization': 'Bearer your-api-key' }
});
const data = await response.json();
```

Python (requests):

```python
import requests

response = requests.get(
    'http://localhost:8000/sqlquery/',
    params={'sqlquery': 'SELECT * FROM users', 'cloud': 'supabase_postgres'},
    headers={'Authorization': 'Bearer your-api-key'}
)
data = response.json()
```

React helper (the documented endpoint accepts GET with query parameters):

```javascript
const fetchData = async () => {
  const params = new URLSearchParams({
    sqlquery: 'SELECT * FROM users',
    cloud: 'supabase_postgres',
    format: 'json'
  });
  const response = await fetch(`/sqlquery/?${params}`, {
    headers: { 'Authorization': 'Bearer your-api-key' }
  });
  return response.json();
};
```

This API is designed to work seamlessly with Custom GPTs for data analysis and querying.
- Deploy your API server using the steps above
- Configure Custom GPT Actions using the provided schema
- Upload semantic layer files as knowledge base
- Set up system instructions for data analysis
- Action Schema: `DOCS/CUSTOM_GPT_ACTION_SCHEMA.json`
- System Instructions: `DOCS/CUSTOM_GPT_SYSTEM_INSTRUCTIONS.md`
- Cricket Data Schema: `DOCS/CRICKET_ODI_T20_DATA_SCHEMA.yaml`
- Cycling Data Schema: `DOCS/CYCLING_TOUR_DE_FRANCE_SCHEMA.yaml`
1. Create New Action:
   - Use `CUSTOM_GPT_ACTION_SCHEMA.json` as the action schema
   - Update the server URL in the schema to your deployed endpoint
   - Set authentication method to "API Key" with "Bearer" type
   - Use the same API key from your `.env` file
2. Upload Knowledge Base:
   - Upload `CRICKET_ODI_T20_DATA_SCHEMA.yaml` for cricket data analysis
   - Upload `CYCLING_TOUR_DE_FRANCE_SCHEMA.yaml` for cycling data analysis
3. Set System Instructions:
   - Use `CUSTOM_GPT_SYSTEM_INSTRUCTIONS.md` as the system prompt
   - This enables natural language querying of your databases
- Cricket Data: ODI and T20 match data (ball-by-ball)
- Cycling Data: Tour de France rider and stage history
- Custom Tables: Any tables in your connected databases
| Variable | Required | Description | Default |
|---|---|---|---|
| `API_KEY` | Yes | Bearer token for API authentication | - |
| `AIVEN_POSTGRES` | No | Aiven PostgreSQL connection string | - |
| `AIVEN_MYSQL` | No | Aiven MySQL connection string | - |
| `NEON_POSTGRES` | No | Neon PostgreSQL connection string | - |
| `SUPABASE_POSTGRES` | No | Supabase PostgreSQL connection string | - |
| `RATE_LIMIT` | No | Rate limit (e.g., "100/hour") | "100/hour" |
| `MAX_JSON_ROWS` | No | JSON response row limit | 10000 |
| `MAX_CSV_ROWS` | No | CSV response row limit | 1000000 |
| `LOG_LEVEL` | No | Logging level (DEBUG, INFO, WARNING, ERROR) | DEBUG |
| `CORS_ALLOW_ORIGINS` | No | Allowed CORS origins | "*" |
| `DB_POOL_MIN_SIZE` | No | Minimum connection pool size | 1 |
| `DB_POOL_MAX_SIZE` | No | Maximum connection pool size | 4 |
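As an illustration only (not the repository's actual code), settings like these can be read from the environment at startup with the documented defaults:

```python
import os

API_KEY = os.environ["API_KEY"]  # required: fail fast if unset

RATE_LIMIT = os.getenv("RATE_LIMIT", "100/hour")
MAX_JSON_ROWS = int(os.getenv("MAX_JSON_ROWS", "10000"))
MAX_CSV_ROWS = int(os.getenv("MAX_CSV_ROWS", "1000000"))
LOG_LEVEL = os.getenv("LOG_LEVEL", "DEBUG")
CORS_ALLOW_ORIGINS = os.getenv("CORS_ALLOW_ORIGINS", "*")
DB_POOL_MIN_SIZE = int(os.getenv("DB_POOL_MIN_SIZE", "1"))
DB_POOL_MAX_SIZE = int(os.getenv("DB_POOL_MAX_SIZE", "4"))

# At least one connection string must be present for the API to serve queries.
DATABASES = {
    name: dsn
    for name, dsn in {
        "aiven_postgres": os.getenv("AIVEN_POSTGRES"),
        "aiven_mysql": os.getenv("AIVEN_MYSQL"),
        "neon_postgres": os.getenv("NEON_POSTGRES"),
        "supabase_postgres": os.getenv("SUPABASE_POSTGRES"),
    }.items()
    if dsn
}
```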
- Direct connections using `asyncpg` (PostgreSQL) and `aiomysql` (MySQL)
- Connection pooling with configurable pool sizes
- Read-only mode for PostgreSQL connections
- Statement cache disabled for PgBouncer compatibility (see the sketch after this list)
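A minimal sketch of such a pool with `asyncpg`, assuming the documented defaults (the helper names are hypothetical, not the repository's actual code):

```python
import asyncio
import asyncpg

async def create_pg_pool(dsn: str) -> asyncpg.Pool:
    return await asyncpg.create_pool(
        dsn,
        min_size=1,               # DB_POOL_MIN_SIZE
        max_size=4,               # DB_POOL_MAX_SIZE
        statement_cache_size=0,   # disable cache for PgBouncer compatibility
    )

async def run_readonly_query(pool: asyncpg.Pool, sql: str):
    async with pool.acquire() as conn:
        # A read-only transaction enforces the guarantee at the database level.
        async with conn.transaction(readonly=True):
            # The 3-minute ceiling mirrors the documented query timeout.
            return await asyncio.wait_for(conn.fetch(sql), timeout=180)
```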
- Bearer token authentication with constant-time comparison
- Rate limiting with SlowAPI (this and the token check are shown in the sketch after this list)
- CORS support for cross-origin requests
- Request header logging for monitoring
- Query timeout protection (3 minutes)
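A minimal sketch of how constant-time Bearer authentication and SlowAPI rate limiting can be wired together in FastAPI (illustrative only; in practice the key comes from the `API_KEY` environment variable):

```python
import secrets

from fastapi import Depends, FastAPI, HTTPException, Request
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address

API_KEY = "your-api-key"  # placeholder: load from the environment

app = FastAPI()
limiter = Limiter(key_func=get_remote_address)
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

bearer = HTTPBearer()

def verify_key(creds: HTTPAuthorizationCredentials = Depends(bearer)) -> None:
    # compare_digest takes the same time no matter where the strings
    # differ, which blocks timing attacks on the key.
    if not secrets.compare_digest(creds.credentials, API_KEY):
        raise HTTPException(status_code=401, detail="Invalid API key")

@app.get("/sqlquery/", dependencies=[Depends(verify_key)])
@limiter.limit("100/hour")  # RATE_LIMIT
async def sqlquery(request: Request, sqlquery: str, cloud: str, format: str = "json"):
    ...  # query execution elided
```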
- JSON responses with custom serialization for dates/decimals (see the sketch after this list)
- CSV responses with temporary file generation and cleanup
- Row limits to prevent memory issues
- Truncation flags when limits are exceeded
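A sketch of the serialization and truncation behavior described above (illustrative; the helper names are hypothetical):

```python
import json
from datetime import date, datetime
from decimal import Decimal

MAX_JSON_ROWS = 10000  # mirrors the MAX_JSON_ROWS setting

def json_default(value):
    # Dates and decimals are not JSON-native, so stringify them.
    if isinstance(value, (datetime, date)):
        return value.isoformat()
    if isinstance(value, Decimal):
        return str(value)
    raise TypeError(f"Unserializable type: {type(value)!r}")

def build_json_payload(rows: list[dict]) -> str:
    # Set the truncation flag whenever the row limit cuts the result short.
    truncated = len(rows) > MAX_JSON_ROWS
    return json.dumps(
        {"rows": rows[:MAX_JSON_ROWS], "truncated": truncated},
        default=json_default,
    )
```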
- Use `.env.production` for production settings
- Set `LOG_LEVEL=INFO` in production
- Configure proper CORS origins
- Use strong, unique API keys
- Monitor connection pool usage
- Restrict CORS origins to your actual domains (see the sketch after this list)
- Use read-only database users when possible
- Monitor logs for suspicious activity
- Set appropriate rate limits based on usage
- Enable HTTPS in production
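For example, restricting CORS to known domains with FastAPI's `CORSMiddleware` (the origin shown is a placeholder):

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Replace the permissive "*" default with your real frontend origins.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://app.example.com"],  # placeholder domain
    allow_methods=["GET"],
    allow_headers=["Authorization"],
)
```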
Blocking Operations: This implementation has some blocking operations that could impact concurrency under high load. These will be optimized in future versions (a sketch of the `asyncio.to_thread()` approach follows this list):

- JSON Serialization (lines 309-310): CPU-intensive serialization in the `/sqlquery/` endpoint; to be optimized using `orjson` or `ujson`
- CSV Processing (lines 320-325): synchronous CSV generation in memory; to be optimized using `polars` async or `asyncio.to_thread()`
- File I/O Operations (lines 327-329): synchronous temp file creation; to be optimized using `aiofiles` or `asyncio.to_thread()`
- Dictionary Conversion (line 299): CPU-intensive PostgreSQL result conversion; to be optimized using `asyncio.to_thread()` or `polars`
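For illustration, here is how `asyncio.to_thread()` can offload CSV generation (the helper names are hypothetical; this is not the current implementation):

```python
import asyncio
import csv
import io

def rows_to_csv(rows: list[dict]) -> str:
    # Synchronous, CPU-bound work: runs in a worker thread, not the event loop.
    buf = io.StringIO()
    if rows:
        writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
    return buf.getvalue()

async def generate_csv(rows: list[dict]) -> str:
    # The event loop stays free to serve other requests meanwhile.
    return await asyncio.to_thread(rows_to_csv, rows)
```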
- Connection errors: Check database connection strings
- Authentication failures: Verify API key matches
- Rate limit exceeded: Adjust `RATE_LIMIT` setting
- Query timeouts: Optimize queries or increase timeout
- Memory issues: Reduce `MAX_JSON_ROWS` or `MAX_CSV_ROWS`
- Check application logs for connection pool status
- Monitor query performance and timeouts
- Review rate limiting and authentication logs
See LICENSE file for details.