A high-performance, production-ready API layer for database management. This service provides a RESTful interface to interact with multiple database systems (MySQL and PostgreSQL) with built-in rate limiting, caching, connection management, and comprehensive logging.
## Table of Contents

- Features
- Architecture
- Prerequisites
- Environment Configuration
- Installation
- Development
- Production Deployment
- API Endpoints
- Project Structure
- Database Schema
- Security Considerations
- Contributing
- License
## Features

- Multi-Database Support: PostgreSQL and MySQL connection management
- Dynamic API Endpoints: Create custom API endpoints mapped to database queries
- Connection Testing: Validate database connections before deployment
- Schema Introspection: Retrieve database schema information
- Query Execution: Execute SQL queries with proper validation and sanitization
- Rate Limiting: Redis-based rate limiting to prevent abuse
- Caching: Intelligent caching with cache invalidation support
- Request Logging: Comprehensive logging with log rotation
- CORS Support: Configurable cross-origin resource sharing
- Origin Validation: Whitelist-based origin verification
- Subdomain Routing: Multi-tenant support via subdomain routing
- Hot Reload: Development mode with automatic code reloading
- Health Checks: Built-in health check endpoints
- Graceful Shutdown: Proper cleanup on service termination
## Architecture

The application follows a clean architecture pattern with clear separation of concerns:

```
├── cmd/
│   ├── server/           # Application entry point
│   └── internal/         # Internal packages
│       ├── config/       # Configuration management
│       ├── constants/    # Application constants
│       ├── handlers/     # HTTP handlers
│       ├── logger/       # Logging infrastructure
│       ├── middleware/   # HTTP middleware
│       ├── models/       # Data models
│       ├── repository/   # Data access layer
│       ├── services/     # Business logic
│       ├── types/        # Type definitions
│       └── utils/        # Utility functions
```
## Prerequisites

Before you begin, ensure you have the following installed:
- Go: Version 1.22.2 or higher
- Docker: Version 20.10 or higher
- Docker Compose: Version 2.0 or higher (for local development)
- PostgreSQL: Version 13 or higher (for the control database)
- Redis: Version 6.0 or higher (for caching and rate limiting)
- MySQL (optional): If you plan to connect to MySQL databases
## Environment Configuration

Create a `.env` file in the root directory with the following variables:
```env
# Server Configuration
SERVER_PORT=8080
SERVER_READ_TIMEOUT=30
SERVER_WRITE_TIMEOUT=30
SERVER_IDLE_TIMEOUT=120

# PostgreSQL Connection (Control Database)
PG_URL=postgres://username:password@localhost:5432/dblayer?sslmode=disable

# Redis Configuration
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=
REDIS_DB=0

# Logging Configuration
LOG_DIRECTORY=logs
LOG_LEVEL=info
LOG_MAX_SIZE=100
LOG_MAX_BACKUPS=10
LOG_MAX_AGE=30

# CORS Configuration
CORS_ALLOW_ORIGINS=*

# Security
SUPER_SECRET_KEY=your-super-secret-key-here
JWT_SECRET=your-jwt-secret-here

# Dashboard (for origin validation)
DASHBOARD_SERVER=localhost
```

| Variable | Description | Default |
|---|---|---|
| `SERVER_PORT` | Port on which the server listens | 8080 |
| `SERVER_READ_TIMEOUT` | Read timeout in seconds | 30 |
| `SERVER_WRITE_TIMEOUT` | Write timeout in seconds | 30 |
| `SERVER_IDLE_TIMEOUT` | Idle timeout in seconds | 120 |
| `PG_URL` | PostgreSQL connection string for control database | Required |
| `REDIS_HOST` | Redis server host | Required |
| `REDIS_PORT` | Redis server port | 6379 |
| `REDIS_PASSWORD` | Redis password (if any) | Empty |
| `REDIS_DB` | Redis database number | 0 |
| `LOG_DIRECTORY` | Directory for log files | logs |
| `LOG_LEVEL` | Logging level (info, warn, error) | info |
| `LOG_MAX_SIZE` | Maximum size of log file in MB | 100 |
| `LOG_MAX_BACKUPS` | Number of log backups to retain | 10 |
| `LOG_MAX_AGE` | Maximum age of log files in days | 30 |
| `CORS_ALLOW_ORIGINS` | Allowed origins for CORS | * |
| `SUPER_SECRET_KEY` | Key for encryption/decryption | Required |
| `JWT_SECRET` | Secret for JWT token validation | Required |
| `DASHBOARD_SERVER` | Dashboard server hostname for validation | localhost |
## Installation

Clone the repository and download dependencies:

```bash
git clone https://github.com/scorcism/api.dblayer.dev.git
cd api.dblayer.dev
go mod download
```

Use the provided Docker Compose file to spin up local MySQL and PostgreSQL instances:

```bash
docker-compose -f docker-compose.dev.yml up -d
```

This will start:

- MySQL on port 3306
- PostgreSQL on port 5432
## Development

The project uses Air for hot reloading during development.

Run the development container:

```bash
docker build -f Dockerfile.dev -t api.dblayer:dev .
docker run -p 8080:8080 -v $(pwd):/app api.dblayer:dev
```

On Windows PowerShell:

```bash
docker build -f Dockerfile.dev -t api.dblayer:dev .
docker run -p 8080:8080 -v ${PWD}:/app api.dblayer:dev
```

Run locally with hot reload:

```bash
# Install Air
go install github.com/cosmtrek/air@latest

# Run with hot reload
air -c .air.toml
```

Run without hot reload:

```bash
go run cmd/server/main.go
```

Run tests and static checks:

```bash
go test ./...
go vet ./...
```

## Production Deployment

Build the production image:

```bash
docker build -t api.dblayer:latest .
```

Run the container:

```bash
docker run -d \
  -p 8080:8080 \
  --name api.dblayer.container \
  --env-file .env \
  api.dblayer:latest
```

If you need to connect to other containers:
```bash
# Create a shared network
docker network create shared_network

# Run the container with network
docker run -d \
  -p 8080:8080 \
  --name api.dblayer.container \
  --network shared_network \
  --env-file .env \
  api.dblayer:latest
```

The repository includes a deployment script for easy updates:

```bash
sudo chmod +x renew_container.sh
sudo ./renew_container.sh
```

This script will:
- Stop and remove the existing container
- Remove the old image
- Build a new image
- Start a new container
- Prune unused Docker resources
Verify the service is running:

```bash
curl http://localhost:8080/health
```

Expected response:

```json
{
  "status": "ok"
}
```

## API Endpoints

`GET /health`

Returns the health status of the service.
### Dynamic Endpoints

`GET|POST|PUT|DELETE /:projectSlug/:endpointSlug`

Execute a pre-configured database query through a dynamic endpoint.

Headers:

- `x-host`: Client hostname (optional)
- `x-origin`: Request origin (optional)
- `x-referer`: Request referer (optional)
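A client call to a dynamic endpoint with these headers can be sketched in Go. The base URL, slugs, and header values below are illustrative, not values the service requires:

```go
package main

import (
	"fmt"
	"net/http"
)

// newEndpointRequest builds a GET request to a dynamic endpoint and
// attaches the optional validation headers. All concrete values here
// are placeholders for illustration.
func newEndpointRequest(base, projectSlug, endpointSlug string) (*http.Request, error) {
	url := fmt.Sprintf("%s/%s/%s", base, projectSlug, endpointSlug)
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("x-host", "app.example.com")
	req.Header.Set("x-origin", "https://app.example.com")
	req.Header.Set("x-referer", "https://app.example.com/dashboard")
	return req, nil
}

func main() {
	req, err := newEndpointRequest("http://localhost:8080", "my-project", "my-endpoint")
	if err != nil {
		panic(err)
	}
	// Send with http.DefaultClient.Do(req) against a running instance.
	fmt.Println(req.Method, req.URL.String())
}
```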
### Management Endpoints

All management endpoints are prefixed with `/v1`.
`POST /v1/connection/test`

Test a database connection configuration.
Request Body:

```json
{
  "connection_type": "postgres",
  "config": {
    "host": "localhost",
    "port": 5432,
    "database": "mydb",
    "username": "user",
    "password": "pass"
  }
}
```

`POST /v1/connection/schema`

Retrieve the schema of a connected database.

`POST /v1/connection/ping`

Verify connectivity to a database.
`POST /v1/endpoint/query-run`

Execute a SQL query directly.

Headers:

- `x-dblayer-runquery-mode`: Set to `sandbox` for testing

Request Body:

```json
{
  "project_slug": "my-project",
  "endpoint_slug": "my-endpoint",
  "query": "SELECT * FROM users WHERE id = $1",
  "params": [1]
}
```

`POST /v1/endpoint/revalidate-cache`

Invalidate cached results for an endpoint.
`GET /v1/app/query-run/:slug/:resource`
`POST /v1/app/query-run/sandbox`

Execute queries from application context.

`GET /`

The root endpoint with subdomain extraction for multi-tenant support.
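Subdomain extraction for multi-tenant routing can look roughly like the following. This is a hypothetical sketch, not the service's actual implementation, and the base domain is a placeholder:

```go
package main

import (
	"fmt"
	"strings"
)

// extractSubdomain returns the tenant label when host is exactly one
// label deeper than baseDomain, and "" otherwise. Illustrative only.
func extractSubdomain(host, baseDomain string) string {
	if host == baseDomain || !strings.HasSuffix(host, "."+baseDomain) {
		return ""
	}
	prefix := strings.TrimSuffix(host, "."+baseDomain)
	// Nested subdomains (a.b.base) are treated as no tenant here.
	if strings.Contains(prefix, ".") {
		return ""
	}
	return prefix
}

func main() {
	fmt.Println(extractSubdomain("acme.dblayer.dev", "dblayer.dev")) // acme
}
```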
## Project Structure

```
.
├── cmd/
│   ├── server/
│   │   └── main.go               # Application entry point
│   └── internal/
│       ├── config/
│       │   └── env.go            # Environment configuration
│       ├── constants/
│       │   └── constants.go      # Application constants
│       ├── handlers/
│       │   └── api_handler.go    # HTTP request handlers
│       ├── logger/
│       │   └── logger.go         # Logging with rotation
│       ├── middleware/
│       │   └── middleware.go     # HTTP middleware
│       ├── models/
│       │   └── models.go         # Database models
│       ├── repository/
│       │   ├── repository.go     # Repository interface
│       │   ├── mysql.go          # MySQL repository
│       │   └── pg.go             # PostgreSQL repository
│       ├── services/
│       │   ├── services.go       # Business logic
│       │   ├── mysql/
│       │   │   └── main.go       # MySQL service
│       │   ├── pg/
│       │   │   └── main.go       # PostgreSQL service
│       │   └── redis/
│       │       ├── redis.go      # Redis client
│       │       └── rate_limiter.go  # Rate limiting logic
│       ├── types/
│       │   └── types.go          # Type definitions
│       └── utils/
│           ├── utils.go          # Utility functions
│           └── validate.go       # Validation utilities
├── tmp/                          # Temporary build files
├── .air.toml                     # Air configuration
├── docker-compose.dev.yml        # Development Docker Compose
├── Dockerfile                    # Production Dockerfile
├── Dockerfile.dev                # Development Dockerfile
├── go.mod                        # Go module definition
├── go.sum                        # Go module checksums
├── renew_container.sh            # Deployment script
└── README.md                     # This file
```
## Database Schema

All schemas are generated with Drizzle ORM from dashboard.backend.dblayer.dev; visit it to explore the tables.
## Security Considerations

Database connection strings are encrypted using AES encryption with the `SUPER_SECRET_KEY`. Never expose this key.
The service validates request origins against a whitelist stored in the project configuration. Configure allowed origins in your project settings.
Redis-based rate limiting is enforced per endpoint. Configure limits in the endpoint settings.
All queries use parameterized statements. Direct query execution requires proper authentication and authorization.
JWT-based authentication is supported. Include the JWT token in the `Authorization` header:

```
Authorization: Bearer <your-jwt-token>
```
Configure CORS settings via the `CORS_ALLOW_ORIGINS` environment variable. Use specific origins in production instead of `*`.
## Contributing

Contributions are welcome! Please follow these guidelines:

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

Development guidelines:

- Follow Go best practices and idioms
- Run `go fmt` before committing
- Ensure all tests pass
- Add tests for new features
- Update documentation as needed