A production-ready full-stack template with Next.js 15, React 19, and a comprehensive suite of modern technologies. Perfect for quickly starting new projects with all essential services pre-configured.
- Next.js 15 - React framework with App Router
- React 19 - Latest React with server components
- TypeScript - Type-safe development
- Tailwind CSS - Utility-first CSS framework
- shadcn/ui - Beautiful, accessible UI components
- PostgreSQL - Relational database (containerized)
- Drizzle ORM - Type-safe SQL ORM
- Redis - In-memory data store for caching and sessions
- BullMQ - Background job processing with Redis
- MinIO - S3-compatible object storage (self-hosted)
- Better Auth - Modern authentication with database integration
- Email/Password authentication
- OAuth support (Google, GitHub)
- Vercel AI SDK - Stream-ready AI integration
- OpenRouter - Access to 100+ AI models
- Dozzle - Real-time Docker log viewer
- Sentry - Error tracking and monitoring
- Zod - TypeScript-first schema validation
- Node.js 18+ and npm
- Docker and Docker Compose
- Git
```bash
# Clone this template
git clone <your-repo-url> my-new-project
cd my-new-project

# Install dependencies
npm install

# Copy environment example
cp .env.example .env

# Edit .env with your configuration
# At minimum, update:
# - DATABASE_URL
# - BETTER_AUTH_SECRET (generate with: openssl rand -base64 32)

# Start all Docker services (PostgreSQL, Redis, MinIO, Dozzle)
docker compose up -d

# Verify services are running
docker compose ps

# Push database schema to PostgreSQL
npm run db:push

# Or generate and run migrations
npm run db:generate
npm run db:migrate

# Start the development server
npm run dev
```

Visit:
- App: http://localhost:3000
- Demo Dashboard: http://localhost:3000/demo
- Bull Board (Queue Monitoring): http://localhost:3000/api/bull-board
- Monitoring Dashboard: http://localhost:3000/monitoring
- MinIO Console: http://localhost:9001 (minioadmin / minioadmin)
- Dozzle Logs: http://localhost:8080
```
.
├── app/
│   ├── api/                 # API routes
│   │   ├── auth/            # Authentication endpoints
│   │   ├── ai/              # AI chat endpoints
│   │   ├── storage/         # File upload endpoints
│   │   ├── jobs/            # Job queue endpoints
│   │   ├── bull-board/      # Bull Board monitoring UI
│   │   └── health/          # Health check endpoint
│   ├── demo/                # Demo dashboard page
│   ├── monitoring/          # Queue monitoring dashboard
│   ├── layout.tsx           # Root layout
│   └── page.tsx             # Home page
├── components/
│   └── ui/                  # shadcn/ui components
├── lib/
│   ├── auth/                # Better Auth configuration
│   ├── ai/                  # OpenRouter AI integration
│   ├── db/                  # Database schema and connection
│   ├── queue/               # BullMQ job queues
│   ├── storage/             # MinIO storage helpers
│   └── bullboard.ts         # Bull Board configuration
├── docker-compose.yml       # Docker services configuration
├── drizzle.config.ts        # Drizzle ORM configuration
└── .env.example             # Environment variables template
```
Development:

```bash
npm run dev    # Start development server with Turbopack
npm run build  # Build for production
npm run start  # Start production server
npm run lint   # Run ESLint
```

Database:

```bash
npm run db:generate  # Generate migrations from schema changes
npm run db:migrate   # Run migrations
npm run db:push      # Push schema directly to database (dev)
npm run db:studio    # Open Drizzle Studio (database GUI)
```

Testing:

```bash
npm test                 # Run unit tests with Vitest
npm run test:ui          # Open Vitest UI for interactive testing
npm run test:coverage    # Run tests with coverage report
npm run test:e2e         # Run E2E tests with Playwright (headless)
npm run test:e2e:ui      # Open Playwright UI for interactive E2E testing
npm run test:e2e:headed  # Run E2E tests in headed mode (see browser)
npm run test:e2e:debug   # Debug E2E tests with Playwright Inspector
```

Docker:

```bash
docker compose up -d              # Start all services
docker compose down               # Stop all services
docker compose logs -f            # Follow logs
docker compose ps                 # List running services
docker compose restart <service>  # Restart specific service
```

PostgreSQL:

- Port: 5432
- Database: template_db
- User: postgres
- Password: postgres (change in production!)

Redis:

- Port: 6379
- Used for sessions, caching, and BullMQ queues

MinIO:

- API Port: 9000
- Console Port: 9001
- Access Key: minioadmin
- Secret Key: minioadmin (change in production!)
- Default Bucket: uploads

Dozzle:

- Port: 8080
- View Docker logs in real-time
Better Auth is pre-configured with:
- Email/Password authentication
- Session management with Redis
- OAuth providers (Google, GitHub) - requires API keys
```typescript
// Client-side usage
import { signIn, signOut, useSession } from "@/lib/auth/client";

// Sign in
await signIn.email({ email, password });

// Check session
const { data: session } = useSession();
```

Drizzle ORM provides type-safe database access:
```typescript
import { db } from "@/lib/db";
import { users } from "@/lib/db/schema";

// Insert user
const [user] = await db
  .insert(users)
  .values({
    email: "user@example.com",
    name: "John Doe",
  })
  .returning();

// Query users
const allUsers = await db.select().from(users);
```

Upload files to MinIO S3-compatible storage:

```typescript
import { uploadFile } from "@/lib/storage/minio";

const result = await uploadFile("uploads", "filename.jpg", buffer, "image/jpeg");
```

Or use the API endpoint:

```bash
curl -X POST http://localhost:3000/api/storage/upload \
  -F "file=@image.jpg"
```

Create and process background jobs with BullMQ:
```typescript
import { addEmailJob } from "@/lib/queue";

// Add job to queue
await addEmailJob({
  to: "user@example.com",
  subject: "Welcome!",
  body: "Thanks for signing up",
});
```

Monitor your BullMQ job queues with Bull Board:
Access Bull Board Dashboard:
- Bull Board UI: http://localhost:3000/api/bull-board
- Monitoring Page: http://localhost:3000/monitoring
- Demo Page: http://localhost:3000/demo (see Monitoring tab)
Features:
- Real-time queue statistics
- Job inspection and management
- Retry failed jobs
- Pause/resume queues
- Clean completed/failed jobs
- Search and filter jobs
Adding New Queues to Monitoring:

- Create queue in `lib/queue/index.ts`
- Add queue adapter to `app/api/bull-board/[[...path]]/route.ts`:

```typescript
import { myNewQueue } from "@/lib/queue";

createBullBoard({
  queues: [
    // ... existing queues
    new BullMQAdapter(myNewQueue),
  ],
  serverAdapter,
});
```

Use OpenRouter for AI features:
```typescript
import { streamAIText } from "@/lib/ai/openrouter";

const result = await streamAIText("Hello, AI!");
```

Or via API:

```bash
curl -X POST http://localhost:3000/api/ai/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, AI!"}'
```

Monitor service health:

```bash
curl http://localhost:3000/api/health
```

Returns status for PostgreSQL, Redis, and MinIO.
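The exact payload depends on the implementation, but a health-check response along these lines is typical (a hypothetical shape, not the template's guaranteed output):

```json
{
  "status": "ok",
  "services": {
    "postgres": "healthy",
    "redis": "healthy",
    "minio": "healthy"
  }
}
```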
This template includes comprehensive middleware patterns for API routes, providing reusable solutions for common cross-cutting concerns.
Protects routes by requiring a valid Better Auth session.
```typescript
import { withAuth } from "@/middleware/withAuth";

export const GET = withAuth(async (request, { user }) => {
  // user is guaranteed to be authenticated
  return NextResponse.json({
    message: `Hello ${user.user.name}`,
  });
});
```

Features:

- Checks for valid session before allowing access
- Returns 401 if not authenticated
- Provides user session data to handler
- Optional variant (`withOptionalAuth`) for routes that work with or without auth
Logs incoming requests and outgoing responses with structured data.
```typescript
import { withLogging } from "@/middleware/withLogging";

export const GET = withLogging(async (request) => {
  return NextResponse.json({ data: "example" });
});
```

Features:

- Logs method, URL, timestamp
- Logs response status and duration
- Generates request ID for tracing
- Structured JSON logging format
- Optional request/response body logging
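Stripped of the Next.js specifics, the wrap-and-time pattern behind this middleware can be sketched in a few lines (the names here are illustrative, not the template's actual internals):

```typescript
type Handler = (req: { method: string; url: string }) => Promise<{ status: number }>;

// A minimal logging wrapper: records method/URL before the handler runs
// and status/duration after it resolves.
function withSimpleLogging(handler: Handler, log: string[] = []): Handler {
  return async (req) => {
    const start = Date.now();
    log.push(`--> ${req.method} ${req.url}`);
    const res = await handler(req);
    log.push(`<-- ${res.status} (${Date.now() - start}ms)`);
    return res;
  };
}

// Usage: wrap a plain handler and inspect the captured log lines.
const log: string[] = [];
const handler = withSimpleLogging(async () => ({ status: 200 }), log);
```

The same shape generalizes to request IDs and body logging: everything before `await handler(...)` is pre-processing, everything after is post-processing.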
Configuration options:
```typescript
export const POST = withLogging(
  async (request) => {
    return NextResponse.json({ success: true });
  },
  {
    logBody: true, // Log request body
    logResponse: true, // Log response body
    jsonFormat: true, // Use JSON format
  }
);
```

Catches all errors and returns consistent error responses.
```typescript
import { withErrorHandler, ValidationError } from "@/middleware/withErrorHandler";

export const POST = withErrorHandler(async (request) => {
  // Any error thrown here will be caught and formatted
  if (!isValid) {
    throw new ValidationError("Invalid input", { field: "email" });
  }
  return NextResponse.json({ success: true });
});
```

Features:

- Catches all errors in route handlers
- Returns consistent error response format
- Logs errors with stack traces in development
- Sends errors to Sentry if configured
- Hides internal errors in production
- Custom error classes (ValidationError, NotFoundError, etc.)
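The core of such a handler is a try/catch that maps known error classes to status codes and hides everything else behind a generic 500. A simplified, framework-free sketch (class and field names are illustrative):

```typescript
// Illustrative custom error carrying an HTTP status and optional details.
class ValidationError extends Error {
  status = 400;
  constructor(message: string, public details?: Record<string, string>) {
    super(message);
  }
}

type Result = { status: number; body: { error: string; details?: unknown } };

// Wrap a handler so any thrown error becomes a consistent error payload.
async function withSimpleErrorHandler(fn: () => Promise<Result>): Promise<Result> {
  try {
    return await fn();
  } catch (err) {
    if (err instanceof ValidationError) {
      return { status: err.status, body: { error: err.message, details: err.details } };
    }
    // Unknown errors: never leak internals to the client.
    return { status: 500, body: { error: "Internal Server Error" } };
  }
}
```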
Limits requests using Redis-based sliding window algorithm.
```typescript
import { withRateLimit, RateLimitPresets } from "@/middleware/withRateLimit";

// Using presets
export const GET = withRateLimit(
  async (request) => {
    return NextResponse.json({ data: "example" });
  },
  RateLimitPresets.api // 100 requests per minute
);

// Custom configuration
export const POST = withRateLimit(
  async (request) => {
    return NextResponse.json({ success: true });
  },
  { max: 5, windowSeconds: 60 } // 5 requests per minute
);
```

Features:
- Redis-based sliding window rate limiting
- Configurable limits and time windows
- IP-based by default
- Custom key generation (user ID, API key, etc.)
- Rate limit headers (X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset)
- Returns 429 Too Many Requests when exceeded
Available presets:
- `RateLimitPresets.auth` - 5 requests per 15 minutes (for auth endpoints)
- `RateLimitPresets.api` - 100 requests per minute (standard API)
- `RateLimitPresets.readOnly` - 1000 requests per hour (read operations)
- `RateLimitPresets.expensive` - 10 requests per hour (expensive operations)
- `RateLimitPresets.public` - 3 requests per minute (public endpoints)
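In Redis this is typically implemented with a sorted set of request timestamps per key; the same sliding-window logic can be illustrated in memory (a sketch of the algorithm, not the template's actual Redis-backed implementation):

```typescript
// Sliding-window limiter: keep timestamps of recent requests per key,
// drop those outside the window, and allow only up to `max` inside it.
class SlidingWindowLimiter {
  private hits = new Map<string, number[]>();

  constructor(private max: number, private windowMs: number) {}

  allow(key: string, now: number = Date.now()): boolean {
    const recent = (this.hits.get(key) ?? []).filter((t) => now - t < this.windowMs);
    if (recent.length >= this.max) {
      this.hits.set(key, recent);
      return false; // over the limit -> caller responds 429
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}

// Usage: 5 requests per minute per client IP.
const limiter = new SlidingWindowLimiter(5, 60_000);
```

Unlike a fixed-window counter, old requests "age out" continuously, so a burst at a window boundary cannot double the effective limit.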
Combine multiple middleware into a single middleware stack.
```typescript
import { compose, withAuth, withLogging, withErrorHandler, withRateLimit } from "@/middleware";

// Compose multiple middleware
const middleware = compose(
  withErrorHandler, // Outermost - catches all errors
  withLogging, // Logs all requests
  withRateLimit({ max: 10, windowSeconds: 60 }),
  withAuth // Innermost - checks authentication
);

export const GET = middleware(async (request, { user }) => {
  return NextResponse.json({ message: "Protected and logged" });
});
```

Combine multiple middleware into one:

```typescript
const middleware = compose(withErrorHandler, withLogging, withAuth);
```

Apply middleware only when condition is true:

```typescript
const middleware = compose(
  withErrorHandler,
  when((req) => req.headers.get("X-API-Version") === "v2", withRateLimit({ max: 50 }))
);
```

Skip middleware when condition is true:

```typescript
const middleware = unless(
  (req) => req.headers.get("X-Internal") === "true",
  withRateLimit({ max: 10 })
);
```

Apply different middleware based on HTTP method:

```typescript
const middleware = byMethod({
  GET: withRateLimit({ max: 100 }),
  POST: compose(withRateLimit({ max: 10 }), withAuth),
  DELETE: withAuth,
});
```

Apply middleware only to specific routes:

```typescript
const middleware = forRoute(/\/admin\/.*/, withAuth);
```

Middleware functions follow a simple pattern:
```typescript
import { NextRequest, NextResponse } from "next/server";

type RouteHandler = (
  request: NextRequest,
  context?: { params?: Record<string, string> }
) => Promise<Response> | Response;

export function withCustomMiddleware(handler: RouteHandler): RouteHandler {
  return async (request, context) => {
    // Pre-processing logic here
    console.log("Before handler");

    // Execute the handler
    const response = await handler(request, context);

    // Post-processing logic here
    console.log("After handler");

    return response;
  };
}
```

```typescript
import { compose, withErrorHandler, withLogging, withAuth } from "@/middleware";

const protectedRoute = compose(withErrorHandler, withLogging, withAuth);

export const GET = protectedRoute(async (request, { user }) => {
  return NextResponse.json({ data: "protected" });
});
```

```typescript
import { compose, withErrorHandler, withLogging, withRateLimit } from "@/middleware";

const publicRoute = compose(
  withErrorHandler,
  withLogging,
  withRateLimit({ max: 100, windowSeconds: 60 })
);

export const GET = publicRoute(async (request) => {
  return NextResponse.json({ data: "public" });
});
```

```typescript
import {
  compose,
  withErrorHandler,
  withLogging,
  withRateLimit,
  withAuth,
  RateLimitPresets,
} from "@/middleware";

const adminRoute = compose(
  withErrorHandler,
  withLogging,
  withRateLimit(RateLimitPresets.auth),
  withAuth
);

export const DELETE = adminRoute(async (request, { user }) => {
  // Check admin role, etc.
  return NextResponse.json({ success: true });
});
```

Try the middleware examples:
```bash
# Protected route example
curl http://localhost:3000/api/demo/protected

# Middleware examples
curl http://localhost:3000/api/demo/middleware

# Test error handling
curl "http://localhost:3000/api/demo/middleware?error=validation"

# Test rate limiting (make multiple requests)
for i in {1..11}; do curl http://localhost:3000/api/demo/middleware; done
```

- Order Matters: Always apply middleware in the correct order:
  - Error handler first (catches everything)
  - Logging second (logs all requests)
  - Rate limiting third
  - Authentication last
- Reuse Common Stacks: Create reusable middleware stacks for common patterns:

  ```typescript
  const protectedRoute = compose(withErrorHandler, withLogging, withAuth);
  ```

- Environment-Specific Behavior: Adjust middleware behavior based on environment:

  ```typescript
  withErrorHandler({
    includeStack: process.env.NODE_ENV === "development",
    sendToSentry: process.env.NODE_ENV === "production",
  });
  ```

- Custom Key Generation: For rate limiting by user instead of IP:

  ```typescript
  withRateLimit({
    keyGenerator: (request) => {
      const userId = request.headers.get("X-User-ID");
      return userId || "anonymous";
    },
  });
  ```
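The ordering rule above falls out of how `compose` works: it is just right-to-left function application over handlers, so the first argument ends up outermost. A generic, self-contained sketch of the technique (not necessarily the template's exact code):

```typescript
type Handler = (req: string) => string;
type Middleware = (next: Handler) => Handler;

// compose(a, b)(h) === a(b(h)): the first middleware wraps everything else.
function compose(...middleware: Middleware[]): Middleware {
  return (handler) => middleware.reduceRight((next, mw) => mw(next), handler);
}

// Two toy middleware that tag the response, making execution order visible.
const tag = (label: string): Middleware => (next) => (req) => `${label}(${next(req)})`;

const stack = compose(tag("outer"), tag("inner"));
const handler = stack((req) => `handled:${req}`);
```

Calling `handler("x")` yields `outer(inner(handled:x))`, which is why the error handler must be listed first to catch everything thrown further in.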
Key environment variables to configure:
- `DATABASE_URL` - PostgreSQL connection string
- `REDIS_URL` - Redis connection string
- `BETTER_AUTH_SECRET` - Secret for auth tokens
- `OPENROUTER_API_KEY` - For AI features
- `GOOGLE_CLIENT_ID` / `GOOGLE_CLIENT_SECRET` - For Google OAuth
- `GITHUB_CLIENT_ID` / `GITHUB_CLIENT_SECRET` - For GitHub OAuth
- `SENTRY_DSN` - For error tracking
- `RESEND_API_KEY` - For email sending
See .env.example for complete list.
- Change all default passwords (PostgreSQL, MinIO)
- Generate new `BETTER_AUTH_SECRET`
- Configure OAuth credentials
- Set up Sentry for error tracking
- Enable HTTPS/SSL
- Configure CORS properly
- Set up database backups
- Review and adjust rate limits
Vercel (Recommended for Next.js)
- Deploy frontend to Vercel
- Use managed PostgreSQL (Vercel Postgres, Supabase, etc.)
- Use managed Redis (Upstash, Redis Cloud, etc.)
- Deploy MinIO separately or use S3
Docker (Self-hosted)
```bash
docker compose -f docker-compose.prod.yml up -d
```

- Update `lib/db/schema/` (re-exported via `lib/db/schema/index.ts`):

```typescript
export const posts = pgTable("posts", {
  id: uuid("id").primaryKey().defaultRandom(),
  title: text("title").notNull(),
  // ... more fields
});
```

- Generate and run migration:

```bash
npm run db:generate
npm run db:migrate
```

Create files in `app/api/your-route/route.ts`:

```typescript
export async function GET(req: NextRequest) {
  return Response.json({ message: "Hello" });
}
```

- Define queue in `lib/queue/index.ts`
- Create worker in `lib/queue/workers.ts`
- Add job via API or directly in code
This template includes a comprehensive testing setup with both unit/integration tests and end-to-end (E2E) tests.
- Vitest - Fast unit test framework with TypeScript support
- @testing-library/react - React component testing utilities
- Playwright - Reliable E2E testing framework
- jsdom - DOM implementation for Node.js
Unit tests are located in the __tests__ directory and use Vitest:
```bash
# Run all unit tests
npm test

# Run tests in watch mode (re-runs on file changes)
npm test -- --watch

# Run tests with coverage report
npm run test:coverage

# Open Vitest UI (interactive test runner)
npm run test:ui
```

E2E tests are located in the `e2e` directory and use Playwright:

```bash
# Run all E2E tests (headless)
npm run test:e2e

# Run E2E tests with browser visible
npm run test:e2e:headed

# Open Playwright UI for interactive testing
npm run test:e2e:ui

# Debug E2E tests with Playwright Inspector
npm run test:e2e:debug

# Run specific test file
npm run test:e2e -- e2e/demo.spec.ts

# Run tests on specific browser
npm run test:e2e -- --project=chromium
```

```typescript
// __tests__/lib/utils.test.ts
import { describe, it, expect } from "vitest";
import { cn } from "@/lib/utils";

describe("cn utility function", () => {
  it("should merge class names correctly", () => {
    const result = cn("px-4", "py-2");
    expect(result).toBe("px-4 py-2");
  });
});
```

```typescript
// __tests__/app/api/demo/users/route.test.ts
import { describe, it, expect, vi } from "vitest";
import { NextRequest } from "next/server";

vi.mock("@/lib/db", () => ({
  db: {
    select: vi.fn().mockReturnThis(),
    from: vi.fn().mockReturnThis(),
    limit: vi.fn(),
  },
}));

describe("Users API Route", () => {
  it("should return list of users", async () => {
    const { db } = await import("@/lib/db");
    vi.mocked(db.limit).mockResolvedValue([{ id: "1", email: "test@example.com" }]);

    const { GET } = await import("@/app/api/demo/users/route");
    const response = await GET();
    const data = await response.json();

    expect(response.status).toBe(200);
    expect(data.users).toHaveLength(1);
  });
});
```

```typescript
// e2e/demo.spec.ts
import { test, expect } from "@playwright/test";

test("should load the demo page", async ({ page }) => {
  await page.goto("/demo");

  const heading = page.getByRole("heading", {
    name: /Technology Demo Dashboard/i,
  });

  await expect(heading).toBeVisible();
});

test("should switch between tabs", async ({ page }) => {
  await page.goto("/demo");

  const redisTab = page.getByRole("tab", { name: /Redis/i });
  await redisTab.click();

  const redisSectionHeading = page.getByRole("heading", {
    name: /Redis Cache/i,
  });

  await expect(redisSectionHeading).toBeVisible();
});
```

- Create a test file in the `__tests__` directory matching your source file structure: `lib/utils.ts` -> `__tests__/lib/utils.test.ts`
- Import your functions and write tests:

  ```typescript
  import { describe, it, expect } from "vitest";
  import { myFunction } from "@/lib/myModule";

  describe("myFunction", () => {
    it("should do something", () => {
      expect(myFunction()).toBe(expected);
    });
  });
  ```

- Create a test file in the `e2e` directory: `e2e/my-feature.spec.ts`
- Write your test using Playwright:

  ```typescript
  import { test, expect } from "@playwright/test";

  test("my feature works", async ({ page }) => {
    await page.goto("/my-page");
    await page.click("button");
    await expect(page.locator("h1")).toHaveText("Expected");
  });
  ```
Configuration is in vitest.config.ts:
- Uses jsdom for DOM simulation
- Supports TypeScript and path aliases (@/)
- Includes coverage reporting
- Setup file: `vitest.setup.ts`
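A minimal `vitest.config.ts` along those lines might look like this (a sketch; the template's actual file may differ):

```typescript
import { defineConfig } from "vitest/config";
import path from "node:path";

export default defineConfig({
  test: {
    environment: "jsdom", // DOM simulation for component tests
    setupFiles: ["./vitest.setup.ts"],
    coverage: { reporter: ["text", "html", "json"] },
  },
  resolve: {
    // Mirror the tsconfig "@/..." path alias
    alias: { "@": path.resolve(__dirname, ".") },
  },
});
```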
Configuration is in playwright.config.ts:
- Tests multiple browsers (Chromium, Firefox, WebKit)
- Tests mobile viewports (Pixel 5, iPhone 12)
- Automatically starts dev server before tests
- Captures screenshots and videos on failure
- Base URL: http://localhost:3000
For continuous integration, add these commands to your CI pipeline:
```yaml
# Example GitHub Actions workflow
- name: Run unit tests
  run: npm test

- name: Install Playwright browsers
  run: npx playwright install --with-deps

- name: Run E2E tests
  run: npm run test:e2e
```

Generate a test coverage report:

```bash
npm run test:coverage
```

Coverage reports are generated in:

- `coverage/` - HTML and JSON reports
- View the HTML report by opening `coverage/index.html`
- Unit Tests: Test individual functions and components in isolation
- Integration Tests: Test how modules work together
- E2E Tests: Test critical user flows and features
- Mock External Services: Use mocks for database, API calls, etc.
- Keep Tests Fast: Unit tests should run in milliseconds
- Use Descriptive Names: Test names should describe what they test
- Test Edge Cases: Include tests for error conditions and edge cases
This template includes comprehensive code quality tools to maintain consistent code style and catch errors before they reach production.
- Prettier - Automatic code formatting
- ESLint - Code linting and error detection
- Husky - Git hooks to enforce quality checks
- lint-staged - Run checks only on staged files
Prettier automatically formats your code to maintain consistency across the project.
```bash
# Format all files
npm run format

# Check formatting without making changes
npm run format:check
```

Add this to your VS Code settings (.vscode/settings.json):
```json
{
  "editor.formatOnSave": true,
  "editor.defaultFormatter": "esbenp.prettier-vscode",
  "[typescript]": {
    "editor.defaultFormatter": "esbenp.prettier-vscode"
  },
  "[typescriptreact]": {
    "editor.defaultFormatter": "esbenp.prettier-vscode"
  },
  "[javascript]": {
    "editor.defaultFormatter": "esbenp.prettier-vscode"
  },
  "[json]": {
    "editor.defaultFormatter": "esbenp.prettier-vscode"
  }
}
```

Make sure you have the Prettier VS Code extension installed.
ESLint analyzes your code for potential errors and enforces coding standards.
```bash
# Run ESLint
npm run lint

# Fix auto-fixable issues
npm run lint:fix
```

This project uses Husky to run code quality checks before each commit. When you try to commit, the following will happen automatically:
- Prettier formats all staged files
- ESLint checks and fixes all staged JavaScript/TypeScript files
- If any errors remain, the commit is blocked
This ensures all committed code meets quality standards.
The pre-commit hook runs lint-staged, which:
- Runs `prettier --write` on staged `.js`, `.jsx`, `.ts`, `.tsx`, `.json`, `.css`, and `.md` files
- Runs `eslint --fix` on staged `.js`, `.jsx`, `.ts`, and `.tsx` files
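In `package.json` (or a `.lintstagedrc.json`), that mapping might look like this (a sketch consistent with the behavior described above, not necessarily the template's exact configuration):

```json
{
  "lint-staged": {
    "*.{js,jsx,ts,tsx}": ["prettier --write", "eslint --fix"],
    "*.{json,css,md}": ["prettier --write"]
  }
}
```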
In rare cases where you need to bypass the pre-commit hook:
```bash
git commit --no-verify -m "Your commit message"
```

Warning: Only use `--no-verify` in emergencies or for commits that don't affect code (like documentation fixes during CI failures). Bypassing hooks can introduce formatting inconsistencies.
Located in .prettierrc.json:
```json
{
  "semi": true,
  "singleQuote": false,
  "tabWidth": 2,
  "printWidth": 100,
  "trailingComma": "es5"
}
```

Located in .prettierignore:

```
node_modules/
.next/
dist/
build/
coverage/
```
Next.js provides ESLint configuration out of the box. The configuration is in .eslintrc.json (or eslint.config.js).
- Prettier: Handles code formatting (spacing, quotes, semicolons, etc.)
- ESLint: Handles code quality (unused variables, potential bugs, best practices)
These tools are configured to work together without conflicts. Prettier handles all formatting rules, while ESLint focuses on code quality rules.
- Format before committing: Run `npm run format` before pushing if you haven't set up format-on-save
- Fix linting errors: Run `npm run lint:fix` to automatically fix most issues
- Don't bypass hooks: Let the pre-commit hooks do their job
- Configure your editor: Set up format-on-save for the best experience
- Review changes: Check what Prettier changed before committing
Add these checks to your CI pipeline:
```yaml
# Example GitHub Actions workflow
- name: Check code formatting
  run: npm run format:check

- name: Run ESLint
  run: npm run lint
```

```bash
# Check Docker is running
docker ps

# Check logs
docker compose logs

# Restart services
docker compose restart
```

```bash
# Verify PostgreSQL is running
docker compose ps postgres

# Check connection
docker compose exec postgres psql -U postgres -d template_db
```

```bash
# Find process using port
lsof -i :3000

# Or change port
PORT=3001 npm run dev
```

- Fork the repository
- Create your feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request
MIT License - feel free to use this template for any project!
For issues and questions:
- Open an issue on GitHub
- Check existing issues for solutions
Built with modern open-source technologies:
- Next.js, React, TypeScript
- Drizzle ORM, PostgreSQL, Redis
- MinIO, BullMQ, Better Auth
- And many more amazing projects
Happy Coding! Start building your next great project with this template.