Service for booking meeting rooms. Architect: @alchemmist
Try it online: rooms.alchemmist.xyz
| Command | Description |
|---|---|
| `make up` | Start service (docker compose, port 8080) |
| `make down` | Stop service and remove volumes |
| `make seed` | Seed database with test data |
| `make frontend-dev` | Run Vite dev server for frontend (port 3000) |
| `make test` | Run unit tests |
| `make integration-test` | Run integration tests (PostgreSQL in Docker) |
| `make test-cover` | Tests with coverage (HTML report in `cover.html`) |
| `make load-test` | Load-test the slots endpoint |
| `make swagger` | Regenerate Swagger documentation |
| `make fmt` | Format code + auto-fix linter issues |
| `make check` | vet + linter + tests |
| `make setup-env` | Install dev tools (golangci-lint, swag, gotestsum, vegeta) |
The project also includes a fully vibe-coded React frontend, just for fun :) and for interactive testing.
Start everything with Docker Compose: `make up`

- Frontend: http://localhost:3000
- API: http://localhost:8080
- Swagger UI: http://localhost:8080/swagger/
Nginx serves the built frontend and proxies /api/ requests to the Go backend.
Swagger/OpenAPI documentation is automatically generated from code annotations using swaggo/swag.
If you modify any handler functions or add new endpoints, regenerate the Swagger documentation with `make swagger`. This command runs `swag init -g cmd/server/main.go` to scan annotations in the code and update the OpenAPI specification.
All endpoints are documented with:
- Endpoint description and purpose
- Request/response schemas
- Authentication requirements (BearerAuth)
- Possible error responses and status codes
- Required and optional parameters
- Click the Authorize button in Swagger UI
- Get a test token: call `POST /dummyLogin` with `{"role":"admin"}` or `{"role":"user"}`
- In the authorization dialog, enter the token in one of these formats:
  - Recommended: `Bearer <your_token_here>` (with the `Bearer` prefix)
  - Also supported: `<your_token_here>` (token only)
- Click Authorize
- Now all API calls will include the authorization token automatically

Example: `Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...`
Authorization endpoints:

- `POST /dummyLogin`: returns a test token for the given role (`{"role": "admin"}` or `{"role": "user"}`)
As part of the bonus task, the service provides full email/password registration and login with JWT tokens. Passwords are hashed with bcrypt.
Endpoints:
| Method | Path | Auth | Description |
|---|---|---|---|
| `POST` | `/register` | No | Create a new user account |
| `POST` | `/login` | No | Authenticate and receive JWT |
When creating a booking, users can optionally request a conference link by passing `createConferenceLink: true`:

```
curl -X POST http://localhost:8080/bookings/create \
  -H 'Authorization: Bearer <token>' \
  -H 'Content-Type: application/json' \
  -d '{"slotId":"...","createConferenceLink":true}'
```

The service uses an interface-based approach (`ConferenceProvider`) to communicate with an external Conference Service. In production, this would be replaced with a real HTTP client. Currently, a mock implementation generates deterministic fake URLs (`https://meet.example.com/{bookingID}`).
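A minimal sketch of that interface-based design is shown below. The method name and signature are assumptions based on this description; the project's actual `ConferenceProvider` may differ.

```go
package main

import "fmt"

// ConferenceProvider abstracts the external Conference Service so the
// booking service never depends on a concrete HTTP client.
type ConferenceProvider interface {
	CreateLink(bookingID string) (string, error)
}

// mockProvider generates deterministic fake URLs, mirroring the mock
// behavior described above.
type mockProvider struct{}

func (mockProvider) CreateLink(bookingID string) (string, error) {
	return "https://meet.example.com/" + bookingID, nil
}

func main() {
	// In production a real HTTP-backed provider would be injected here.
	var p ConferenceProvider = mockProvider{}
	url, _ := p.CreateLink("b-123")
	fmt.Println(url)
}
```

Because callers only see the interface, swapping the mock for a real client is a one-line change at the injection point.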
| Scenario | Behavior | Rationale |
|---|---|---|
| Conference service returns error (timeout, 5xx) | Booking fails with `502 CONFERENCE_SERVICE_ERROR` | The user explicitly requested a conference link. Creating a booking without it would be misleading: the user expects a meeting link. Better to fail fast so they can retry. |
| Booking succeeds but service crashes before response | Booking is not created (atomic operation) | The conference link is obtained before the DB insert. If the service crashes during the external call, no booking is created, avoiding orphan records. |
| Conference service returns invalid URL | Booking fails with `500 INTERNAL_ERROR` | URL validation happens at the mock level; in production, the real service should validate URLs before returning them. |
| `createConferenceLink` is false or omitted | No external call is made | Zero overhead for users who don't need conference links. |
In a production system, the conference service call should include an idempotency key (e.g., a pre-generated booking UUID) to prevent duplicate conference links on retries. The current implementation passes an empty string as the booking ID since the UUID is generated by the repository layer after the conference call. This is acceptable for the mock but should be addressed with a real service by using a two-phase approach: generate booking ID → request conference link → insert booking.
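The two-phase approach could be sketched as follows. All names here (`newBookingID`, `createBooking`, the callback signature) are illustrative, not the project's actual code:

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// newBookingID pre-generates a random UUIDv4-style identifier so it can
// double as an idempotency key for the conference-service call.
func newBookingID() string {
	var b [16]byte
	rand.Read(b[:])
	b[6] = (b[6] & 0x0f) | 0x40 // set version 4
	b[8] = (b[8] & 0x3f) | 0x80 // set RFC 4122 variant
	return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:16])
}

// createBooking shows the ordering: 1) generate the booking ID,
// 2) request the conference link with that ID as the idempotency key,
// 3) only then insert the booking row.
func createBooking(requestLink func(idempotencyKey string) (string, error)) (id, link string, err error) {
	id = newBookingID()
	if link, err = requestLink(id); err != nil {
		return "", "", err // fail fast: no orphan booking rows
	}
	// insertBooking(id, link) would run here, inside a transaction.
	return id, link, nil
}

func main() {
	id, link, _ := createBooking(func(key string) (string, error) {
		return "https://meet.example.com/" + key, nil
	})
	fmt.Println(id, link)
}
```

Retries with the same pre-generated ID would let the conference service deduplicate link creation.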
Slots are generated on demand rather than pre-calculated.
How it works:

- Trigger: When a user calls `GET /rooms/{roomId}/slots/list?date=YYYY-MM-DD`.
- Validation: The service checks that the room has an existing schedule and that the requested date matches one of the allowed weekdays.
- Generation: 30-minute slots are generated in memory for that single date only, based on the schedule's `startTime` and `endTime`.
- Persistence: The generated slots are written to the database (via `INSERT ... ON CONFLICT DO NOTHING`) to assign stable UUID identifiers. This allows subsequent API calls to reference the same slots.
- Response: Available slots (those not already booked) are returned to the client.
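The in-memory generation step can be sketched like this. The function name and the string-pair representation are simplifications for illustration; the real service works with typed slot records and persists them afterwards:

```go
package main

import (
	"fmt"
	"time"
)

// generateSlots produces 30-minute [start, end) slots between startTime
// and endTime (both "HH:MM") for a single date, mirroring the on-demand
// generation described above. Persistence via
// INSERT ... ON CONFLICT DO NOTHING happens in a later step.
func generateSlots(date, startTime, endTime string) ([][2]string, error) {
	const layout = "2006-01-02 15:04"
	start, err := time.Parse(layout, date+" "+startTime)
	if err != nil {
		return nil, err
	}
	end, err := time.Parse(layout, date+" "+endTime)
	if err != nil {
		return nil, err
	}
	var slots [][2]string
	// Emit slots as long as a full 30-minute window fits before endTime.
	for t := start; !t.Add(30 * time.Minute).After(end); t = t.Add(30 * time.Minute) {
		slots = append(slots, [2]string{
			t.Format(layout),
			t.Add(30 * time.Minute).Format(layout),
		})
	}
	return slots, nil
}

func main() {
	// A 09:00-11:00 schedule yields four 30-minute slots.
	slots, _ := generateSlots("2024-06-03", "09:00", "11:00")
	for _, s := range slots {
		fmt.Println(s[0], "->", s[1])
	}
}
```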
Why this approach:

- No wasted storage: Slots are only created for dates that users actually query. The task specification states that 99.9% of users look at the next 7 days; there is no point in pre-generating months of slots.
- Simpler maintenance: No background cron jobs or scheduled tasks are needed. When a schedule is created, nothing else needs to happen.
- Idempotency: `ON CONFLICT DO NOTHING` guarantees that repeated calls for the same date return the same slot IDs without duplicates.
- Trade-off: The first request for a date has slightly higher latency (slots are generated and inserted). Subsequent requests are fast since the slots are already cached in the database.
The project prioritizes integration tests over unit tests with mocks.
- Realism: Mocks often hide database-specific issues (e.g., constraint violations, type serialization, NULL handling). Integration tests catch these.
- Confidence: Testing against a real PostgreSQL instance verifies the entire stack (API Handler → Service → Repository → DB).
- Simplicity: No need to maintain complex mock setups that can become stale as the code evolves.
- Unit tests (pure logic such as slot generation): `make test`
- Integration tests (requires Docker): `make integration-test`
This command spins up an ephemeral PostgreSQL container, runs all handler and repository tests, and tears it down automatically.
Despite the heavy reliance on integration tests, the project uses standard tools (`go test -cover`) to track coverage. The current setup ensures coverage exceeds the 40% requirement mandated by the task.
The project includes a load testing tool built on vegeta to validate that the slots list endpoint meets the performance requirements (P95 ≤ 200ms, error rate ≤ 1%).
Prerequisites:

- The service must be running (`make up`)
- Test data must be seeded (`make seed`)

Then run `make load-test`. This runs a 10-second attack at 100 RPS against `GET /rooms/{roomId}/slots/list` and validates:
- P95 latency ≤ 200 ms
- Error rate ≤ 1%
Custom Parameters:

```
go run ./cmd/loadtest/main.go -rate 200 -duration 30s -p95 150ms -max-err 0.05
```

| Flag | Default | Description |
|---|---|---|
| `-rate` | `100` | Requests per second |
| `-duration` | `10s` | Duration of the attack |
| `-p95` | `200ms` | Max allowed P95 latency |
| `-max-err` | `0.01` | Max allowed error rate (1%) |
Results:
After each run, the following files are generated in the `load/` directory:

| File | Description |
|---|---|
| `results.bin` | Raw vegeta binary results (used for plotting) |
| `metrics.json` | JSON summary of all calculated metrics |
| `report.html` | Interactive HTML latency report (requires the vegeta CLI in PATH) |
Sample Results:
```
Requests/sec: 100.0
Success rate: 100.00%
Latency P50:  1.286804ms
Latency P95:  1.927361ms
Latency P99:  2.249748ms
```
Note: the "Проверочные тесты" (verification tests) workflow needs to start the service with `docker compose up --build -d`. In my setup, however, that command starts the "production" server, which requires a `.env` file. In most real-world cases the secrets would be stored in CI, or the workflow pipeline commands would be adjusted.
