A real-time sales analytics platform that allows users to query transaction data using natural language. The system handles multi-currency transactions, regional performance tracking, and provides interactive visualizations.
The fastest way to run the project. Requires only Docker.
```bash
cp .env.example .env
```

Edit `.env` and add your Gemini API key:

```bash
GOOGLE_GENERATIVE_AI_API_KEY=your_actual_api_key_here
```
You can get a free API key at https://aistudio.google.com/apikey
Note: Without a valid Gemini API key, the natural language search feature will not work.
```bash
# Linux/macOS
./docker/init.sh

# Windows (PowerShell)
.\docker\init.ps1
```

This single command will:
- Build all Docker images (PostgreSQL, Redis, API, Web)
- Start the database services
- Run migrations and seed the database with sample data
- Start the application
Once complete:
- Frontend: http://localhost:3000
- API: http://localhost:3001
To start services again after stopping (skips build and migrations):
```bash
# Linux/macOS
./docker/run.sh

# Windows (PowerShell)
.\docker\run.ps1
```

```bash
# View logs
docker compose logs -f

# View logs for a specific service
docker compose logs -f api

# Stop all services
docker compose down

# Stop and remove all data (clean slate)
docker compose down -v

# Rebuild after code changes
docker compose build && docker compose up -d api web
```

For local development with hot-reload. Requires Node.js 20+, pnpm, and Docker.
```bash
# Configure environment variables
cp .env.example .env
# Edit .env and add your Gemini API key (get one at https://aistudio.google.com/apikey)

# Install dependencies
pnpm install

# Start only PostgreSQL and Redis in Docker
docker compose up -d postgres redis

# Apply database schema (creates tables)
pnpm db:push

# Seed the database with sample data
pnpm db:seed

# Start both frontend and API in development mode
pnpm dev
```

- Frontend: http://localhost:3000
- API: http://localhost:3001
```bash
# Drop all tables
pnpm db:reset

# Re-apply schema and seed
pnpm db:push
pnpm db:seed

# Open Drizzle Studio (database GUI)
pnpm --filter @gsid/db db:studio
```

The backend includes a comprehensive test suite using Vitest.
```bash
# Run all tests
pnpm test

# Run tests in watch mode (for development)
pnpm --filter @gsid/api test:watch

# Run tests with coverage report
pnpm test:coverage
```

| Layer | Technology |
|---|---|
| Frontend | Next.js, Tailwind CSS, Tremor, TanStack Table |
| Backend | Hono, tRPC |
| Database | PostgreSQL, Drizzle ORM |
| Cache | Redis |
| AI | Gemini Flash, Vercel AI SDK |
| Monorepo | Turborepo, pnpm |
Starting from the project description provided by Kalos, I decided on a straightforward technical stack: Next.js and Tailwind CSS for the frontend, Hono for the API server, PostgreSQL for persistence, Redis for caching, and tRPC for end-to-end type safety. I organized everything as a monorepo using Turborepo to make it easy to share the complete project with interviewers.
With those decisions in place, I researched frontend tools for building dashboards that would integrate well with tRPC. Using Gemini, I explored different options and settled on Tremor for data visualization components and TanStack Table for advanced table functionality. Both libraries work well with React and TypeScript.
Once I had a clear picture of the architecture, I asked Gemini to generate a blueprint.md file documenting the technical requirements and schema design. I then implemented this blueprint in Cursor, setting up the monorepo structure, database schema, API routes, and basic frontend scaffolding.
Time spent: ~1 hour
With the project structure in place, I used Cursor to implement the core functionality: the main API endpoints (transactions, sellers, dashboard KPIs) and a simple frontend with placeholder components. This gave me a working application that I could iterate on.
Time spent: ~10 minutes
Using Gemini, I researched the most popular and useful visualizations for sales analytics dashboards. Based on the data model (transactions, sellers, regions, currencies), I identified several chart types that would provide meaningful insights: KPI cards with sparklines, an interactive transactions table, activity heatmaps, revenue area charts, regional distribution donuts, seller rankings, and status funnels.
I updated the blueprint with detailed technical specifications for each visualization component, describing how they would leverage Tremor's chart library and TanStack Table. I also designed a responsive layout structure: a collapsible sidebar for navigation, a sticky top bar with the natural language search input, and a grid-based content area that adapts from multi-column on desktop to single-column on mobile.
For user comfort, I made sure to implement light and dark mode support using next-themes, with a toggle in the top bar. The color palette uses CSS custom properties so all components adapt automatically to the selected theme.
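For reference, a minimal sketch of such a toggle with next-themes (the component name and markup are illustrative, not the project's actual code):

```tsx
"use client";

import { useTheme } from "next-themes";

// Minimal toggle: next-themes flips the theme class on <html>, and because
// the palette is defined with CSS custom properties, components restyle
// themselves automatically.
export function ThemeToggle() {
  const { resolvedTheme, setTheme } = useTheme();
  const next = resolvedTheme === "dark" ? "light" : "dark";
  return <button onClick={() => setTheme(next)}>Switch to {next} mode</button>;
}
```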
Time spent: ~30 minutes
Based on the data requirements for the frontend visualizations, I implemented a complete set of backend endpoints. Each visualization component needs specific data aggregations, so I created dedicated tRPC procedures for KPIs with trend data, activity heatmaps, regional breakdowns, status funnels, and exchange rate histories.
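As a sketch of what one such procedure might look like (router, procedure, and import names are assumptions based on the stack described above, not the actual code):

```typescript
import { z } from "zod";
import { sql, gte } from "drizzle-orm";
// Assumed project imports: tRPC helpers, Drizzle client, and schema.
import { router, publicProcedure } from "../trpc";
import { db } from "@gsid/db";
import { transactions } from "@gsid/db/schema";

export const dashboardRouter = router({
  // KPI aggregation over a time range: total revenue and transaction count.
  kpis: publicProcedure
    .input(z.object({ from: z.coerce.date() }))
    .query(async ({ input }) => {
      const [row] = await db
        .select({
          totalRevenue: sql<number>`coalesce(sum(${transactions.amount}), 0)`,
          transactionCount: sql<number>`count(*)`,
        })
        .from(transactions)
        .where(gte(transactions.timestamp, input.from));
      return row;
    }),
});
```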
To optimize performance, I implemented Redis caching with a 2-second TTL for read operations, ensuring the database isn't overwhelmed while still providing near-real-time data. For the natural language search, Gemini responses are cached for 1 minute.
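Conceptually this is a small read-through wrapper around Redis. A minimal sketch assuming ioredis; the helper name `cached` and the key format are illustrative:

```typescript
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");

// Read-through cache: return the cached JSON value if present; otherwise
// compute it, store it with a TTL, and return it.
async function cached<T>(
  key: string,
  ttlSeconds: number,
  compute: () => Promise<T>,
): Promise<T> {
  const hit = await redis.get(key);
  if (hit !== null) return JSON.parse(hit) as T;
  const value = await compute();
  await redis.set(key, JSON.stringify(value), "EX", ttlSeconds);
  return value;
}

// Usage: 2s TTL for dashboard reads, 60s for Gemini responses.
// const kpis = await cached("dashboard:kpis", 2, () => queryKpis());
```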
I also identified missing database indexes that would slow down the dashboard queries as data grows. I added indexes on the most frequently filtered columns: transaction status and timestamp (for time-range queries), seller_id and region_id (for joins and filters), and exchange rate dates.
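With Drizzle, these indexes are declared next to the table definition. A sketch under the assumption that the schema looks roughly like this (column names inferred from the description):

```typescript
import {
  pgTable, serial, integer, numeric, text, timestamp, index,
} from "drizzle-orm/pg-core";

export const transactions = pgTable(
  "transactions",
  {
    id: serial("id").primaryKey(),
    sellerId: integer("seller_id").notNull(),
    regionId: integer("region_id").notNull(),
    amount: numeric("amount").notNull(),
    status: text("status").notNull(),
    timestamp: timestamp("timestamp").notNull(),
  },
  (t) => [
    // Status filters and time-range queries on the dashboard.
    index("transactions_status_idx").on(t.status),
    index("transactions_timestamp_idx").on(t.timestamp),
    // Joins and filters by seller and region.
    index("transactions_seller_id_idx").on(t.sellerId),
    index("transactions_region_id_idx").on(t.regionId),
  ],
);
```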
Finally, I built a comprehensive unit test suite using Vitest. The tests cover all router procedures with mocked dependencies (database, Redis, AI SDK), allowing me to iterate on the code with confidence that existing functionality won't break. The test structure uses a factory pattern for mocks, making it easy to configure specific responses for each test case.
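The factory pattern might look roughly like this (a sketch; `createMockRedis` and the configured responses are hypothetical):

```typescript
import { describe, it, expect, vi } from "vitest";

// Factory: builds a Redis mock whose responses are configurable per test case.
function createMockRedis(overrides: { get?: string | null } = {}) {
  return {
    get: vi.fn().mockResolvedValue(overrides.get ?? null),
    set: vi.fn().mockResolvedValue("OK"),
  };
}

describe("cached dashboard reads", () => {
  it("serves a cache hit without touching the database", async () => {
    const redis = createMockRedis({ get: JSON.stringify({ totalRevenue: 42 }) });
    const hit = await redis.get("dashboard:kpis");
    expect(JSON.parse(hit!)).toEqual({ totalRevenue: 42 });
    expect(redis.set).not.toHaveBeenCalled();
  });
});
```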
Time spent: ~30 minutes
With the backend fully implemented and tested, I focused on building out the frontend visualization components with a polished, professional aesthetic. Each component follows a consistent design pattern: loading states with skeleton animations, error handling, and responsive layouts that adapt from multi-column grids on desktop to single-column stacks on mobile.
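For illustration, the pattern for a single component might look like this (the `api` client, `formatCurrency`, and import paths are assumptions):

```tsx
import { Card, Metric, Text } from "@tremor/react";
// Assumed project imports: tRPC React client and the shared formatters.
import { api } from "@/lib/trpc";
import { formatCurrency } from "@/lib/formatters";

// Consistent pattern: skeleton while loading, explicit error state, then data.
export function RevenueKpiCard() {
  const { data, isLoading, error } = api.dashboard.kpis.useQuery({
    from: new Date(Date.now() - 7 * 24 * 3600 * 1000),
  });

  if (isLoading) {
    return <div className="h-24 animate-pulse rounded-lg bg-gray-200 dark:bg-gray-800" />;
  }
  if (error) return <p className="text-sm text-red-500">Failed to load KPIs.</p>;

  return (
    <Card>
      <Text>Total revenue</Text>
      <Metric>{formatCurrency(data.totalRevenue)}</Metric>
    </Card>
  );
}
```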
For business logic, I extracted reusable functions into lib/formatters.ts: currency and number formatting, delta type classification for Tremor's BadgeDelta component, color intensity calculation for the heatmap, status configuration mapping, and status-based sorting. These pure functions contain the core logic that drives the visualizations.
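Two of those pure functions might look like this (names match the description; the thresholds and exact signatures are assumptions):

```typescript
// lib/formatters.ts (illustrative excerpt)

// Values accepted by Tremor's BadgeDelta `deltaType` prop.
export type DeltaType =
  | "increase" | "moderateIncrease" | "unchanged" | "moderateDecrease" | "decrease";

// Format an amount in a given ISO 4217 currency, e.g. formatCurrency(1234.5, "EUR").
export function formatCurrency(amount: number, currency = "USD"): string {
  return new Intl.NumberFormat("en-US", { style: "currency", currency }).format(amount);
}

// Classify a percentage change for BadgeDelta; ±5% marks the "moderate" band.
export function deltaType(changePct: number): DeltaType {
  if (changePct > 5) return "increase";
  if (changePct > 0) return "moderateIncrease";
  if (changePct === 0) return "unchanged";
  if (changePct > -5) return "moderateDecrease";
  return "decrease";
}
```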
I then designed a unit test suite focusing on this business logic layer. The tests cover edge cases like zero values, negative numbers, boundary conditions for delta thresholds, unknown status handling, and array immutability.
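A few of those edge cases, written against the illustrative formatters above:

```typescript
import { describe, it, expect } from "vitest";
import { formatCurrency, deltaType } from "./formatters";

describe("formatters edge cases", () => {
  it("handles zero and negative amounts", () => {
    expect(formatCurrency(0)).toBe("$0.00");
    expect(formatCurrency(-12.5)).toBe("-$12.50");
  });

  it("classifies delta boundary values", () => {
    expect(deltaType(0)).toBe("unchanged");
    expect(deltaType(5)).toBe("moderateIncrease"); // boundary: not yet "increase"
    expect(deltaType(-5)).toBe("decrease");        // boundary: past the "moderate" band
  });
});
```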
Time spent: ~1 hour
With the development environment working and the application fully functional, I added Docker containerization to simplify installation for reviewers. The goal was to make the entire stack runnable with a single command, without requiring Node.js, pnpm, or any local dependencies beyond Docker itself.
I created multi-stage Dockerfiles for both the API and Web services, using Turborepo's turbo prune feature to extract only the necessary packages for each container. This keeps image sizes small while preserving the monorepo's workspace dependencies. For build performance, I configured BuildKit cache mounts to persist pnpm, Turborepo, and Next.js caches between builds.
The docker-compose configuration orchestrates four services: PostgreSQL, Redis, API, and Web. A separate migration service handles database schema setup and seeding. Health checks ensure services start in the correct order, with the API waiting for healthy database connections before accepting traffic.
I created initialization scripts (docker/init.sh and docker/init.ps1) that handle the complete setup: building images, starting databases, running migrations, seeding data, and launching the application. A separate set of run scripts provides a quick way to restart services without rebuilding.
Time spent: ~30 minutes
The final step focused on usability improvements and refining the natural language search feature.
Prompt Engineering for Gemini
The natural language search relies heavily on prompt engineering to extract structured filters from free-form user queries. I structured the prompts in two parts:
1. System context: Defines Gemini's role as a "search filter extractor" and describes the data model (transactions with seller, customer, region, amount, currency, timestamp). Crucially, it instructs the model to return an error message for unsupported queries rather than hallucinating filters.
2. Base prompt: Specifies the exact filter schema to extract (sellerName, customerName, dateRange, region, minAmount) with examples of how to interpret relative dates ("today", "last week", "this month") and amount thresholds ("over $1000"). The current date is injected dynamically so the model can resolve relative time expressions correctly.
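Wired together with the Vercel AI SDK, the extraction step might look like the following sketch (the model id, zod schema shape, and prompt text are assumptions consistent with the description above):

```typescript
import { google } from "@ai-sdk/google";
import { generateObject } from "ai";
import { z } from "zod";

// Filter schema the model must fill in; unsupported queries set `error` instead.
const filterSchema = z.object({
  sellerName: z.string().optional(),
  customerName: z.string().optional(),
  region: z.string().optional(),
  minAmount: z.number().optional(),
  dateRange: z.object({ from: z.string(), to: z.string() }).optional(),
  error: z.string().optional(),
});

export async function extractFilters(query: string) {
  const { object } = await generateObject({
    model: google("gemini-1.5-flash"), // model id is an assumption
    schema: filterSchema,
    system:
      "You are a search filter extractor for a sales transactions dataset " +
      "(seller, customer, region, amount, currency, timestamp). " +
      "If the query cannot be expressed with the schema, set `error` instead of guessing.",
    prompt: `Current date: ${new Date().toISOString().slice(0, 10)}\nUser query: ${query}`,
  });
  return object;
}
```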
I also reduced the TTL for Gemini responses to 1 minute.
API Key Validation
I added graceful handling for missing or placeholder Gemini API keys. The server detects invalid keys at startup and returns a specific response when natural language search is attempted without proper configuration. The frontend displays a clear error message near the search bar, informing users that the API server needs a valid API key.
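That startup check can be a simple comparison against the placeholder value (a sketch; the exported flag and error shape are illustrative):

```typescript
// Detect a missing or placeholder key at startup so the search endpoint
// can fail fast with a clear message instead of an opaque Gemini error.
const apiKey = process.env.GOOGLE_GENERATIVE_AI_API_KEY;
export const aiConfigured = !!apiKey && apiKey !== "your_actual_api_key_here";

// In the search procedure (illustrative; TRPCError comes from "@trpc/server"):
// if (!aiConfigured) {
//   throw new TRPCError({
//     code: "PRECONDITION_FAILED",
//     message: "Natural language search requires a valid Gemini API key.",
//   });
// }
```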
Frontend Reorganization
To improve usability, I reorganized the layout components:
- Moved the natural language search bar from the top bar into the Transactions section, making it clear that search results appear in the transactions table below
- Relocated the theme toggle from the top bar to the sidebar footer, alongside the user menu
- Simplified the top bar to only show on mobile (hamburger menu and user avatar), since desktop users have the sidebar always visible
Documentation
Updated the README with clear setup instructions, emphasizing the need to configure the Gemini API key before running the application. Both Docker and manual setup sections now include the environment configuration step.
Time spent: ~30 minutes
If this project were to continue development, these would be the logical next steps:
- Secrets management: The project currently uses environment variables for configuration, but these are stored in the repository and are not secure. We should use a secrets manager to store the API keys and other sensitive information.
- Integration tests: The current test suite focuses on unit tests with mocked dependencies. Adding integration tests that run against a real PostgreSQL instance (using Testcontainers or a dedicated test database) would catch issues that unit tests miss, such as SQL syntax errors, transaction behavior, and index performance.
- CI/CD pipeline: Set up GitHub Actions (or similar) to run tests on every pull request, build Docker images on merge to main, and automatically deploy to a staging environment. This would include lint checks, type checking, unit tests, and eventually integration tests.
- Migrate to Bun: The project currently uses Node.js with tsx for TypeScript execution. Migrating to Bun would simplify the development experience significantly: native TypeScript support without transpilation, faster package installation, and a built-in test runner. This would reduce tooling complexity and improve developer iteration speed.
