An application that finds the nearest car parks to a user's location, with real-time availability information, using Singapore government data.
- Nearest Car Park Search: Find car parks near a given location with availability information
- Audit Fields: Track creation, updates, and soft deletes with user attribution
- Real-time Availability: Automatic updates from Singapore government API every 15 minutes
- Coordinate Conversion: Converts SVY21 coordinates to WGS84 (latitude/longitude) format
- Pagination Support: Efficient pagination for large result sets
- RESTful API: Clean REST endpoints with proper validation and error handling
- Docker Support: Complete containerization with MySQL, Redis, and Spring Boot
The application follows a layered architecture pattern with clear separation of concerns, optimized for performance and scalability.
```mermaid
graph TB
    subgraph "Client Layer"
        A[Web Client]
        B[Mobile App]
        C[API Consumer]
    end
    subgraph "API Gateway"
        D[Spring Boot Controller]
        E[Validation & Error Handling]
    end
    subgraph "Business Logic Layer"
        F[Cached Car Park Service]
        G[Car Park Availability Service]
        H[Car Park Streaming Import Service]
        I[Redis Geospatial Service]
    end
    subgraph "Data Access Layer"
        J[MySQL Repository]
        K[External API Repository]
        L[Redis Template]
    end
    subgraph "Data Storage"
        M[(MySQL Database)]
        N[(Redis Cache)]
        O[Singapore Government API]
    end
    A --> D
    B --> D
    C --> D
    D --> E
    E --> F
    E --> G
    E --> H
    F --> I
    G --> K
    H --> J
    I --> L
    J --> M
    L --> N
    K --> O
```
```mermaid
flowchart TD
    subgraph "Data Ingestion Pipeline"
        A[CSV Import] --> B[Coordinate Conversion<br/>SVY21 → WGS84]
        B --> C[Database Storage]
        C --> D[Redis Location Cache]
        E[External API] --> F[Availability Data]
        F --> G[Update Existing Records]
        G --> C
    end
    subgraph "Query Pipeline"
        H[User Request] --> I[Redis Geospatial Query]
        I --> J{Found in Cache?}
        J -->|Yes| K[Get Details from DB]
        J -->|No| L[Database Spatial Query]
        L --> K
        K --> M[Return Results]
    end
    subgraph "Scheduled Tasks"
        N[15-min Timer] --> O[Fetch External API]
        O --> P[Update Availability]
        P --> Q[Refresh Redis Cache]
    end
    D --> I
    G --> Q
```
```mermaid
graph LR
    subgraph "Controller Layer"
        A[CarParkController]
    end
    subgraph "Service Layer"
        B[CachedCarParkService]
        C[CarParkAvailabilityService]
        D[CarParkStreamingImportService]
        E[RedisGeospatialService]
    end
    subgraph "Repository Layer"
        F[CarParkMySqlRepository]
        G[CarParkExternalApiRepository]
    end
    subgraph "Entity Layer"
        H[CarPark]
        I[CarParkAvailability]
    end
    A --> B
    A --> C
    A --> D
    B --> E
    C --> G
    D --> F
    E --> F
    F --> H
    G --> I
```
```mermaid
erDiagram
    CarPark {
        string car_park_no PK
        string address
        float latitude
        float longitude
        string location "POINT"
        int total_lots
        int available_lots
        string car_park_type
        string type_of_parking_system
        string short_term_parking
        string free_parking
        string night_parking
        string car_park_decks
        string gantry_height
        string car_park_basement
        string created_by
        datetime created_at
        string updated_by
        datetime updated_at
        datetime deleted_at
    }
    CarParkAvailability {
        string carpark_number PK
        int total_lots
        int available_lots
        string lot_type
    }
    CarPark ||--o{ CarParkAvailability : has
```
```mermaid
graph TD
    subgraph "Redis Cache Layer"
        A[carpark:locations<br/>Geospatial Data]
    end
    subgraph "Cache Operations"
        C[Import CSV] --> D[Cache Locations]
        E[Update Availability] --> F[Refresh Locations]
        G[Geospatial Query] --> H[Check Cache First]
        H --> I{Cache Hit?}
        I -->|Yes| J[Return Cached Data]
        I -->|No| K[Query Database]
        K --> L[Return Results]
    end
    D --> A
    F --> A
    H --> A
```
```mermaid
graph LR
    subgraph "Database Optimization"
        A[Spatial Indexes<br/>SRID 4326]
        B[Batch Updates<br/>CASE Statements]
        C[Connection Pooling]
    end
    subgraph "Memory Optimization"
        D[Streaming CSV Import]
        E[Streaming JSON Parsing]
        F[Chunked Processing]
    end
    subgraph "Cache Optimization"
        G[Redis Geospatial<br/>GEOADD/GEORADIUS]
        H[TTL-based Expiry]
        I[Lazy Loading]
    end
    subgraph "API Optimization"
        J[Pagination Support]
        K[Efficient Queries]
        L[Error Handling]
    end
```
- Controller Layer: Handles HTTP requests, validation, and error handling
- Service Layer: Contains business logic and orchestrates operations
- Repository Layer: Manages data access and external API interactions
- Entity Layer: Defines data models and relationships
- Redis Geospatial: Stores car park locations for fast spatial queries
- TTL-based Expiry: Automatically refreshes cache every 15 minutes
- Fallback Mechanism: Database queries when cache misses occur
- Streaming Processing: Handles large CSV and JSON files without memory issues
- Batch Operations: Uses native SQL CASE statements for efficient updates
- Spatial Indexing: MySQL spatial indexes (SRID 4326) for location queries
- Connection Pooling: Efficient database connection management
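The CASE-based batch update mentioned above can be sketched as a helper that assembles one `UPDATE` statement covering many car parks (a hypothetical illustration, not the project's actual repository code; production code should bind values with parameterized statements rather than concatenating them):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: build a single UPDATE statement that sets
// available_lots for many car parks at once via a CASE expression,
// instead of issuing one UPDATE per car park.
class BatchUpdateSketch {
    static String buildAvailabilityUpdate(Map<String, Integer> availableByCarPark) {
        StringBuilder sql = new StringBuilder(
                "UPDATE car_parks SET available_lots = CASE car_park_no");
        StringBuilder in = new StringBuilder();
        for (Map.Entry<String, Integer> e : availableByCarPark.entrySet()) {
            sql.append(" WHEN '").append(e.getKey()).append("' THEN ").append(e.getValue());
            if (in.length() > 0) in.append(", ");
            in.append("'").append(e.getKey()).append("'");
        }
        sql.append(" END WHERE car_park_no IN (").append(in).append(")");
        return sql.toString();
    }

    public static void main(String[] args) {
        Map<String, Integer> updates = new LinkedHashMap<>();
        updates.put("A1", 42);
        updates.put("B2", 7);
        System.out.println(buildAvailabilityUpdate(updates));
    }
}
```

One statement touching N rows holds a single connection briefly, which is why batching also eases connection-pool pressure.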
- CSV Import: Establishes car park structure and coordinates
- External API: Updates availability data for existing car parks
- Real-time Updates: Scheduled tasks ensure data freshness
- Coordinate Conversion: Accurate SVY21 to WGS84 transformation
- Horizontal Scaling: Stateless services can be replicated
- Database Optimization: Efficient queries and indexing strategies
- Cache Distribution: Redis cluster support for high availability
- Async Processing: Non-blocking operations for better throughput
The easiest way to get started is using Docker Compose, which will set up the entire environment including MySQL, Redis, and the Spring Boot application.
1. Clone and Navigate

   ```bash
   git clone <repository-url>
   cd SpringBootStarter
   ```

2. Start the Complete Stack

   ```bash
   # Build and start all services
   docker compose up --build -d

   # Check status
   docker compose ps
   ```

3. Verify Services

   ```bash
   # Check if all containers are healthy
   docker compose ps

   # View logs
   docker compose logs -f
   ```

4. Test the Application

   ```bash
   # Health check
   curl http://localhost:8080/actuator/health
   ```
- MySQL: Port 21308 (external) → 3306 (internal)
- Redis: Port 21703 (external) → 6379 (internal)
- Spring Boot App: Port 8080
```bash
# View logs for a specific service
docker compose logs carpark-api
docker compose logs mysql
docker compose logs redis

# Restart a specific service
docker compose restart carpark-api

# Stop all services
docker compose down

# Stop and remove volumes (WARNING: This will delete all data)
docker compose down -v

# View resource usage
docker stats
```

If you prefer to run the application locally without Docker, follow these steps.
- Java 21 (OpenJDK or Oracle JDK)
- MySQL 8.0+
- Redis 7+
- Gradle 8.5+
- Maven (optional, for dependency management)
1. Install MySQL

   ```bash
   # macOS (using Homebrew)
   brew install mysql
   brew services start mysql

   # Ubuntu/Debian
   sudo apt update
   sudo apt install mysql-server
   sudo systemctl start mysql

   # Windows: Download MySQL Installer from the official website
   ```

2. Create Database and User

   ```sql
   CREATE DATABASE carpark_db;
   CREATE USER 'carpark_user'@'localhost' IDENTIFIED BY 'carpark_password';
   GRANT ALL PRIVILEGES ON carpark_db.* TO 'carpark_user'@'localhost';
   FLUSH PRIVILEGES;
   ```

3. Install Redis

   ```bash
   # macOS
   brew install redis
   brew services start redis

   # Ubuntu/Debian
   sudo apt install redis-server
   sudo systemctl start redis

   # Windows: Download Redis for Windows or use WSL
   ```

4. Update application.properties

   ```properties
   # Database Configuration
   spring.datasource.url=jdbc:mysql://localhost:3306/carpark_db?useSSL=false&serverTimezone=UTC&allowPublicKeyRetrieval=true
   spring.datasource.username=carpark_user
   spring.datasource.password=carpark_password

   # Redis Configuration
   spring.data.redis.host=127.0.0.1
   spring.data.redis.port=6379
   ```

5. Run the Application

   ```bash
   # Using Gradle
   ./gradlew bootRun

   # Or build and run the JAR
   ./gradlew build
   java -jar build/libs/carpark-0.0.1-SNAPSHOT.jar
   ```

6. Start Services

   ```bash
   # Terminal 1: Start MySQL
   mysql.server start

   # Terminal 2: Start Redis
   redis-server

   # Terminal 3: Run Spring Boot
   ./gradlew bootRun
   ```

7. Import Initial Data

   ```bash
   # Import car park data from CSV
   curl -X POST "http://localhost:8080/v1/carparks/import"
   ```

8. Test Endpoints

   ```bash
   # Test nearest car parks
   curl "http://localhost:8080/v1/carparks/nearest?latitude=1.234&longitude=103.456&page=1&per_page=5"

   # Update availability
   curl -X POST "http://localhost:8080/v1/carparks/update-availability"
   ```
Place the Singapore car park CSV file in `data/carpark-data.csv` with the following format:

```csv
car_park_no,address,x_coord,y_coord,car_park_type,type_of_parking_system,short_term_parking,free_parking,night_parking,car_park_decks,gantry_height,car_park_basement
A1,BLK 1/2 TANJONG PAGAR PLAZA,26076.1487,48187.2034,SURFACE CAR PARK,COUPON,WHOLE DAY,NO,YES,0,0,0
```

The application supports configurable coordinate conversion for different regions and coordinate systems. By default it is configured for Singapore's SVY21 coordinate system, but it can easily be adapted for other regions.
Add these properties to your application.properties file:
```properties
# Origin coordinates for the coordinate system
carpark.coordinate.origin.lat=1.3666666666666667
carpark.coordinate.origin.lon=103.83333333333333
carpark.coordinate.origin.n=38744.572
carpark.coordinate.origin.e=28001.642

# Conversion factors for coordinate transformation
carpark.coordinate.factor.lat=0.0000089831
carpark.coordinate.factor.lon=0.0000111319

# Coordinate bounds validation
carpark.coordinate.bounds.lat.min=-90.0
carpark.coordinate.bounds.lat.max=90.0
carpark.coordinate.bounds.lon.min=-180.0
carpark.coordinate.bounds.lon.max=180.0
```

To use this service with different coordinate systems:
- Identify your coordinate system's origin point (usually the southwest corner or center)
- Calculate conversion factors based on your coordinate system's scale
- Set appropriate bounds for your region
- Update the configuration in `application.properties`
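Assuming the conversion is the linear mapping suggested by the origin/factor properties (an assumption; the project's actual SVY21 transformation may use a fuller projection formula), it can be sketched as:

```java
// Hypothetical sketch of the linear conversion implied by the configuration:
// each metre of northing/easting offset from the origin maps to a fixed
// number of degrees of latitude/longitude.
class CoordinateConversionSketch {
    static final double ORIGIN_LAT = 1.3666666666666667;  // carpark.coordinate.origin.lat
    static final double ORIGIN_LON = 103.83333333333333;  // carpark.coordinate.origin.lon
    static final double ORIGIN_N = 38744.572;             // carpark.coordinate.origin.n
    static final double ORIGIN_E = 28001.642;             // carpark.coordinate.origin.e
    static final double FACTOR_LAT = 0.0000089831;        // carpark.coordinate.factor.lat
    static final double FACTOR_LON = 0.0000111319;        // carpark.coordinate.factor.lon

    static double[] toWgs84(double northing, double easting) {
        double lat = ORIGIN_LAT + (northing - ORIGIN_N) * FACTOR_LAT;
        double lon = ORIGIN_LON + (easting - ORIGIN_E) * FACTOR_LON;
        return new double[] { lat, lon };
    }

    public static void main(String[] args) {
        // The configured SVY21 origin maps back to the configured origin lat/lon.
        double[] ll = toWgs84(38744.572, 28001.642);
        System.out.println(ll[0] + ", " + ll[1]);
    }
}
```

Adapting to another region then means swapping the six constants for that region's origin and scale.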
```properties
# UK National Grid to WGS84 conversion
carpark.coordinate.origin.lat=49.7666666666666667
carpark.coordinate.origin.lon=-7.5666666666666667
carpark.coordinate.origin.n=0.0
carpark.coordinate.origin.e=0.0
carpark.coordinate.factor.lat=0.000001
carpark.coordinate.factor.lon=0.000001

# UK bounds
carpark.coordinate.bounds.lat.min=49.0
carpark.coordinate.bounds.lat.max=61.0
carpark.coordinate.bounds.lon.min=-8.0
carpark.coordinate.bounds.lon.max=2.0
```

```bash
# Find nearest car parks
curl "http://localhost:8080/v1/carparks/nearest?latitude=1.234&longitude=103.456&page=1&per_page=10"
```
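The `page` and `per_page` parameters translate to the SQL `LIMIT`/`OFFSET` used by the spatial query; a minimal sketch of that mapping (a hypothetical helper, assuming 1-based pages as in the examples):

```java
// Minimal sketch: translate 1-based page/per_page into a SQL OFFSET.
class PaginationSketch {
    static int offset(int page, int perPage) {
        if (page < 1 || perPage < 1) {
            throw new IllegalArgumentException("page and per_page must be >= 1");
        }
        return (page - 1) * perPage;
    }

    public static void main(String[] args) {
        // page=1, per_page=10 -> OFFSET 0; page=3, per_page=5 -> OFFSET 10
        System.out.println(offset(1, 10) + " " + offset(3, 5));
    }
}
```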
```bash
# Import car park data
curl -X POST "http://localhost:8080/v1/carparks/import"

# Update availability
curl -X POST "http://localhost:8080/v1/carparks/update-availability"
```

The application is designed with scalability and performance in mind, implementing several key optimizations to ensure efficient data access and system performance.
The system leverages MySQL's spatial indexing capabilities to optimize location-based queries:
- Spatial Data Types: Uses `POINT SRID 4326` for storing car park coordinates in WGS84 format
- Spatial Indexes: Implements a spatial index on the `location` field for fast geospatial queries
- Optimized Queries: Utilizes the `ST_Distance_Sphere()` function for accurate distance calculations on indexed spatial data
- Performance Impact: Spatial queries that would take seconds without indexing now execute in milliseconds
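For sanity-checking results, the great-circle distance that `ST_Distance_Sphere()` returns can be approximated client-side with the haversine formula (a standalone sketch, not project code):

```java
// Standalone sketch: haversine great-circle distance, approximating
// what MySQL's ST_Distance_Sphere() computes (which uses a default
// sphere radius; small numeric differences are expected).
class HaversineSketch {
    static final double EARTH_RADIUS_KM = 6371.0;

    static double distanceKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // Two points in central Singapore
        System.out.printf("%.2f km%n", distanceKm(1.3667, 103.8333, 1.2766, 103.8458));
    }
}
```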
```sql
-- Example of an optimized spatial query using the index
SELECT cp.*, ST_Distance_Sphere(?, cp.location) / 1000 AS distance_km
FROM car_parks cp
WHERE cp.deleted_at IS NULL AND cp.available_lots > 0
ORDER BY distance_km
LIMIT ? OFFSET ?
```

Redis serves as a high-performance caching layer, particularly leveraging its geospatial features:
- Geospatial Caching: Stores car park locations in Redis using `GEOADD` commands for ultra-fast spatial queries
- Cache Strategy: Implements a simplified caching approach using only the `carpark:locations` key with a 15-minute TTL
- Read-Heavy Optimization: Given that the system performs frequent reads (user queries) and periodic writes (every 15 minutes), Redis caching enables:
  - Sub-millisecond response times for location-based queries
  - Reduced database load during peak usage
  - Scalable concurrent access without database connection pressure
- Fallback Mechanism: The database acts as a reliable fallback when Redis data is unavailable
- Automatic Refresh: The cache is refreshed every 15 minutes to maintain data consistency

```java
// Redis geospatial caching implementation
redisTemplate.opsForGeo().add(CAR_PARK_LOCATIONS_KEY,
        new Point(longitude, latitude), carParkNo);
```

The system employs efficient data processing strategies for both import and updates:
- Streaming Processing: Uses OpenCSV with streaming to handle large CSV files without loading entire content into memory
- Batch Database Operations: Processes data in chunks to minimize memory footprint
- Coordinate Conversion: Efficiently converts SVY21 coordinates to WGS84 using optimized mathematical algorithms
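The streaming import can be illustrated with a JDK-only sketch that reads and parses rows one at a time without materializing the whole file in memory (the real service uses OpenCSV; names here are illustrative):

```java
import java.io.BufferedReader;
import java.io.StringReader;

// JDK-only sketch of streaming row-by-row CSV processing. Rows are handled
// as they are read. Note: the naive split below does not handle quoted
// fields containing commas; a real parser (e.g. OpenCSV) must.
class StreamingImportSketch {
    record Row(String carParkNo, String address, double x, double y) {}

    static Row parseRow(String line) {
        String[] cols = line.split(",");
        return new Row(cols[0], cols[1],
                Double.parseDouble(cols[2]), Double.parseDouble(cols[3]));
    }

    static long importFrom(BufferedReader reader) {
        // Skip the header, then process each row as it is read.
        return reader.lines()
                .skip(1)
                .map(StreamingImportSketch::parseRow)
                .filter(r -> !r.carParkNo().isEmpty())  // placeholder for validation
                .count();
    }

    public static void main(String[] args) {
        String csv = "car_park_no,address,x_coord,y_coord\n"
                + "A1,BLK 1/2 TANJONG PAGAR PLAZA,26076.1487,48187.2034\n";
        System.out.println(importFrom(new BufferedReader(new StringReader(csv))));
    }
}
```

In the actual pipeline each parsed row would be coordinate-converted and accumulated into a batch before hitting the database.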
- HTTP Streaming: Uses Spring WebFlux's reactive streams to process large API responses efficiently
- Chunked Processing: Processes external API responses in `DataBuffer` chunks to manage memory usage
- True Batch Database Updates: Implements native SQL CASE statements to update multiple car parks in a single database operation
- Connection Pool Optimization: Batching minimizes concurrent database connections, reducing pressure on the connection pool and preventing potential database bottlenecks
- Memory Management: The streaming approach avoids loading large API responses entirely into memory

```java
// Efficient streaming and batching example
request.retrieve()
        .bodyToFlux(DataBuffer.class)
        .reduce(new StringBuilder(), (sb, dataBuffer) -> {
            // Process data in chunks
            byte[] bytes = new byte[dataBuffer.readableByteCount()];
            dataBuffer.read(bytes);
            DataBufferUtils.release(dataBuffer);
            sb.append(new String(bytes, StandardCharsets.UTF_8));
            return sb;
        })
```

The current implementation demonstrates good performance characteristics:
- Data Processing: 2,084 car parks processed in ~0.6 seconds
- Cache Refresh: 2,254 locations cached in ~0.375 seconds
- Complete Update Cycle: Full availability update cycle completes in under 1 second
- Query Response: Nearest car park queries return results in milliseconds
- Memory Efficiency: Processes 300KB+ API responses without memory issues
- Database Operations: True batch updates using native SQL CASE statements for maximum efficiency
- Horizontal Scaling: Application can be deployed across multiple instances
- Database Connection Pooling: Efficient connection management for high concurrent loads
- Redis Cluster Support: Can be extended to Redis cluster for higher availability
- Asynchronous Processing: Scheduled tasks don't block user request processing
- Resource Management: Automatic cleanup of database connections and Redis resources