
Commit ee6daab

Merge pull request #10 from topcoder-platform/fix/remove-schema-registry
Kafka Schema Registry Removal
2 parents 14b7e61 + 96cc9c1 commit ee6daab

14 files changed: +78 -561 lines changed

.env.example

Lines changed: 0 additions & 8 deletions
@@ -32,14 +32,6 @@ KAFKA_INITIAL_RETRY_TIME=300
 KAFKA_RETRIES=5

 # -------------------------------------
-# Schema Registry Configuration
-# -------------------------------------
-# The URL for the Confluent Schema Registry
-SCHEMA_REGISTRY_URL=http://localhost:8081
-# Optional username for Schema Registry basic authentication
-SCHEMA_REGISTRY_USER=
-# Optional password for Schema Registry basic authentication
-SCHEMA_REGISTRY_PASSWORD=

 # -------------------------------------
 # Challenge API Configuration
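
With the Schema Registry settings gone, the Kafka connection is configured entirely from the remaining variables. A minimal sketch of how they might be wired up, assuming a kafkajs client (suggested by the removed `@kafkajs/confluent-schema-registry` companion package); the service's actual config wiring may differ:

```typescript
// Sketch: build a kafkajs client from the remaining env vars.
// No Schema Registry client is constructed anymore.
import { Kafka } from 'kafkajs';

const kafka = new Kafka({
  clientId: process.env.KAFKA_CLIENT_ID ?? 'autopilot-service',
  brokers: (process.env.KAFKA_BROKERS ?? 'localhost:9092').split(','),
  retry: {
    initialRetryTime: Number(process.env.KAFKA_INITIAL_RETRY_TIME ?? 300),
    retries: Number(process.env.KAFKA_RETRIES ?? 5),
  },
});

export const producer = kafka.producer();
```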

README.md

Lines changed: 38 additions & 9 deletions
@@ -5,7 +5,7 @@ autopilot operations with Kafka integration.
 ## Features

 - **Event-Driven Scheduling**: Dynamically schedules phase transitions using `@nestjs/schedule`.
-- **Kafka Integration**: Utilizes Kafka and Confluent Schema Registry for robust, schema-validated messaging.
+- **Kafka Integration**: Utilizes Kafka with JSON serialization for robust, lightweight messaging.
 - **Challenge API Integration**: Fetches live challenge data, with resilient API calls featuring exponential backoff and rate-limiting handling.
 - **Recovery and Synchronization**: Includes a recovery service on startup and a periodic sync service to ensure data consistency.
 - **Health Checks**: Provides endpoints to monitor application and Kafka connection health.
@@ -51,11 +51,6 @@ Open the `.env` file and configure the variables for your environment. It is cru
 KAFKA_BROKERS=localhost:9092
 KAFKA_CLIENT_ID=autopilot-service

-# -------------------------------------
-# Schema Registry Configuration
-# -------------------------------------
-SCHEMA_REGISTRY_URL=http://localhost:8081
-
 # -------------------------------------
 # Challenge API Configuration
 # -------------------------------------
@@ -96,7 +91,6 @@ docker compose up -d
 This will start:
 - Zookeeper (port 2181)
 - Kafka (ports 9092, 29092)
-- Schema Registry (port 8081)
 - Kafka UI (port 8080)

 2. Verify Docker containers are healthy:
@@ -127,7 +121,6 @@ npm run start:dev
 - **API Documentation (Swagger)**: `http://localhost:3000/api-docs`
 - **Health Check**: `http://localhost:3000/health`
 - **Kafka UI**: `http://localhost:8080`
-- **Schema Registry**: `http://localhost:8081`

 # Test coverage

@@ -167,7 +160,7 @@ The service is composed of several key modules that communicate over specific Ka
 | SchedulerService | Low-level job management using setTimeout. Triggers Kafka events when jobs execute. | - | autopilot.phase.transition |
 | RecoveryService | Runs on startup to sync all active challenges from the API, scheduling them and processing overdue phases. | - | autopilot.phase.transition |
 | SyncService | Runs a periodic cron job to reconcile the scheduler's state with the Challenge API. | - | - |
-| KafkaService | Manages all Kafka producer/consumer connections and schema registry interactions. | All | All |
+| KafkaService | Manages all Kafka producer/consumer connections with JSON serialization. | All | All |


 ## Project Structure
@@ -204,6 +197,42 @@ test/ # Test files

 .env # Environment variables
 .env.example # Example env template
+```
+
+## JSON Messaging Architecture
+
+The service uses a simplified JSON-based messaging approach that aligns with organizational standards and provides several benefits:
+
+### Benefits of JSON Messaging
+
+- **Simplified Infrastructure**: No need for Schema Registry, reducing deployment complexity
+- **AWS Compatibility**: Works seamlessly with AWS-native Kafka solutions like MSK
+- **Standard Format**: Uses widely-adopted JSON format for better interoperability
+- **Reduced Dependencies**: Eliminates Confluent-specific dependencies
+- **Enhanced Performance**: Lower overhead compared to Avro serialization with Schema Registry lookups
+
+### Message Structure
+
+All Kafka messages follow a consistent JSON structure:
+
+```json
+{
+  "topic": "autopilot.phase.transition",
+  "originator": "auto_pilot",
+  "timestamp": "2023-12-01T10:00:00.000Z",
+  "mimeType": "application/json",
+  "payload": {
+    // Topic-specific payload data
+  }
+}
+```
+
+### Message Validation
+
+- **TypeScript Interfaces**: Strong typing maintained through TypeScript interfaces
+- **Class Validators**: Runtime validation using class-validator decorators
+- **JSON Schema**: Implicit schema validation through TypeScript compilation
+- **Error Handling**: Robust error handling for malformed JSON messages with DLQ support
 package.json # Dependencies and scripts
 tsconfig.json # TypeScript config
 README.md # Documentation
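
For illustration, the envelope and validation approach described in the new README section could be expressed roughly as follows; the interface, class, and function names here are hypothetical sketches, not the service's actual types:

```typescript
// Hypothetical sketch of the JSON envelope and its runtime validation,
// based on the "Message Structure" and "Message Validation" notes above.
import { plainToInstance } from 'class-transformer';
import { IsISO8601, IsObject, IsString, validateSync } from 'class-validator';

// Compile-time shape of every Kafka message.
export interface KafkaMessageEnvelope<T = Record<string, unknown>> {
  topic: string;
  originator: string;
  timestamp: string; // ISO-8601
  mimeType: string;  // "application/json"
  payload: T;
}

// Runtime validation using class-validator decorators.
export class KafkaMessageDto implements KafkaMessageEnvelope {
  @IsString() topic!: string;
  @IsString() originator!: string;
  @IsISO8601() timestamp!: string;
  @IsString() mimeType!: string;
  @IsObject() payload!: Record<string, unknown>;
}

// Parse a raw message value and validate it; throw on malformed input so the
// caller can route the message to a dead-letter queue.
export function parseAndValidate(raw: Buffer | string): KafkaMessageDto {
  const dto = plainToInstance(KafkaMessageDto, JSON.parse(raw.toString()));
  const errors = validateSync(dto);
  if (errors.length > 0) {
    throw new Error(`Invalid Kafka message: ${errors.map((e) => e.property).join(', ')}`);
  }
  return dto;
}
```

A failed parse or validation would then feed the DLQ handling mentioned in the Error Handling bullet.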

docker-compose.dev.yml

Lines changed: 16 additions & 0 deletions
@@ -0,0 +1,16 @@
+services:
+  autopilot-app:
+    build:
+      context: .
+      dockerfile: Dockerfile.dev
+    ports:
+      - "3000:3000"
+    env_file:
+      - .env
+    environment:
+      - KAFKA_BROKERS=host.docker.internal:9092
+    volumes:
+      - .:/app
+      - /app/node_modules
+    extra_hosts:
+      - "host.docker.internal:host-gateway"

docker-compose.yml

Lines changed: 1 addition & 25 deletions
@@ -46,44 +46,20 @@ services:
     volumes:
       - kafka_data:/var/lib/kafka/data

-  schema-registry:
-    image: confluentinc/cp-schema-registry:7.4.0
-    ports:
-      - "8081:8081"
-    environment:
-      SCHEMA_REGISTRY_HOST_NAME: schema-registry
-      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: kafka:29092
-      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
-      SCHEMA_REGISTRY_DEBUG: "true"
-    depends_on:
-      kafka:
-        condition: service_healthy
-    healthcheck:
-      test: curl -f http://localhost:8081 || exit 1
-      interval: 30s
-      timeout: 10s
-      retries: 5
-    volumes:
-      - schema_registry_data:/etc/schema-registry
-
   kafka-ui:
     image: provectuslabs/kafka-ui:latest
     ports:
       - "8080:8080"
     environment:
       KAFKA_CLUSTERS_0_NAME: local
       KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:29092
-      KAFKA_CLUSTERS_0_SCHEMAREGISTRY: http://schema-registry:8081
       KAFKA_CLUSTERS_0_METRICS_PORT: 9997
       DYNAMIC_CONFIG_ENABLED: "true"
     depends_on:
       kafka:
         condition: service_healthy
-      schema-registry:
-        condition: service_healthy

 volumes:
   zookeeper_data:
   zookeeper_log:
-  kafka_data:
-  schema_registry_data:
+  kafka_data:

package.json

Lines changed: 0 additions & 2 deletions
@@ -20,7 +20,6 @@
     "test:e2e": "jest --config ./test/jest-e2e.json"
   },
   "dependencies": {
-    "@kafkajs/confluent-schema-registry": "^3.8.0",
     "@nestjs/axios": "^4.0.1",
     "@nestjs/cli": "^11.0.0",
     "@nestjs/common": "^11.1.1",
@@ -34,7 +33,6 @@
     "@nestjs/swagger": "^7.4.0",
     "@nestjs/terminus": "^11.0.0",
     "@types/express": "^5.0.0",
-    "avsc": "^5.7.7",
     "axios": "^1.9.0",
     "class-transformer": "^0.5.1",
     "class-validator": "^0.14.2",
