
Architecture Overview

synthaicode edited this page Oct 25, 2025 · 5 revisions

🏗️ Architecture Overview (Layers and Responsibilities)

This document is an internal design reference for the Kafka.Ksql.Linq OSS. It clarifies the overall architecture and the responsibilities of each layer.

⚠️ It targets contributors who modify or extend the OSS itself. It is not a user-facing DSL overview.


🗂️ Layer list

  1. Application layer
  2. Context definition layer
  3. Entity metadata management layer
  4. Query and stream composition layer
  5. Messaging layer
  6. Kafka Streams API layer
  7. Kafka / Schema Registry / ksqlDB platform layer

📊 Layer diagram

```mermaid
flowchart TB
    A["Application<br/>EventSet&lt;T&gt;() and OnModelCreating"] --> B["Context Definition<br/>KsqlContext & KsqlModelBuilder<br/>MappingRegistry"]
    B --> C["Entity Metadata Management<br/>MappingRegistry"]
    C --> D["Query & Stream Composition<br/>LINQ → KSQL, KStream/KTable"]
    D --> E["Messaging<br/>Serialization, DLQ"]
    E --> F["Kafka Streams API"]
    F --> G["Kafka / Schema Registry / ksqlDB"]
```

🧱 Layer responsibilities

| Layer | Primary responsibilities | Representative namespaces / classes |
| --- | --- | --- |
| Application layer | DSL usage (`KsqlContext` inheritance + `OnModelCreating` + `EventSet<T>()`) | samples, src/Application |
| Context definition layer | DSL parsing and model construction (`KsqlContext`, `KsqlModelBuilder`, `MappingRegistry`) | src/Core |
| Entity metadata management layer | Analyze POCO attributes and manage Kafka/Schema Registry settings via `MappingRegistry` | src/Mapping |
| Query & stream composition layer | Parse LINQ → KSQL, build KStream/KTable topologies, handle windows, joins, finals | src/Query, src/EventSet |
| Messaging layer | Serialize/deserialize messages, interface with DLQ, bridge to Kafka Streams | src/Messaging |
| Kafka Streams API layer | Execute Kafka Streams topologies, send queries to ksqlDB | Streamiz.Kafka.Net |
| Kafka / Schema Registry / ksqlDB platform layer | Cluster operations, schema management, KSQL runtime | Kafka, Schema Registry, ksqlDB |
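To make the Application layer's contract concrete, here is a minimal stand-in that mimics the shape described above: subclass a context, override `OnModelCreating`, and register entities with `EventSet<T>()`. The base class below is a simplified assumption for illustration only; the real `KsqlContext` and model builder live in src/Core and have a different surface.

```csharp
using System;
using System.Collections.Generic;

// Pure business POCO registered by the Application layer.
public class Order
{
    public string Symbol { get; set; } = "";
    public decimal Amount { get; set; }
}

// Simplified stand-in for KsqlContext; not the real src/Core API.
public abstract class KsqlContextStandIn
{
    private readonly List<Type> _entities = new();
    public IReadOnlyList<Type> RegisteredEntities => _entities;

    protected KsqlContextStandIn() => OnModelCreating();

    // Mirrors the EventSet<T>() registration gesture of the DSL.
    protected void EventSet<T>() => _entities.Add(typeof(T));

    protected abstract void OnModelCreating();
}

public class TradingContext : KsqlContextStandIn
{
    protected override void OnModelCreating()
    {
        // Entity metadata flows down to the MappingRegistry layer from here.
        EventSet<Order>();
    }
}
```

Instantiating `TradingContext` registers `Order`, which is the hand-off point into the context definition and metadata layers.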

🔄 Typical flow across layers

The sequence below shows a representative runtime path from `EventSet<T>()` registration to the Kafka platform:

  • Register entities via EventSet<T>() and OnModelCreating (Application)
  • Build context and metadata (KsqlContext, KsqlModelBuilder, MappingRegistry)
  • Compose queries and generate KSQL/topologies (LINQ → KSQL, KStream/KTable)
  • Produce/consume with serializers and DLQ handling (Messaging)
  • Execute via Streamiz Kafka Streams API
  • Persist and query on Kafka / Schema Registry / ksqlDB
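Of these steps, the LINQ → KSQL translation is the heart of the query layer. A toy translator for a single `Where` predicate gives the flavor of that step; `ToyKsqlTranslator` is a hypothetical name, and the real translator in src/Query is a full expression visitor, not this two-branch sketch.

```csharp
using System;
using System.Linq.Expressions;

// Toy illustration of the "LINQ -> KSQL" composition step.
// Handles only a single binary comparison; illustrative names only.
public static class ToyKsqlTranslator
{
    public static string Where<T>(Expression<Func<T, bool>> predicate)
    {
        if (predicate.Body is not BinaryExpression b)
            throw new NotSupportedException("only binary predicates in this sketch");

        string op = b.NodeType switch
        {
            ExpressionType.Equal => "=",
            ExpressionType.GreaterThan => ">",
            ExpressionType.LessThan => "<",
            _ => throw new NotSupportedException(b.NodeType.ToString())
        };

        // Left side: the POCO property; right side: evaluate to a constant.
        var member = (MemberExpression)b.Left;
        var value = Expression.Lambda(b.Right).Compile().DynamicInvoke();
        return $"WHERE {member.Member.Name.ToUpperInvariant()} {op} {value}";
    }
}

public class Order { public decimal Amount { get; set; } }
```

For example, `ToyKsqlTranslator.Where<Order>(o => o.Amount > 100)` yields `WHERE AMOUNT > 100`.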

Layer-specific structure and key classes are documented under Reference pages.


🔁 Related documents

  • Configuration-Reference: appsettings ↔ DSL mapping
  • dev_guide.md: implementation rules for extending the DSL or adding features
  • Reference: responsibilities and extension points per namespace

This overview supports structural understanding and acts as the starting index when extending the system. Diagrams and dependency maps will be added separately.


POCO design, primary keys, and serialization policy

This section summarizes how the library handles POCO design, key management, and serialization/deserialization. MappingRegistry applies these rules automatically when entities are registered via `EventSet<T>()`.

1. POCO design principles

  • Business POCOs remain pure business data structures; do not attach key-related attributes.
  • Design them freely without worrying about Kafka key schema.

2. Primary-key rules

  • Key schema is derived purely from the property declaration order in the DTO/POCO.
  • Remove Key attributes; composite-key order follows the DTO property order.
  • Allowed key types: int, long, string, Guid. Convert others at the application level.
  • Align the key order with logical keys used in LINQ (group by, etc.).
  • If the key order from GroupBy/Join differs from the DTO property order, initialization throws InvalidOperationException with the message "GroupBy key order must match the output DTO property order."
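The rules above can be sketched with plain reflection: take the first N properties of the DTO in declaration order as the composite key and reject unsupported key types. `KeySketch` and its members are illustrative names under those assumptions, not the `MappingRegistry` API.

```csharp
using System;
using System.Linq;
using System.Reflection;

// Sketch of declaration-order key derivation. Illustrative only.
public static class KeySketch
{
    private static readonly Type[] AllowedKeyTypes =
        { typeof(int), typeof(long), typeof(string), typeof(Guid) };

    // Returns the names of the first `keyCount` properties of T,
    // in declaration order, validating their types.
    public static string[] KeyProperties<T>(int keyCount)
    {
        var props = typeof(T)
            .GetProperties(BindingFlags.Public | BindingFlags.Instance)
            .OrderBy(p => p.MetadataToken) // pin declaration order explicitly
            .Take(keyCount)
            .ToArray();

        foreach (var p in props)
            if (!AllowedKeyTypes.Contains(p.PropertyType))
                throw new InvalidOperationException(
                    $"Key property {p.Name} has unsupported type {p.PropertyType.Name}");

        return props.Select(p => p.Name).ToArray();
    }
}

public class OrderSummary            // pure POCO: no key attributes
{
    public string Symbol { get; set; } = "";  // key part 1
    public int Region { get; set; }           // key part 2
    public decimal Total { get; set; }        // value only
}
```

Here `KeyProperties<OrderSummary>(2)` yields `["Symbol", "Region"]`, so a `GroupBy` producing keys in any other order would be rejected at initialization.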

3. Serialization / deserialization policy

  • POCO ↔ key/value struct conversions are fully automated.
  • Produce: automatically split DTOs into key and value parts and serialize them.
  • Consume: deserialize Kafka key/value pairs and reconstruct the DTO/POCO.
  • Cache serializers/deserializers per type/schema for performance.
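The produce-side split and the per-type cache can be sketched as follows. This stops at dictionaries rather than real Avro serialization through Schema Registry, and every name is illustrative, not the src/Messaging implementation.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

// Sketch: split a DTO into key and value parts, caching the
// property partition per (type, keyCount) as the policy suggests.
public static class SplitSketch
{
    private static readonly ConcurrentDictionary<(Type, int), (PropertyInfo[] Key, PropertyInfo[] Value)>
        _cache = new();

    public static (Dictionary<string, object?> Key, Dictionary<string, object?> Value)
        Split<T>(T dto, int keyCount)
    {
        var (keyProps, valueProps) = _cache.GetOrAdd((typeof(T), keyCount), static t =>
        {
            // First keyCount properties (declaration order) form the key.
            var all = t.Item1.GetProperties().OrderBy(p => p.MetadataToken).ToArray();
            return (all.Take(t.Item2).ToArray(), all.Skip(t.Item2).ToArray());
        });

        return (keyProps.ToDictionary(p => p.Name, p => p.GetValue(dto)),
                valueProps.ToDictionary(p => p.Name, p => p.GetValue(dto)));
    }
}

public class Tick
{
    public string Symbol { get; set; } = "";  // key
    public decimal Price { get; set; }        // value
}
```

The consume side is the inverse: deserialize key and value, then copy both back onto a fresh DTO instance in the same property order.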

4. Operational notes

  • Document these policies across guides and ensure consistent application in code and reviews.
