The `Event` struct in the AimX protocol includes a `dropped` field for backpressure signaling, but it's currently hardcoded to `None`. We need to implement proper tracking of dropped events when the subscription queue becomes full.
## Background
The AimX protocol (defined in `protocol.rs` lines 139-156) includes the `Event` struct with a documented `dropped` field:
```rust
/// Event message from server (subscription push)
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Event {
    pub subscription_id: String,
    pub sequence: u64,
    pub data: JsonValue,
    pub timestamp: String,
    /// Number of dropped events since last delivery (optional)
    #[serde(skip_serializing_if = "Option::is_none")]
    pub dropped: Option<u64>,
}
```
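For illustration, a delivered event where three prior messages were lost might look like this on the wire (field values are made up; when `dropped` is `None`, the `skip_serializing_if` attribute omits the field entirely):

```json
{
  "subscription_id": "sub-42",
  "sequence": 17,
  "data": { "temperature": 21.5 },
  "timestamp": "2024-01-01T12:00:00Z",
  "dropped": 3
}
```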
Currently in `handler.rs:1075`, this field is always set to `None`:
```rust
let event = Event {
    subscription_id: subscription_id.clone(),
    sequence,
    data: json_value,
    timestamp,
    dropped: None, // TODO: Implement dropped event tracking
};
```
## Problem Statement
When a subscription's queue (sized by `config.subscription_queue_size`, default 100) becomes full, new events may be dropped depending on the buffer type (SPMC Ring, SingleLatest, or Mailbox). Without tracking these drops:
- Clients have no visibility into whether they're keeping up with the event stream
- Backpressure signals are missing, making it difficult to diagnose slow consumers
- Protocol compliance is incomplete - the feature is documented but not implemented
- Monitoring gaps - operators can't detect subscription performance issues
## Current Behavior
- Events flow from `subscribe_record_updates()` through an `mpsc::Receiver<serde_json::Value>`
- If the receiver is slow, events may be dropped by the underlying buffer
- The `dropped` field is always `None`, providing no indication of data loss
## Desired Behavior
- Track the number of events dropped since the last successful delivery
- Include this count in the `Event.dropped` field when non-zero
- Reset the counter after reporting
- Provide clients with actionable backpressure information
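The counter semantics above (accumulate, report once, reset) can be sketched as a small helper. This is a minimal illustration, not code from the AimDB codebase; the name `DropCounter` is made up:

```rust
/// Illustrative counter: accumulates drops, reports once, then resets.
#[derive(Default)]
struct DropCounter {
    dropped_since_last: u64,
}

impl DropCounter {
    /// Called whenever the buffer discards an event.
    fn record_drop(&mut self) {
        self.dropped_since_last += 1;
    }

    /// Called on each successful delivery: returns Some(n) if events were
    /// dropped since the previous delivery, None otherwise, and resets.
    fn take(&mut self) -> Option<u64> {
        match std::mem::take(&mut self.dropped_since_last) {
            0 => None,
            n => Some(n),
        }
    }
}

fn main() {
    let mut counter = DropCounter::default();
    assert_eq!(counter.take(), None); // fast consumer: no drops to report
    counter.record_drop();
    counter.record_drop();
    assert_eq!(counter.take(), Some(2)); // report accumulated drops once
    assert_eq!(counter.take(), None); // counter resets after reporting
}
```

The return value maps directly onto `Event.dropped`, so delivery code only has to call `take()` per event.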
## Technical Approach
### Option 1: Sequence Gap Detection (Recommended)
Track the last source sequence number carried in each received value and compare it with the next one. This only works if the buffer stamps every value with a sequence number; the `"_seq"` payload field below is a hypothetical stand-in for that metadata. The locally assigned outgoing `sequence` counter is contiguous by construction and cannot reveal gaps on its own:

```rust
async fn stream_subscription_events(
    subscription_id: String,
    mut value_rx: tokio::sync::mpsc::Receiver<serde_json::Value>,
    event_tx: tokio::sync::mpsc::UnboundedSender<Event>,
) {
    let mut sequence: u64 = 1;
    let mut last_source_seq: Option<u64> = None;

    while let Some(json_value) = value_rx.recv().await {
        // Hypothetical: the buffer stamps each value with a "_seq" field.
        let source_seq = json_value.get("_seq").and_then(|v| v.as_u64());

        // A jump larger than +1 between consecutive source sequence
        // numbers means the intervening events were dropped.
        let dropped = match (last_source_seq, source_seq) {
            (Some(last), Some(current)) if current > last + 1 => {
                Some(current - last - 1)
            }
            _ => None,
        };
        if source_seq.is_some() {
            last_source_seq = source_seq;
        }

        let event = Event {
            subscription_id: subscription_id.clone(),
            sequence,
            data: json_value,
            timestamp: generate_timestamp(),
            dropped,
        };
        // Send the event to the client task; stop if the client is gone.
        if event_tx.send(event).is_err() {
            break;
        }
        sequence += 1;
    }
}
```
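The gap arithmetic in Option 1 can be isolated into a pure function for unit testing. The helper name `sequence_gap` is illustrative, not from the codebase:

```rust
/// Given the previous source sequence (if any) and the current one, return
/// the number of events missing between them, or None if there is no gap.
fn sequence_gap(last: Option<u64>, current: u64) -> Option<u64> {
    match last {
        Some(last) if current > last + 1 => Some(current - last - 1),
        _ => None,
    }
}

fn main() {
    assert_eq!(sequence_gap(None, 1), None);       // first event: nothing to compare
    assert_eq!(sequence_gap(Some(1), 2), None);    // contiguous: no drops
    assert_eq!(sequence_gap(Some(2), 5), Some(2)); // events 3 and 4 were dropped
}
```

Keeping the arithmetic separate makes the "Recovery" test case below trivial to cover without spinning up channels.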
### Option 2: Buffer-Level Instrumentation
Modify `subscribe_record_updates()` to return a channel that includes drop count metadata:

```rust
pub struct SubscriptionValue {
    pub data: serde_json::Value,
    pub dropped_since_last: u64,
}

pub fn subscribe_record_updates(
    &self,
    type_id: TypeId,
    queue_size: usize,
) -> DbResult<(
    tokio::sync::mpsc::Receiver<SubscriptionValue>,
    tokio::sync::oneshot::Sender<()>,
)>
```

This requires changes to the buffer implementations to track drops.
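One way the buffer-side tracking could look, sketched with std channels and an atomic counter standing in for the tokio channels and AimDB buffer types (`InstrumentedSender` and the `String` payload are illustrative stand-ins):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::mpsc::{sync_channel, SyncSender, TrySendError};
use std::sync::Arc;

/// Value delivered to the subscriber, as proposed in Option 2.
struct SubscriptionValue {
    data: String, // serde_json::Value in the real code
    dropped_since_last: u64,
}

/// Producer side: counts values it had to discard because the queue was full.
struct InstrumentedSender {
    tx: SyncSender<SubscriptionValue>,
    dropped: Arc<AtomicU64>,
}

impl InstrumentedSender {
    fn send(&self, data: String) {
        // Attach (and reset) whatever was dropped before this delivery.
        let dropped_since_last = self.dropped.swap(0, Ordering::Relaxed);
        let value = SubscriptionValue { data, dropped_since_last };
        match self.tx.try_send(value) {
            Ok(()) => {}
            Err(TrySendError::Full(v)) | Err(TrySendError::Disconnected(v)) => {
                // Queue full: re-credit the count we optimistically reset,
                // plus one for the value we are discarding right now.
                self.dropped
                    .fetch_add(v.dropped_since_last + 1, Ordering::Relaxed);
            }
        }
    }
}

fn main() {
    let (tx, rx) = sync_channel(1); // tiny queue to force overflow
    let sender = InstrumentedSender { tx, dropped: Arc::new(AtomicU64::new(0)) };
    sender.send("a".into()); // queued
    sender.send("b".into()); // queue full: counted as dropped
    let v = rx.recv().unwrap();
    assert_eq!((v.data.as_str(), v.dropped_since_last), ("a", 0));
    sender.send("c".into()); // carries the accumulated drop count
    let v = rx.recv().unwrap();
    assert_eq!((v.data.as_str(), v.dropped_since_last), ("c", 1));
}
```

The swap-then-re-credit dance keeps the counter correct even when the delivery that would have carried the count is itself dropped.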
### Option 3: Try-Receive Pattern
Use `try_recv()` in a loop to drain any backlog behind the value just received, keeping only the newest value and reporting the drained count. Note that this pattern does not observe buffer-level drops; it coalesces the backlog itself, discarding the intermediate values it counts:

```rust
while let Some(mut json_value) = value_rx.recv().await {
    let mut dropped: u64 = 0;
    // Drain queued messages, keeping only the newest value; everything
    // skipped over is counted as dropped (and discarded here).
    while let Ok(newer) = value_rx.try_recv() {
        json_value = newer;
        dropped += 1;
    }
    let event = Event {
        dropped: if dropped > 0 { Some(dropped) } else { None },
        // ...
    };
}
```
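The drain/coalesce behavior is easy to demonstrate with a plain std channel standing in for the tokio receiver (the helper `drain_backlog` is illustrative):

```rust
use std::sync::mpsc::{channel, Receiver};

/// Drain everything queued behind `first`, returning the newest value and
/// the number of values skipped over (which this pattern discards).
fn drain_backlog(rx: &Receiver<u32>, first: u32) -> (u32, u64) {
    let mut latest = first;
    let mut dropped: u64 = 0;
    while let Ok(newer) = rx.try_recv() {
        latest = newer;
        dropped += 1;
    }
    (latest, dropped)
}

fn main() {
    let (tx, rx) = channel::<u32>();
    for v in 1..=4 {
        tx.send(v).unwrap(); // producer outruns the consumer
    }
    let first = rx.recv().unwrap();
    let (latest, dropped) = drain_backlog(&rx, first);
    assert_eq!(latest, 4);  // only the newest value survives
    assert_eq!(dropped, 3); // values 1, 2, 3 were skipped over
}
```

The caveat: a full backlog is reported as "dropped" even though the channel itself never lost anything, so this option changes delivery semantics, not just reporting.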
## Implementation Checklist
- Update `stream_subscription_events()` to populate `Event.dropped`
## Test Cases
- No drops: Fast consumer should see `dropped: None` for all events
- Slow consumer: Simulate slow `recv()` and verify `dropped: Some(n)` appears
- Recovery: After drops, verify counter resets and continues correctly
- Multiple subscriptions: Ensure drop tracking is independent per subscription
- Buffer types: Test with SPMC Ring, SingleLatest, and Mailbox buffers
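The slow-consumer case can be simulated without the full server, with a bounded std channel standing in for the subscription queue (sketch only; real tests would use the tokio channels and buffer types above, and the helper name is made up):

```rust
use std::sync::mpsc::{sync_channel, Receiver};

/// Produce `total` sequenced events into a bounded queue with no consumer
/// running, returning the receiver and how many events overflowed.
fn produce_with_overflow(queue_size: usize, total: u64) -> (Receiver<u64>, u64) {
    let (tx, rx) = sync_channel(queue_size);
    let mut dropped: u64 = 0;
    for seq in 1..=total {
        // try_send fails once the queue is at capacity.
        if tx.try_send(seq).is_err() {
            dropped += 1; // this count would feed Event.dropped
        }
    }
    (rx, dropped)
}

fn main() {
    // Queue of 2 stands in for config.subscription_queue_size.
    let (rx, dropped) = produce_with_overflow(2, 5);
    assert_eq!(dropped, 3);            // events 3..=5 overflowed
    assert_eq!(rx.recv().unwrap(), 1); // queued events still arrive in order
    assert_eq!(rx.recv().unwrap(), 2);
}
```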
## Related Files
- `aimdb-core/src/remote/handler.rs:1075` - TODO comment location
- `aimdb-core/src/remote/protocol.rs:154` - Event struct definition
- `aimdb-core/src/database.rs` - `subscribe_record_updates()` implementation
- `aimdb-core/src/buffer/*.rs` - Buffer implementations
## Acceptance Criteria
- The `dropped` field correctly reports event loss