Skip to content

Conversation

@samikshya-db
Copy link
Collaborator

Summary

This stacked PR builds on #319 and implements Phases 6-7 of the telemetry system, completing the full pipeline.

Stack: Part 2 of 2


Phase 6: Metric Collection & Aggregation ✅

New Files

errors.go (108 lines)

  • isTerminalError() - Non-retryable error detection
  • classifyError() - Error categorization
  • ✅ HTTP error handling utilities

interceptor.go (146 lines)

  • BeforeExecute() / AfterExecute() hooks
  • ✅ Context-based metric tracking
  • ✅ Latency measurement
  • ✅ Tag collection
  • ✅ Error swallowing

aggregator.go (242 lines)

  • ✅ Statement-level aggregation
  • ✅ Batch processing (size: 100)
  • ✅ Background flush (interval: 5s)
  • ✅ Thread-safe operations
  • ✅ Immediate flush for terminal errors

client.go (updated)

  • ✅ Full pipeline integration
  • ✅ Graceful shutdown

Phase 7: Driver Integration ✅

Configuration Support

internal/config/config.go (+18 lines)

  • EnableTelemetry field
  • ForceEnableTelemetry field
  • ✅ DSN parameter parsing
  • DeepCopy() support

Connection Integration

connection.go, connector.go (+20 lines)

  • ✅ Telemetry field in conn struct
  • ✅ Initialization in Connect()
  • ✅ Cleanup in Close()

Helper Module

driver_integration.go (59 lines)

  • InitializeForConnection() - Setup
  • ReleaseForConnection() - Cleanup
  • ✅ Feature flag checking
  • ✅ Resource management

Integration Flow

DSN: "host:port/path?enableTelemetry=true"
    ↓
connector.Connect()
    ↓
telemetry.InitializeForConnection()
    ├─→ Feature flag check (5-level priority)
    ├─→ Get/Create telemetryClient (per host)
    └─→ Create Interceptor (per connection)
    ↓
conn.telemetry = Interceptor
    ↓
conn.Close()
    ├─→ Flush pending metrics
    └─→ Release resources

Changes

Total: +1,073 insertions, -48 deletions (13 files)

Phase 6:

  • telemetry/errors.go (108 lines) - NEW
  • telemetry/interceptor.go (146 lines) - NEW
  • telemetry/aggregator.go (242 lines) - NEW
  • telemetry/client.go (+27/-9) - MODIFIED

Phase 7:

  • telemetry/driver_integration.go (59 lines) - NEW
  • internal/config/config.go (+18) - MODIFIED
  • connection.go (+10) - MODIFIED
  • connector.go (+10) - MODIFIED
  • telemetry/DESIGN.md - MODIFIED

Testing

All tests passing

  • ✅ 70+ telemetry tests (2.018s)
  • ✅ No breaking changes
  • ✅ Compilation verified
  • ✅ Thread-safety verified

Usage Example

// Enable telemetry via DSN
dsn := "host:443/sql/1.0?enableTelemetry=true"
db, _ := sql.Open("databricks", dsn)

// Or force enable
dsn := "host:443/sql/1.0?forceEnableTelemetry=true"

Related Issues


Checklist

  • Phase 6: Collection & aggregation
  • Phase 7: Driver integration
  • Configuration support
  • Resource management
  • All tests passing
  • No breaking changes
  • DESIGN.md updated

samikshya-db and others added 3 commits January 30, 2026 10:11
…gation

This commit implements Phase 6 (metric collection and aggregation) for the
telemetry system.

Phase 6: Metric Collection & Aggregation
- Implement error classification (errors.go)
  - isTerminalError() for identifying non-retryable errors
  - classifyError() for categorizing errors for telemetry
  - HTTP error handling utilities

- Implement telemetry interceptor (interceptor.go)
  - beforeExecute() / afterExecute() hooks for statement execution
  - Context-based metric tracking with metricContext
  - Latency measurement and tag collection
  - Connection event recording
  - Error swallowing with panic recovery

- Implement metrics aggregator (aggregator.go)
  - Statement-level metric aggregation
  - Batch size and flush interval logic
  - Background flush goroutine with ticker
  - Thread-safe metric recording with mutex protection
  - Immediate flush for connection and terminal errors
  - Aggregated counts (chunks, bytes, polls)

- Update telemetryClient (client.go)
  - Wire up aggregator with exporter
  - Automatic aggregator start in constructor
  - Graceful shutdown with 5s timeout
  - getInterceptor() for per-connection interceptors

Architecture:
- Each connection gets its own interceptor instance
- All interceptors share the same aggregator (per host)
- Aggregator batches metrics and flushes periodically
- Exporter sends batched metrics to Databricks
- Circuit breaker protects against endpoint failures

Testing:
- All 70+ existing tests continue to pass
- Compilation verified, no breaking changes

Note: Phase 7 (driver integration) will be completed separately to allow
careful review and testing of hooks in connection.go and statement.go.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
This commit implements Phase 7 (driver integration) for the telemetry system,
completing the full telemetry pipeline from driver operations to export.

Phase 7: Driver Integration
- Add telemetry configuration to UserConfig
  - EnableTelemetry: User opt-in flag (respects server feature flags)
  - ForceEnableTelemetry: Force enable flag (bypasses server checks)
  - DSN parameter parsing in ParseDSN()
  - DeepCopy support for telemetry fields

- Add telemetry support to connection
  - Add telemetry field to conn struct (*telemetry.Interceptor)
  - Initialize telemetry in connector.Connect()
  - Release telemetry resources in conn.Close()
  - Graceful shutdown with pending metric flush

- Export telemetry types for driver use
  - Export Interceptor type (was interceptor)
  - Export GetInterceptor() method (was getInterceptor)
  - Export Close() method (was close)

- Create driver integration helper (driver_integration.go)
  - InitializeForConnection(): One-stop initialization
  - ReleaseForConnection(): Resource cleanup
  - Encapsulates feature flag checks and client management
  - Reference counting for per-host resources

Integration Flow:
1. User sets enableTelemetry=true or forceEnableTelemetry=true in DSN
2. connector.Connect() calls telemetry.InitializeForConnection()
3. Telemetry checks feature flags and returns Interceptor if enabled
4. Connection uses Interceptor for metric collection (Phase 8)
5. conn.Close() releases telemetry resources

Architecture:
- Per-connection: Interceptor instance
- Per-host (shared): telemetryClient, aggregator, exporter
- Global (singleton): clientManager, featureFlagCache, circuitBreakerManager

Opt-In Priority (5 levels):
1. forceEnableTelemetry=true - Always enabled (testing/internal)
2. enableTelemetry=false - Always disabled (explicit opt-out)
3. enableTelemetry=true + server flag - User opt-in with server control
4. Server flag only - Default Databricks-controlled behavior
5. Default - Disabled (fail-safe)

Testing:
- All 70+ telemetry tests passing
- No breaking changes to existing driver tests
- Compilation verified across all packages
- Graceful handling when telemetry disabled

Note: Statement hooks (beforeExecute/afterExecute) will be added in follow-up
for actual metric collection during query execution.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants