[DataLoader] Emit OpenTelemetry metrics for read operations #582
Open
ShreyeshArangath wants to merge 8 commits into
Conversation
Wire OpenTelemetry instruments into the dataloader read path so operators can observe per-split and per-batch latency, throughput, and failure rates per table. Adds load_table/plan_files duration & attempt counters around the retry loop, and split/batch duration, row, byte, batch-count and error instruments around DataLoaderSplit and _TimedBatchIter. Common attributes (database, table, branch, and caller-provided execution_context under openhouse.ctx.*) are attached via a single build_attributes helper. execution_context is plumbed through TableScanContext so worker-side splits carry the same labels.
DataLoaderContext gains a new ``metric_attributes`` field; values are attached verbatim to every emitted metric, so callers pick exactly which keys become metric dimensions (and at what cardinality) rather than the loader auto-promoting every ``execution_context`` key. ``execution_context`` keeps its original transformer-only role. Drops the unused MappingProxyType empty-context singleton in favor of ``default_factory=dict``, the standard dataclass idiom; removes the now-superfluous branch attribute; and documents why metric names stay lowercase: the OSS OpenTelemetry SDK normalizes names to lowercase on export, which would turn LinkedIn-spec PascalCase into unreadable concatenated tokens (``OpenHouse.DataLoader.LoadTableTime`` → ``openhouse.dataloader.loadtabletime``). PascalCase is only preserved by the linkedin.opentelemetry wrapper, which this OSS library does not depend on.
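A minimal sketch of the context shape this commit describes, with ``default_factory=dict`` replacing the MappingProxyType singleton. The class body here is illustrative; only the two field names and the default-factory idiom come from the description above.

```python
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class DataLoaderContext:
    """Illustrative sketch only; everything beyond the two field names and
    the default_factory idiom is an assumption, not the actual PR code."""

    # Transformer-only inputs; never auto-promoted to metric dimensions.
    execution_context: Dict[str, str] = field(default_factory=dict)
    # Attached verbatim to every emitted metric; caller controls cardinality.
    metric_attributes: Dict[str, str] = field(default_factory=dict)
```

The point of ``default_factory=dict`` is that each instance gets its own fresh dict, so mutating one context's ``execution_context`` cannot leak into another's.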
Drops the dedicated metrics/instruments.py module; instruments are now
declared at the top of the file that uses them (data_loader.py for the
load_table / plan_files pair, data_loader_split.py for the split / batch
pairs). Matches the pattern used by OSS Python OTel libraries that emit
their own metrics — no central inventory file, no extra import indirection.
Metric and attribute names switch to LinkedIn's
``{Domain}.{Subsystem}.{MetricName}`` PascalCase convention
(``OpenHouse.DataLoader.LoadTableTime``, ``OpenHouse.Database``, etc.) per
observability-instrumentation/concepts/metrics.md. METER_NAME becomes
``OpenHouse.DataLoader``. The OSS opentelemetry-sdk lowercases instrument
names at registration time, so the stored/exported form ends up as
``openhouse.dataloader.loadtabletime``; tests assert against that lowercased
form and a docstring on the test helper points to the SDK source so the
discrepancy is obvious to a future reader. Source matches LinkedIn
schema-registry convention; runtime form is whatever the SDK emits.
build_attributes now merges caller-provided extras verbatim — no
``openhouse.ctx.*`` namespacing — so callers control dimension naming
entirely.
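A sketch of what the verbatim merge described above could look like. The fixed key names ``OpenHouse.Database`` / ``OpenHouse.Table`` come from the text; that the fixed identifiers win on key collision is an assumption of this sketch, not something the PR states.

```python
from typing import Mapping, Optional


def build_attributes(database: str, table: str,
                     extras: Optional[Mapping[str, str]] = None) -> dict:
    """Merge caller-supplied extras verbatim (no openhouse.ctx.* namespacing)
    with the fixed table identifiers. Fixed keys winning on collision is an
    assumption of this sketch."""
    attrs = dict(extras or {})  # caller controls dimension naming entirely
    attrs["OpenHouse.Database"] = database
    attrs["OpenHouse.Table"] = table
    return attrs
```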
Replaces internal identifiers (load_table, plan_files, DataLoaderSplit, RecordBatch) in instrument descriptions with prose suited to dashboard labels. Also reverts the metrics package docstring to match master.
# Conflicts: # integrations/python/dataloader/src/openhouse/dataloader/data_loader.py
Replaces DataLoaderContext.metric_attributes with metric_attribute_keys, a whitelist of keys pulled from execution_context. Single source of truth for caller-supplied identifiers; callers opt in to which keys become metric dimensions, keeping cardinality controlled.
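The whitelist lookup described above could be as small as a single dict comprehension. The function name is hypothetical; silently skipping whitelisted keys that are absent from ``execution_context`` is an assumption of this sketch.

```python
from typing import Iterable, Mapping


def resolve_metric_attributes(execution_context: Mapping[str, str],
                              metric_attribute_keys: Iterable[str]) -> dict:
    """Pull only the whitelisted keys out of execution_context so callers
    opt in to which keys become metric dimensions."""
    return {k: execution_context[k]
            for k in metric_attribute_keys
            if k in execution_context}
```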
The build_attributes helper merged OpenHouse.Database / OpenHouse.Table with caller-supplied extras at every emission boundary. Move that merge into the loader's cached _resolved_metric_attributes property so the table-level identifier is part of the dict from the start, and drop the helper plus its module.
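The cached merge might look roughly like the following. Only the cached-property idea and the two fixed key names come from the text; the constructor shape and attribute names are assumptions of this sketch.

```python
import functools
from typing import Iterable, Mapping


class DataLoader:
    """Sketch of the cached attribute merge; constructor shape is assumed."""

    def __init__(self, database: str, table: str,
                 execution_context: Mapping[str, str],
                 metric_attribute_keys: Iterable[str]):
        self._database = database
        self._table = table
        self._execution_context = dict(execution_context)
        self._metric_attribute_keys = list(metric_attribute_keys)

    @functools.cached_property
    def _resolved_metric_attributes(self) -> dict:
        # Built once, then reused verbatim at every emission boundary.
        attrs = {k: self._execution_context[k]
                 for k in self._metric_attribute_keys
                 if k in self._execution_context}
        attrs["OpenHouse.Database"] = self._database
        attrs["OpenHouse.Table"] = self._table
        return attrs
```

Because ``functools.cached_property`` stores the result on the instance, every emission site sees the same dict and the merge cost is paid exactly once per loader.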
robreeves reviewed May 14, 2026
    unit="s",
    description="Time spent loading the Iceberg table from the catalog.",
)
_load_table_attempts = _meter.create_counter(
Collaborator
Technically we can derive successful and failed table loads from this, but to me it feels more intuitive and easier to alert on to have metrics for load_table_success and load_table_failure instead of attempts. WDYT?
Same comment for file planning.
    unit="1",
    description="Errors raised while reading a record batch.",
)
Collaborator
How about timing for the transformation when a table transformer is provided, too?
…form timing
- Replace LoadTable/PlanFiles Attempts counters with explicit Success and Failure counters so alerting can target the final outcome directly without deriving it from durations.
- Add a TransformTime histogram emitted per-batch from inside the existing _timed_transform generator so transformer execution is observable when a table transformer is configured.
Summary
DataLoader reads have no built-in observability today, so callers cannot track read latency, retries, or errors. This PR emits OpenTelemetry metrics from the DataLoader so operators can monitor table-reading workloads.
Changes
Testing Done
Additional Information