Add reference documentation #46
Conversation
Pull Request Overview
This PR adds comprehensive reference documentation for the NI Measurement Data Services, including entity definitions, workflow guidance, and example links. The documentation covers the complete measurement data lifecycle from setup through analysis.
Key changes include:
- Added detailed entity reference documentation for both metadata and data stores
- Created a comprehensive workflow guide showing typical usage patterns
- Integrated example notebook links into the documentation structure
Reviewed Changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| docs/reference/using-measurement-data-services.md | Complete workflow documentation covering setup, test execution, and analysis phases |
| docs/reference/ni-metadata-store.md | Entity reference for metadata store components (operators, hardware, UUTs, etc.) |
| docs/reference/ni-data-store.md | Entity reference for data store components (test results, measurements, conditions) |
| docs/reference/index.rst | Reference section index page |
| docs/examples/index.rst | Examples section with links to Jupyter notebooks |
| docs/index.rst | Updated main index to include new reference and examples sections |
| docs/conf.py | Added Jupyter checkpoint exclusion pattern |
```
@@ -0,0 +1,389 @@
# NI Metadata Store

The NI Metadata Store support the digital thread weaving together measurement results with metadata that describes **who**, **what**, **where**, and **how** tests are performed. This creates a complete context around your test data, enabling traceability and analysis across your entire test ecosystem.
```
**Copilot AI** (Oct 15, 2025):
Corrected verb agreement: 'support' should be 'supports' to match the singular subject 'NI Metadata Store'.
```diff
-The NI Metadata Store support the digital thread weaving together measurement results with metadata that describes **who**, **what**, **where**, and **how** tests are performed. This creates a complete context around your test data, enabling traceability and analysis across your entire test ecosystem.
+The NI Metadata Store supports the digital thread weaving together measurement results with metadata that describes **who**, **what**, **where**, and **how** tests are performed. This creates a complete context around your test data, enabling traceability and analysis across your entire test ecosystem.
```
```python
# Find measurements from specific equipment that failed
equipment_failures = data_store_client.query_measurements(
    f"$filter=outcome eq 'OUTCOME_FAILED' and contains(hardware_item_ids, '{dmm_id}')"
)
```
**Copilot AI** (Oct 15, 2025):
The OData query syntax may be incorrect for array field queries. The 'contains' function typically requires the field name as the second parameter and the value as the first parameter, or may not work with array fields depending on the OData implementation.
```diff
-f"$filter=outcome eq 'OUTCOME_FAILED' and contains(hardware_item_ids, '{dmm_id}')"
+f"$filter=outcome eq 'OUTCOME_FAILED' and hardware_item_ids/any(id: id eq '{dmm_id}')"
```
Copilot uses AI. Check for mistakes.
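The difference between the two filter forms above can be sketched in plain Python string building. This is an illustrative sketch only: `hardware_item_ids` and `dmm_id` are names taken from the snippet under review, and whether the service accepts the `any()` lambda form should be confirmed against its actual OData grammar.

```python
# Sketch: the two candidate OData filter strings discussed above.

def contains_filter(field: str, value: str) -> str:
    """contains() form -- typically valid for string fields, not collections."""
    return f"$filter=outcome eq 'OUTCOME_FAILED' and contains({field}, '{value}')"

def any_filter(field: str, value: str) -> str:
    """any() lambda form -- the usual OData way to test collection membership."""
    return f"$filter=outcome eq 'OUTCOME_FAILED' and {field}/any(id: id eq '{value}')"

dmm_id = "hw-123"  # hypothetical hardware item id
print(any_filter("hardware_item_ids", dmm_id))
# → $filter=outcome eq 'OUTCOME_FAILED' and hardware_item_ids/any(id: id eq 'hw-123')
```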
@nick-beer @ccifra I wanted y'all to take a look at these. They are mostly AI-generated as a first pass. Please review for general content, direction, terminology, accuracy, etc. For the most part, the documentation for the metadata store and data store is not Python-specific. The one about using NI MDS has some Python API calling code, but that could be made more pseudo-code if we want to generalize this documentation and use it for something not specific to LabVIEW or Python, i.e. conceptual documentation.
```toml
id = "https://example.com/scope.schema.toml"

[hardware_item]
bandwidth = "*"  # Required field - must be provided
```
I don't believe the use of `?` here is supported. `*` indicates that any value is acceptable for the extension property. Whether or not an extension property is required is specified using the `required` key:

```toml
[hardware_item]
bandwidth = "*"
manufacture_date = "*"
required = ["bandwidth"]
```
Is there any documentation of this format? Or do you know where I can look in the code to see what the schema schema is?
`DataStoreMetadataProviderTestStateBase` may be a helpful reference point, in terms of both showing an example schema (both TOML and JSON) and in programmatically building a TOML schema string. `sample_schema.json` in the current repo is also a pretty representative example of a JSON schema.
**Field Requirement Indicators:**
- `"*"` = **Required** - Must be provided when creating the entity
- `"?"` = **Optional** - Can be provided but not mandatory
- Additional validation rules can be specified in JSON Schema format
I believe there should be parity between the TOML and JSON formats, so I'm not clear on what this means.
```python
    family="Power",
    manufacturers=["ACME Corp"],
    part_number="PS-v2.1-001",
    extensions={
```
I believe we need to specify a `schema_id` if we are specifying extension values. Similar comment elsewhere above and below.
```python
data_store_client.publish_condition(
    condition_name="Temperature",
    type="Environment",
    value=23.5,  # °C
)
```
[Consider] We could show how to use `Scalar` to ensure that the unit information is published as well?
```python
)
```

#### **Batch Measurements** *(For Parametric Sweeps)*
I think we need to be careful here to indicate that the non-batched publish RPCs can also be used for parametric sweeps -- the decision about whether to use batching is more about when you are doing the publishing -- as you go (non-batched) or all at once, sometime later (batched).
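The timing distinction described above can be sketched with a stand-in stub client. This is not the real data store API; the method names are hypothetical and only illustrate that both styles can publish the same parametric sweep, with the choice hinging on when publishing happens.

```python
# Sketch: the same sweep published as-you-go vs. batched, using a stub client.

class StubClient:
    """Stand-in for the data store client; just records what was published."""
    def __init__(self):
        self.published = []

    def publish_measurement(self, name, value):
        self.published.append((name, value))

    def publish_measurement_batch(self, name, values):
        self.published.extend((name, v) for v in values)

sweep = [1.0, 2.0, 3.0]

# As-you-go: one publish call per sweep point, inside the loop.
as_you_go = StubClient()
for v in sweep:
    as_you_go.publish_measurement("voltage", v)

# Batched: collect during the sweep, publish once sometime later.
batched = StubClient()
collected = list(sweep)
batched.publish_measurement_batch("voltage", collected)

# Both styles end up with the same data recorded.
assert as_you_go.published == batched.published
```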
```python
# Update test result with final outcome
test_result = data_store_client.get_test_result(test_result_id)
test_result.end_date_time = datetime.now()
```
I don't believe these are values that we expect the client to set, but perhaps @nick-beer can confirm.
```python
    f"$filter=contains(test_result_id, '{test_result_id}')"
)

# Find tests by operator
```
This looks like it is just retrieving the operator itself--not finding tests.
#### **Test Results Analysis**
```python
# Find all test results for a UUT model
```
I don't think a UUT model is factoring in here(?)
#### **Equipment Usage Tracking**
```python
# Find all hardware items due for calibration
equipment_due = metadata_store_client.query_hardware_items(
```
- `CalibrationDueDate` is currently a free text field, so I don't think that this sort of `lt` comparison would be expected to work(?)
- Thinking about the `HardwareItem` information tracked by the metadata store more broadly, it essentially provides snapshots of the state of a given hardware item at a given point in time, so I don't think a question about the current state of the hardware is well-suited to be answered by a query like this.
```python
# Track performance over time for a UUT model
uut_instances = metadata_store_client.query_uut_instances(
    f"$filter=uut_id eq '{power_supply_uut_id}'"
)
```
Broad comment about the OData filter queries: I don't think that they use the `_` naming convention. (`UutId` instead of `uut_id`.)
```python
# 1. Register the schema
schema_content = load_schema_from_file("scope_schema.toml")
```
Maybe just use `metadata_store_client.register_schema_from_file` here instead(?)
#### **UUT (Product Definitions)**
```python
# Define the product being tested
power_supply_uut_id = metadata_store_client.create_uut(UUT(
```
The expected capitalization is `Uut` and `UutInstance`. That should be reflected in these examples.
```python
# Create aliases for easy reference
metadata_store_client.create_alias(Alias(
    alias_name="Primary_DMM",
```
`create_alias` takes an `alias_name` and `alias_target`, rather than an `Alias`.
```python
# Get measurement data using moniker
for measurement in measurements:
    if measurement.data_type == "Scalar":
```
I believe the actual type that is returned here is more fully qualified--can you confirm?
Thinking more about this particular example, we should probably change it so that we read back `Vector`, since single `Scalar` values aren't read back from the service in isolation.
Access the actual measured values:

```python
# Get measurement data using moniker
```
[Optional] We are indeed using the `Moniker` under the hood, but from the client's perspective, the `Moniker` is never directly referenced here. With that in mind, we could update this documentation/comment to reflect that.
* `publish_measurement.ipynb <https://github.com/ni/datastore-python/blob/main/examples/notebooks/overview/publish_measurement.ipynb>`_ - Basic measurement publishing
* `alias.ipynb <https://github.com/ni/datastore-python/blob/main/examples/notebooks/alias/alias.ipynb>`_ - Working with aliases
* `custom_metadata.ipynb <https://github.com/ni/datastore-python/blob/main/examples/notebooks/custom-metadata/custom_metadata.ipynb>`_ - Custom metadata examples
[Minor] We should probably move in the direction of referring to these as extensions, rather than custom metadata. (I realize there are some pre-existing references to custom metadata in the repo--I'm mostly thinking about the new content that we add.)
**What does this Pull Request accomplish?**

Adds documentation about the data and metadata store entities with real-world examples. Also adds a 'workflow' document on how the NI Measurement Data Services can be used.

**Why should this Pull Request be merged?**

First pass at some docs to help customers with general understanding.

**What testing has been done?**

Just docs build and PR.