Currently, hardware services are only passive, and no telemetry is automatically submitted to the telemetry database without an application doing so. We want to distribute something that will do some telemetry collection for you, but are wondering how best to do this, since we don't know what should be collected from each piece of hardware.
Also, hardware providers/integrators typically know the most important items that should be regularly collected to provide a comprehensive view of the current status of the hardware. We want to allow them to structure the service in such a way that makes that collection easy.
There are some ideas for how to make it possible for hardware services to dictate what should be collected without having to rebuild the collection application for each configuration. We can:
1. Add a field in the config.toml underneath each service that lists the fields that should be regularly collected
2. Have a dedicated query for telemetry that should be regularly collected, which returns a key-value JSON object of all that telemetry
3. Add a dedicated query to the service that lets you retrieve a query fragment, which you can then use to query the service
4. Classify the telemetry to be collected in a separate object from the rest of the telemetry, and retrieve the schema from the hardware service, which then lets you inspect what's in that object and query everything in it
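To make the first two options concrete, here is a rough sketch of what each might look like. All service names, field names, and keys below are hypothetical, just to illustrate the shape of the idea.

Option 1, a per-service field in config.toml:

```toml
# Hypothetical sketch only -- the service name and keys are illustrative.
[gps-service]
ip = "127.0.0.1"
port = 8080

[gps-service.telemetry]
# Fields the collection app should query on a regular interval
collect = ["power.state", "lock.status", "time.offset"]
interval_ms = 5000
```

Option 2, a dedicated query whose response is a flat key-value JSON object the collector can insert into the telemetry database without knowing the fields in advance:

```json
{
  "power.state": "ON",
  "lock.status": "LOCKED",
  "time.offset": 0.004
}
```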
I think it should be included in the config.toml (options 1 or 2), so that it can be changed without requiring a rebuild of the service, and so that mission operators can change it to meet their mission's needs without editing the source. This way it's entirely controlled through the config file, and nothing in the collection process needs to be rebuilt to augment what's collected.
Another method we could explore is differentiating between "historical" telemetry and "live" telemetry.
For "live" telemetry - you have to retrieve it directly from the hardware service that provides it. This allows us to optimize the services with that assumption.
For "historical" telemetry - we could possibly incorporate it as a special form of logging configuration with rsync, allowing anyone using our logging setup to "log" telemetry through a special type of log message that could be rotated automatically and downlinked as necessary.
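The "historical telemetry as logging" idea might look something like the sketch below: each telemetry point is emitted as one structured log line, so an ordinary log-rotation setup can rotate the files and ship them to the ground on demand. The record format here is an assumption for illustration, not an existing message schema.

```python
# Sketch: emit telemetry as one JSON log line per data point, so a
# standard log-rotation/downlink pipeline can handle it. The field
# names (ts, subsystem, parameter, value) are hypothetical.
import json
import time

def log_telemetry(subsystem: str, parameter: str, value) -> str:
    """Format a single telemetry point as a JSON log line."""
    record = {
        "ts": time.time(),          # collection timestamp
        "subsystem": subsystem,     # e.g. "eps", "gps"
        "parameter": parameter,     # the telemetry field name
        "value": value,
    }
    return json.dumps(record)

line = log_telemetry("eps", "battery_voltage", 7.9)
print(line)
```

Because each line is self-describing, the ground segment can reconstruct the historical record from rotated files without querying the services themselves.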