
Fetching housekeeping telemetry automatically from services #364

Open
jacoffey3 opened this issue May 23, 2019 · 2 comments

jacoffey3 commented May 23, 2019

Currently, hardware services are purely passive: no telemetry gets into the telemetry database unless an application explicitly submits it. We want to distribute something that will do some of this telemetry collection for you, but we're wondering how best to do this, since we don't know what should be collected from each piece of hardware.

Also, hardware providers/integrators typically know best which items should be regularly collected to provide a comprehensive view of the hardware's current status. We want to allow them to structure the service in such a way that this collection is easy.

There are some ideas for how to let hardware services dictate what should be collected without having to rebuild the collection application for each configuration. We could:

  1. Add a field in the config.toml underneath each service that lists the fields that should be regularly collected
  2. Add a field in the config.toml that gives a GraphQL Fragment of the telemetry that should be collected: https://graphql.org/learn/queries/#fragments
  3. Have a dedicated query for telemetry that should be regularly collected that returns a key value JSON object of all that telemetry
  4. Add a dedicated query in the service that allows you to retrieve a fragment, which you can then use to query the service
  5. Classify the telemetry to be collected in a separate object from the rest of the telemetry, and retrieve the schema from the hardware service, which then allows you to inspect what's in that object and query everything in it

I think it should live in the config.toml (option 1 or 2), so it can be changed without rebuilding the service, and so mission operators can adjust it to meet their mission needs without having to edit the source. That way collection is controlled entirely through the config file, and nothing in the collection process needs to be rebuilt to augment what's collected (see the sketch below).
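To make that concrete, here is a minimal sketch of what options 1 and 2 could look like under a service's section in config.toml. The `telemetry-fields` and `telemetry-fragment` keys, the service name, and the field names are all hypothetical, purely for illustration:

```toml
[example-hardware-service]
# Option 1: a plain list of fields the collection app should poll regularly
telemetry-fields = ["power", "temperature", "uptime"]

# Option 2: a GraphQL fragment describing the telemetry to collect;
# the collection app would splice this into its query against the service
telemetry-fragment = '''
fragment housekeeping on Telemetry {
  power
  temperature
  uptime
}
'''
```

Either way, the collection app only needs to re-read the config to change what it gathers; nothing has to be recompiled.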

jacoffey3 commented:

Another method we could explore is differentiating between "historical" telemetry and "live" telemetry.

For "live" telemetry - you have to retrieve it directly from the hardware service that provides it. This allows us to optimize the services with that assumption.

For "historical" telemetry - We could possibly incorporate it as a special form of logging configuration with rsync that allows anyone using our logging setup to "log" telemetry through a special type of log message that could be rotated automatically and downlinked as necessary.

jacoffey3 commented:

H&S Beacon App for KubOS
