[REQUEST] Expose Ingestion Statistics as a Prometheus Server #39

Open
zprobst opened this issue Jun 28, 2023 · 0 comments

zprobst commented Jun 28, 2023

Is your feature request related to a problem? Please describe.
For perpetually running pipelines in particular, it's hard to get a sense of how much work the pipeline is doing against the database. It would be ideal if the user could observe these statistics while the pipeline is running.

Describe the solution you'd like
Expand the nodestream run command to configure and run a Prometheus metrics server:

nodestream run <<pipeline>> --prometheus-server-addr 127.0.0.1:8080

We already collect metrics as part of the PipelineContext API, accessible via get_context() in nodestream.pipeline.meta, so we can reuse a lot of existing logic for Prometheus. In addition to pipeline- and application-level metrics, it would be ideal to include some process/system-level metrics such as memory and CPU usage. A rough sketch of what this could look like is below.
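A minimal sketch of how this could be wired up with the prometheus_client package. The metric names, the polling loop, and the counter attribute on the context are assumptions for illustration only, not the actual PipelineContext API:

```python
import time

import psutil
from prometheus_client import Gauge, start_http_server

from nodestream.pipeline.meta import get_context

# Metric names here are placeholders, not an agreed-upon naming scheme.
RECORDS_INGESTED = Gauge("nodestream_records_ingested", "Records ingested so far")
MEMORY_RSS_BYTES = Gauge("nodestream_process_memory_rss_bytes", "Resident memory of the pipeline process")
CPU_PERCENT = Gauge("nodestream_process_cpu_percent", "CPU utilization of the pipeline process")


def serve_metrics(addr: str = "127.0.0.1", port: int = 8080, interval: float = 5.0) -> None:
    """Expose pipeline and process metrics on an HTTP endpoint Prometheus can scrape."""
    start_http_server(port, addr=addr)
    process = psutil.Process()
    while True:
        context = get_context()
        # `records_ingested` is a hypothetical counter name; the real
        # PipelineContext may track different statistics.
        RECORDS_INGESTED.set(getattr(context, "records_ingested", 0))
        MEMORY_RSS_BYTES.set(process.memory_info().rss)
        CPU_PERCENT.set(process.cpu_percent(interval=None))
        time.sleep(interval)
```

The CLI flag would then just parse the address and launch this server alongside the running pipeline.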

Describe alternatives you've considered
Currently, max memory usage is logged as part of the pipeline's output message, but that isn't really good enough. The only other alternatives are to monkey-patch the system or to use custom components for everything.

Additional context
N/A

@zprobst zprobst added the enhancement New feature or request label Jun 28, 2023
@zprobst zprobst self-assigned this Jun 28, 2023
@zprobst zprobst added this to the Before 1.0.X milestone Jan 2, 2024