[Feature Request] Buffer Fill Percentage #1817
Comments
I think this is a useful approach, and it follows other metrics such as CPU usage. I take it that this is going to be a per-buffer metric. Do you think there is value in supporting a more aggregate metric in the future? Perhaps maximum bufferUsage?
I was initially thinking per buffer is sufficient for now. There may be value in a buffer usage metric across pipelines; however, we would need to create a standard metric name across the service. Ideally, if we support a more persistent buffer solution, we may be able to generalize the buffer across pipelines. Again, all for the future though.
Should all buffer plugins support this metric? This would require standardizing buffer plugins to have a max buffer size or capacity attribute.
Alternatively, I could create this metric just for the
I think we should start with
Is your feature request related to a problem? Please describe.
Buffer fill percentage is the primary indicator for horizontal scaling of Data Prepper. However, the metrics reported by Micrometer use a gauge with no concept of the upper bound defined in the pipeline configuration. Determining when to scale, and reviewing metrics in dashboards, requires knowledge of the `blockingBuffer` pipeline configuration buffer size values in relation to the current `blockingBuffer.recordsInBuffer` metric value.
Describe the solution you'd like
A new metric, `blockingBuffer.bufferUsage`, which tracks the utilization rate of the buffer based on the number of records in the buffer and the buffer size.
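As a rough illustration of what such a gauge could report, here is a minimal Java sketch of the proposed computation. All names here (`BufferUsageSketch`, `write`, `bufferUsage`) are hypothetical and not Data Prepper's actual API; the point is only that utilization can be derived from the record count and the configured buffer size, and the resulting value could then be registered as a Micrometer gauge.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch of a bounded buffer exposing a fill-percentage value.
// Names are illustrative only; this is not Data Prepper's implementation.
public class BufferUsageSketch {
    private final BlockingQueue<String> records;
    private final int capacity;

    public BufferUsageSketch(int capacity) {
        this.capacity = capacity;
        this.records = new ArrayBlockingQueue<>(capacity);
    }

    public void write(String record) throws InterruptedException {
        records.put(record); // blocks when the buffer is full
    }

    // Utilization as a percentage of the configured buffer size.
    // A value like this could back a gauge such as blockingBuffer.bufferUsage.
    public double bufferUsage() {
        return 100.0 * records.size() / capacity;
    }

    public static void main(String[] args) throws InterruptedException {
        BufferUsageSketch buffer = new BufferUsageSketch(8);
        buffer.write("event-1");
        buffer.write("event-2");
        // 2 of 8 records in the buffer -> 25.0 percent
        System.out.println(buffer.bufferUsage());
    }
}
```

Because the percentage is already normalized against the configured capacity, a dashboard or autoscaling rule could alert on a fixed threshold (for example, usage above 80%) without needing to know each pipeline's `buffer_size` setting.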
Describe alternatives you've considered (Optional)