
Provide Performance metrics for Model Service #495

Open
1 of 2 tasks
axel7083 opened this issue Mar 11, 2024 · 1 comment
axel7083 commented Mar 11, 2024

I think it would be a nice touch to show a proper histogram for each resource while the inference server is running.

Here is what I really liked in Lens: being able to clearly see CPU/RAM usage, spot peaks, etc.

[image: Lens resource-usage charts]

Since containers/podman-desktop#6212 has been merged, we could keep a few minutes of history for some stats and display it to the user.
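A minimal sketch of what "keep a few minutes of history" could look like: a fixed-capacity rolling buffer of stat samples that a chart component reads from. All names and the sample shape here (`StatSample`, `StatsHistory`, `cpuPercent`, `memoryBytes`) are assumptions for illustration, not the actual Podman Desktop API.

```typescript
// Hypothetical sketch: a fixed-capacity rolling buffer holding the most
// recent container stat samples (names/shapes are assumptions).
interface StatSample {
  timestamp: number;   // ms since epoch
  cpuPercent: number;  // CPU usage, 0-100 (per core or normalized)
  memoryBytes: number; // resident memory in bytes
}

class StatsHistory {
  private samples: StatSample[] = [];

  // capacity = retention / sample interval, e.g. 5 min at 1 s => 300
  constructor(private readonly capacity: number) {}

  push(sample: StatSample): void {
    this.samples.push(sample);
    if (this.samples.length > this.capacity) {
      this.samples.shift(); // drop the oldest sample
    }
  }

  // Data points for a chart component, oldest first.
  snapshot(): readonly StatSample[] {
    return this.samples;
  }
}

// Usage: push a sample on each stats event, render snapshot() in the UI.
const history = new StatsHistory(3);
for (let i = 0; i < 5; i++) {
  history.push({ timestamp: i, cpuPercent: i * 10, memoryBytes: i * 1024 });
}
console.log(history.snapshot().length);       // 3: only newest samples kept
console.log(history.snapshot()[0].timestamp); // 2: oldest retained sample
```

A ring buffer with a head index would avoid the `shift()` cost, but at a few hundred samples the simple array is fine.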

@slemeur slemeur changed the title Show an histogram for the resources given a container Provide Performance metrics for Model Server Mar 15, 2024
@slemeur slemeur changed the title Provide Performance metrics for Model Server Provide Performance metrics for Model Service Mar 15, 2024
@axel7083 axel7083 self-assigned this Mar 15, 2024
axel7083 (Contributor, issue author) commented:

Here is a POC of a monitoring component that could be placed above the inference container details:

[video: showcase.monitoring.memory.container.mp4]
