[VTAdmin API] Cache cluster.GetTablets call for experimental tablet debug vars #12801
There are a couple of other things to check. cc @ajm188
No. VTAdmin has one connection to a vtctld, process-wide, through which all RPCs go. This is managed by the resolver, so reconnects are handled by grpc-go if it detects connectivity issues.
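For readers following along, a minimal sketch of that single shared-connection pattern using plain grpc-go. The address and credentials here are placeholders, and VTAdmin's real dialing goes through its resolver rather than a direct `grpc.Dial`:

```go
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// One process-wide connection: grpc-go multiplexes all RPCs over it and
	// transparently reconnects if it detects connectivity problems, so there
	// is no per-request dial or teardown.
	conn, err := grpc.Dial("vtctld.example:15999",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial vtctld: %v", err)
	}
	defer conn.Close()

	// Every vtctld RPC stub would be created from this same conn, e.g.:
	//   client := vtctlservicepb.NewVtctldClient(conn)
	_ = conn
}
```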
No, we should keep it around, because constant setup/teardown of the underlying connections would strain both the vtctld and vtadmin. That would manifest similarly to what I fixed in #8368.
I think we should instead rework the way the web side makes API requests for the tablet vars, so that we only load what is currently being viewed (as opposed to everything the user could view by switching to another tab, for example). That way each request fetches only what the given context needs.
@ajm188 we can do that, and explore caching if that still isn't sufficient.
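As a hedged illustration of that narrower request pattern, here is a sketch of fetching `/debug/vars` for exactly one tablet rather than fanning out across all of them. `fetchDebugVars`, the URL scheme, and the address are hypothetical, not VTAdmin's actual API:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"time"
)

// fetchDebugVars (hypothetical helper) retrieves /debug/vars for exactly one
// tablet, so the UI can request vars only for the tablet currently in view.
func fetchDebugVars(client *http.Client, tabletAddr string) (map[string]any, error) {
	resp, err := client.Get(fmt.Sprintf("http://%s/debug/vars", tabletAddr))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}

	var vars map[string]any
	if err := json.Unmarshal(body, &vars); err != nil {
		return nil, err
	}
	return vars, nil
}

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	vars, err := fetchDebugVars(client, "tablet-101.example:15100")
	if err != nil {
		fmt.Println("fetch failed:", err)
		return
	}
	fmt.Println("got", len(vars), "vars")
}
```

The web side would invoke this only for the tablet(s) currently rendered, instead of prefetching vars for every tab the user might open.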
Right now, if a user keeps the workflow streams UI open, we make a lot of calls to VTAdmin API's experimental tablet debug vars endpoint. This endpoint first makes a `GetTablets` request to vtctld, and uses the result to put together the tablet's URL in order to make an HTTP request to the tablet's API for the debug vars.

The issue is that all of these `cluster.GetTablets` calls put great strain on a cluster's vtctld. Since the results rarely change, we should cache them to avoid hammering vtctld.
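A minimal sketch of the kind of caching this proposes: a TTL-guarded wrapper around the `GetTablets` call. The cache shape, the `Tablet` type, and the `fetch` signature here are assumptions for illustration, not VTAdmin's actual implementation:

```go
package main

import (
	"context"
	"sync"
	"time"
)

// Tablet stands in for the real vtadmin tablet type (assumption).
type Tablet struct {
	Alias string
	URL   string
}

// tabletCache memoizes GetTablets results per cluster for a short TTL, so
// repeated debug-vars requests don't each hit vtctld.
type tabletCache struct {
	mu      sync.Mutex
	ttl     time.Duration
	entries map[string]cacheEntry
	fetch   func(ctx context.Context, clusterID string) ([]Tablet, error)
}

type cacheEntry struct {
	tablets   []Tablet
	fetchedAt time.Time
}

func (c *tabletCache) getTablets(ctx context.Context, clusterID string) ([]Tablet, error) {
	c.mu.Lock()
	defer c.mu.Unlock()

	if e, ok := c.entries[clusterID]; ok && time.Since(e.fetchedAt) < c.ttl {
		return e.tablets, nil // cache hit: no vtctld round trip
	}

	tablets, err := c.fetch(ctx, clusterID) // cache miss: one GetTablets call
	if err != nil {
		return nil, err
	}
	c.entries[clusterID] = cacheEntry{tablets: tablets, fetchedAt: time.Now()}
	return tablets, nil
}
```

Holding the lock across the fetch doubles as crude single-flighting: concurrent misses for the same cluster collapse into one vtctld call, at the cost of briefly blocking cache hits.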
Links:
- `GetTablet` method: https://github.com/vitessio/vitess/blob/main/go/vt/vtadmin/api.go#L1198