CONSOLE-3171, CONSOLE-3329: Re-initialize graphql client on cluster change #12425
Conversation
startGQLClient();
dispatch(clearSSARFlags());
dispatch(detectFeatures());
I think we should create a function that performs all the necessary dispatches, as these three calls are repeated in a couple more places.
Also, I think we should re-run API discovery.
Feel free to add the API discovery here. We only left it out because we knew it wouldn't work yet :)
I will create a separate PR for that. There is some caching involved on the frontend side, so I'm not sure that running just dispatch(startAPIDiscovery()) would be correct.
handle(graphQLEndpoint, authHandlerWithUser(func(user *auth.User, w http.ResponseWriter, r *http.Request) {
	cluster := serverutils.GetCluster(r)
	k8sProxyConfig := s.getK8sProxyConfig(cluster)
	k8sProxy := proxy.NewProxy(k8sProxyConfig)
	k8sResolver := resolver.K8sResolver{K8sProxy: k8sProxy}
	rootResolver := resolver.RootResolver{K8sResolver: &k8sResolver}
	schema := graphql.MustParseSchema(string(graphQLSchema), &rootResolver, opts...)
	handler := graphqlws.NewHandler()
	handler.InitPayload = resolver.InitPayload
	graphQLHandler := handler.NewHandlerFunc(schema, &relay.Handler{Schema: schema})
@TheRealJon you already had the backend part implemented :)
I'm about to change this implementation a little, but it should be an easy adjustment. Instead of creating a new proxy instance on every request, we should create one on the first request, then reuse the same proxy instance on subsequent requests.
That is already happening, IIUC. The handle function is executed once, when the graphql client tries to connect to the graphql server: we create a proxy and a graphql handler here. The rest of the communication happens over the websocket, and the graphql handler always uses the same proxy instance.
When the cluster is changed in the UI, the websocket is closed and a new one is created, executing the handle function again (with the new cluster as a parameter).
Yes, but I think the number of proxy instances still scales with the number of connections, which is probably not sustainable: with many users and sessions, proxy instances keep accumulating.
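The reuse being discussed here could be implemented by caching one proxy per cluster instead of constructing a new one inside the handler. This is only a minimal sketch, not the console's actual code: the proxy struct, proxyCache type, and all names are hypothetical stand-ins for proxy.NewProxy and friends.

```go
package main

import (
	"fmt"
	"sync"
)

// proxy is a hypothetical stand-in for the console's proxy.Proxy type.
type proxy struct{ cluster string }

// proxyCache lazily creates one proxy per cluster on first use and
// returns the same instance on every subsequent request, so the number
// of proxies is bounded by the number of clusters, not connections.
type proxyCache struct {
	mu      sync.Mutex
	proxies map[string]*proxy
}

func newProxyCache() *proxyCache {
	return &proxyCache{proxies: make(map[string]*proxy)}
}

func (c *proxyCache) get(cluster string) *proxy {
	c.mu.Lock()
	defer c.mu.Unlock()
	if p, ok := c.proxies[cluster]; ok {
		return p // reuse the existing proxy for this cluster
	}
	p := &proxy{cluster: cluster}
	c.proxies[cluster] = p
	return p
}

func main() {
	cache := newProxyCache()
	a := cache.get("hub")
	b := cache.get("hub")       // same instance as a
	c := cache.get("managed-1") // distinct instance
	fmt.Println(a == b, a == c) // true false
}
```

A cache like this would be created once at server startup and the graphql handler would call get(cluster) instead of proxy.NewProxy, keeping the per-connection handler setup cheap.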
Force-pushed from b7a2416 to 7cdd4da (Compare)
A couple of questions and nits, but this is awesome!
Force-pushed from 7cdd4da to a312dee (Compare)
/lgtm
QE Approver:
/assign @yapei
Docs Approver:
/assign @opayne1
PX Approver:
/assign @RickJWagner
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: jhadvig, rawagner. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/label docs-approved
/label px-approved
/test e2e-gcp-console
@rawagner Could you help rebase to master?
Force-pushed from a312dee to 6916fae (Compare)
New changes are detected. LGTM label has been removed.
/label qe-approved
/test e2e-gcp-console
@rawagner: The following test failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
During our testathon, these changes were deployed and a few related regressions were noted:
- User settings fail to load when managed clusters are selected since the user UID changes. We are only fetching the user settings configmap from the hub cluster, and we use the user UID as part of the configmap name. We probably need to either track the hub cluster user UID separately to make sure we fetch the correct configmap, or update the user settings logic to fall back to local storage in a multicluster environment.
- I experienced 401 responses from graphql when logged in as a user with cluster viewer permissions.
- API discovery still needs to be addressed.
We can probably merge this and address the issues in a follow-on PR, but I just wanted to make sure we have these documented.
/hold
Great findings, @TheRealJon
I'm hesitant to merge this without at least addressing 1 and 2 because those are pretty serious regressions. The API discovery issue might already have a bug in Jira and I'm fine tracking it separately since it's not introduced by these changes.
Multicluster feature has been deferred. Closing.
/jira refresh
@TheRealJon: This pull request references CONSOLE-3329 which is a valid jira issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Both related Jira issues:
No description provided.