That also revealed another bug, which this issue is about:
The `resource.error` message is now returned when we try to watch a resource we should not, for instance the `metrics.k8s.io.nodemetrics` resource used in the node detail page.
That `resource.error` isn't one we let stick, though, so we just `resource.start` / watch again... which results in the same `resource.error`, repeat ad nauseam.
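The retry loop can be sketched as a minimal model. This is a hypothetical illustration of the behaviour described above, not the dashboard's actual code; `runWatchCycle` and `alwaysError` are invented names.

```javascript
// Minimal model of the spam loop: on any resource.error we immediately
// re-issue resource.start, so an error the server will always return
// (e.g. for a resource we are not allowed to watch) repeats forever.
// maxFrames caps the simulation so it terminates.
function runWatchCycle(server, maxFrames = 10) {
  const frames = [];
  const pending = ['resource.start'];
  while (pending.length && frames.length < maxFrames) {
    const out = pending.shift();
    frames.push(out);                  // frame we send
    const reply = server(out);         // server's reply, if any
    if (reply) frames.push(reply);     // frame we receive
    if (reply === 'resource.error') {
      pending.push('resource.start');  // naive retry: the bug
    }
  }
  return frames;
}

// A server that always rejects the watch (resource not watchable).
const alwaysError = (frame) =>
  frame === 'resource.start' ? 'resource.error' : null;
```

With `runWatchCycle(alwaysError, 6)` the captured traffic is `resource.start`, `resource.error` repeated three times, mirroring the cycle seen on the cluster socket.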
Impact
This happens on navigation to the node detail page, not only when refreshing while already on the detail page.
The spam does not stop when going from the detail page to the list page. Going from the node detail page, where we watch an individual node's metrics, to the node list, where we watch all node metrics, should behave like resources such as pods, which stop the individual watch and start a watch for all... but it doesn't.
This applies to all resources we can't watch (no `watch` verb in the schema; about 8 of 182 resource types in a downstream RKE2 cluster).
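A guard based on the schema's verbs could prevent the watch from ever being attempted. The sketch below is an assumption modelled on Kubernetes discovery data; the `canWatch` helper and the schema shape (`id`, `verbs`) are illustrative, not the dashboard's actual types.

```javascript
// Hypothetical guard: only start a watch for resources whose schema
// advertises the 'watch' verb.
function canWatch(schema) {
  return Array.isArray(schema?.verbs) && schema.verbs.includes('watch');
}

// Example schemas (illustrative values, not real discovery output):
const pods = { id: 'pod', verbs: ['get', 'list', 'watch'] };
const nodeMetrics = { id: 'metrics.k8s.io.nodemetrics', verbs: ['get', 'list'] };
```

Under this sketch `canWatch(pods)` is `true` while `canWatch(nodeMetrics)` is `false`, so the node-metrics watch would be skipped instead of looping on `resource.error`.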
To Reproduce
Bring up a rancher instance at or newer than 793118 (7th March)
Navigate to the node list
Bring up dev tools
Navigate to a node's detail page
Result
The messages sent over the cluster socket repeat the `resource.start`, `resource.stop` / `resource.error`, `resource.start` cycle.
Expected Result
We should not attempt to watch resources the user cannot watch
Screenshots
gaktive changed the title from "[backport v2.8.next2] Steve socket is spammed on fresh visit to node detail page" to "[backport v2.8.next1] Steve socket is spammed on fresh visit to node detail page" on Apr 2, 2024
Tested version v2.8-1a77fcd152f1a7bad3ae8e98bb96d96259f41510-head in a k3s single-node cluster on localhost. The `resource.error` messages described in the screenshot are no longer displayed. I wanted to confirm before closing the issue, because I still see the following `resource.error` messages on the node detail page:
Also, the following `resource.error` messages on the node list:
The Resource error warnings in the console can be ignored; at some point they should no longer be sent to the UI (see rancher/rancher#40627).
The `resource.error` messages are sent over the websocket, so they will appear in dev tools --> Network tab --> WS tab --> the /k8s/clusters/<cluster id>/v1/subscribe row's Messages tab.
The issue should be considered resolved if there are no `resource.error` messages there.
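Frames copied out of that Messages tab can be checked mechanically. This is a hypothetical helper: the `{ name: 'resource.error' }` frame shape is an assumption based on the message names in this issue, not a documented Steve API contract.

```javascript
// Hypothetical helper: count resource.error frames among raw websocket
// messages captured from the /k8s/clusters/<cluster id>/v1/subscribe
// socket. Non-JSON frames are ignored rather than treated as errors.
function countResourceErrors(rawFrames) {
  return rawFrames
    .map((raw) => {
      try {
        return JSON.parse(raw);
      } catch {
        return null; // not JSON; skip
      }
    })
    .filter((msg) => msg && msg.name === 'resource.error')
    .length;
}
```

A count of zero over a captured session would indicate the issue is resolved.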
This is a backport issue for #10668, automatically created via GitHub Actions workflow initiated by @richard-cox
Original issue body:
Setup

Describe the bug

Handling of the `resource.error` message when watching an individual resource over websocket was recently fixed (previously a `resource.error` of `too old` would result in socket spam). That also revealed another bug:
The `resource.error` message is now returned when we try to watch a resource we should not, for instance the `metrics.k8s.io.nodemetrics` resource used in the node detail page.
That `resource.error` isn't one we let stick, though, so we just `resource.start` / watch again... which results in the same `resource.error`, repeat ad nauseam.

To Reproduce

Result

The messages sent over the cluster socket repeat the `resource.start`, `resource.stop` / `resource.error`, `resource.start` cycle.

Expected Result

Screenshots

(screenshot "image" omitted; the signed GitHub attachment URL has expired)