All services with a selector for the mgr daemon should be updated if there are multiple mgr daemons and the active mgr changes #7988
@psavva Which node IP address are you using to call the dashboard service? You should use the IP address of the node on which the mgr pod is created.
I'm certainly using the correct IP address; you can see I've hit the service and the response is a redirect. I've made sure by accessing via both port forwarding and the node IP, and both methods fail. I'm able to access my other dashboards fine. This is a new bug; I have reproduced it on 2 different clusters.
Are the other clusters using the same Rook and Ceph versions? If their versions mismatch, you can try upgrading Ceph to v15.2.12, as there are recent fixes in it; it will also be the default in Rook v1.6.4.
@psavva If you're getting a redirect, the response is coming from the standby mgr. The active mgr would respond properly, but the standby mgr only responds with redirects. When two mgrs are deployed, Rook periodically updates the dashboard service to direct traffic to the active mgr. If you're defining your own dashboard service based on a node port, you would also need to update it to only direct traffic to the active mgr.
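For illustration, a hand-rolled NodePort dashboard service of the kind described above might look like the sketch below. The label keys and values are assumptions based on a default Rook install (check the labels on your actual mgr pods with `kubectl -n rook-ceph get pods --show-labels`), and the service name is hypothetical:

```yaml
# Hypothetical user-defined NodePort service for the Ceph dashboard.
# Its selector pins traffic to one specific mgr daemon; on failover
# that pinned value would have to be updated to the new active mgr,
# which is what Rook does for the dashboard service it creates itself.
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-external   # illustrative name
  namespace: rook-ceph
spec:
  type: NodePort
  ports:
    - name: dashboard
      port: 8443
      targetPort: 8443
  selector:
    app: rook-ceph-mgr
    ceph_daemon_id: a   # must track the ACTIVE mgr (e.g. "a" or "b")
```

A selector that matches only `app: rook-ceph-mgr` would instead send traffic to both mgrs, including the standby that answers with redirects.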
Thank you for this info; I'll update my configuration in the morning and report back. It seems, however, that this should be automated somehow. Maybe a new label to indicate the active manager would be a good solution; it would also require an update to the current Kubernetes deployment manifest.
Rook does automatically update the dashboard service that it creates to point at the active mgr.
@travisn I'm trying to figure out which is the active manager.
@psavva The labels on the mgr pods are not updated when the active mgr changes, but the selector on the dashboard service that Rook creates is updated to target the active mgr.
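To see which mgr is active on a running cluster, one option (assuming a default Rook install with the toolbox deployed; resource names may differ in your setup) is:

```shell
# Ask Ceph which mgr daemon is currently active (runs in the toolbox pod).
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph mgr stat

# Inspect the selector Rook maintains on its own dashboard service;
# it should be pointing at the active mgr daemon.
kubectl -n rook-ceph get svc rook-ceph-mgr-dashboard \
  -o jsonpath='{.spec.selector}'
```

If the selector on your own service disagrees with what `ceph mgr stat` reports as active, you are hitting the standby and will see redirects.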
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions. |
FYI -- I was having issues with the dashboard:
I am using a LoadBalancer service to access the dashboard from a host separate from the k8s cluster, and the comments above about MGRs switching triggered an 'aha!' moment: I had just increased the MGR count from 1 to 2 when it started breaking. Not sure why, but it seems that I was actually talking to BOTH MGRs -- but only one of them was actually serving the dashboard? I reduced back to 1 MGR and the dashboard started working again. Rook v1.7.4
@dredwilliams Were you referencing the dashboard service that Rook creates, or a separate service you created yourself?
I had created the loadbalancer service using "dashboard-loadbalancer.yaml" ... which (looking now) created a new service 'rook-ceph-mgr-dashboard-loadbalancer' ... so that was probably my problem. I guess I expected that if I used a provided capability, it would respond appropriately ... Thanks! |
Agreed, Rook should be able to update any service that has a selector for the mgr daemon.

edit: Issue title is updated to reflect the proposal
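For context, the kind of user-defined service the proposal would cover looks roughly like the sketch below, modeled on the `dashboard-loadbalancer.yaml` example mentioned above. The exact fields and label keys are assumptions based on a default Rook install:

```yaml
# A user-created LoadBalancer service selecting mgr pods by app label.
# With two mgrs, a broad selector like this routes traffic to both the
# active and the standby mgr, and the standby only answers with
# redirects. Under the proposal, Rook would detect any service with a
# selector for the mgr daemon and keep it pointed at the active mgr.
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-loadbalancer
  namespace: rook-ceph
spec:
  type: LoadBalancer
  ports:
    - name: dashboard
      port: 8443
      targetPort: 8443
  selector:
    app: rook-ceph-mgr
```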
@travisn any updates on this? |
I'm hoping to look at it next week now. |
Is this a bug report or feature request?
Deviation from expected behavior:
I have set up my rook-ceph cluster and enabled the dashboard in the CephCluster CRD.
I have also installed the external dashboard NodePort service.
When visiting the Ceph dashboard, I'm redirected to a wrong URL.
You will notice that I'm accessing my internal IP and NodePort, yet I'm redirected to the mgr pod's hostname, rook-ceph-mgr-a-84c875bd95-svhnd. This is the bug.
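The redirect can be observed from outside the cluster with something like the following (placeholder IP and port; `-I` shows only the response headers, so the redirect's Location header is visible without following it):

```shell
# When the request lands on the standby mgr, the response is a redirect
# whose Location header points at the mgr pod's hostname
# (rook-ceph-mgr-a-... here), which is not resolvable outside the cluster.
curl -kI https://<node-ip>:<nodeport>/
```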
Expected behavior:
The Ceph dashboard should load correctly.
Environment:
* Kernel (e.g. `uname -a`): Linux DGCVM01 3.10.0-1160.el7.x86_64 #1 SMP Mon Oct 19 16:18:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
* Cloud provider or hardware configuration:
* Rook version (`rook version` inside of a Rook Pod):
* Ceph version (`ceph -v`):
* Kubernetes version (`kubectl version`):
* Ceph health (`ceph health` in the Rook Ceph toolbox):