
Fix issue #7142 #7144

Merged
3 commits merged, Feb 2, 2023

Conversation

MarcialRosales
Contributor

@MarcialRosales MarcialRosales commented Feb 1, 2023

Fixes #7142

The issue is that the users retrieved for listing in the limits view are not paged, and hence are not wrapped in a paging struct where the users would sit under an `items` attribute. The limits template assumes the users are paged, so when it evaluates `users.items.length`, it cannot find `items`.
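In essence (a hypothetical sketch of the failure mode, not the actual management-plugin code): a paged response wraps its rows in a struct with an `items` attribute, while the unpaged user list is a bare array, so a template written against the paged shape dereferences `undefined`:

```javascript
// Hypothetical shapes, illustrating the bug rather than the real API payloads.
// A paged response wraps rows in a struct with an `items` attribute:
const pagedUsers = { items: [{ name: 'guest' }], page: 1, page_count: 1 };

// The unpaged user list fetched for the limits view is a bare array:
const unpagedUsers = [{ name: 'guest' }];

// Template logic written against the paged shape works on the first…
console.log(pagedUsers.items.length); // 1

// …but on the second, `items` does not exist:
console.log(unpagedUsers.items); // undefined
// so `unpagedUsers.items.length` throws:
// TypeError: Cannot read properties of undefined (reading 'length')
```

The fix is to make the two shapes agree: either page the user list like the other views, or have the template read the bare array directly.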

Types of Changes

What types of changes does your code introduce to this project?
Put an x in the boxes that apply

  • Bug fix (non-breaking change which fixes issue #NNNN)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause an observable behavior change in existing systems)
  • Documentation improvements (corrections, new content, etc)
  • Cosmetic change (whitespace, formatting, etc)
  • Build system and/or CI

The issue is that users retrieved with the intention to list in the limits view are not paged, hence they are not wrapped in a paging struct where users would be under the items attribute.

Pending selenium tests
@mergify mergify bot added the make label Feb 1, 2023
@MarcialRosales MarcialRosales marked this pull request as ready for review February 1, 2023 17:30
Collaborator

@lukebakken lukebakken left a comment


Great, I just started via make run-broker with this patch and the issue is resolved. I also set and cleared the max-connections limit for the guest user and / vhost :shipit:

@michaelklishin
Member

@MarcialRosales @lukebakken do we know if v3.10.x is affected by #7142?

michaelklishin added a commit that referenced this pull request Feb 2, 2023
@kuPyxa

kuPyxa commented Feb 2, 2023

@MarcialRosales @lukebakken do we know if v3.10.x is affected by #7142?

3.9.28 has this bug too.

@MarcialRosales
Contributor Author

MarcialRosales commented Feb 2, 2023

Yes, v3.10.x is also affected, as is 3.9.28. cc @michaelklishin @kuPyxa

@mirobertod

v3.10.18 has this bug too. Do we have an ETA for when the bug will be fixed for 3.10.x? I see that v3.11.9, which includes the fix, was released the same date as v3.10.18.

@michaelklishin
Member

New releases are shipped every few weeks; there is no fixed schedule.

@GBreply

GBreply commented Mar 21, 2023

Hi all, are you aware that version 3.11.10 has the same problem for Admin/Policies (screenshot attached)? If so, do you have a milestone (e.g. 3.11.12) for the fix?

@MarcialRosales
Contributor Author

MarcialRosales commented Mar 21, 2023

I cannot reproduce it. Can you specify the reproducer steps? Have you also tried clearing your cache?
My steps are:

  • run docker run -p 15672:15672 rabbitmq:3.11.10-management
  • log in with guest:guest
  • click on the Admin tab and then on "Policies".

@kuPyxa

kuPyxa commented Mar 21, 2023

Maybe something got cached. Try Ctrl + F5.

@GBreply

GBreply commented Mar 21, 2023

I upgraded my Kubernetes installation with helm from chart version bitnami/rabbitmq 11.7.0 (where I had the bug on the limits page) to bitnami/rabbitmq 11.11.1, which uses the image bitnami/rabbitmq:3.11.10-debian-11-r5. I then also tried the same chart with the image bitnami/rabbitmq:3.11.11-debian-11-r0.

It seems like clearing the cache manually worked, thanks

@illotum

illotum commented Mar 21, 2023

I can confirm this issue affects 3.10. Considering the other reports, perhaps this needs to be tagged for backporting to 3.10 and 3.9? @michaelklishin

@michaelklishin
Member

There will be no more 3.9 releases.

@michaelklishin
Member

michaelklishin commented Mar 21, 2023

This was backported to v3.10.x

@rabbitmq rabbitmq deleted a comment from mergify bot Mar 21, 2023
@michaelklishin
Member

3.10.19 should include the backport.

@eduardomb08

I'm having the same issue for Users and Policies after upgrading a cluster to 3.11.13 (screenshot attached). Curiously, I updated a different cluster to the same version and that one is working just fine. PS: I'm using the Kubernetes RabbitMQ Operator.


@michaelklishin
Member

@eduardomb08 clear all local storage and cache for the management UI.

@eduardomb08

It seems the discussion was locked, so I thought registering my comment here would be the best way to contribute.

All of the systems and frameworks I have worked with have feature flags working as switches, not irreversible changes. That said, I don't remember working with feature flags in any other distributed system.

I don't really care about the name, but the behavior should be clear in the UI. If enabling a feature flag is designed to be irreversible in RabbitMQ, it would be nice to have not only a callout to that fact in the UI, but also a confirmation dialog when clicking the Enable button.

Lastly, irreversible flags make it impossible for K8S clusters to roll back upgrades. Given that the upgrade worked without any issues in one of my clusters but failed in another, no amount of testing in the first environment would guarantee the second would work as well. If that is the case, I believe anyone would feel extremely insecure (to say the least) about turning a feature flag on in a cluster. Your idea of a CLI command to override the value seems a step in the direction of allowing rollbacks, but it is still not clear how that would happen, since all of that is abstracted by K8S.

@michaelklishin, given that changes may work in one cluster but not in another, how can a change be safely tested in a separate cluster and then safely promoted to production?

PS: No emotions here besides the frustration of some unexpected rework. I'm not trying to push for any change here, but I would love to know how to make it work, since I love and recommend RabbitMQ.

PS2: I was finally able to revert to v3.9, but lost stuff. I'm just thankful that this happened between two of my test clusters and not between real environments.

@michaelklishin
Member

@eduardomb08 you keep assuming that RabbitMQ supports downgrades. It does not, in multiple places. Besides full disk snapshots on all cluster nodes, there is no way to roll back an upgrade, on Kubernetes or not.

Some feature flags can be semantically disabled, but their purpose is to enable rolling upgrades, during which a cluster becomes mixed version (e.g. RabbitMQ 3.11 nodes run alongside a group of 3.10 nodes, all the way until all nodes are upgraded). Some feature releases require older feature flags to be enabled before the upgrade. In RabbitMQ, feature flags are not a mechanism that supports reversion in the vast majority of cases. From our guide on Feature Flags:

It is impossible to disable a feature flag once it is enabled.

Blue/Green upgrades are the safest upgrade option because the original cluster is not going away.

On Kubernetes specifically, trying out a feature flag in a new cluster should not be difficult. In fact, in new clusters all feature flags common across all nodes are usually automatically enabled, so you don't have to do that manually.

RABBITMQ_FEATURE_FLAGS can be used to start a (new) node with only a subset of the flags.
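As a sketch of that environment-variable approach (the flag names below are examples of long-standing RabbitMQ feature flags; check `rabbitmqctl list_feature_flags` for the actual set your version supports):

```shell
# Inspect the feature flags known to a running node and their states
rabbitmqctl list_feature_flags

# Start a brand-new node with only an explicit subset of flags enabled.
# This is only meaningful for a node with no prior data; an existing node
# keeps the flags it has already enabled.
RABBITMQ_FEATURE_FLAGS="quorum_queue,virtual_host_metadata" rabbitmq-server
```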

@eduardomb08

@eduardomb08 you keep assuming that RabbitMQ supports downgrades. It does not, in multiple places. Besides full disk snapshots on all cluster nodes, there is no way to roll back an upgrade, on Kubernetes or not.

Some feature flags can be semantically disabled but their purpose is to enable rolling upgrades, during which a cluster becomes mixed version (e.g. RabbitMQ 3.11 nodes run alongside a group of 3.10 nodes, all the way until all nodes are upgraded). Some feature releases require older feature flags to be enabled before the upgrade. In RabbitMQ, feature flags are not a mechanism that supports reversion in the vast majority of cases. From our guide on Feature flags:

It is impossible to disable a feature flag once it is enabled.

Blue/Green upgrades are the safest upgrade option because the original cluster is not going away.

On Kubernetes specifically, trying out a feature flag in a new cluster should not be difficult. In fact, in new clusters all feature flags common across all nodes are usually automatically enabled, so you don't have to do that manually.

RABBITMQ_FEATURE_FLAGS can be used to start a (new) node with only a subset of the flags.

Thanks for the clarification, @michaelklishin! That's very helpful.

Given your last comment, I believe the UI changes I suggested become even more important. An accidental click could cause great disruption.

Again, thank you very much!


Successfully merging this pull request may close these issues.

Issue with Limit Tab in 3.11.8
9 participants