Group scalability upgrades #22700
Conversation
We have also been working on the UI changes that go along with these changes. Having the group providers work separated from the UI work procedurally makes sense. But without the UI changes Groups UI will break. Would you agree? |
core/src/main/java/org/keycloak/representations/idm/GroupRepresentation.java
...gration/admin-client-jee/src/main/java/org/keycloak/admin/client/resource/GroupResource.java
model/infinispan/src/main/java/org/keycloak/models/cache/infinispan/GroupAdapter.java
    }
    subGroups.add(subGroup);
}
return subGroups.stream().sorted(GroupModel.COMPARE_BY_NAME);
Isn't it better to leave sorting to the caller?
I'm totally fine with that! I was trying to maintain parity with the existing method.
Do you mean ordering at the storage layer? Let's also see what the @keycloak/store-maintainers think about it too.
I just mean that other methods in this class sort the stream at the end, so I followed the same pattern.
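As a minimal sketch of the trade-off being discussed (sorting in the provider vs. leaving it to the caller), the following uses `Stream<String>` as a hypothetical stand-in for `Stream<GroupModel>`; `getSubGroupsStream` and the group names here are invented for illustration, not the actual Keycloak API:

```java
import java.util.List;
import java.util.stream.Stream;

public class SortAtCallerSketch {
    // Provider-side: return the raw stream and leave ordering to the caller.
    // (The real method sorts with GroupModel.COMPARE_BY_NAME before returning.)
    static Stream<String> getSubGroupsStream(List<String> stored) {
        return stored.stream();
    }

    public static void main(String[] args) {
        List<String> stored = List.of("ops", "admins", "devs");
        // The caller applies whatever ordering it needs (natural name order here).
        List<String> sorted = getSubGroupsStream(stored).sorted().toList();
        System.out.println(sorted); // prints [admins, devs, ops]
    }
}
```

Sorting at the call site avoids paying for an ordering the caller may not need, at the cost of consistency with the sibling methods that already sort.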
model/infinispan/src/main/java/org/keycloak/models/cache/infinispan/RealmCacheSession.java
// TODO: this batch size could potentially be stored somewhere (config?) as a variable
// Calculate batches of subgroups to delete from the parent group, to avoid grinding the server to a halt at large scale.
// This is especially helpful for freeing up some database time for other requests.
long batches = (long) Math.ceil(session.groups().getSubGroupsCount(realm, group.getId()) / 1000.0);
Isn't it necessary to also flush at the end of each batch? Otherwise the deletions are just being added to the EntityManager, without any real benefit until the commit phase.
If we really want to run batches, I agree we need to resolve the size dynamically, since it depends a lot on how the server is used. Also, do we really want to be smart and calculate the size from the subgroup count, or just set a default integer and allow changing it?
I didn't pick up on that feature of the entity manager, but it totally makes sense; I'll resolve that.
As far as the calculation goes, I used the count as an indicator of where to stop, but we could also just check whether any groups are left after each batch. That would result in more calls to the DB, though.
Let's see what the store team thinks about it. Perhaps there is already a mechanism/pattern there for batching.
What I was saying is to just include a provider configuration option with a default batch size and allow users to change it accordingly. We take this approach in other places.
I'm not a fan of calculating the number of batches beforehand: with "read committed" isolation, the listed items can change in a concurrent setup, so I'd prefer to loop until the result is empty.
I wonder if you've added `em.flush()` and the batching here to avoid performance problems. Calling `em.flush()` will write the changes to the current database transaction, but the entities will remain in the Hibernate persistence context. It is usually the large persistence context that causes performance problems, and calling `em.flush()` alone doesn't solve that. Whenever data is selected from the database, `em.flush()` is called anyway, so there is no need to call it here again IMHO.
There is currently no good solution to deletion and batching available. I could imagine an async deletion of groups that would iteratively delete a group and its subgroups in a background task, but that would require more thought and would be out of scope for this PR.
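The "loop until the result is empty" pattern suggested above could be sketched roughly as follows. `FakeStore`, its methods, and the batch size are hypothetical stand-ins for the JPA provider, and the `em.flush()`/`em.clear()` step appears only as a comment since there is no real EntityManager here:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchDeleteSketch {
    static final int BATCH_SIZE = 1000; // default; could come from provider config

    // Hypothetical in-memory stand-in for the JPA group store.
    static class FakeStore {
        final List<String> subGroups = new ArrayList<>();

        List<String> fetchBatch(int limit) {
            return new ArrayList<>(subGroups.subList(0, Math.min(limit, subGroups.size())));
        }

        void delete(List<String> batch) {
            subGroups.removeAll(batch);
        }
    }

    static int deleteAllSubGroups(FakeStore store) {
        int deleted = 0;
        while (true) {
            // Re-query each iteration instead of pre-counting batches, so concurrent
            // changes under "read committed" isolation cannot strand leftover rows.
            List<String> batch = store.fetchBatch(BATCH_SIZE);
            if (batch.isEmpty()) {
                break; // loop until the result is empty
            }
            store.delete(batch);
            deleted += batch.size();
            // Real JPA code might do: em.flush(); em.clear();
            // flush writes the batch to the transaction; clear shrinks the persistence context.
        }
        return deleted;
    }

    public static void main(String[] args) {
        FakeStore store = new FakeStore();
        for (int i = 0; i < 2500; i++) {
            store.subGroups.add("g" + i);
        }
        System.out.println(deleteAllSubGroups(store)); // prints 2500
    }
}
```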
model/jpa/src/main/java/org/keycloak/models/jpa/JpaRealmProvider.java
server-spi/src/main/java/org/keycloak/storage/group/GroupLookupProvider.java
This comment was marked as outdated.
…ocs to reflect changes in next version of keycloak, move deprecated usages of RealmModel methods to the corresponding KeycloakSession method
Signed-off-by: Michal Hajas <mhajas@redhat.com>
Force-pushed from c63605e to ba1d7c2
@Redhat-Alice there were some test failures; I added some fixes. I hope it will be OK now. I need to drop now, but I'll be able to have a look tomorrow if anything breaks. Sorry for the delays.
No worries! I'll keep an eye on GHA for the next hour or so and try to fix anything that comes up in the meantime!
Looks like everything is green now 🎉
Changes to the front-end still look good to me.
Thank you @Redhat-Alice! This is a great performance improvement when working with groups!
Resolves #22372