
OrchardCore Clusters #13633

Draft · wants to merge 55 commits into main

Conversation

@jtkech (Member) commented May 2, 2023

Fixes #13636

Distributes requests across tenant clusters by using Microsoft's Yarp.ReverseProxy.

Work in progress, but here is some initial information.

  • We first use the Yarp configuration model, which allows defining Routes and Clusters with many options. Each Route is tied to a Cluster composed of one or more Destinations, across which load balancing can be applied.

  • We only need one catch-all RouteTemplate and multiple Clusters, on each of which we can configure a custom SlotRange [min, max] property (over a space of 16384 slots).

  • Each Tenant has a unique slot hash, and therefore a unique Slot, and belongs to the Cluster whose SlotRange contains that slot, the Cluster itself being composed of multiple Destinations. Note: we could have used a Cluster having Nodes, but we follow the Yarp config, which has a Clusters list of Cluster type.

  • The same application can run as a proxy or behind one (we check the headers). The advantage with our distributed services is that, even when acting as a proxy, the application is still aware of all tenants' data. So, on a request, we can use the same RunningShellTable to resolve the Tenant, select the right Cluster based on the Tenant's slot hash (in a custom middleware), and let Yarp select one of that Cluster's Destinations.

  • To compute a Tenant's slot hash we apply the CRC-16/XMODEM algorithm (the same one Redis uses for clustering keys) to the new TenantId property, which automatically spreads new tenants across the slots and thus across the configured Clusters. CRC-16 is fast to compute and always returns the same number for the same TenantId, so a tenant stays on the same Cluster.

  • The distribution is not perfect with only a few tenants, but it gets better and better as their number increases.

TODO: also couple this to a simple feature allowing a Tenant to be released if it has not been requested for a given amount of time.

@jtkech (Member, Author) commented May 11, 2023

@sebastienros Just for info

  • I removed the proxy Hosts options as discussed in the meeting.

  • So, for now we can add Tenants dynamically, and we would also want to be able to add Clusters dynamically.

    Because a Yarp Cluster has many properties, and since we already have a RouteTemplate, we could also have a ClusterTemplate config in which we could tweak the Destinations. Hmm, but the Cluster SlotRange would also need to be dynamic, for example to evenly distribute Tenants.

  • On Azure, VM or instance-level public IP addresses (PIP) can be configured and could be used in our current configuration. But when instances are created dynamically, how would a running OC instance know that its IP is reachable by our reverse proxy? Maybe a config defining a range of IPs.

  • I also thought about the case where a proxy targets an instance that should itself act as a proxy. That may be going too far, but otherwise the header check that prevents loops would not be reliable. We would need a config to know which instances should act as a proxy, maybe by reusing the proxy Hosts config option.
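On the point above about making the SlotRange dynamic to evenly distribute Tenants, one possible approach (a hedged sketch with a hypothetical helper, not code from this PR) is to recompute contiguous, near-even ranges over the 16384 slots whenever the cluster count changes:

```python
TOTAL_SLOTS = 16384

def even_slot_ranges(cluster_count: int) -> list[tuple[int, int]]:
    """Split [0, TOTAL_SLOTS) into contiguous, near-even inclusive [min, max] ranges."""
    base, remainder = divmod(TOTAL_SLOTS, cluster_count)
    ranges, start = [], 0
    for i in range(cluster_count):
        # The first `remainder` ranges get one extra slot so all slots are covered.
        size = base + (1 if i < remainder else 0)
        ranges.append((start, start + size - 1))
        start += size
    return ranges

print(even_slot_ranges(3))  # -> [(0, 5461), (5462, 10922), (10923, 16383)]
```

One caveat of naive recomputation: when the cluster count changes, some slots move to a different range, so the tenants in those slots would land on a different Cluster. A production design might instead migrate only a subset of slots, the way Redis Cluster reshards.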

Already too many questions, I will think about it ;)

When I have time, as a first step I will try the Yarp in-memory configuration, which allows the Yarp config to be updated dynamically by passing Routes and Clusters. Maybe I can make it easier by exposing a kind of Destinations provider interface, so that when the list of Destinations changes we update the config.

jtkech added 10 commits May 27, 2023 02:56
# Conflicts:
#	OrchardCore.sln
#	test/OrchardCore.Tests/Shell/ShellHostTests.cs
# Conflicts:
#	src/OrchardCore/OrchardCore.Abstractions/Shell/ShellSettings.cs
#	src/OrchardCore/OrchardCore/Modules/ModularTenantContainerMiddleware.cs
#	src/OrchardCore/OrchardCore/Modules/ModularTenantRouterMiddleware.cs
#	test/OrchardCore.Tests/Modules/OrchardCore.Tenants/Services/TenantValidatorTests.cs
#	test/OrchardCore.Tests/Shell/ShellHostTests.cs
# Conflicts:
#	src/OrchardCore.Build/Dependencies.props
#	src/OrchardCore/OrchardCore/Modules/ModularTenantRouterMiddleware.cs
# Conflicts:
#	src/OrchardCore/OrchardCore.Abstractions/Shell/Builders/ShellContext.cs
# Conflicts:
#	OrchardCore.sln
#	src/OrchardCore.Build/Dependencies.props
#	src/OrchardCore.Themes/TheAdmin/Views/Layout-Login.cshtml
#	src/OrchardCore/OrchardCore/Modules/ModularTenantRouterMiddleware.cs

This pull request has merge conflicts. Please resolve those before requesting a review.

@hishamco (Member) commented:

@Piedone is this related to Lombiq? If yes, can anyone else continue on this? Otherwise, what's the progress?

@Piedone (Member) commented Mar 18, 2024

Nope, and we haven't worked on this with JT. It would be useful though for multi-node hosting.

@Piedone Piedone marked this pull request as draft March 21, 2024 21:13

It seems that this pull request hasn't really moved for quite a while. Is this something you'd like to revisit any time soon, or should we close it? Please comment if you'd like to pick it up and remove the "stale" label.

@github-actions github-actions bot added the stale label May 21, 2024
Successfully merging this pull request may close these issues.

OrchardCore Clusters