Split push notifications across multiple oxenmq instances #33

Merged: 4 commits merged into oxen-io:v2 on Sep 18, 2023

Conversation

jagerman (Member)

The number of push notifications we are getting sometimes hits a limit on how many requests we can handle through the oxenmq proxy server at once. This suggests oxenmq's proxy thread needs some optimization attention, but in the meantime this change deals with it in hivemind by starting up multiple OxenMQ instances for push notification handling to distribute the load.

This also reduces the number of general threads in such a mode, for both the push and the main oxenmq instances: it makes no sense to use hardware_concurrency on *each* of multiple oxenmq servers, especially when running on a 24-thread server.
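
For illustration, a minimal sketch of what such a split can look like with the oxenmq C++ API; the instance count, socket paths, and thread arithmetic below are assumptions for the example, not values taken from this change:

```cpp
#include <oxenmq/oxenmq.h>

#include <algorithm>
#include <memory>
#include <string>
#include <thread>
#include <vector>

int main() {
    // Hypothetical split: one main instance plus a few dedicated push
    // instances, dividing the hardware threads between them instead of
    // letting each instance default to hardware_concurrency workers.
    constexpr unsigned push_instances = 3;  // assumed count, not from the PR
    const unsigned hw = std::max(2u, std::thread::hardware_concurrency());
    const unsigned per_instance = std::max(2u, hw / (push_instances + 1));

    oxenmq::OxenMQ main_omq;
    main_omq.set_general_threads(per_instance);

    std::vector<std::unique_ptr<oxenmq::OxenMQ>> push_omqs;
    for (unsigned i = 0; i < push_instances; i++) {
        auto& omq = *push_omqs.emplace_back(std::make_unique<oxenmq::OxenMQ>());
        omq.set_general_threads(per_instance);
        // Each instance gets its own listening socket and, crucially, its
        // own proxy thread, which is the bottleneck being worked around.
        omq.listen_plain("ipc://push-" + std::to_string(i) + ".sock");
        omq.start();
    }

    main_omq.start();
    // ... register categories/commands, then run until shutdown ...
}
```

Each OxenMQ instance runs a single proxy thread, so spreading push traffic across several instances sidesteps the one-proxy bottleneck, while the per-instance `set_general_threads` keeps the combined worker count close to the old single-instance total.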

We can also get large bursts of incoming messages that overwhelm the default queue limit of 200 pending requests; this increases it substantially (to 4000 or 6000, depending on the oxenmq mode) for the SPNS.
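
In oxenmq that limit is the `max_queue` parameter of `add_category`, which defaults to 200. A sketch of raising it; the category and command names here are hypothetical, not the ones this PR uses:

```cpp
#include <oxenmq/oxenmq.h>

// Registers a notification category with a much larger pending-request
// queue than oxenmq's default of 200.  "notify" and "push" are made-up
// names for illustration.
void setup_commands(oxenmq::OxenMQ& omq, bool push_mode) {
    omq.add_category(
               "notify",
               oxenmq::Access{oxenmq::AuthLevel::none},
               /*reserved_threads=*/0,
               /*max_queue=*/push_mode ? 6000 : 4000)
            .add_request_command("push", [](oxenmq::Message& m) {
                // ... queue the notification for delivery ...
                m.send_reply("OK");
            });
}
```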
Restarting everything was also a bit screwy: READY wasn't sent until hivemind was done waiting for notifiers, but the notifiers are set to start after hivemind (via systemd's `After=`), so we would always hit the timeout, and only once it expired would the notifiers connect. This fixes that wrong behaviour in the startup ordering.
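
A rough sketch of the implied ordering fix, using libsystemd's `sd_notify`; this is illustrative C++ rather than hivemind's actual code, and `wait_for_notifiers` is a hypothetical stand-in for its wait loop:

```cpp
#include <systemd/sd-daemon.h>

#include <chrono>
#include <thread>

// Hypothetical stand-in for hivemind's "wait for notifiers" step.
static void wait_for_notifiers() {
    std::this_thread::sleep_for(std::chrono::seconds{1});
}

int main() {
    // The notifier units are ordered After= this service, so systemd won't
    // start them until READY=1 arrives.  Signalling readiness *first* lets
    // them launch and connect, instead of the wait always timing out.
    sd_notify(0, "READY=1");
    wait_for_notifiers();
    // ... run the main loop ...
}
```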
@jagerman (Member, Author)

This has been live (and working well) on our production server for a few days now; merging it.

jagerman merged commit 3d1d778 into oxen-io:v2 on Sep 18, 2023