feat: SIGNAL-5811 add time interval bucket feature to ThrottleAndTimed and make loop_interval optional #39
Conversation
A few suggestions and one question about avoiding a potential race condition
```elixir
def handle_cast({:throttle, new_args}, %{args: args} = state) do
  {:noreply, %{state | args: update_args(args, new_args)}}
end
```
Something I'm thinking about is whether there might be issues with using `handle_cast` in a situation where we're continuously getting throttle messages and the timeout occurs in the middle of a burst of them. Is there a chance we could drop some of the new things being added to state, or overwrite them, when we call `update_state_with_work_result/2`?

I'd have to think about it and maybe test it out to know for sure whether that's possible (or not), but maybe you've already considered this 🙂 and done the heavy mental lifting for me.
I was trying to test it and was wondering why I wasn't getting any new data into the state. Then I realized that the beauty of `handle_continue` is that incoming messages will keep getting stored in the mailbox until `handle_continue` finishes running.

After `handle_continue` is finished, the GenServer will then start ingesting the backed-up messages in the mailbox - this was confirmed by checking `Process.info(pid, :messages)`.
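For reference, a minimal way to peek at that backlog (a sketch; `pid` here stands for whatever the throttled process is registered as, which this thread doesn't spell out):

```elixir
# While handle_continue is still running, casts pile up in the process mailbox.
# Process.info/2 with :messages returns the queued, not-yet-processed messages.
{:messages, queued} = Process.info(pid, :messages)
IO.inspect(queued, label: "backed-up mailbox")
```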
> Then I realized that the beauty of `handle_continue` is that incoming messages will keep getting stored in the mailbox until `handle_continue` finishes running.
❤️
> After `handle_continue` is finished, the GenServer will then start ingesting the backed-up messages in the mailbox - this was confirmed by checking `Process.info(pid, :messages)`.

Could we measure/monitor the size of the mailbox queue? That could be valuable lest we run into an issue where messages get dropped. I'm trying to find documentation on the behavior as the mailbox size grows but am having trouble for some reason 🙃
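One way to do that (a sketch, not something Buffy exposes today; the module name and telemetry event name are made up) is to sample `:message_queue_len` and emit it as a telemetry measurement:

```elixir
defmodule MyApp.MailboxMonitor do
  @moduledoc false

  # Emit the current mailbox depth of a throttle process so it can be charted
  # or alerted on. :message_queue_len is a standard Process.info/2 item.
  def report(pid) do
    case Process.info(pid, :message_queue_len) do
      {:message_queue_len, len} ->
        :telemetry.execute([:my_app, :throttle, :mailbox], %{length: len}, %{pid: pid})

      # Process has already exited; nothing to report.
      nil ->
        :ok
    end
  end
end
```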
Thinking about it more, we could probably implement this in the handler itself, during `handle_throttle`, if we want to capture it.
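For example, moving that measurement into the handler (a sketch, assuming `handle_throttle/1` runs in the throttle process itself, which the `handle_continue` discussion above suggests; `do_work/1` and the event name are placeholders):

```elixir
# In a module that uses Buffy.ThrottleAndTimed: record the mailbox depth of the
# throttle process at the moment the throttled work actually runs.
def handle_throttle(args) do
  {:message_queue_len, len} = Process.info(self(), :message_queue_len)
  :telemetry.execute([:my_app, :throttle, :mailbox], %{length: len}, %{args: args})

  do_work(args)
end
```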
💥 I'll try plugging this into the wms-service and testing it on a branch, unless you already have one you're working on that I could use.
@kinson I'll set one up, gimme a sec.

UPDATE: here it is: https://github.com/stordco/wms-service/tree/SIGNAL-5811-with-buffy-pr - I tweaked
@seungjinstord this worked like a charm ✨
An automated release has been created for you.

---

## [2.2.0](v2.1.1...v2.2.0) (2024-03-13)

### Features

* SIGNAL-5811 add time interval bucket feature to ThrottleAndTimed and make loop_interval optional ([#39](#39)) ([3d48d04](3d48d04))

### Miscellaneous

* Sync files with stordco/common-config-elixir ([#27](#27)) ([d7cffde](d7cffde))
* Sync files with stordco/common-config-elixir ([#38](#38)) ([c127668](c127668))

---

This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).
Change `Buffy.ThrottleAndTimed` to be able to modify data in state for each `throttle()` invocation. This is additive, via additional `defoverridable` functions - the original API is intact. The `:loop_interval` option has a looser requirement, as it is now optional.

Example use case: let's say for some event handler operation, we process an id by querying the DB. If that happens a thousand times per second against a connection that takes a lot of CPU/memory, like a Postgres DB, then making a thousand query connections per second would be significantly more expensive than one connection with a list of a thousand ids - let's say the same query ends up returning a list, regardless of whether it is given one id or a list of ids (the logic only differs by the size of the ids list). In that case we need a way to collect ids across some set of events.

`Buffy.ThrottleAndTimed` already has the bulk of the timing mechanism in place; this change adds `defoverridable` functions for key generation, args updating, and state updates upon successful work operations.

Example use (also noted in the moduledoc):
Also see the added test for a more thorough use case.