
[feature] Allow sending webhook events in batches #249

Merged
merged 11 commits into from
Jan 8, 2022

Conversation

Daynnnnn (Contributor) commented Jan 7, 2022

Currently, when firing off webhooks based on events, there is no batching in place, so 1,000 client events mean 1,000 webhook calls. This can cause a lot of overhead at high scale. This PR introduces a feature that allows you to send multiple events in one webhook if the events occurred at very similar times (within 50ms of each other).

In our case, we're receiving 100,000 client events over 10 seconds, so this reduces a lot of overhead.
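The batching idea described above can be sketched roughly as follows. This is a minimal illustration with hypothetical names (`WebhookBatcher`, `WebhookEvent`), not the actual implementation in this PR: the first event queued in an empty batch starts a timer, and everything queued before the timer fires is sent as one webhook call.

```typescript
// Minimal sketch of time-window batching (hypothetical names, not the
// actual soketi code): events arriving within `durationMs` of the first
// queued event are grouped into a single webhook payload.
type WebhookEvent = { name: string; channel: string; data?: unknown };

class WebhookBatcher {
    private batch: WebhookEvent[] = [];
    private timer: ReturnType<typeof setTimeout> | null = null;

    constructor(
        private durationMs: number,
        private send: (events: WebhookEvent[]) => void,
    ) {}

    // Queue an event; the first event in an empty batch starts the timer.
    add(event: WebhookEvent): void {
        this.batch.push(event);
        if (this.timer === null) {
            this.timer = setTimeout(() => this.flush(), this.durationMs);
        }
    }

    // Send everything queued so far as one webhook call.
    flush(): void {
        if (this.timer !== null) {
            clearTimeout(this.timer);
            this.timer = null;
        }
        if (this.batch.length > 0) {
            const events = this.batch;
            this.batch = [];
            this.send(events);
        }
    }
}
```

With a 50ms window, 100,000 events over 10 seconds collapse into at most ~200 webhook calls instead of 100,000.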

I've attached an image, where the red box shows invocations of my Lambda "webhook" when WEBHOOK_BATCHING is set to 0, and the green box shows the same action when it is set to 1.

I'm pretty new to this, so let me know if you want me to add, remove, or rework anything :)

Screenshot 2022-01-07 at 18 57 09

Daynnnnn and others added 2 commits January 7, 2022 19:03
DRY concept on sendSimple and sendWebhookByBatching
Renames
@rennokki rennokki changed the title Allow batching of events in webhooks [feature] Allow sending webhook events in batches Jan 7, 2022
rennokki (Member) commented Jan 7, 2022

This PR is awesome. 😀

So for the sake of simplicity, I added WEBHOOKS_BATCHING (disabled by default) and WEBHOOKS_BATCHING_DURATION (50 by default), perhaps it will be useful to tune it according to your needs.
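The two options might be read from the environment along these lines. This is a hypothetical sketch (the helper name and the exact config layer are assumptions, not the PR's code); it only shows the defaults described above: batching off, 50ms duration.

```typescript
// Hypothetical config reader for the two options described above:
// WEBHOOKS_BATCHING (disabled by default) and
// WEBHOOKS_BATCHING_DURATION (50ms by default).
type Env = Record<string, string | undefined>;

function readBatchingConfig(env: Env): { enabled: boolean; durationMs: number } {
    return {
        enabled: (env.WEBHOOKS_BATCHING ?? '0') === '1',
        durationMs: parseInt(env.WEBHOOKS_BATCHING_DURATION ?? '50', 10),
    };
}
```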

I've cleaned up some code, moved it into its own class, and rewrote it a bit to reduce duplication.

I have to say that increasing the batch-keeping duration beyond SHUTDOWN_GRACE_PERIOD will surely lead to cases where the running instance is signaled to close and won't have time to send the batches still in memory, losing them forever. I think < 1s is convenient. However, signals like SIGKILL won't allow the instance to terminate its connections and send the batches it currently holds under its leadership, but I guess you're aware of this. 😁

Even though I rewrote the PR a bit, you did a great job. Please let me know if this version works well for you.

codecov bot commented Jan 7, 2022

Codecov Report

Merging #249 (22a32d9) into master (32dab3b) will decrease coverage by 0.42%.
The diff coverage is 65.38%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #249      +/-   ##
==========================================
- Coverage   85.00%   84.57%   -0.43%     
==========================================
  Files          40       40              
  Lines        1881     1900      +19     
  Branches      337      343       +6     
==========================================
+ Hits         1599     1607       +8     
- Misses        272      282      +10     
- Partials       10       11       +1     
Impacted Files Coverage Δ
src/server.ts 87.56% <ø> (ø)
tests/utils.ts 93.63% <ø> (ø)
src/webhook-sender.ts 79.79% <65.38%> (-5.21%) ⬇️
src/rate-limiters/cluster-rate-limiter.ts 87.50% <0.00%> (-9.38%) ⬇️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 32dab3b...22a32d9. Read the comment docs.

Daynnnnn (Contributor, Author) commented Jan 7, 2022

Wow, that was quick! 😁 Yeah, all those changes make sense, and the batch duration is great; I wish Pusher had that!

rennokki (Member) commented Jan 7, 2022

I was already trying to merge some PRs so I could respond quickly. 😁
