[INF-5971] Add latency architecture page #2682
Conversation
This is awesome, nice work @emptyhammond.
Initial thoughts are:
- We should be a bit more punchy in selling ourselves. Happy to loop in marketing to help?
- We have a Performance section in our docs, yet we're not linking to this, and vice versa. Should we consolidate / cross-link?
> ## How latency is measured
>
> Ably employs sophisticated measurement methodologies to accurately capture latency performance across the global platform:
"Sophisticated" sounds a little like we're punching our chests and hyperbole :)
Perhaps just rigourous?
> Message delivery latency measures the time from when a message is published to Ably until it is delivered to a subscriber. This metric focuses on the core messaging performance without including the return path.
>
> [Image: latency percentile chart]
P50 is not a well-known metric; can we just go with median and p90?
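
For context on the metric described in the quoted passage (publish-to-subscriber delivery time), here is a minimal sketch of how it could be sampled from a single client using the promise-based ably-js API. The channel name, API key placeholder, and timestamp-in-payload approach are illustrative assumptions, not how Ably measures latency internally; publishing and subscribing from the same process keeps both readings on one clock.

```typescript
import * as Ably from 'ably';

// Illustrative only: the key and channel name are placeholders.
const client = new Ably.Realtime({ key: 'YOUR_ABLY_API_KEY' });
const channel = client.channels.get('latency-test');

async function sampleDeliveryLatency(): Promise<void> {
  // Attach the subscriber first so the published message is not missed.
  await channel.subscribe('ping', (message) => {
    const { sentAt } = message.data as { sentAt: number };
    // Time elapsed from the publish call to delivery at this subscriber.
    console.log(`delivery latency: ${Date.now() - sentAt}ms`);
  });

  // Carry the publish timestamp in the payload so the subscriber can
  // compute elapsed time against the same local clock.
  await channel.publish('ping', { sentAt: Date.now() });
}

sampleDeliveryLatency().catch(console.error);
```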
> meta_description: "Understand Ably's latency performance metrics and how they ensure consistent, low-latency message delivery across the global platform."
> ---
>
> Latency is the most critical performance metric for realtime messaging platforms. Ably's architecture is specifically designed to minimize latency and ensure consistent, predictable message delivery times across the global infrastructure.
I think we need some more confidence here and we can ask marketing to help.
But in short, I think we need to be bolder:
- Our platform is designed to optimise transit latency; we're a realtime system after all.
- We consistently deliver the lowest latencies of any pub/sub WebSockets platform. Latency is not about one-off performance, it's about consistent performance.
- We realise our competitors make outrageous claims like this too, so we stick with data and are proud of numbers we welcome you to validate.
- Our global median latency is 37ms, 3x lower than what the human eye can detect.
> ## Key latency metrics
>
> Ably measures and optimizes for several critical latency metrics to ensure consistent performance across the platform:
Let's add a callout here: 37ms global median latency.
> ### Statistical analysis
>
> Latency data is analyzed across multiple percentiles to ensure comprehensive performance understanding:
We need to provide some indication of how many measurements are made.
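
As an aside on the percentile breakdown being discussed, the sketch below shows how median / p90 / p99 figures can be read off a batch of latency samples using a nearest-rank percentile. The sample values are made up for illustration; a real analysis would aggregate far more measurements across regions.

```typescript
// Nearest-rank percentile over a batch of latency samples (milliseconds).
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Made-up sample values for illustration only.
const samplesMs = [22, 31, 35, 37, 40, 44, 52, 61, 75, 120];

console.log('median:', percentile(samplesMs, 50)); // 40
console.log('p90:', percentile(samplesMs, 90));    // 75
console.log('p99:', percentile(samplesMs, 99));    // 120
```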
> This multi-percentile approach ensures that Ably's latency performance is consistent and predictable across all use cases and load conditions.
>
> ## Global infrastructure optimization
From this point forwards, isn't this covered in our other performance docs? Should this be one doc / multiple?
Description
WIP: Add Latency page to the platform architecture documentation. Images will be swapped out for embedded Metabase reports in due course.
Checklist