fix(CubeProxy): initialize random seed in worker phase#138
Merged
Conversation
chenhengqi
reviewed
May 6, 2026
Comment on lines +1 to +3
```lua
-- Seed the random number generator for each worker to ensure
-- that cache TTL jitter (math.random) works correctly.
math.randomseed(ngx.now() * 1000 + ngx.worker.id())
```
Collaborator
Please elaborate more on this change in commit message. I still don't get the point of this change.
Collaborator
> This is essential for features like cache TTL jitter to work as intended and avoid synchronized cache expiration stampedes.

The cache is shared by all workers. How does this change help with the issue described above?
Contributor
Author
I've updated the above.
In OpenResty, all worker processes inherit the same state from the master process. Without explicitly seeding the random number generator in the `init_worker` phase, each worker starts with the same default seed. This results in `math.random()` producing the exact same sequence of numbers across all workers.

Seeding with `ngx.now() * 1000 + ngx.worker.id()` ensures that each worker has a unique, time-varying seed. This is essential for features like cache TTL jitter to work as intended and avoid synchronized cache expiration stampedes.

Signed-off-by: novahe <heqianfly@gmail.com>
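For context, this kind of seeding typically lives in the `init_worker_by_lua_block` directive, which runs once in every worker process after the master forks it. A minimal nginx.conf sketch (the block placement here is an assumption for illustration; the PR itself only shows the Lua line):

```nginx
http {
    init_worker_by_lua_block {
        -- Runs once per worker after fork, so each worker derives
        -- its own seed from the current time plus its worker id.
        math.randomseed(ngx.now() * 1000 + ngx.worker.id())
    }
}
```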
Collaborator
The change itself looks good, but I still doubt the rationale. What's the real issue in the following scenario?
Summary
Seed the random number generator for each worker to ensure that cache TTL jitter (math.random) works correctly.
For example, in our cubeProxy logic, we use `math.random` to calculate the cache TTL jitter (reference: rewrite_phase.lua#L20-L22).
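The jitter pattern in question generally looks like the following. This is a hypothetical sketch, not the actual rewrite_phase.lua code; `base_ttl`, `max_jitter`, and the values are illustrative assumptions:

```lua
-- Spread cache expirations by adding a random offset to the base TTL,
-- so entries cached at the same moment do not all expire together.
local base_ttl = 60    -- seconds (illustrative value)
local max_jitter = 10  -- seconds (illustrative value)

-- Without per-worker seeding, every worker draws the same offset here.
local ttl = base_ttl + math.random(0, max_jitter)

-- ttl would then be passed to the shared cache,
-- e.g. ngx.shared.cache:set(key, value, ttl) in OpenResty.
```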
Without a unique seed per worker, all OpenResty workers initialize with the exact same default internal state. If multiple workers receive concurrent requests and calculate this timeout, they will all call `math.random()` and receive the exact same value. This causes all workers to set identical expiration times in the shared cache, leading to a synchronized stampede to the backend when that specific time is reached, completely defeating the purpose of adding jitter.

Here is a simple demo illustrating how the lack of unique seeds synchronizes the behavior across workers:
Demo:
Output:
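A standalone pure-Lua sketch of the same effect, simulating workers within one process (illustrative only; the seeds, draw range, and function name are assumptions, not the PR's actual demo):

```lua
-- Simulate workers that fork with identical RNG state: seeding with the
-- same value reproduces the same math.random() sequence, while seeds
-- derived from time plus a worker id make the sequences diverge.
local function jitter_sequence(seed, n)
  math.randomseed(seed)
  local seq = {}
  for i = 1, n do
    seq[i] = math.random(1, 1000)
  end
  return seq
end

-- Two "workers" with the same inherited seed: identical jitter values.
local w0 = jitter_sequence(1000, 5)
local w1 = jitter_sequence(1000, 5)

-- Two "workers" seeded like the fix (time * 1000 + worker id): divergent jitter.
local now_ms = os.time() * 1000
local f0 = jitter_sequence(now_ms + 0, 5)
local f1 = jitter_sequence(now_ms + 1, 5)
```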
By uniquely seeding each worker based on its `ngx.worker.id()` and the current time, we ensure the random distribution works as intended across the entire proxy layer, keeping backend refreshes safely distributed.

Verification with Real-world Data (from dev environment)
To further validate this, I performed an end-to-end test in a live environment with multiple workers handling concurrent requests.
Case 1: Without Fix (Seeding commented out)
As expected, different workers generated identical timeout sequences, which would trigger a synchronized stampede:
Case 2: With Fix (Unique seeding applied)
Each worker now generates its own unique, distributed timeout value even when requests are handled simultaneously:
This confirms that the fix successfully decouples the workers' random state, ensuring that the cache jitter works as intended across the entire proxy layer.
ref: #135 (comment)