Prefer a simple counter-based lock. #47257
Conversation
Force-pushed from 9bb3c91 to 6e612c7
```diff
@@ -2,9 +2,61 @@
 module ActionView
   class CacheExpiry
     class ExecutionLock
```
Wouldn't it be possible to contribute this kind of lock to concurrent-ruby itself? 🤔
I don't have a strong opinion about it, but I don't think the implementation is general enough for upstreaming to concurrent-ruby to make sense. That being said, we could ask? It's quite specific to the use case of the cache layer.
Force-pushed from 2377eb6 to 4b25293
Force-pushed from 0f6445d to d24cbb0
This allows `acquire_read_lock` and `release_read_lock` to be called from different fibers/threads (which can be the case for falcon).
Force-pushed from d24cbb0 to 6fdc8d8
This PR specifically includes tests for the faulty behaviour in the linked issue, so it definitely fixes that specific issue and can prevent regressions. Additionally, I wrote a few more tests to confirm the correct behaviour of the execution lock.
```ruby
execution_lock.with_write_lock do
  assert_equal 1, execution_lock.write_count
  assert_equal 1, execution_lock.read_count
```
Shouldn't the `read_count` always be 0 inside the `with_write_lock` block?
No, because you can upgrade a read lock to a write lock (the reentrant part of the original design).
Right, having the write lock is considered having both and here it's tracked explicitly in both counts.
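For readers following along, here's a minimal sketch of what such a counter-based lock could look like. The class and method names (`ExecutionLock`, `acquire_read_lock`, `release_read_lock`, `with_write_lock`, `read_count`, `write_count`) come from the diff, but the body below is an illustrative assumption, not the PR's actual code, and it omits the read-to-write upgrade path discussed in this thread:

```ruby
# Illustrative sketch only. State lives in shared counters guarded by
# one mutex rather than per-owner records, so any thread or fiber can
# release a lock that a different one acquired.
class ExecutionLock
  attr_reader :read_count, :write_count

  def initialize
    @mutex = Mutex.new
    @condition = ConditionVariable.new
    @read_count = 0
    @write_count = 0
  end

  def acquire_read_lock
    @mutex.synchronize do
      # Wait until no writer holds the lock.
      @condition.wait(@mutex) while @write_count > 0
      @read_count += 1
    end
  end

  def release_read_lock
    @mutex.synchronize do
      @read_count -= 1
      @condition.broadcast if @read_count == 0
    end
  end

  def with_write_lock
    @mutex.synchronize do
      # Wait until no writer and no reader holds the lock.
      @condition.wait(@mutex) while @write_count > 0 || @read_count > 0
      # Holding the write lock counts as holding both, which is why
      # both counters read 1 inside the block.
      @write_count += 1
      @read_count += 1
    end
    begin
      yield
    ensure
      @mutex.synchronize do
        @write_count -= 1
        @read_count -= 1
        @condition.broadcast
      end
    end
  end
end
```

Because nothing records *which* thread or fiber took a lock, releasing from a different execution context is naturally allowed, which is the property falcon needs.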
```ruby
end

# Wait for both the threads to be waiting for the lock:
Thread.pass until reader.status == "sleep" && writer.status == "sleep"
```
Why would the `reader` ever wait/sleep? It can acquire the read lock straight away, and the same goes for the rest of its body, no?
Ah, you are right, I should ensure the write lock has been acquired first.
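One deterministic way to do that is a latch, so the test knows the writer really holds the lock before the reader starts. A hypothetical illustration with a plain `Mutex` and `Queue` (not the PR's actual test code):

```ruby
lock = Mutex.new
acquired = Queue.new
release  = Queue.new

writer = Thread.new do
  lock.synchronize do
    acquired << :locked # signal: the lock is now held
    release.pop         # keep holding it until told to let go
  end
end

acquired.pop            # don't start the reader until the writer owns the lock

reader = Thread.new { lock.synchronize { :read } }

# Now it is meaningful to wait for the reader to block on the lock:
Thread.pass until reader.status == "sleep"

release << :go          # the writer releases; the reader can proceed
writer.join
raise unless reader.value == :read
```

Without the `acquired.pop` handshake, the reader could win the race and finish before ever sleeping, and a `Thread.pass until ... == "sleep"` loop would spin forever.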
Which test would fail with the previous lock and falcon?
The lack of safety checks here makes me nervous (e.g. …). This also feels more like a symptom of a more general possible concern. Is there something we should be doing to ensure both calls occur in the same execution context? Beyond any locks like this, it seems like it would also break thread/fiber variables, and thus …
Indeed, I also wonder about the need to change this.
I agree, but this is the design of Rails' Executor mechanism (which I assume we would prefer not to change). In addition, this code path is only used in development environments, so the risk is low.
There is no guarantee that the …
It's impossible to change this due to the way it's used and designed. Personally, I don't think the executor model should be taking long-lived locks; it's a bad design IMHO, but I also understand what it is trying to achieve. The long-lived lock can cause problems in many ways... the design of the caching layer depending on it seems like a mistake to me. Rather than reloading before each request, why not just skip caching when it isn't requested, for example? There are no other use cases in Rails like this one, so if we could change this behaviour it would be vastly simpler.
Sometimes …
- Ensure test writer thread is waiting before reader.
I think we should not do this. This reloader really should not have its own lock in the first place (it's not different enough from the other things we need to reload). I'll open a PR shortly to move us to reusing the same reloader infrastructure as elsewhere. It's possible that will work around the immediate error, but I agree with the concerns about other issues. In general, I think everything expects the request to remain on the same thread/fiber (DB connection, CurrentAttributes, much more); it may just be that this is the most obvious way in which it fails.
Thanks for the review. I'll close this PR. Can you please link me to your proposed fix? Thanks!
I think it may be more OK to write/stream the response in a separate fiber, but in terms of structured concurrency and safe handling of resources, it seems important/valuable that whatever "opens" the resource closes it on the same Fiber.
Can you elaborate on why that's important? In my case, passing the stream between different clients and servers (in the case of a proxy etc.) is fairly useful. The server that's writing the response may not be the same as the one that generated it, etc.
That's this bit: …
Yeah, and I also meant that it does break structured concurrency, i.e. it's unstructured concurrency if the "parent" Fiber which "opens" the resource doesn't close it, and the Fibers inside do not end before that parent Fiber.
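As a hypothetical illustration of that structured shape (not code from this PR): the fiber that opens the resource is the one that closes it, and any nested fiber finishes before the parent does.

```ruby
events = []

# The parent fiber "opens" a resource, runs a child fiber to
# completion, then "closes" the resource itself: the child ends
# before the parent, and the opener is also the closer.
parent = Fiber.new do
  events << :open
  child = Fiber.new { events << :child_done }
  child.resume       # the child runs and finishes inside the parent
  events << :close   # the same fiber that opened also closes
end
parent.resume

raise unless events == [:open, :child_done, :close]
```

Handing the "close" responsibility to some other fiber is exactly the unstructured case being discussed: the resource's lifetime then no longer nests inside its opener's lifetime.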
This allows `acquire_read_lock` and `release_read_lock` to be called from different fibers/threads (which can be the case for falcon). See 9a4c1e2, which introduced the current implementation.
Motivation / Background
This is causing problems as outlined in detail here: socketry/falcon#166
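To see why owner-tracking locks are the problem here (illustrative only; this is the general failure mode, not the exact code path from the linked issue), Ruby's own `Monitor` records which thread acquired it and raises if a different thread tries to release:

```ruby
require "monitor"

mon = Monitor.new
mon.enter # acquired by the main thread

error = nil
Thread.new do
  begin
    mon.exit # this thread is not the owner
  rescue ThreadError => e
    error = e
  end
end.join

raise unless error.is_a?(ThreadError)
mon.exit # the owning thread can still release it
```

A counter-based lock with no owner records sidesteps this entirely, at the cost of the safety checks discussed in the review above.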