(fix): refactor for unicorn threads #230
Conversation
lgtm, but looks like tests are failing
@nil_count = 0
@use_pop = false

def process_events
nit: Would it be more apt to call this process_queue instead, since you can have non-event items in there?
…n there is something to process
Have a couple of questions
NotificationCenter::NOTIFICATION_TYPES[:LOG_EVENT],
  log_event
)
Thread.new do
Why are we introducing a thread here? Isn't the Forwarding Event Processor meant to work in the legacy style, dispatching one event at a time in the main thread?
Dispatching synchronously, one event at a time on the same thread, can't scale, so no one will ever use it. However, we could make this configurable.
I think this makes sense, but does it make sense to do it in a follow-up PR? It seems like it'll break a lot of tests. That way we can keep just the batching fix in this PR.
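A configurable dispatch mode, as discussed above, could look roughly like the following. This is only a sketch; the class name, the `async:` option, and the block-based dispatcher are hypothetical illustrations, not the SDK's actual API.

```ruby
# Hypothetical sketch: an event processor that dispatches inline on the
# caller's thread by default (legacy style), but can optionally hand
# events off to a background worker thread.
class SimpleForwardingProcessor
  def initialize(async: false, &dispatch)
    @dispatch = dispatch
    @async = async
    @queue = Queue.new
    start_worker if @async
  end

  def process(event)
    if @async
      @queue << event        # hand off to the background thread
    else
      @dispatch.call(event)  # legacy style: dispatch on the caller's thread
    end
  end

  private

  def start_worker
    # Blocking pop is fine here because this mode has no timed flush.
    @worker = Thread.new { loop { @dispatch.call(@queue.pop) } }
  end
end

# Usage in the default (synchronous) mode:
seen = []
processor = SimpleForwardingProcessor.new { |e| seen << e }
processor.process(:activate)
# seen now contains :activate, dispatched inline with no thread involved
```

Defaulting `async:` to `false` would preserve the legacy behavior and existing tests, while letting callers opt into threaded dispatch.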
next unless interval.positive?

@wait_mutex.synchronize { @resource.wait(@wait_mutex, interval) }
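The line above uses Ruby's standard Mutex/ConditionVariable timed-wait pattern: the worker sleeps for at most `interval` seconds but wakes early if another thread signals the resource. A minimal standalone sketch (the variable names and the 0.5s interval are illustrative, not taken from the SDK):

```ruby
# Timed wait: sleep up to `interval` seconds, but wake early on a signal.
mutex    = Mutex.new
resource = ConditionVariable.new
interval = 0.5 # seconds; illustrative value

woke = false
worker = Thread.new do
  mutex.synchronize do
    # wait releases the mutex, then returns when signaled or when the
    # timeout elapses, reacquiring the mutex either way
    resource.wait(mutex, interval)
    woke = true
  end
end

sleep 0.05 # give the worker a chance to enter the wait
mutex.synchronize { resource.signal } # wake the worker before the interval
worker.join
```

Because `signal` is only delivered to a thread already waiting, production code typically pairs this with a predicate check (e.g. queue length) so a signal sent before the wait is not lost.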
Can we run into a race condition where flush, stop, or process is called from the main thread, so the resource is signaled while we are still processing the queue, and this thread only waits afterward? In that scenario, the thread would wait up to the full interval before processing any events, which may become an issue if the interval is long.
Each of those operations adds an element to the queue. In the case of stop, the process exits. If we are not in the synchronize block when we receive a signal, it is ignored, so this would have to happen during interval calculation. But I could add a check that the queue length is positive before the wait.
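The suggested guard could be sketched as follows. The names `queue`, `wait_mutex`, `resource`, and `interval` mirror the ones in the diff, but the loop body is a simplified illustration, not the actual implementation:

```ruby
# Sketch: skip the timed wait when the queue already holds items, so a
# signal delivered while the worker was busy processing is not followed
# by a full interval-length sleep.
queue      = Queue.new
wait_mutex = Mutex.new
resource   = ConditionVariable.new
interval   = 1.0 # seconds; illustrative

queue << :pending_event # imagine this arrived while we were processing

item =
  if queue.empty?
    # nothing buffered: sleep up to `interval`, waking early on a signal
    wait_mutex.synchronize { resource.wait(wait_mutex, interval) }
    queue.pop unless queue.empty?
  else
    queue.pop # work is already waiting; handle it without sleeping
  end
```

Checking `queue.empty?` before waiting closes the window where a signal sent between processing and waiting would otherwise cost a full interval.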
@thomaszurkan-optimizely Per the docs, the Queue class implements its locking mechanism internally. Aren't we doing the same thing that event_queue.pop in blocking mode would do, i.e., suspend the thread?
@oakbani The Queue class does have a lock, but no timeout. So if no other event came, the queue would never be flushed on the timeout interval.
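The distinction in this exchange: a blocking `Queue#pop` suspends the thread safely but accepts no timeout, so a worker relying on it alone could block forever when no events arrive, and batched events would never be flushed on an interval. The non-blocking form is what allows a fallback to a timed wait (a sketch; the `:queue_was_empty` sentinel is illustrative):

```ruby
queue = Queue.new

# `queue.pop` in blocking mode would suspend this thread indefinitely on
# an empty queue. The non-blocking form raises ThreadError instead,
# letting the caller fall back to a timed ConditionVariable wait so the
# batch can still be flushed when the interval elapses.
result =
  begin
    queue.pop(true) # true => non-blocking pop
  rescue ThreadError
    :queue_was_empty # fall through to a timed flush check here
  end
```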
Summary
Test plan
Pass all existing tests
Issues