(fix): refactor for unicorn threads #230
@@ -30,11 +30,14 @@ def process(user_event)
     log_event = Optimizely::EventFactory.create_log_event(user_event, @logger)

-    begin
-      @event_dispatcher.dispatch_event(log_event)
-      @notification_center&.send_notifications(
-        NotificationCenter::NOTIFICATION_TYPES[:LOG_EVENT],
-        log_event
-      )
-    rescue StandardError => e
-      @logger.log(Logger::ERROR, "Error dispatching event: #{log_event} #{e.message}.")
-    end
+    @notification_center&.send_notifications(NotificationCenter::NOTIFICATION_TYPES[:LOG_EVENT], log_event)
+    Thread.new do
+      begin
+        @event_dispatcher.dispatch_event(log_event)
+      rescue StandardError => e
+        @logger.log(Logger::ERROR, "Error dispatching event: #{log_event} #{e.message}.")
+      end
+    end

Inline review discussion on the added `Thread.new do` line:

Comment: Why are we introducing a thread here? Isn't the Forwarding Event Processor meant to work in the legacy style, dispatching one event at a time on the main thread?

Reply: Dispatching synchronously, one event at a time on the same thread, can't scale, so no one will ever use it. However, we could make this configurable.

Comment: I think this makes sense, but does it make sense to do it in a follow-up PR? It seems like it would break a lot of tests, and that way we can keep just the batching fix in this PR.
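A configurable variant, as floated in the exchange above, could look roughly like the following. This is a minimal sketch, not the SDK's actual API: the `async_dispatch:` keyword, its default, and the class shape are assumptions; only `EventFactory`, `NotificationCenter`, the dispatcher, and the logger calls come from the diff.

```ruby
require 'optimizely' # assumes the optimizely-sdk gem is available

# Hypothetical sketch: same dispatch path as the diff above, but the caller can
# choose between threaded (non-blocking) and legacy inline dispatch.
class ConfigurableForwardingProcessorSketch
  def initialize(event_dispatcher, logger, notification_center = nil, async_dispatch: true)
    @event_dispatcher = event_dispatcher
    @logger = logger
    @notification_center = notification_center
    @async_dispatch = async_dispatch # assumption: not a real SDK option
  end

  def process(user_event)
    log_event = Optimizely::EventFactory.create_log_event(user_event, @logger)
    @notification_center&.send_notifications(
      Optimizely::NotificationCenter::NOTIFICATION_TYPES[:LOG_EVENT], log_event
    )

    if @async_dispatch
      Thread.new { dispatch(log_event) } # dispatch off the caller's thread
    else
      dispatch(log_event) # legacy style: one event at a time, inline
    end
  end

  private

  def dispatch(log_event)
    @event_dispatcher.dispatch_event(log_event)
  rescue StandardError => e
    @logger.log(Logger::ERROR, "Error dispatching event: #{log_event} #{e.message}.")
  end
end
```

Defaulting `async_dispatch` to false would preserve the legacy behaviour and the existing tests, while defaulting it to true matches the change proposed in this PR; keeping the rescue inside the dispatch helper means a failed dispatch is logged the same way on either path.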
Comment: Can we run into a race condition where flush, stop, or process is called from the main thread, so the resource is signaled while we are still processing the queue, and this thread only waits afterwards? In that scenario the thread would wait up to the interval before processing any events, which may become an issue if the interval is long.

Reply: Each of those calls adds an element to the queue. In the case of stop, the process exits, and if we are not in the synchronize block when we receive a signal, it is ignored, so this would have to happen during the interval calculation. But I could add a check that the queue length is positive before the wait.
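The "queue length is positive before the wait" idea could be sketched like this. It is an illustration only, assuming a plain `Queue`, `Mutex`, and `ConditionVariable` with a made-up flush interval, not the batch processor's actual internals.

```ruby
require 'thread'

# Hedged sketch: skip the timed wait when the queue already holds events or a
# flush/stop signal, so a signal raised while the previous batch was being
# processed is handled immediately instead of after a full interval.
class RunLoopSketch
  FLUSH_INTERVAL = 30 # seconds; illustrative value

  def initialize
    @queue = Queue.new
    @mutex = Mutex.new
    @resource = ConditionVariable.new
  end

  def run_once
    @mutex.synchronize do
      # Only block if nothing is pending; a positive queue length means we can
      # drain right away instead of waiting out the interval.
      @resource.wait(@mutex, FLUSH_INTERVAL) if @queue.empty?
    end
    drain
  end

  private

  def drain
    until @queue.empty?
      item = @queue.pop(true) # non-blocking pop; the emptiness check just ran
      # ... dispatch item, or handle it as a flush/stop signal ...
    end
  rescue ThreadError
    nil # another consumer emptied the queue between the check and the pop
  end
end
```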