
Conversation

thomaszurkan-optimizely
Contributor

Summary

  • Refactor to sleep the remaining time before flushing (see the sketch below).
  • Break out process_events into its own method.
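
Roughly, the idea behind the first item, as a simplified, self-contained sketch (the class and names below are illustrative only, not the SDK's actual code): drain the queue, then sleep only the time remaining until the next flush deadline rather than a fixed interval.

```ruby
# Illustrative sketch only: a minimal batcher that sleeps the *remaining* time
# until the flush deadline and breaks queue draining out into its own method.
class BatchingSketch
  def initialize(flush_interval: 30)
    @flush_interval = flush_interval
    @queue = Queue.new
    @batch = []
    @wait_mutex = Mutex.new
    @resource = ConditionVariable.new
    @deadline = Time.now + @flush_interval
  end

  def process(event)
    @queue << event
    @wait_mutex.synchronize { @resource.signal } # wake the run loop early
  end

  def run
    loop do
      if Time.now >= @deadline
        flush
        @deadline = Time.now + @flush_interval
      end

      process_queue

      interval = @deadline - Time.now # seconds left until the next flush
      next unless interval.positive?

      @wait_mutex.synchronize { @resource.wait(@wait_mutex, interval) }
    end
  end

  private

  def process_queue
    @batch << @queue.pop until @queue.empty?
  end

  def flush
    puts "flushing #{@batch.size} events" unless @batch.empty?
    @batch.clear
  end
end
```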

Test plan

Pass all existing tests

Issues

@coveralls

coveralls commented Dec 9, 2019

Coverage Status

Coverage decreased (-0.04%) to 99.493% when pulling 2b1f53e on refactorForUnicorn into 86a7aae on master.

Contributor

@mikeproeng37 mikeproeng37 left a comment


LGTM, but it looks like tests are failing.

@nil_count = 0
@use_pop = false

def process_events
Contributor


nit: Would it be more apt to call this process_queue instead since you can have non-event items in there?

Contributor

@oakbani oakbani left a comment


I have a couple of questions.

NotificationCenter::NOTIFICATION_TYPES[:LOG_EVENT],
log_event
)
Thread.new do
Contributor


Why are we introducing a thread here? Isn't the Forwarding Event Processor meant to work in the legacy style, dispatching one event at a time on the main thread?

Contributor Author


Dispatching synchronously, one event at a time on the same thread, can't scale, so no one will ever use it. However, we could make this configurable.
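
For illustration, making it configurable could look something like this (a sketch only; the `@synchronous` flag and the `EventFactory.create_log_event` call are assumptions on my part, not something this PR adds):

```ruby
# Hypothetical sketch: keep background dispatch as the default, but allow
# callers to opt back into the legacy same-thread behavior via a flag.
def process(user_event)
  log_event = EventFactory.create_log_event(user_event, @logger)

  dispatch = lambda do
    @event_dispatcher.dispatch_event(log_event)
    @notification_center&.send_notifications(
      NotificationCenter::NOTIFICATION_TYPES[:LOG_EVENT],
      log_event
    )
  end

  # @synchronous would be a new constructor option, defaulting to false.
  @synchronous ? dispatch.call : Thread.new(&dispatch)
end
```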

Contributor


I think this makes sense, but would it make sense to do it in a follow-up PR? It seems like it'll break a lot of tests, and that way we can keep just the batching fix in this PR.


next unless interval.positive?

@wait_mutex.synchronize { @resource.wait(@wait_mutex, interval) }
Contributor


Can we run into a race condition where flush, stop, or process is called from the main thread, and hence the resource is signaled while we are still processing the queue? This thread would then wait afterwards, and in that scenario it would wait for up to the full interval before processing any events. That may become an issue if the interval is long.

Contributor Author


Each of those calls adds an element to the queue. In the case of stop, the processor exits. If we are not in the synchronize block when the signal arrives, it is ignored, so this would have to happen during interval calculation time. But I could add a check to see whether the queue length is positive before the wait.
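
Something along these lines, for example (a sketch; it assumes the queue instance variable is `@event_queue`):

```ruby
next unless interval.positive?

# Skip the wait if items arrived while we were draining the queue, so a
# signal sent outside the synchronize block isn't lost for a full interval.
@wait_mutex.synchronize do
  @resource.wait(@wait_mutex, interval) if @event_queue.empty?
end
```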

@oakbani
Contributor

oakbani commented Dec 18, 2019

@thomaszurkan-optimizely As per the docs, the Queue class implements its locking mechanism internally. Aren't we doing the same thing that event_queue.pop in blocking mode would do, i.e. suspend the thread?

@thomaszurkan-optimizely
Contributor Author

@oakbani The Queue class does have a lock, but no timeout, so if no other event ever came in, the queue would never be flushed on the timeout interval.
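
A small standalone illustration of the difference (plain Ruby, not SDK code):

```ruby
queue = Queue.new
mutex = Mutex.new
cond  = ConditionVariable.new
interval = 2 # seconds, illustrative value

# Queue#pop in blocking mode suspends the thread until something is pushed.
# If no further events arrive, it never returns, so a half-full batch would
# never be flushed on the interval.
consumer = Thread.new { queue.pop }

# ConditionVariable#wait accepts a timeout, so the worker wakes up after
# `interval` seconds even without a signal and can flush on schedule.
waiter = Thread.new do
  mutex.synchronize { cond.wait(mutex, interval) }
  puts 'timed out: flush the pending batch'
end

waiter.join
consumer.kill # the blocking pop would otherwise hang forever
```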
