TaskScheduler breaks when 2 tasks with identical timestamps are added #67
pgbsl pushed commits to pgbsl/btm that referenced this issue on Aug 12 and Aug 13, 2016.
If two tasks (e.g. two PoolShrinkingTask instances) are scheduled via the TaskScheduler.addTask() method, and the tasks have an identical executionTime, then the second task will not be added to the scheduler's task set (even though debug logging indicates that it has been scheduled).
Given that tasks are self-scheduling, this means that no further tasks of this type will be scheduled for that resource.
As a concrete example, consider the following scenario: DataSource1 and DataSource2 are both being managed by the Bitronix TM, and the bitronix-task-scheduler thread attempts to schedule pool shrinking for both data sources.
The above log snippet shows the Bitronix logging for scheduling a PoolShrinkingTask for each datasource. Note that the timestamps are identical (the logging timestamps at the start of each line match down to the millisecond). The second line in each log pair shows some diagnostic logging that I added; the boolean value is the result of the call to add on the task scheduler's Set.
This happens due to a combination of the Set implementation (ConcurrentSkipListSet) and the Task object's compareTo implementation, which compares based solely on executionTime. This Set implementation uses the result of compareTo both to order the set and to decide whether the value is already in the Set.
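To illustrate the failure mode, here is a minimal, self-contained sketch (not the actual Bitronix code): a stand-in Task class whose compareTo only looks at executionTime, just as described above. When two such tasks with the same timestamp are added to a ConcurrentSkipListSet, the second add returns false and the task is silently dropped.

```java
import java.util.concurrent.ConcurrentSkipListSet;

public class DuplicateTimestampDemo {
    // Simplified stand-in for the scheduler's Task: compares only on executionTime
    static class Task implements Comparable<Task> {
        final long executionTime;
        final String name;

        Task(long executionTime, String name) {
            this.executionTime = executionTime;
            this.name = name;
        }

        @Override
        public int compareTo(Task other) {
            return Long.compare(this.executionTime, other.executionTime);
        }
    }

    public static void main(String[] args) {
        ConcurrentSkipListSet<Task> tasks = new ConcurrentSkipListSet<>();
        boolean first  = tasks.add(new Task(1000L, "shrink-DataSource1"));
        boolean second = tasks.add(new Task(1000L, "shrink-DataSource2"));
        // ConcurrentSkipListSet treats compareTo == 0 as "already present",
        // so the second task is rejected and the set holds only one element.
        System.out.println(first + " " + second + " size=" + tasks.size());
        // → true false size=1
    }
}
```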
The net result is that, when the duplicate-timestamp issue occurs, we end up with pools configured with a small minPoolSize that never shrink back down again. I assume, but have not verified, that the same holds for transaction timeouts as well.
I'm assuming that this Set implementation is the correct one for this use case, so I think the best approach to fixing this issue is to look at the comparator. The following implementation fixes it by using a discriminator (a UUID) to decide the result when the timestamps are equal.
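A sketch of the tie-breaking comparator described above (this is an illustration of the idea, not the submitted patch; the class and field names are hypothetical): each task instance carries its own random UUID, and compareTo falls back to the UUID only when the timestamps are equal, so two distinct tasks never compare as equal.

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentSkipListSet;

public class TieBreakingTask implements Comparable<TieBreakingTask> {
    private final long executionTime;
    // Per-instance discriminator, used only to break timestamp ties
    private final UUID uuid = UUID.randomUUID();

    public TieBreakingTask(long executionTime) {
        this.executionTime = executionTime;
    }

    @Override
    public int compareTo(TieBreakingTask other) {
        int byTime = Long.compare(this.executionTime, other.executionTime);
        // Equal timestamps no longer collapse into a "duplicate":
        // fall back to comparing the UUIDs, which are distinct per instance
        return byTime != 0 ? byTime : this.uuid.compareTo(other.uuid);
    }

    public static void main(String[] args) {
        ConcurrentSkipListSet<TieBreakingTask> tasks = new ConcurrentSkipListSet<>();
        tasks.add(new TieBreakingTask(1000L));
        tasks.add(new TieBreakingTask(1000L));
        // Both tasks are retained despite identical execution times
        System.out.println("size=" + tasks.size());
        // → size=2
    }
}
```

Note that this intentionally makes compareTo inconsistent with the default equals; that is acceptable here because the set's membership semantics are exactly what we want to change, but it is worth keeping in mind if Task instances are ever compared with equals elsewhere.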