
Make connection pool fair #6488

Closed
wants to merge 8 commits

4 participants

pmahoney, Jonathan Rochkind, Rafael Mendonça França, Yasuo Honda
pmahoney

This is a second attempt at #6416.

It makes the connection pool "fair" with respect to waiting threads. I've done some more measurements here: http://polycrystal.org/2012/05/24/activerecord_connection_pool_fairness.html The patch is also cleaned up compared to the first attempt; the code is much more readable.

It includes some test fixes from @yahonda that this patch triggered (though the failures seem unrelated to the code)

I am still getting test failures, but I see the same failures against master: https://gist.github.com/2788538 And none of these seem related to the connection pool.

Jonathan Rochkind

Awesome. This definitely deals with some troubles i've been having.

  1. We don't need strict fairness: if two connections become
    available at the same time, it's fine if two threads that were
    waiting acquire the connections out of order.

    What keeps you from being strictly fair? Strict fairness (or close to it barring edge cases) would work out even better for me, although this will still be a huge improvement. Per our previous conversation where I think you said that if multiple threads were waiting, #signal would always wake up the one that was waiting the longest (verified in both jruby and mri?) -- what prevents strict fairness from being implemented? (The kind of fairness enforced here is still much better, and possibly good enough even for my use case, just curious if it can be improved yet further).

    • I notice that even though your comments say you don't care about strict fairness -- your test actually does verify strict order with the order check, no? Are the comments outdated, and is strict fairness really being guaranteed by the test?
  2. What's the @cond.broadcast needed for? What was failing without it that required this? Related to the first point above? I ask because previous implementations (3-2-stable as well as master) did not use a broadcast, and that didn't seem to cause any problems -- the threads that ended up waiting indefinitely in master were not caused by a lack of broadcast. They were caused by the situation you fixed with @num_waiting and your semi-fair guarantee, as well as by code that didn't keep track of total time waiting, so threads would keep loop-waiting indefinitely when other threads 'stole' connections.

pmahoney

I think you said that if multiple threads were waiting, #signal would always wake up the one that was waiting the longest (verified in both jruby and mri?) -- what prevents strict fairness from being implemented?

Yes, that is true. By "not strict" I mean that if two connections become available at the same time, and thread1 and thread2 are waiting in line, the order in which they re-acquire the monitor is not guaranteed (but thread3 will not be able to "steal" because @num_waiting check forces it to wait).

What's the @cond.broadcast needed for?

I don't see this in the diff. There was a broadcast in the original patch, but this new one should have removed it, unless I missed one.

Jonathan Rochkind

Aha, so if two connections become available more or less at once, it's not guaranteed whether thread1 or thread2 goes first, but they are both guaranteed to get a connection ahead of thread3? If that's so, that's plenty good enough.

I don't see this in the diff. There was a broadcast in the original patch,

here is where I see it. I see now it's actually only in the #reap implementation. I don't trust the semantics of the reap implementation already, and lack of fairness when reaping (which, if it works right, ought to only apply to code that violates AR's contract in the first place) is not too much of a concern.

But I think it could be replaced by counting up how many times the reaper reaped, and doing that many signals instead of a broadcast, would that be better?
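The suggestion above (count the reaped connections and signal that many times, rather than broadcasting) could be sketched roughly like this. This is a hypothetical illustration, not code from the patch; all names here are made up, and the final check is timing-based:

```ruby
require 'monitor'

# Hypothetical sketch: signal once per reaped connection instead of
# broadcasting, so only as many waiters wake as there are freed slots.
lock = Monitor.new
cond = lock.new_cond
woken = []

waiters = 3.times.map do |i|
  Thread.new do
    lock.synchronize do
      cond.wait          # block until signaled
      woken << i
    end
  end
end
Thread.pass until waiters.all? { |t| t.status == "sleep" }

reaped = 2               # pretend the reaper removed two stale connections
lock.synchronize { reaped.times { cond.signal } }

sleep 0.2                # give the two signaled threads time to run
puts woken.size          # => 2; the third thread is still waiting
waiters.each(&:kill)
waiters.each(&:join)
```

With a broadcast here, all three threads would wake, and the one without a connection would have to go back to waiting (or time out).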

Jonathan Rochkind

I'm actually still confused about the fairness.

  • If thread1, thread2, and thread3 are all waiting (yep, thread3 is already waiting too)
  • and then two connections become avail at more or less the same time
  • are both thread1 and thread2 guaranteed to get a connection before thread3 (which was also waiting?)

your order check in the test seems to guarantee this in fact is true, I think?

pmahoney

Ah. It was removed here. The combined diff for the pull request is better: https://github.com/rails/rails/pull/6488/files

... counting up how many times the reaper reaped ...

That's what @available.add checkout_new_connection if @available.any_waiting? (in #remove which is called by #reap) is supposed to do, though I admit I have not done any testing of the reaper. The reaper attempts to remove stale connections, so I attempt to then create new ones to replace those that have been removed. But what happens if someone checks in a presumed-leaked connection that has been removed? Ugh.

Jonathan Rochkind

ugh, sorry, ignore on broadcast, I see I wasn't looking at the final version, which has no broadcast at all in the reap. okay then.

still curious about nature of guarantees, but this is a good patch regardless, I think.

I actually run into this problem in MRI, not JRuby -- I'll try to run your test in MRI 1.9.3 next week, because I'm curious -- I fully expect, based on my experiences, that it will show a similar benefit in MRI.

Jonathan Rochkind

I am suspicious of the reaper in general, personally, although @tenderlove may disagree.

But I personally don't think the reaper does anything particularly useful atm, so making sure it does what it does properly for fairness... I dunno.

The reaper right now will reap a connection only if it is still checked out, was last checked out more than @dead_connection_timeout seconds ago (default 5), and has been closed by the actual rdbms (I think that's what active? checks?)

Most rdbms have a very long timeout for idleness, MySQL by default (with AR mysql2) will wait hours before closing an idle connection. Which is in fact ordinarily what you want, I think?

So I'm not sure how the reaper does anything useful -- it won't reap a 'leaked' connection, under normal conditions, for many minutes or even hours after it was leaked.

I may be missing something? Maybe you're expected to significantly reduce the rdbm's idle timeout to make use of the reaper?

There are of course times when a connection may be closed because of network or server problems or an rdbms restart, unrelated to leaked connections. But the reaper's not meant to deal with that, I don't think, and probably isn't the right way to anyway. (There's already an automatic_reconnect key for some of AR's adapters, although its semantics aren't entirely clear.)

Anyhow, this is really a different ticket, I just mention it before you dive into making things 'right' with the reaper, and because you seem to understand this stuff well and I hadn't gotten anyone else to consider or confirm or deny my suspicions about current reaper func yet. :)

pmahoney

I'm actually still confused about the fairness.

If thread1, thread2, and thread3 are all waiting (yep, thread3 is already waiting too)
and then two connections become avail at more or less the same time
are both thread1 and thread2 guaranteed to get a connection before thread3 (which was also waiting?)

your order check in the test seems to guarantee this in fact is true, I think?

The test_checkout_fairness_by_group is a better test of this. What happens (I think) is that a ConditionVariable does guarantee that the longest waiting thread is the first to wake up. But the first thing a thread does after being woken up is re-acquire the monitor. It's this second action that is a free-for-all. So, yes, thread1 and thread2 will get the connection ahead of thread3 in your example, because thread3 will not be woken up.
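The "thread3 will not be woken up" guarantee comes from the strictly-greater-than check. A minimal, single-threaded sketch of that guard (GuardedPool is invented for illustration; the real logic lives in Queue#can_remove_no_wait? above, where @num_waiting is updated by the waiting threads themselves):

```ruby
# Sketch of the no-stealing guard: a newly arriving thread may take a
# connection without waiting only when there are strictly more
# available connections than threads already waiting for one.
class GuardedPool
  def initialize
    @queue = []        # available connections
    @num_waiting = 0   # threads currently blocked waiting
  end

  def add(conn)
    @queue.push(conn)
  end

  # Stand-in for the bookkeeping the real wait_poll does around @cond.wait.
  def waiters=(n)
    @num_waiting = n
  end

  # Mirrors Queue#no_wait_poll: succeed only if taking the head cannot
  # starve a thread that is already in line.
  def no_wait_poll
    @queue.shift if @queue.size > @num_waiting
  end
end

pool = GuardedPool.new
pool.add(:conn_a)

pool.waiters = 1
p pool.no_wait_poll  # => nil; the lone connection is reserved for the waiter

pool.waiters = 0
p pool.no_wait_poll  # => :conn_a; no one is waiting, so a newcomer may take it
```

So even if a fresh thread wins the race to the monitor, the guard sends it into the wait queue behind the existing waiters rather than letting it grab their connection.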

pmahoney

I just mention it before you dive into making things 'right' with the reaper

I was planning on just ignoring that :-P

pmahoney

@jrochkind Oh, and thanks a bunch for taking a look at this. I greatly appreciate the second set of eyes.

Jonathan Rochkind

Okay, I think i understand the fairness issue, and it seems pretty damn good. Def understand the issue where it's unpredictable which thread will get the lock first -- that's what requires the @num_waiting in the first place. And I understand how that guards against a thread that wasn't waiting at all 'stealing' a connection from one that was. (Yes, I have had this problem too).

I think I'm understanding right that your code will be pretty close to fair -- if there are multiple threads waiting, there's no way the oldest waiter will get continually bumped in favor of newer waiters. The issue arises only when N>1 connections are checked in at very close to the same time, and even then the first N waiters will all get connections before the N+1st and subsequent waiters. That seems totally good enough.

On the reaper.... man, looking at the mysql2 adapter code specifically, I don't think the reaper will ever reap anything. I can't figure out how a connection could ever not be active?, even if the rdbms has closed it for idleness -- active? is implemented per-adapter, but in mysql2 it seems to me a connection will always be active? unless manually disconnected.

That's really a different issue and for @tenderlove to consider I guess, since he added the code. Personally, I would not ever use the reaper at all, which fortunately is easy to do by making sure reap_frequency is unset.

Jonathan Rochkind

@pmahoney thanks a lot for doing it, man! I've been struggling with this stuff for a while, and not managing to solve it, and not managing to find anyone else to review my ideas/code for AR either! (I am not a committer, in case that was not clear, def not).

Concurrency is def confusing.

Rafael Mendonça França
Owner

@pmahoney I think you will need to squash your commits.

@tenderlove could you review this one?

pmahoney closed this
pmahoney

Here's a mostly squashed version: #6492

pmahoney referenced this pull request
Merged

Fair connection pool2 #6492


Showing 8 unique commits by 2 authors.

May 25, 2012

- 8d04939 pmahoney: Make connection pool fair with respect to waiting threads.
  (Conflicts: activerecord/lib/active_record/connection_adapters/abstract/connection_pool.rb)
- b767418 pmahoney: Refactor fair queue into Queue class. Clean up checkout method. Clean up fairness tests.
- f3e2add pmahoney: Clean up connection pool fair Queue class.
- e409cd1 pmahoney: Restore Queue#any_waiting deleted by mistake.
- 873b0e2 Yasuo Honda: Cache metadata in advance to avoid extra sql statements while testing. Reason: if metadata is not cached, extra sql statements will be executed, which causes test failures with assert_queries().
- 3f74ba3 pmahoney: Cache metadata in advance to avoid extra sql statements while testing. Reason: if metadata is not cached, extra sql statements will be executed, which causes test failures with assert_queries().
- ab54a4a pmahoney: Correct documentation re: which exception is raised.
- 2884c6b pmahoney: Use @checkout_timeout rather than obsolete @timeout.
activerecord/lib/active_record/connection_adapters/abstract/connection_pool.rb (211 lines changed)
@@ -2,7 +2,6 @@
 require 'monitor'
 require 'set'
 require 'active_support/core_ext/module/deprecation'
-require 'timeout'
 
 module ActiveRecord
   # Raised when a connection could not be obtained within the connection
@@ -70,6 +69,131 @@ module ConnectionAdapters
     #   after which the Reaper will consider a connection reapable. (default
     #   5 seconds).
     class ConnectionPool
+      # Threadsafe, fair, FIFO queue.  Meant to be used by ConnectionPool
+      # with which it shares a Monitor.  But could be a generic Queue.
+      #
+      # The Queue in stdlib's 'thread' could replace this class except
+      # stdlib's doesn't support waiting with a timeout.
+      class Queue
+        def initialize(lock = Monitor.new)
+          @lock = lock
+          @cond = @lock.new_cond
+          @num_waiting = 0
+          @queue = []
+        end
+
+        # Test if any threads are currently waiting on the queue.
+        def any_waiting?
+          synchronize do
+            @num_waiting > 0
+          end
+        end
+
+        # Return the number of threads currently waiting on this
+        # queue.
+        def num_waiting
+          synchronize do
+            @num_waiting
+          end
+        end
+
+        # Add +element+ to the queue.  Never blocks.
+        def add(element)
+          synchronize do
+            @queue.push element
+            @cond.signal
+          end
+        end
+
+        # If +element+ is in the queue, remove and return it, or nil.
+        def delete(element)
+          synchronize do
+            @queue.delete(element)
+          end
+        end
+
+        # Remove all elements from the queue.
+        def clear
+          synchronize do
+            @queue.clear
+          end
+        end
+
+        # Remove the head of the queue.
+        #
+        # If +timeout+ is not given, remove and return the head of the
+        # queue if the number of available elements is strictly
+        # greater than the number of threads currently waiting (that
+        # is, don't jump ahead in line).  Otherwise, return nil.
+        #
+        # If +timeout+ is given, block if there is no element
+        # available, waiting up to +timeout+ seconds for an element to
+        # become available.
+        #
+        # Raises:
+        # - ConnectionTimeoutError if +timeout+ is given and no element
+        #   becomes available after +timeout+ seconds.
+        def poll(timeout = nil)
+          synchronize do
+            if timeout
+              no_wait_poll || wait_poll(timeout)
+            else
+              no_wait_poll
+            end
+          end
+        end
+
+        private
+
+        def synchronize(&block)
+          @lock.synchronize(&block)
+        end
+
+        # Test if the queue currently contains any elements.
+        def any?
+          !@queue.empty?
+        end
+
+        # A thread can remove an element from the queue without
+        # waiting if and only if the number of currently available
+        # connections is strictly greater than the number of waiting
+        # threads.
+        def can_remove_no_wait?
+          @queue.size > @num_waiting
+        end
+
+        # Removes and returns the head of the queue if possible, or nil.
+        def remove
+          @queue.shift
+        end
+
+        # Remove and return the head of the queue if the number of
+        # available elements is strictly greater than the number of
+        # threads currently waiting.  Otherwise, return nil.
+        def no_wait_poll
+          remove if can_remove_no_wait?
+        end
+
+        # Waits on the queue up to +timeout+ seconds, then removes and
+        # returns the head of the queue.
+        def wait_poll(timeout)
+          @num_waiting += 1
+
+          t0 = Time.now
+          elapsed = 0
+          loop do
+            @cond.wait(timeout - elapsed)
+
+            return remove if any?
+
+            elapsed = Time.now - t0
+            raise ConnectionTimeoutError if elapsed >= timeout
+          end
+        ensure
+          @num_waiting -= 1
+        end
+      end
+
       # Every +frequency+ seconds, the reaper will call +reap+ on +pool+.
       # A reaper instantiated with a nil frequency will never reap the
       # connection pool.
@@ -100,21 +224,6 @@ def run
       attr_accessor :automatic_reconnect, :checkout_timeout, :dead_connection_timeout
       attr_reader :spec, :connections, :size, :reaper
 
-      class Latch # :nodoc:
-        def initialize
-          @mutex = Mutex.new
-          @cond  = ConditionVariable.new
-        end
-
-        def release
-          @mutex.synchronize { @cond.broadcast }
-        end
-
-        def await
-          @mutex.synchronize { @cond.wait @mutex }
-        end
-      end
-
       # Creates a new ConnectionPool object. +spec+ is a ConnectionSpecification
       # object which describes database connection information (e.g. adapter,
       # host name, username, password, etc), as well as the maximum size for
@@ -137,9 +246,18 @@ def initialize(spec)
         # default max pool size to 5
         @size = (spec.config[:pool] && spec.config[:pool].to_i) || 5
 
-        @latch = Latch.new
         @connections         = []
         @automatic_reconnect = true
+
+        @available = Queue.new self
+      end
+
+      # Hack for tests to be able to add connections.  Do not call outside of tests.
+      def insert_connection_for_test!(c) #:nodoc:
+        synchronize do
+          @connections << c
+          @available.add c
+        end
       end
 
       # Retrieve the connection associated with the current thread, or call
@@ -197,6 +315,7 @@ def disconnect!
             conn.disconnect!
           end
           @connections = []
+          @available.clear
         end
       end
 
@@ -211,6 +330,10 @@ def clear_reloadable_connections!
           @connections.delete_if do |conn|
             conn.requires_reloading?
           end
+          @available.clear
+          @connections.each do |conn|
+            @available.add conn
+          end
         end
       end
 
@@ -234,23 +357,10 @@ def clear_stale_cached_connections! # :nodoc:
       # Raises:
       # - PoolFullError: no connection can be obtained from the pool.
       def checkout
-        loop do
-          # Checkout an available connection
-          synchronize do
-            # Try to find a connection that hasn't been leased, and lease it
-            conn = connections.find { |c| c.lease }
-
-            # If all connections were leased, and we have room to expand,
-            # create a new connection and lease it.
-            if !conn && connections.size < size
-              conn = checkout_new_connection
-              conn.lease
-            end
-
-            return checkout_and_verify(conn) if conn
-          end
-
-          Timeout.timeout(@checkout_timeout, PoolFullError) { @latch.await }
+        synchronize do
+          conn = acquire_connection
+          conn.lease
+          checkout_and_verify(conn)
         end
       end
 
@@ -266,8 +376,9 @@ def checkin(conn)
           end
 
           release conn
+
+          @available.add conn
         end
-        @latch.release
       end
 
       # Remove a connection from the connection pool.  The connection will
@@ -275,12 +386,14 @@ def checkin(conn)
       def remove(conn)
         synchronize do
           @connections.delete conn
+          @available.delete conn
 
           # FIXME: we might want to store the key on the connection so that removing
           # from the reserved hash will be a little easier.
           release conn
+
+          @available.add checkout_new_connection if @available.any_waiting?
         end
-        @latch.release
       end
 
       # Removes dead connections from the pool.  A dead connection can occur
@@ -293,11 +406,35 @@ def reap
             remove conn if conn.in_use? && stale > conn.last_use && !conn.active?
           end
         end
-        @latch.release
       end
 
       private
 
+      # Acquire a connection by one of 1) immediately removing one
+      # from the queue of available connections, 2) creating a new
+      # connection if the pool is not at capacity, 3) waiting on the
+      # queue for a connection to become available.
+      #
+      # Raises:
+      # - PoolFullError if a connection could not be acquired (FIXME:
+      #   why not ConnectionTimeoutError?)
+      def acquire_connection
+        if conn = @available.poll
+          conn
+        elsif @connections.size < @size
+          checkout_new_connection
+        else
+          t0 = Time.now
+          begin
+            @available.poll(@checkout_timeout)
+          rescue ConnectionTimeoutError
+            msg = 'could not obtain a database connection within %0.3f seconds (waited %0.3f seconds)' %
+              [@checkout_timeout, Time.now - t0]
+            raise PoolFullError, msg
+          end
+        end
+      end
+
       def release(conn)
         thread_id = if @reserved_connections[current_connection_id] == conn
           current_connection_id
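For readers skimming the diff, the new Queue boils down to the following condensed, runnable sketch. It reuses the patch's own structure (no_wait_poll / wait_poll), but the class name FairQueue and a local TimeoutError stand in for the Rails names so the sketch runs on its own:

```ruby
require 'monitor'

# Condensed sketch of the fair Queue this patch adds to ConnectionPool.
# FairQueue and TimeoutError are local stand-ins for the Rails names.
class FairQueue
  class TimeoutError < StandardError; end

  def initialize(lock = Monitor.new)
    @lock = lock
    @cond = @lock.new_cond
    @num_waiting = 0
    @queue = []
  end

  # Add an element and wake the longest-waiting thread, if any.
  def add(element)
    @lock.synchronize do
      @queue.push element
      @cond.signal
    end
  end

  # Without a timeout: take the head only if doing so cannot starve a
  # thread that is already waiting.  With a timeout: wait up to
  # +timeout+ seconds, FIFO, then raise TimeoutError.
  def poll(timeout = nil)
    @lock.synchronize do
      timeout ? (no_wait_poll || wait_poll(timeout)) : no_wait_poll
    end
  end

  private

  def no_wait_poll
    @queue.shift if @queue.size > @num_waiting
  end

  def wait_poll(timeout)
    @num_waiting += 1
    t0 = Time.now
    elapsed = 0
    loop do
      @cond.wait(timeout - elapsed)
      return @queue.shift unless @queue.empty?
      elapsed = Time.now - t0
      raise TimeoutError if elapsed >= timeout
    end
  ensure
    @num_waiting -= 1
  end
end

q = FairQueue.new
q.add(:conn)
p q.poll            # => :conn
p q.poll            # => nil (queue empty, no timeout given)
begin
  q.poll(0.05)      # no element arrives, so this times out after ~50ms
rescue FairQueue::TimeoutError
  puts 'timed out'
end
```

Note the timed and untimed paths share the same no_wait_poll guard, which is what keeps a newly arriving thread from jumping ahead of threads already blocked in wait_poll.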
activerecord/test/cases/associations/eager_test.rb (7 lines changed)
@@ -962,6 +962,10 @@ def test_eager_loading_with_order_on_joined_table_preloads
   end
 
   def test_eager_loading_with_conditions_on_joined_table_preloads
+    # cache metadata in advance to avoid extra sql statements executed while testing
+    Tagging.first
+    Tag.first
+
     posts = assert_queries(2) do
       Post.scoped(:select => 'distinct posts.*', :includes => :author, :joins => [:comments], :where => "comments.body like 'Thank you%'", :order => 'posts.id').all
     end
@@ -1011,6 +1015,9 @@ def test_eager_loading_with_select_on_joined_table_preloads
 
   def test_eager_loading_with_conditions_on_join_model_preloads
     Author.columns
+
+    # cache metadata in advance to avoid extra sql statements executed while testing
+    AuthorAddress.first
 
     authors = assert_queries(2) do
       Author.scoped(:includes => :author_address, :joins => :comments, :where => "posts.title like 'Welcome%'").all
activerecord/test/cases/associations/has_many_associations_test.rb (3 lines changed)
@@ -1339,6 +1339,9 @@ def test_custom_primary_key_on_new_record_should_fetch_with_query
     author = Author.new(:name => "David")
     assert !author.essays.loaded?
 
+    # cache metadata in advance to avoid extra sql statements executed while testing
+    Essay.first
+
     assert_queries 1 do
       assert_equal 1, author.essays.size
     end
activerecord/test/cases/connection_adapters/abstract_adapter_test.rb (2 lines changed)
@@ -36,7 +36,7 @@ def test_expire_mutates_in_use
 
       def test_close
         pool = ConnectionPool.new(ConnectionSpecification.new({}, nil))
-        pool.connections << adapter
+        pool.insert_connection_for_test! adapter
         adapter.pool = pool
 
         # Make sure the pool marks the connection in use
activerecord/test/cases/connection_pool_test.rb (104 lines changed)
@@ -200,6 +200,110 @@ def test_checkout_behaviour
         end.join
       end
 
+      # The connection pool is "fair" if threads waiting for
+      # connections receive them in the order in which they began
+      # waiting.  This ensures that we don't time out one HTTP request
+      # even while well under capacity in a multi-threaded environment
+      # such as a Java servlet container.
+      #
+      # We don't need strict fairness: if two connections become
+      # available at the same time, it's fine if two threads that were
+      # waiting acquire the connections out of order.
+      #
+      # Thus this test prepares waiting threads and then trickles in
+      # available connections slowly, ensuring the wakeup order is
+      # correct in this case.
+      def test_checkout_fairness
+        @pool.instance_variable_set(:@size, 10)
+        expected = (1..@pool.size).to_a.freeze
+        # check out all connections so our threads start out waiting
+        conns = expected.map { @pool.checkout }
+        mutex = Mutex.new
+        order = []
+        errors = []
+
+        threads = expected.map do |i|
+          t = Thread.new {
+            begin
+              conn = @pool.checkout # never checked back in
+              mutex.synchronize { order << i }
+            rescue => e
+              mutex.synchronize { errors << e }
+            end
+          }
+          Thread.pass until t.status == "sleep"
+          t
+        end
+
+        # this should wake up the waiting threads one by one in order
+        conns.each { |conn| @pool.checkin(conn); sleep 0.1 }
+
+        threads.each(&:join)
+
+        raise errors.first if errors.any?
+
+        assert_equal(expected, order)
+      end
+
+      # As mentioned in #test_checkout_fairness, we don't care about
+      # strict fairness.  This test creates two groups of threads:
+      # group1 whose members all start waiting before any thread in
+      # group2.  Enough connections are checked in to wake up all
+      # group1 threads, and the fact that only group1 and no group2
+      # threads acquired a connection is enforced.
+      def test_checkout_fairness_by_group
+        @pool.instance_variable_set(:@size, 10)
+        # take all the connections
+        conns = (1..10).map { @pool.checkout }
+        mutex = Mutex.new
+        successes = []    # threads that successfully got a connection
+        errors = []
+
+        make_thread = proc do |i|
+          t = Thread.new {
+            begin
+              conn = @pool.checkout # never checked back in
+              mutex.synchronize { successes << i }
+            rescue => e
+              mutex.synchronize { errors << e }
+            end
+          }
+          Thread.pass until t.status == "sleep"
+          t
+        end
+
+        # all group1 threads start waiting before any in group2
+        group1 = (1..5).map(&make_thread)
+        group2 = (6..10).map(&make_thread)
+
+        # checkin n connections back to the pool
+        checkin = proc do |n|
+          n.times do
+            c = conns.pop
+            @pool.checkin(c)
+          end
+        end
+
+        checkin.call(group1.size)         # should wake up all group1
+
+        loop do
+          sleep 0.1
+          break if mutex.synchronize { (successes.size + errors.size) == group1.size }
+        end
+
+        winners = mutex.synchronize { successes.dup }
+        checkin.call(group2.size)         # should wake up everyone remaining
+
+        group1.each(&:join)
+        group2.each(&:join)
+
+        assert_equal((1..group1.size).to_a, winners.sort)
+
+        if errors.any?
+          raise errors.first
+        end
+      end
+
       def test_automatic_reconnect=
         pool = ConnectionPool.new ActiveRecord::Base.connection_pool.spec
         assert pool.automatic_reconnect