
Pools insufficient demand to initialize #23

Open
LupusUmbrae opened this issue Mar 16, 2018 · 9 comments

@LupusUmbrae

Hello,

We're having an odd issue when using the pool (see debug output below).

After a while, database queries simply stop running (VisualVM confirms there is no JDBC activity other than the database ping). It looks as though the pool is getting stuck, but we're really not sure how or why. It does appear to recover after a period of time.

We are currently trying to reproduce this in a small, simple test case.

This is the output we get when the database activity stops:

09:51:44.065 [mcl-4] DEBUG org.davidmoten.rx.pool.MemberSingle - subscribed
09:51:44.065 [mcl-4] DEBUG org.davidmoten.rx.pool.MemberSingle - drain called
09:51:44.065 [mcl-4] DEBUG org.davidmoten.rx.pool.MemberSingle - drain loop starting
09:51:44.065 [mcl-4] DEBUG org.davidmoten.rx.pool.MemberSingle - requested=2
09:51:44.065 [mcl-4] DEBUG org.davidmoten.rx.pool.MemberSingle - poll of available members returns null
09:51:44.065 [mcl-4] DEBUG org.davidmoten.rx.pool.MemberSingle - scheduling member creation
09:51:44.065 [mcl-4] DEBUG org.davidmoten.rx.pool.MemberSingle - poll of available members returns null
09:51:44.065 [mcl-4] DEBUG org.davidmoten.rx.pool.MemberSingle - insufficient demand to initialize DecoratingMember [value=null]

This is what we normally get when things are working:

09:51:44.065 [mcl-1] DEBUG org.davidmoten.rx.pool.MemberSingle - subscribed
09:51:44.065 [mcl-1] DEBUG org.davidmoten.rx.pool.MemberSingle - drain called
09:51:44.065 [mcl-1] DEBUG org.davidmoten.rx.pool.MemberSingle - drain loop starting
09:51:44.065 [mcl-1] DEBUG org.davidmoten.rx.pool.MemberSingle - requested=1
09:51:44.065 [mcl-1] DEBUG org.davidmoten.rx.pool.MemberSingle - poll of available members returns DecoratingMember [value=conn21: url=jdbc:h2:tcp://localhost:9123/Primary-SCDBSIM user=SA]
09:51:44.065 [mcl-1] DEBUG org.davidmoten.rx.pool.MemberSingle - trying to emit member
09:51:44.065 [mcl-1] DEBUG org.davidmoten.rx.pool.MemberSingle - schedule.now=1521193904065, lastCheck=1521193904060
09:51:44.065 [mcl-1] DEBUG org.davidmoten.rx.pool.MemberSingle - no health check required for DecoratingMember [value=conn21: url=jdbc:h2:tcp://localhost:9123/dbname user=SA]

Pool configuration

Pools.nonBlocking().url(url).scheduler(Schedulers.from(executorService))
                .maxIdleTime(0, TimeUnit.MINUTES).healthCheck(c -> {
                    try {
                        return c.prepareStatement("select 1").execute();
                    } catch (SQLException e) {
                        LOG.debug("SQLException Occurred whilst invoking a health check on the database", e);
                    }
                    return false;
                }).idleTimeBeforeHealthCheck(1, TimeUnit.SECONDS).maxPoolSize(100).build();
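As a side note, the health check above never closes its PreparedStatement, so it leaks one statement per check. A hedged sketch of a leak-free variant in plain JDBC (the class and method names here are illustrative, not part of the library):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Sketch only: same health-check semantics as the lambda above, but
// try-with-resources guarantees the PreparedStatement is closed.
public class HealthCheck {
    public static boolean healthy(Connection c) {
        try (PreparedStatement ps = c.prepareStatement("select 1")) {
            // execute() returns true when the statement produced a ResultSet
            return ps.execute();
        } catch (SQLException e) {
            // a failed check marks the connection as unhealthy
            return false;
        }
    }
}
```

The lambda passed to healthCheck(...) could simply delegate to HealthCheck.healthy.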

I have tried adjusting the above idle times and pool size, but it doesn't appear to have any effect.

The executor service has 10 threads.

Any help/thoughts would be appreciated. We'll keep working to get you a test case for it as well.

Note: we are using 0.1-RC19, so we're going to grab the latest version and test again.

@LupusUmbrae
Author

Just tried with RC32 and got the same problem:

10:34:14.668 [mcl-9] DEBUG org.davidmoten.rx.jdbc.SqlInfo - sqlAfterSubs=SELECT TOP 1 table1.col AS "col", at.col1, at.col2, sa.col3 FROM schema.table2 AS at INNER JOIN schema.table3 AS bm  ON at.col1=bm.Name  INNER JOIN schema.table1 AS table1  ON bm.Name=table1.col  INNER JOIN schema.table2 AS sa  ON at.col2=sa.col2   WHERE table1.col=? AND sa.col3=?

10:34:14.668 [mcl-9] DEBUG org.davidmoten.rx.pool.MemberSingle - subscribed
10:34:14.668 [mcl-9] DEBUG org.davidmoten.rx.pool.MemberSingle - drain called
10:34:14.668 [mcl-9] DEBUG org.davidmoten.rx.pool.MemberSingle - drain loop starting
10:34:14.668 [mcl-9] DEBUG org.davidmoten.rx.pool.MemberSingle - requested=4
10:34:14.668 [mcl-9] DEBUG org.davidmoten.rx.pool.MemberSingle - poll of available members returns null

@davidmoten
Owner

Thanks for the report. I'll have a look soon, keeping an eye out for possible stalls.

davidmoten added a commit that referenced this issue Mar 20, 2018
@davidmoten
Owner

I've

  • reduced complexity by grouping requested with the observers array to ensure they are changed together atomically
  • called drain() after removal of an observer from observers to ensure no stall occurs

Please give 0.1-RC33 a go (available on Maven Central in about 10 minutes).
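The first of those fixes can be sketched in miniature: keep the counter and the observer array in one immutable holder so a single compareAndSet swaps both together. This is an illustrative stand-in, not rxjava2-jdbc's actual internals:

```java
import java.util.Arrays;
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch, not the library's real code: `requested` and the
// observer array live in one immutable State, so one compareAndSet updates
// both together and they can never be observed out of sync.
public class AtomicStateDemo {

    static final class State {
        final long requested;
        final String[] observers; // stand-in for the real observer type
        State(long requested, String[] observers) {
            this.requested = requested;
            this.observers = observers;
        }
    }

    private static final AtomicReference<State> STATE =
            new AtomicReference<>(new State(0, new String[0]));

    static State snapshot() {
        return STATE.get();
    }

    static void addObserverAndRequest(String observer, long n) {
        while (true) { // classic CAS retry loop
            State current = STATE.get();
            String[] next = Arrays.copyOf(current.observers, current.observers.length + 1);
            next[current.observers.length] = observer;
            if (STATE.compareAndSet(current, new State(current.requested + n, next))) {
                return; // both fields changed in one atomic step
            }
        }
    }

    public static void main(String[] args) {
        addObserverAndRequest("obs1", 2);
        addObserverAndRequest("obs2", 1);
        State s = snapshot();
        System.out.println(s.requested + " requested, " + s.observers.length + " observers");
    }
}
```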

@davidmoten
Owner

Ah, hold off on RC33, I don't think I finished the job.

@davidmoten
Owner

Yep, finished the review (just checking the insufficient demand part). Please try 0.1-RC33.

@LupusUmbrae
Author

This doesn't seem to have fixed our issue. However, I've had a closer look and it appears the issue only occurs if we use a blocking call somewhere down the line.

In this case we're calling a method that subsequently runs several hundred database queries and eventually calls a blockingForEach. If we switch this to a plain forEach it appears to run OK (we're testing that now).

With the blocking operation we get the following output:

10:38:03.272 [TP-2] DEBUG org.davidmoten.rx.jdbc.Util - closing org.davidmoten.rx.jdbc.pool.internal.ConnectionNonBlockingMemberPreparedStatement@53504604
10:38:03.272 [TP-2] DEBUG org.davidmoten.rx.jdbc.Util - closing org.davidmoten.rx.jdbc.pool.internal.PooledConnection@16cfcd5a
10:38:03.272 [TP-2] DEBUG org.davidmoten.rx.pool.MemberSingle - checking in DecoratingMember [value=conn3: url=jdbc:h2:tcp://localhost:9123/Primary-SCDBSIM user=SA]
10:38:03.272 [TP-2] DEBUG org.davidmoten.rx.pool.MemberSingle - drain called
10:38:03.272 [TP-2] DEBUG org.davidmoten.rx.pool.MemberSingle - drain loop starting
10:38:03.272 [TP-2] DEBUG org.davidmoten.rx.pool.MemberSingle - requested=7
10:38:03.272 [TP-2] DEBUG org.davidmoten.rx.pool.MemberSingle - poll of available members returns DecoratingMember [value=conn3: url=jdbc:h2:tcp://localhost:9123/Primary-SCDBSIM user=SA]
10:38:03.272 [TP-2] DEBUG org.davidmoten.rx.pool.MemberSingle - trying to emit member
10:38:03.272 [TP-2] DEBUG org.davidmoten.rx.pool.MemberSingle - schedule.now=1521628683272, lastCheck=1521628683272
10:38:03.272 [TP-2] DEBUG org.davidmoten.rx.pool.MemberSingle - no health check required for DecoratingMember [value=conn3: url=jdbc:h2:tcp://localhost:9123/Primary-SCDBSIM user=SA]
10:38:03.272 [TP-2] DEBUG org.davidmoten.rx.pool.MemberSingle - poll of available members returns null
10:38:03.272 [TP-2] DEBUG org.davidmoten.rx.jdbc.Util - closing rs97: null
10:38:03.286 [RxCachedThreadScheduler-1] DEBUG org.davidmoten.rx.pool.MemberSingle - subscribed
10:38:03.286 [RxCachedThreadScheduler-1] DEBUG org.davidmoten.rx.pool.MemberSingle - drain called
10:38:03.286 [RxCachedThreadScheduler-1] DEBUG org.davidmoten.rx.pool.MemberSingle - drain loop starting
10:38:03.286 [RxCachedThreadScheduler-1] DEBUG org.davidmoten.rx.pool.MemberSingle - requested=7
10:38:03.286 [RxCachedThreadScheduler-1] DEBUG org.davidmoten.rx.pool.MemberSingle - poll of available members returns null
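The hazard described above can be reproduced in miniature without RxJava. In this plain java.util.concurrent sketch (all names hypothetical), a task running on a single-threaded pool's only worker blocks waiting for work that can only run on that same worker, so nothing makes progress:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Miniature of the blockingForEach hazard: a task running ON the pool's only
// worker thread blocks waiting for another task queued on that same thread.
public class BlockingStallDemo {

    static String demo() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        try {
            Future<String> outer = pool.submit(() -> {
                // The inner task is queued but can never start: the single
                // worker is busy running us, and we are about to block on it.
                Future<String> inner = pool.submit(() -> "inner done");
                try {
                    return inner.get(500, TimeUnit.MILLISECONDS); // blocks, then times out
                } catch (TimeoutException e) {
                    return "stalled";
                }
            });
            return outer.get();
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```

In a real reactive pipeline the stall can be permanent rather than a timeout; replacing the blocking wait with a callback (the forEach route) avoids tying up the worker at all.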

@davidmoten
Owner

Ah yes, that could definitely get you into trouble. You might still be able to use blockingForEach if you write .subscribeOn(Schedulers.io()).blockingForEach() so that you don't block one of the pool's worker threads. Better still, use forEach.
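That suggestion, again approximated in plain java.util.concurrent rather than RxJava (names hypothetical): performing the blocking wait on a thread that is not one of the pool's workers, which is roughly what subscribing on an io() thread achieves, lets the pooled work proceed:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Counterpart sketch: the blocking wait happens on the CALLER's thread
// (analogous to an io() thread), not on the pool's worker, so the submitted
// task runs and the wait completes normally.
public class BlockOffPoolDemo {

    static String demo() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1); // the "pool" scheduler
        try {
            Future<String> f = pool.submit(() -> "done");
            // Safe: this thread is not a pool worker, so blocking here
            // cannot starve the task we are waiting for.
            return f.get(1, TimeUnit.SECONDS);
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```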

@LupusUmbrae
Author

Thanks. We're working on switching over all our blocking calls now.

@davidmoten
Owner

Good conversation to have. I think I need to add a section to the README about the trouble we can get into with blocking.
