
Task slick.backend.DatabaseComponent rejected from java.util.concurrent.ThreadPoolExecutor #1183

Open
Salmondx opened this issue Jun 26, 2015 · 25 comments

Comments

@Salmondx

I'm using Slick version 3.0.0.
Exception:

java.util.concurrent.RejectedExecutionException: Task slick.backend.DatabaseComponent$DatabaseDef$$anon$2@33f65f2 rejected from java.util.concurrent.ThreadPoolExecutor@1311d5eb[Running, pool size = 20, active threads = 20, queued tasks = 1000, completed tasks = 73]

I think this problem occurs due to a lack of free threads in the thread pool, but I have no idea how to increase them. With HikariCP, for my machine's spec I need exactly 20 threads in the pool.
Any recommendations?

@what-the-functor

I'm getting this exception thrown in specs.

@onsails

onsails commented Jul 30, 2015

I ran into this problem when running DBIO.from inside a transactionally wrapped DBIOAction.

@sunil-123

I'm getting the same exception when running specs, using an H2 database.

@alexlee85

I ran into this problem when using a transaction, i.e. db.run(action.transactionally). If I remove the transactionally, it works fine.

@maesenka

Same here: a large number of transactional DBIOActions in quick succession triggered the same exception. Removing the .transactionally makes the problem go away, but I'd like to keep the transactional guarantee.

@szeiger
Member

szeiger commented Aug 26, 2015

This is the intended behavior. It happens when too many actions queue up. You can increase the queueSize in the database configuration to avoid it (there was a bug fix for resolving the correct config value in 3.0.2). The default is 1000.

However, I have no idea why it would behave differently for transactional vs auto-commit actions. Maybe the transactional versions are simply slower in the database, so you reach the limit sooner? Slick prioritizes all follow-on actions (e.g. the second part of an andThen or flatMap) over new actions so that running action sequences do not get rejected at a later point, but this should not be any different for transactions: a.transactionally is just StartTransaction.andThen(a).cleanUp(eo => if(eo.isEmpty) Commit else Rollback).
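
For example, a minimal sketch with Database.forConfig (the "mydb" name and the numbers are placeholders, not recommendations, and the same keys can live in application.conf instead):

    import com.typesafe.config.ConfigFactory
    import slick.jdbc.JdbcBackend.Database

    val config = ConfigFactory.parseString(
      """
        |mydb {
        |  url = "jdbc:h2:mem:test"
        |  driver = "org.h2.Driver"
        |  connectionPool = HikariCP
        |  numThreads = 20   # size of Slick's AsyncExecutor thread pool
        |  queueSize = 5000  # default is 1000; raise it to tolerate bigger bursts
        |}
      """.stripMargin)

    val db = Database.forConfig("mydb", config)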

@calippo
Contributor

calippo commented Nov 16, 2015

@szeiger I'm having the same issue, and I'd like to change the rejection policy. I tried the property rejectionPolicy = "caller-runs", but it didn't work.
It fails with java.beans.IntrospectionException: Method not found: setRejectionPolicy. Is there a way to work around this?

@kwark
Contributor

kwark commented Nov 21, 2015

I think this issue is related to #1274 and was closed prematurely.
The fact that people complain about this only happening when running with transactions was never explained and was too easily dismissed.

In that other issue, a quick succession of transactional DBIO actions causes the connection pool to be completely consumed, and the Slick executor threads are blocked for the duration of the connection timeout.
If load keeps being applied, the executor queue keeps growing.

So in my case I first see a couple of SQLTimeoutExceptions, followed by an onslaught of RejectedExecutionExceptions.
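
For reference, a sketch of the knobs involved (placeholder values; keep maxConnections at least as large as numThreads, otherwise executor threads block waiting for a connection for up to connectionTimeout while new actions pile up in the queue):

    import com.typesafe.config.ConfigFactory
    import slick.jdbc.JdbcBackend.Database

    val db = Database.forConfig("mydb", ConfigFactory.parseString(
      """
        |mydb {
        |  url = "jdbc:h2:mem:demo"
        |  driver = "org.h2.Driver"
        |  connectionPool = HikariCP
        |  numThreads = 20          # Slick executor threads running DBIOActions
        |  maxConnections = 20      # HikariCP pool size; keep it >= numThreads
        |  connectionTimeout = 1000 # ms to wait for a connection before SQLTimeoutException
        |  queueSize = 1000         # actions beyond this fail with RejectedExecutionException
        |}
      """.stripMargin))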

@RichyHBM

Did this ever get resolved?

I am seeing the same issue when unit testing with Specs, using Play 2.4 and Play-Slick 1.1.1.

FYI, I am using a FakeApplication in my tests, which may be where the error comes from, as this doesn't happen when running via Play normally:

def fakeApp: FakeApplication = FakeApplication(additionalConfiguration = Map(
  "slick.dbs.default.driver" -> "slick.driver.H2Driver$",
  "slick.dbs.default.db.driver" -> "org.h2.Driver",
  "slick.dbs.default.db.url" -> "jdbc:h2:mem:test;MODE=PostgreSQL;DATABASE_TO_UPPER=FALSE"
))  

@JavierCane

JavierCane commented Apr 20, 2016

I have this very same issue. In my case, I've implemented a batch migration script using streams precisely because of the back-pressure feature, so I was expecting to avoid this kind of error.

Here is the implementation; it reads from a huge table and inserts the records into a destination database. Both databases run MySQL:

    val query = for {
      c <- SourceTableRow
      p <- SourceTable2Row if c.foreignKey === p.primaryKey
    } yield (c, p.otherField)

    def enableStream(statement: java.sql.Statement): Unit = {
      statement match {
        case s: com.mysql.jdbc.StatementImpl => s.enableStreamingResults()
        case _ =>
      }
    }

    val publisher = sourceDb.stream(query.result.withStatementParameters(statementInit = enableStream))

    val source = Source.fromPublisher(publisher)

    val saveResult = source.runForeach {
      case (sourceTable1Row, sourceTable2Field) =>

        val model = DestinationModel(
          // ...
        )

        val result = destinationDb.run(Persistence.model += model) // note: this Future is not awaited or used
    }

As I said, I was expecting to get back-pressure by default. Am I wrong? How should I deal with this?

If I increase the allowed number of tasks in the queue, I'll only be pushing the problem further back, because it's just a matter of time before the script hits that limit again :-/

EDIT: I've found this on the Slick homepage:

Back-pressure is controlled efficiently through a queue (of configurable size) for database I/O actions, allowing a certain number of requests to build up with very little resource usage and failing immediately once this limit has been reached

So I think the back-pressure behavior implemented by Slick is not the kind of back-pressure behavior I was expecting (telling the publisher to wait while the rows already received are being processed).
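
One way I could probably restructure this (a rough, untested sketch reusing publisher, destinationDb, Persistence.model and DestinationModel from the snippet above): run the insert through mapAsync, so the insert's Future completes the stage and the stream only pulls from the source as fast as the destination keeps up, instead of firing db.run and forgetting about it:

    import akka.actor.ActorSystem
    import akka.stream.ActorMaterializer
    import akka.stream.scaladsl.{Sink, Source}

    implicit val system = ActorSystem("migration")
    implicit val materializer = ActorMaterializer()

    val saveResult = Source.fromPublisher(publisher)
      .mapAsync(parallelism = 4) { case (sourceTable1Row, sourceTable2Field) =>
        val model = DestinationModel(
          // ...
        )
        destinationDb.run(Persistence.model += model) // this Future now provides the back-pressure
      }
      .runWith(Sink.ignore)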

@szeiger
Member

szeiger commented Jul 8, 2016

@JavierCane I think execution based on consuming a Reactive Stream instead of calling db.run or db.stream directly would be useful. We could propagate back-pressure to the caller to prevent the queue from overflowing. The current design doesn't really take this into account. It was built for an interactive / web / web service app where requests come in randomly and you want them to fail quickly under load, with the queue size controlling the trade-off between availability and latency. Could you open a separate ticket for this?

@dsounded

dsounded commented Jul 10, 2016

I have the same problem with my specs: right after a FakeRequest I get the error. It works fine for 1 spec, but if there are 2 or more, it fails with this exception.

@dsounded

dsounded commented Jul 14, 2016

If you use OneAppPerTest, replace it with OneAppPerSuite.
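
For example (a minimal sketch with scalatestplus-play for Play 2.4; the spec name and config values are just examples): OneAppPerSuite gives you one shared application, and therefore one Slick database and executor, for the whole suite.

    import org.scalatestplus.play.{OneAppPerSuite, PlaySpec}
    import play.api.test.FakeApplication

    class UserRepositorySpec extends PlaySpec with OneAppPerSuite {

      // One application (and one Slick DB/executor) shared by every test in this suite
      implicit override lazy val app: FakeApplication = FakeApplication(
        additionalConfiguration = Map(
          "slick.dbs.default.driver"    -> "slick.driver.H2Driver$",
          "slick.dbs.default.db.driver" -> "org.h2.Driver",
          "slick.dbs.default.db.url"    -> "jdbc:h2:mem:test"
        )
      )

      "the repository" should {
        "see the shared test configuration" in {
          app.configuration.getString("slick.dbs.default.db.driver") mustBe Some("org.h2.Driver")
        }
      }
    }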

@pedrorijo91

@RichyHBM how did you overcome your issue? I'm having the same problem in specs with a fake application...

def fakeApplication = FakeApplication(additionalConfiguration =
    Map(
      "slick.dbs.default.driver" -> "slick.driver.H2Driver$",
      "slick.dbs.default.db.driver" -> "org.h2.Driver",
      "slick.dbs.default.db.url" -> "jdbc:h2:mem:play"
    ))

@pedrorijo91

Instead of running the fake application for each test, I'm using step(Play.start(fakeApplication)) at the beginning of the test suite and step(Play.stop(fakeApplication)) at the end.

It seems to work (because it's always the same application/DB connection, I suppose), so maybe it's useful to someone as a workaround.
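
A rough sketch of that setup with specs2 and Play 2.4 (the spec name and config values are just examples):

    import org.specs2.mutable.Specification
    import play.api.Play
    import play.api.test.FakeApplication

    class UserDaoSpec extends Specification {

      lazy val fakeApplication = FakeApplication(additionalConfiguration = Map(
        "slick.dbs.default.driver"    -> "slick.driver.H2Driver$",
        "slick.dbs.default.db.driver" -> "org.h2.Driver",
        "slick.dbs.default.db.url"    -> "jdbc:h2:mem:play"
      ))

      step(Play.start(fakeApplication)) // runs once, before the examples below

      "UserDao" should {
        "run its examples against the shared in-memory database" in {
          // ... examples that hit the database go here ...
          ok
        }
      }

      step(Play.stop(fakeApplication)) // runs once, after all examples
    }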

@cvogt
Member

cvogt commented Jul 27, 2016

@d6y do you think this could be solved by better documentation? Reopening this as a documentation issue for now.

@RichyHBM

RichyHBM commented Aug 4, 2016

@pedrorijo91 That is pretty much how I solved it: just have one Play application running for all of your tests.

@dsounded

dsounded commented Aug 4, 2016

Yeah, the start/stop workaround works for me as well; not a brilliant solution, but anyway.

@pedrorijo91

The problem is that the test database (H2) stays dirty between tests in the suite... but I guess I can clear (all) tables, or split into several test suites.

@dsounded

dsounded commented Aug 4, 2016

Yeah, I've written a small DatabaseCleaner for this, which takes a list of tables that should be truncated.
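
Roughly along these lines (a hedged sketch; table names are spliced with #$, so only pass trusted, hard-coded names, and tables with foreign keys may need referential integrity disabled first):

    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.Future
    import slick.driver.H2Driver.api._

    object DatabaseCleaner {
      // Truncate the given tables in one transaction, e.g. between tests
      def clean(db: Database, tables: Seq[String]): Future[Unit] =
        db.run(DBIO.sequence(tables.map(t => sqlu"TRUNCATE TABLE #$t")).transactionally)
          .map(_ => ())
    }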

@yurique

yurique commented May 16, 2017

With Slick 3.2 (HikariCP, default settings) we got this:

Task slick.basic.BasicBackend$DatabaseDef$$anon$2@dc100cf rejected from slick.util.AsyncExecutor$$anon$2$$anon$1@2f716c9e[Running, pool size = 20, active threads = 0, queued tasks = 0, completed tasks = 74259]

and it did not recover from it; we had to restart the app server.

Is there a cure? :)

@abdelrahmanmohamed

+1

@acjay

acjay commented Sep 20, 2017

I'm also seeing this when running my tests, very predictably.

@aisven

aisven commented Jan 3, 2018

We are also running into this. Is it already supported to increase the queueSize (which has been mentioned as a potential cure here) and other related settings, like the number of threads, when using the convenient DatabaseConfig described at http://slick.lightbend.com/doc/3.2.1/database.html#databaseconfig?

@aisven

aisven commented Jan 3, 2018

I think the answer is yes: see forConfig at http://slick.lightbend.com/doc/3.2.1/api/index.html#slick.jdbc.JdbcBackend$DatabaseFactoryDef.
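
A minimal sketch of what that looks like with DatabaseConfig on Slick 3.2 (the "mydb" path and the numbers are placeholders): the AsyncExecutor/HikariCP keys such as numThreads and queueSize go inside the nested db section.

    import com.typesafe.config.ConfigFactory
    import slick.basic.DatabaseConfig
    import slick.jdbc.JdbcProfile

    val dbConfig = DatabaseConfig.forConfig[JdbcProfile]("mydb", ConfigFactory.parseString(
      """
        |mydb {
        |  profile = "slick.jdbc.H2Profile$"
        |  db {
        |    url = "jdbc:h2:mem:test"
        |    driver = "org.h2.Driver"
        |    connectionPool = HikariCP
        |    numThreads = 20
        |    queueSize = 5000
        |  }
        |}
      """.stripMargin))

    val db = dbConfig.db // the Database built with the settings above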

@hvesalai hvesalai modified the milestones: Future, Feature ideas, Documentation ideas Feb 28, 2018