Problem
Suppose that our application works with two databases that provide a third-party blocking API, and that it wants to publish two REST endpoints:
@Rest
suspend fun first() = withContext(Dispatchers.IO) {
    queryFirstDatabase("well running query")
}

@Rest
suspend fun second() = withContext(Dispatchers.IO) {
    querySecondDatabase("possibly blocking query due to a bug")
}
Suppose our second method starts to block for a long time due to a bug. After 64 invocations of the second method there will be no free threads left in Dispatchers.IO, so the first method will start to block too.
Another case: suppose the second database's querying function does not block forever, but, due to a performance problem in the second database, starts to execute slowly. This increases latency not only for the second REST method but for the first one as well, and whole-system performance suffers because of one buggy method.
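The failure mode above is easy to reproduce. The following is my own self-contained sketch (not code from the issue) that shrinks the pool from Dispatchers.IO's 64 threads down to 2 so the starvation shows up immediately:

```kotlin
import kotlinx.coroutines.*
import java.util.concurrent.Executors

fun main() = runBlocking {
    // Stand-in for Dispatchers.IO, shrunk from 64 threads to 2 for the demo.
    val io = Executors.newFixedThreadPool(2).asCoroutineDispatcher()

    // Two invocations of the buggy second() occupy every thread in the pool.
    repeat(2) {
        launch(io) { Thread.sleep(1_000) } // blocking query stuck due to a bug
    }

    // The well-behaved first() now has to wait for a thread to free up,
    // even though its own query would finish instantly.
    val start = System.currentTimeMillis()
    withContext(io) {
        println("first() started after ${System.currentTimeMillis() - start} ms")
    }
    io.close()
}
```

With the real 64-thread Dispatchers.IO the picture is identical, it just takes 64 stuck invocations instead of 2.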
One solution I can think of is to explicitly give different methods separate thread pools:
val FirstDispatcher = Executors.newFixedThreadPool(10).asCoroutineDispatcher()
val SecondDispatcher = Executors.newFixedThreadPool(10).asCoroutineDispatcher()
@Rest
suspend fun first() = withContext(FirstDispatcher) {
    queryFirstDatabase("well running query")
}

@Rest
suspend fun second() = withContext(SecondDispatcher) {
    querySecondDatabase("possibly blocking query due to a bug")
}
This way, problems in the second method will not affect the first method.
But we lose a nice property of Dispatchers.IO: it avoids unnecessary context switches. With Dispatchers.IO, if the dispatcher queues are not overloaded, no context switch occurs; with our fix of explicit dispatchers built on top of different executors, a context switch will always occur.
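The difference can be observed directly. This is my own illustration, and the first half assumes an unloaded scheduler (same-thread continuation is a typical optimization, not a guarantee), while the second half always switches threads because the two pools are disjoint:

```kotlin
import kotlinx.coroutines.*
import java.util.concurrent.Executors

fun main() = runBlocking {
    // Dispatchers.Default and Dispatchers.IO share one underlying scheduler,
    // so this hop usually keeps the coroutine on the same worker thread.
    withContext(Dispatchers.Default) {
        val outer = Thread.currentThread()
        withContext(Dispatchers.IO) {
            println("shared scheduler, same thread: ${outer === Thread.currentThread()}")
        }
    }

    // Two dispatchers backed by separate executors: the hop must switch threads.
    val a = Executors.newFixedThreadPool(1).asCoroutineDispatcher()
    val b = Executors.newFixedThreadPool(1).asCoroutineDispatcher()
    withContext(a) {
        val outer = Thread.currentThread()
        withContext(b) {
            println("separate executors, same thread: ${outer === Thread.currentThread()}")
        }
    }
    a.close(); b.close()
}
```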
Current state
The context-switch-reducing approach for the IO and Default dispatchers is internally implemented via kotlinx.coroutines.scheduling.LimitingDispatcher: Dispatchers.IO and Dispatchers.Default both sit on top of a single underlying dispatcher and simply limit the number of blocking and CPU-intensive tasks, respectively.
Proposal
Provide an API that lets users build lightweight limiting dispatchers that do not own any resources (their own threads) and simply provide a view of the original dispatcher.
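As a sketch of what such view-based dispatchers enable, here is the earlier example rewritten with the limitedParallelism operator, which kotlinx.coroutines later shipped (in 1.6.0) for exactly this purpose; the query functions are hypothetical stand-ins for the third-party blocking API:

```kotlin
import kotlinx.coroutines.*

// Hypothetical stand-ins for the third-party blocking database clients.
fun queryFirstDatabase(query: String): List<String> = listOf(query)
fun querySecondDatabase(query: String): List<String> = listOf(query)

// Views over Dispatchers.IO: each caps its own parallelism at 10, but neither
// owns threads -- both borrow workers from the shared scheduler, so hopping
// between them needs no context switch.
val FirstDispatcher = Dispatchers.IO.limitedParallelism(10)
val SecondDispatcher = Dispatchers.IO.limitedParallelism(10)

suspend fun first() = withContext(FirstDispatcher) {
    queryFirstDatabase("well running query")
}

suspend fun second() = withContext(SecondDispatcher) {
    querySecondDatabase("possibly blocking query due to a bug")
}
```

Even if second() blocks all 10 threads its view permits, first() is unaffected, because the buggy method can no longer exhaust the shared pool on its own.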