
TaskRunner's concurrent performance exhibits a significant decline under poor network conditions #8388

Open
bringyou opened this issue Apr 26, 2024 · 12 comments
Labels
bug Bug in existing code

Comments

@bringyou

bringyou commented Apr 26, 2024

Our server needs to call some external APIs, and the stability of the API providers is quite poor and unreliable.
We found that during periods of network fluctuation, OkHttp's internal TaskRunner causes many threads to keep waiting on a condition, eventually resulting in every thread that attempted an HTTP call becoming suspended. See the jstack screenshot below.
[jstack screenshot]
After debugging we found that all OkHttpClient instances use TaskRunner.INSTANCE by default.
And nearly every method in TaskRunner acquires an instance-level lock.
In other words, all OkHttpClient instances configured with the defaults compete for the same lock instance in some scenarios, especially when the external network is poor and unreliable.
Is there any plan to improve the concurrent performance of TaskRunner? e.g. use atomic values rather than ReentrantLock.
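
For illustration, a minimal generic sketch of the "atomic values rather than ReentrantLock" idea, using a plain counter rather than OkHttp's actual code (both class names are invented):

```kotlin
import java.util.concurrent.atomic.AtomicInteger
import java.util.concurrent.locks.ReentrantLock
import kotlin.concurrent.withLock

// Not OkHttp code: invented classes illustrating the trade-off above.
class LockedCounter {
  private val lock = ReentrantLock()
  private var value = 0

  // Every caller sharing this object contends for the same lock,
  // even for a trivial read-modify-write.
  fun increment(): Int = lock.withLock { ++value }
}

class AtomicCounter {
  private val value = AtomicInteger()

  // Lock-free: contended threads retry via compare-and-swap instead of parking.
  fun increment(): Int = value.incrementAndGet()
}
```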

@bringyou added the bug label Apr 26, 2024
@bringyou
Author

How to reproduce: call an external API that can only serve a limited number of TCP connections and takes a long time to establish each one.
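
As a rough reproduction sketch under those assumptions (the URL is a placeholder for such an endpoint; everything else uses OkHttp's public API with default configuration):

```kotlin
import java.util.concurrent.Executors
import okhttp3.OkHttpClient
import okhttp3.Request

// Sketch only: "https://flaky-upstream.example.com/slow" stands in for an
// endpoint that accepts few TCP connections and is slow to establish each one.
fun main() {
  val client = OkHttpClient() // default configuration
  val request = Request.Builder()
    .url("https://flaky-upstream.example.com/slow")
    .build()

  // Fire a handful of concurrent blocking calls; while they are stalled on
  // the slow connects, take a jstack and inspect the "OkHttp TaskRunner" threads.
  val pool = Executors.newFixedThreadPool(4)
  repeat(4) {
    pool.execute {
      runCatching { client.newCall(request).execute().use { it.code } }
    }
  }
  pool.shutdown()
}
```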

@yschimke
Collaborator

yschimke commented Apr 26, 2024

Can you show a stack trace or the thread that holds the lock?

Or attach the entire jstack output

@yschimke
Collaborator

yschimke commented Apr 26, 2024

Are you sure those stacktraces aren't just the threads waiting for work to do?
https://product.hubspot.com/blog/understanding-thread-dumps

Can you point to other profiling showing the slowdown?

As for Locks vs synchronized: I looked at this recently in #8366.

It seems most detailed investigations show that for contended work, using Locks is the more performant option. So before switching to synchronized, do you have some links or benchmarks showing the benefit of synchronized?

@swankjesse
Member

This particular lock is not expected to be held for more than a few cycles – every acquire reads & writes a few lock-guarded fields and then releases the lock.

But under highly-concurrent use, the fact that we’re using locking at all is likely to cause some contention.

Can you tell me how many concurrent calls you are making?

If it’s not very many (say, 10 or fewer) and you’re seeing this problem, then it’s likely we’ve got a bug in our implementation. We’re holding the lock longer than we’re expecting to.

If you're doing lots of concurrent calls and this lock is a bottleneck, then I think we should invest in finding a lock-free alternative to TaskRunner, either by writing one or by using something else in the JDK.
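
As one hedged sketch of the "something else in the JDK" direction: delegating coordination to a ScheduledThreadPoolExecutor. Its internal queue still uses a lock, so this is not strictly lock-free, but it would replace the hand-rolled coordinator; the class name and API here are invented for illustration and are not a proposal for OkHttp's actual interface:

```kotlin
import java.util.concurrent.ScheduledThreadPoolExecutor
import java.util.concurrent.TimeUnit

// Hypothetical sketch: let the JDK executor own the work queue and the
// parking/unparking of idle threads, instead of a hand-rolled lock + condition.
class JdkBackedTaskRunner(threads: Int = 2) {
  private val executor = ScheduledThreadPoolExecutor(threads) { runnable ->
    Thread(runnable, "TaskRunner-sketch").apply { isDaemon = true }
  }

  // Run now; the executor's internal queue replaces the shared coordinator lock.
  fun execute(task: Runnable) {
    executor.execute(task)
  }

  // Run after a delay; scheduling state lives inside the executor.
  fun schedule(delayNanos: Long, task: Runnable) {
    executor.schedule(task, delayNanos, TimeUnit.NANOSECONDS)
  }

  fun shutdown() {
    executor.shutdown()
  }
}
```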

@bringyou
Author

bringyou commented Apr 27, 2024

@yschimke Hi, this is the full stack trace; we can see that threads were blocked at TaskRunner.runTask, waiting for the lock.
jstatck-2.txt

at java.util.concurrent.locks.ReentrantLock$Sync.lock(java.base@21/ReentrantLock.java:153)
at java.util.concurrent.locks.ReentrantLock.lock(java.base@21/ReentrantLock.java:322)
at okhttp3.internal.concurrent.TaskRunner.runTask(TaskRunner.kt:126)

Running on OkHttp alpha-14.

It seems most detailed investigations show that for contended work, using Locks is the more performant option. So before switching to synchronized, do you have some links or benchmarks showing the benefit of synchronized?

I wasn't suggesting switching back to synchronized; synchronized is even slower than Lock.
In my honest opinion, there are two ways to improve the performance:
First, share the RealBackend instead of sharing the TaskRunner: RealBackend holds the thread pool, and multiple TaskRunner instances would reduce lock contention (a rough sketch of this idea follows below).
Second, use lock-free components, like a concurrent hash set.
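
A rough sketch of the first idea: per-client runners, each with its own lock, sharing one backend thread pool. The names SharedBackend and PerClientRunner are invented for illustration and are not OkHttp's actual internals:

```kotlin
import java.util.concurrent.SynchronousQueue
import java.util.concurrent.ThreadPoolExecutor
import java.util.concurrent.TimeUnit
import java.util.concurrent.locks.ReentrantLock
import kotlin.concurrent.withLock

class SharedBackend {
  // A cached-style pool, roughly what a shared backend holds in this sketch.
  private val executor = ThreadPoolExecutor(
    0, Int.MAX_VALUE, 60L, TimeUnit.SECONDS, SynchronousQueue<Runnable>()
  ) { runnable -> Thread(runnable, "SharedBackend").apply { isDaemon = true } }

  fun execute(task: Runnable) = executor.execute(task)
}

class PerClientRunner(private val backend: SharedBackend) {
  // Each runner has its own lock, so two clients no longer contend on a
  // single process-wide lock instance.
  private val lock = ReentrantLock()
  private val readyTasks = ArrayDeque<Runnable>()

  fun enqueue(task: Runnable) {
    lock.withLock { readyTasks.addLast(task) }
    backend.execute(Runnable { runNext() })
  }

  private fun runNext() {
    val task = lock.withLock { readyTasks.removeFirstOrNull() } ?: return
    task.run()
  }
}
```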

@bringyou
Author

bringyou commented Apr 27, 2024

Can you tell me how many concurrent calls you are making?

If it’s not very many (say, 10 or fewer) and you’re seeing this problem, then it’s likely we’ve got a bug in our implementation. We’re holding the lock longer than we’re expecting to.

Hi @swankjesse, it's about 4 concurrent calls. But as mentioned before, the external server's network is not stable or reliable; sometimes the server takes a long time to ACK the TCP handshake.

@yschimke
Collaborator

Thanks for the stack trace, it's really helpful. Apologies for not taking this seriously at first; it does look like a legitimate issue.

The move from synchronized to ReentrantLock does lose us the thread-dump info about who holds the lock. I can't see any thread in your trace that would hold the lock and is blocked.

I can, however, trivially reproduce what you see, not via a slowdown but by taking a lock and just not releasing it. But it sounds like that is not what you are seeing.

I'll try to reproduce this with some additional tracing and socket delays. Another option would be to give you a slightly tweaked build with some tracing to capture which thread holds the lock at these times.
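
For illustration only (not the actual tweaked build), tracing of that kind could be as small as a lock wrapper that remembers the owning thread's name:

```kotlin
import java.util.concurrent.atomic.AtomicReference
import java.util.concurrent.locks.ReentrantLock
import kotlin.concurrent.withLock

// Hypothetical sketch: a wrapper that records which thread currently holds
// the lock, so a stuck process can report the owner on demand.
class TracedLock {
  private val delegate = ReentrantLock()
  private val owner = AtomicReference<String?>(null)

  fun <T> withTracedLock(block: () -> T): T =
    delegate.withLock {
      owner.set(Thread.currentThread().name)
      try {
        block()
      } finally {
        owner.set(null)
      }
    }

  // Call from a watchdog or diagnostic thread to see who holds the lock.
  fun currentOwner(): String? = owner.get()
}
```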

@yschimke
Collaborator

yschimke commented Apr 27, 2024

I'm not seeing anything concrete. In the creative testing I'm doing, the two things holding locks and taking a non-trivial amount of time are:

TaskRunner.awaitTaskToRun
Http2Connection.applyAndAckSettings

I'll see if I can work out where the time is going.

RealBackend.execute seems to occasionally take 1-3 ms on my laptop. But I'm really not confident in what I'm seeing here.

@swankjesse
Member

I think we might just be missing a signal() call on our monitor? This thread is waiting on the condition and it shouldn’t be.

"OkHttp TaskRunner" #1545 [255] daemon prio=5 os_prio=0 cpu=0.13ms elapsed=1117.04s tid=0x00007f8320012f40 nid=255 waiting on condition  [0x00007f82f09c8000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@21/Native Method)
	- parking to wait for  <0x00000000f14c1118> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
	at java.util.concurrent.locks.LockSupport.park(java.base@21/LockSupport.java:221)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.base@21/AbstractQueuedSynchronizer.java:754)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@21/AbstractQueuedSynchronizer.java:1761)
	at okhttp3.internal.concurrent.TaskRunner$RealBackend.coordinatorWait(TaskRunner.kt:320)
	at okhttp3.internal.concurrent.TaskRunner.awaitTaskToRun(TaskRunner.kt:229)
	at okhttp3.internal.concurrent.TaskRunner$runnable$1.run(TaskRunner.kt:67)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@21/ThreadPoolExecutor.java:1144)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@21/ThreadPoolExecutor.java:642)
	at java.lang.Thread.runWith(java.base@21/Thread.java:1596)
	at java.lang.Thread.run(java.base@21/Thread.java:1583)

   Locked ownable synchronizers:
	- <0x00000000f33ff110> (a java.util.concurrent.ThreadPoolExecutor$Worker)
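
For readers following along, a generic sketch of the wait/signal contract that trace points at (hypothetical names, not TaskRunner's actual fields): a coordinator parked in awaitNanos() only wakes early if every path that adds work also calls signal() under the same lock; a path that enqueues work but skips the signal leaves the coordinator parked for its full timeout.

```kotlin
import java.util.concurrent.locks.ReentrantLock
import kotlin.concurrent.withLock

// Generic illustration of the suspected failure mode; invented names.
class Coordinator {
  private val lock = ReentrantLock()
  private val condition = lock.newCondition()
  private val work = ArrayDeque<Runnable>()
  private var coordinatorWaiting = false

  fun awaitWork(timeoutNanos: Long): Runnable? = lock.withLock {
    if (work.isEmpty()) {
      coordinatorWaiting = true
      try {
        condition.awaitNanos(timeoutNanos) // parks, like the trace above
      } finally {
        coordinatorWaiting = false
      }
    }
    work.removeFirstOrNull()
  }

  fun enqueue(task: Runnable) = lock.withLock {
    work.addLast(task)
    if (coordinatorWaiting) condition.signal() // skipping this is the bug class being described
  }
}
```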

@yschimke
Collaborator

@swankjesse I think you know this code a lot better than I do, so I'll leave it to you. Let me know if I can help.

@bringyou
Author

I think we might just be missing a signal() call on our monitor? This thread is waiting on the condition and it shouldn’t be.

Maybe TaskRunner.coordinatorWaiting should be annotated with @Volatile?

@swankjesse
Member

My hypothesis is that when TaskRunner is busy, it potentially launches unnecessary threads here:

// Also start another thread if there's more work or scheduling to do.
if (multipleReadyTasks || !coordinatorWaiting && readyQueues.isNotEmpty()) {
backend.execute(this@TaskRunner, runnable)
}


These threads exit quickly, but they contend for the lock in the process. And that exacerbates the problem.

Changing this code to start only the threads it needs should be a straightforward fix.
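
A hedged sketch of that direction, matching the commit message below (the flag name awaitingExecuteCall is invented for illustration): remember that a worker thread has already been requested, and don't request another until the first one has actually started running.

```kotlin
import java.util.concurrent.Executor
import java.util.concurrent.locks.ReentrantLock
import kotlin.concurrent.withLock

// Hypothetical sketch, not the actual patch.
class ThreadStarter(private val backend: Executor, private val worker: Runnable) {
  private val lock = ReentrantLock()
  private var awaitingExecuteCall = false

  // Called from the scheduling code when there may be more work to do.
  fun startAnotherThreadIfNeeded(moreWorkReady: Boolean) = lock.withLock {
    if (moreWorkReady && !awaitingExecuteCall) {
      awaitingExecuteCall = true
      backend.execute {
        // The requested thread has started running; allow future requests.
        lock.withLock { awaitingExecuteCall = false }
        worker.run()
      }
    }
  }
}
```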

swankjesse added a commit that referenced this issue Apr 28, 2024
We've got a race where we'll start a thread when we need
one, even if we've already started a thread. This changes
TaskRunner's behavior to never add a thread if we're
still waiting for a recently-added one to start running.

This is intended to reduce the number of threads contending
for the TaskRunner lock as reported in this issue:

#8388
swankjesse added a commit that referenced this issue Apr 29, 2024
We've got a race where we'll start a thread when we need
one, even if we've already started a thread. This changes
TaskRunner's behavior to never add a thread if we're
still waiting for a recently-added one to start running.

This is intended to reduce the number of threads contending
for the TaskRunner lock as reported in this issue:

#8388