Fix corePoolSize so that the maximum number of messages (maxConcurrentMessages * number of queues) are processed simultaneously #833
Conversation
…maxConcurrentMessages * number of queues) are processed simultaneously. Probably the problem is from commit 30a4c4d.
ref. https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/scheduling/concurrent/ThreadPoolTaskExecutor.html
ref. https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/concurrent/ThreadPoolExecutor.html
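For context, here is a minimal, standalone JDK example (illustration only, not project code) of the ThreadPoolExecutor behavior the linked Javadoc describes: threads beyond corePoolSize are only created once the work queue is full, so with a small core size and a roomy queue, submitted tasks pile up in the queue instead of running concurrently.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CorePoolSizeDemo {

	public static void main(String[] args) throws InterruptedException {
		// corePoolSize = 1, maxPoolSize = 20: the max is only reached once the queue is full.
		ThreadPoolExecutor executor = new ThreadPoolExecutor(1, 20, 60, TimeUnit.SECONDS,
				new LinkedBlockingQueue<>(100));

		for (int i = 0; i < 20; i++) {
			int task = i;
			executor.execute(() -> {
				System.out.println("task " + task + " on " + Thread.currentThread().getName());
				sleepQuietly(1000);
			});
		}

		// Prints 1: the remaining 19 tasks sit in the queue instead of running in parallel.
		System.out.println("pool size: " + executor.getPoolSize());
		executor.shutdown();
		executor.awaitTermination(60, TimeUnit.SECONDS);
	}

	private static void sleepQuietly(long millis) {
		try {
			Thread.sleep(millis);
		}
		catch (InterruptedException e) {
			Thread.currentThread().interrupt();
		}
	}
}
```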
Sorry for the delay @mokamoto12, and thanks for bringing this up. You're right. Do you think we could add an integration test to assert this behavior? Not sure how we'd do it though - maybe send 20 messages and hold each thread until all 20 messages are received, so we make sure they're being processed simultaneously? Please let me know your thoughts. Thanks.
Thank you for the confirmation @tomazfernandes. I will add the test.
@tomazfernandes I have added a test. Before modifying the corePoolSize the test was failing, and after the modification I have confirmed that the test passes.
Wow. This is a very serious issue. Limiting the concurrency to only 10 for each application instance would be very bad.

I am trying to get familiar with some of the classes. Maybe the backpressure handling already enforces this limit, but I am not sure whether the backpressure actuates only in the polling step. Anyway, it does not seem very optimal to have two distinct places in the infrastructure enforcing the same restriction (the thread pool size and the backpressure limit).
PR looks great and thanks for the test @mokamoto12!
TIL about CyclicBarrier - had never heard of it, thanks.
I just left one comment - please see if it makes sense to you.
Thanks!
@Autowired
LatchContainer latchContainer;

@SqsListener(queueNames = MAX_CONCURRENT_MESSAGES_QUEUE_NAME, maxMessagesPerPoll = "1", maxConcurrentMessages = "10", id = "max-concurrent-messages")
Can we have this test with maxMessagesPerPoll set to 10, maxConcurrentMessages set to 20, and 20 messages, please?
That would represent what I believe to be a more common scenario.
That makes sense. I have fixed it.
It seemed that the test would not pass unless the list of messages was sent twice, so that is what I did.
Hey @jgslima, nice to see you here!
Yeah, that would be bad - fortunately it's 10 per container rather than per application, so not so dramatic with default settings.
It's not a good practice to have an unbounded thread pool.

The thread pool configuration is really only to make sure we can handle enough threads; it's not supposed to enforce a restriction. The restriction is enforced by the concurrency control (backpressure) mechanism. Feel free to ask if you have any other questions!
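As a minimal sketch of that idea (the names here are invented for illustration; this is not the actual spring-cloud-aws backpressure code), a semaphore sized to maxConcurrentMessages gates how many messages are in flight, independently of how many threads the executor could create:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: a semaphore enforcing the maxConcurrentMessages limit.
public class BackPressureSketch {

	private final Semaphore permits;

	public BackPressureSketch(int maxConcurrentMessages) {
		this.permits = new Semaphore(maxConcurrentMessages);
	}

	// Acquire a permit before handing a message to the executor; waits (up to the
	// timeout) when the concurrency limit has been reached.
	public boolean tryAcquire(long timeout, TimeUnit unit) throws InterruptedException {
		return this.permits.tryAcquire(timeout, unit);
	}

	// Release the permit once the message has been processed, successfully or not.
	public void release() {
		this.permits.release();
	}
}
```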
Hello Tomaz! The enhancements in the 3.0 version are very good. Congratulations and thank you.

When we used the 2.x version, we actually had to extend and change the container class, because the behaviour of blocking until the whole batch of messages had been completely processed before polling again was not robust. If a single message got stuck due to, for instance, a database lock, that application instance would essentially stop polling messages from that queue. We implemented a form of backpressure (not as sophisticated as yours).

Now, with a configuration like Executors.newCachedThreadPool(), the thread pool itself would impose no limit. I am not trying to insist on anything here, just having a conversation. Anyway, the framework already allows the application to provide its own executor.
Hey @jgslima, thanks!
Yeah, I think we've all been there 😄 Hopefully no one will need to do this anymore! 🙌🏼
I think your reasoning makes sense, but IMO there are some tradeoffs in place that would not be good for an unbounded executor.

If all goes well with concurrency control, it doesn't make much difference whether we're capping the top limit or not. If the limit is 20, we'll never have more than 20 threads, and it doesn't make a difference to have a thread limit of 20 or Integer.MAX_VALUE, right?

But let's say we have an issue in concurrency control that leads to a permit leak, and now we can't enforce a limit anymore. I'd rather have an explicit task-rejected exception than have the user app malfunction and eventually die of a mysterious OOM. So that's why I think the limit is important as a failsafe. I wouldn't even have the queue if possible, but then there's a race condition between releasing the permit and releasing the thread that would be too complex to get rid of.
Yes, and that's exactly the way it's meant to be used. Does this make sense to you, or maybe I'm missing something? Thanks!
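As a toy, JDK-only demo of the failsafe argument above (not project code): with a hard cap and no spare queue capacity, an extra submission while the pool is saturated fails fast with a RejectedExecutionException instead of threads growing without bound.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class FailsafeDemo {

	public static void main(String[] args) throws InterruptedException {
		int cap = 20;
		ThreadPoolExecutor executor = new ThreadPoolExecutor(cap, cap, 60, TimeUnit.SECONDS,
				new SynchronousQueue<>());
		CountDownLatch hold = new CountDownLatch(1);

		// Saturate the pool: 20 tasks block, occupying every thread.
		for (int i = 0; i < cap; i++) {
			executor.execute(() -> awaitQuietly(hold));
		}
		try {
			// Simulates a "leak": a 21st in-flight task is rejected explicitly
			// rather than silently piling up.
			executor.execute(() -> awaitQuietly(hold));
		}
		catch (RejectedExecutionException e) {
			System.out.println("rejected as expected: " + e.getMessage());
		}

		hold.countDown();
		executor.shutdown();
		executor.awaitTermination(10, TimeUnit.SECONDS);
	}

	private static void awaitQuietly(CountDownLatch latch) {
		try {
			latch.await();
		}
		catch (InterruptedException e) {
			Thread.currentThread().interrupt();
		}
	}
}
```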
Thanks a lot for the PR @mokamoto12! Looking forward to more!
📢 Type of change
📜 Description
Ensure that messages are processed concurrently up to maxConcurrentMessages * number of queues. The value of corePoolSize is changed so that a new thread is created even when the ThreadPoolTaskExecutor queue is not full.
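As a rough illustration of that direction (a hypothetical sketch, not the exact code changed in this PR; the sizing and the allowCoreThreadTimeOut call are assumptions), a ThreadPoolTaskExecutor whose core size already covers the desired concurrency creates threads up front instead of queueing tasks:

```java
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

public class ListenerExecutorSketch {

	public ThreadPoolTaskExecutor createExecutor(int maxConcurrentMessages, int numberOfQueues) {
		int totalConcurrency = maxConcurrentMessages * numberOfQueues; // assumed sizing
		ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
		// Threads are created before the queue is used, because the executor only grows
		// beyond corePoolSize once its queue is full.
		executor.setCorePoolSize(totalConcurrency);
		executor.setMaxPoolSize(totalConcurrency);
		executor.setAllowCoreThreadTimeOut(true); // let idle core threads be reclaimed
		executor.setThreadNamePrefix("sqs-listener-");
		executor.initialize();
		return executor;
	}
}
```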
💡 Motivation and Context
Referring to the documentation, it is expected that maxConcurrentMessages * number of queues messages will be processed simultaneously.
ref. https://docs.awspring.io/spring-cloud-aws/docs/3.0.0/reference/html/index.html#sqscontaineroptions-descriptions
However, before this change, maxConcurrentMessages * number of queues messages were not processed simultaneously; only maxMessagesPerPoll messages were processed simultaneously. We suspect that this is due to the ThreadPoolExecutor specification and is a problem from commit 30a4c4d.
ref. https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/scheduling/concurrent/ThreadPoolTaskExecutor.html
ref. https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/concurrent/ThreadPoolExecutor.html
💚 How did you test it?
Executing the following code verifies that 20 messages are processed simultaneously, instead of one.
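The original code block is not reproduced here; the sketch below shows the general shape of such a check under assumed names (queue name, wiring and the send helper are hypothetical, and the actual test added in this PR differs, e.g. it sends the message list twice). Each received message waits on a CyclicBarrier sized to the expected concurrency, so the barrier only trips if all 20 messages are in flight at the same time.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.TimeUnit;

import io.awspring.cloud.sqs.annotation.SqsListener;
import io.awspring.cloud.sqs.operations.SqsTemplate;

public class MaxConcurrentMessagesSketch {

	static final String QUEUE_NAME = "max-concurrent-messages-test-queue"; // hypothetical queue

	static final CyclicBarrier BARRIER = new CyclicBarrier(20);

	static final CountDownLatch DONE = new CountDownLatch(20);

	@SqsListener(queueNames = QUEUE_NAME, maxMessagesPerPoll = "10", maxConcurrentMessages = "20")
	void listen(String message) throws Exception {
		// Blocks until 20 threads are here at once; times out if concurrency is capped lower.
		BARRIER.await(10, TimeUnit.SECONDS);
		DONE.countDown();
	}

	static void sendAndVerify(SqsTemplate template) throws InterruptedException {
		for (int i = 0; i < 20; i++) {
			template.send(to -> to.queue(QUEUE_NAME).payload("message"));
		}
		if (!DONE.await(30, TimeUnit.SECONDS)) {
			throw new AssertionError("messages were not processed concurrently");
		}
	}
}
```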
📝 Checklist
🔮 Next steps