Define Concurrency Resources Portably #38
Comments
@glassfishrobot Commented Maybe the Java EE spec should even go one step further and set as a general rule that for every administered object introduced by a sub-spec there should be an equivalent portable solution (annotation and element in deployment descriptor). If such a solution is not provided, there must be a really good (technical) reason given. Would such a rule be an option?
@glassfishrobot Commented There are several use cases for this. The following holds just for the ability to define and configure the services from within an application archive:
The ability to define more than one instance of any of the four services has as a use case:
An example of the XML version in web.xml or application.xml could be:
(an annotation variant would basically mirror this exactly) And then inject this application-defined instance via:
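To make the shape of such a definition concrete, here is a rough sketch of what the descriptor entry might look like. The element names are hypothetical, modeled on the existing `<data-source>` descriptor element; they are not from any final schema:

```xml
<!-- Hypothetical sketch: element names are illustrative only. -->
<managed-executor>
    <name>java:app/concurrent/MyExecutor</name>
    <context-service-ref>java:comp/DefaultContextService</context-service-ref>
    <max-async>5</max-async>
    <hung-task-threshold>30000</hung-task-threshold>
</managed-executor>
```

The application could then inject the defined instance with something along the lines of `@Resource(lookup = "java:app/concurrent/MyExecutor") ManagedExecutorService executor;`.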
@glassfishrobot Commented What is appropriate for configuration at the application level ought to differ from what is appropriate for the server administrator to configure. Specifically, I would point out that it's a bad practice for applications to define their own thread pools. Management of server resources like threads should not be the role of the application. While it is already possible for applications to get a thread pool by pooling threads obtained from a ManagedThreadFactory, for example by supplying one to any of the various methods of java.util.concurrent.Executors that create thread pools, we should avoid encouraging the practice any further by putting thread pool settings on application-defined resources. I certainly agree there is some value in applications being able to define managed executors in ways that are particular to the needs of the application.
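The practice being cautioned against here can be sketched with plain java.util.concurrent. In this illustration a plain ThreadFactory stands in for a ManagedThreadFactory (which in a container would be injected via @Resource); the class and method names are mine:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PooledFromFactory {
    // Stand-in for a ManagedThreadFactory; in a container this would be
    // injected, e.g. @Resource ManagedThreadFactory factory.
    static final ThreadFactory factory = runnable -> {
        Thread t = new Thread(runnable);
        t.setName("managed-style-thread");
        return t;
    };

    // The application builds its own pool out of factory-produced threads,
    // which is exactly the pattern the comment above discourages.
    static int runTasks(int nTasks) {
        ExecutorService pool = Executors.newFixedThreadPool(2, factory);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < nTasks; i++) {
            pool.submit(completed::incrementAndGet);
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println(runTasks(5)); // prints 5
    }
}
```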
@glassfishrobot Commented
I would like to respectfully disagree with this. It highly depends on the application type, the deployment scenario, and the team setup. In one team where I participated, we developed a highly concurrent data-feed processing system, internally using a consumer/producer pattern. We were a small devops team and deployed a single application to a single AS. In that case it would not have made sense to hand over this job to any kind of server administrator (who, incidentally, wasn't there as a separate person). The separate thread pools that we used were all highly tuned and had relations such as one thread pool not being allowed to have fewer threads than another pool. I know you said you agree that there is value in applications being able to define executors, but in general, if the platform limits developers in the name of enforcing best practices, then developers in practice will pretty much always find a way around it: use non-managed pools, embed an(other) application server within the archive, skip Java EE altogether, etc. I think none of these options are what we should strive for. At most, individual server products could put a limit on what the application is allowed to do, but I strongly believe it's not the Java EE platform's role to enforce what is a best practice for each and every kind of situation.
@glassfishrobot Commented Getting this addressed will very likely be extremely well received by the community and help address what could otherwise be a continued portability, usability, and flexibility gap in Java EE.
@glassfishrobot Commented The main reason for me is that in the future I expect to see more scenarios where applications need to run in different environments, e.g. on a local developer machine in a Docker container, on a bare-metal QA system, and in a fully virtualized pre-production or production environment. For sure those environments require different values for concurrency-related configuration properties like our ExecutorServices. I'm talking of
all of which differ in the above mentioned environments.
Same for me. Probably none of us can predict which environments Java EE customers will have in 5-8 years and what constraints they bring along, but it'd be great if the customers could still run Java EE 8 servers on them, wouldn't it?
@glassfishrobot Commented We also need to consider that not every task submitted to an executor necessarily runs on a thread pool. For example, in the case of invokeAll(task1, task2), task1 might run on a pooled thread and task2 might run on the submitter's thread. Or maybe a task that hasn't started yet runs on the thread from which future.get() is attempted rather than a pooled thread. This is why establishing a PoolInfo:MaximumSize (section 3.1.4.2 example) doesn't address the usage pattern, whereas a Max Concurrency does.

Here are some more concrete examples of annotatively defining managed executors/thread factories/context services along these lines:

```java
@ContextServiceDefinition({ ContextServiceDefinition.CLASSLOADER_CONTEXT,
                            ContextServiceDefinition.SECURITY_CONTEXT })
@ManagedExecutorDefinition(
@ManagedScheduledExecutorDefinition(
@ManagedThreadFactoryDefinition(
```

We need more discussion about the proper granularity of hung-task-threshold. Is it better defined as a single value for all usage of an executor, or on a per-task basis (via an execution property like ManagedTask.LONGRUNNING_HINT)? Or should it be defined in both places, in which case the more granular setting takes precedence if specified?

The keepalive-time attribute seems like an implementation detail that would preclude otherwise valid implementations of EE Concurrency that choose not to use thread pools, or that choose to have multiple managed executors sharing thread pools. Maybe we could include a 'properties' attribute along the lines of what @DataSourceDefinition does for implementation-specific properties.

I'm confused about "number of available CPUs/processors" as a configurable setting. It would make more sense to me if we provided this information to the application rather than received it from the application. Was that the intention? Are there other attributes that ought to be included?
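The point that a submitted task need not execute on a pooled thread can be observed even in plain java.util.concurrent, independent of any managed executor. The following sketch (class and method names are mine, chosen for illustration) uses ThreadPoolExecutor.CallerRunsPolicy to make a rejected task run on the submitter's thread:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CallerRunsDemo {
    // Returns the name of the thread that executed the second task.
    static String threadThatRanSecondTask() {
        // One pool thread, no queue capacity, rejected tasks run on the caller.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS,
                new SynchronousQueue<>(),
                new ThreadPoolExecutor.CallerRunsPolicy());
        CountDownLatch release = new CountDownLatch(1);
        try {
            // Occupy the single pool thread until we release it.
            pool.execute(() -> {
                try {
                    release.await();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            // The pool is saturated, so this task is rejected and, under
            // CallerRunsPolicy, executes synchronously on the submitting thread.
            final String[] ranOn = new String[1];
            pool.execute(() -> ranOn[0] = Thread.currentThread().getName());
            return ranOn[0];
        } finally {
            release.countDown();
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        // true: the second task ran on this (the submitting) thread.
        System.out.println(
            Thread.currentThread().getName().equals(threadThatRanSecondTask()));
    }
}
```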
@glassfishrobot Commented |
I agree this would be a nice addition to the spec, but when adding it please remember to take into account that the EE Concurrency Utilities spec intentionally avoids making any requirement for the implementation to use separate thread pools for each managed executor. Some implementations, for example OpenLiberty, have a single shared thread pool for the entire server and instead allow for the configuration of concurrency constraints for managed executors. Any configuration that is added by the spec should take care to be compatible with the various implementations in existence, to ensure that it does not force any onerous changes upon them.
@arjantijms @m-reza-rahman In MicroProfile Concurrency we have introduced a way to configure an injected executor:

```java
@Inject
@ManagedExecutorConfig(maxAsync = 5, maxQueued = 5)
ManagedExecutor exec;
```

Our intent with MicroProfile Concurrency has been to design it in a way that is compatible with EE Concurrency, in the hope that we can eventually fold the MP Concurrency functionality into a future version of the EE Concurrency spec. We encourage your feedback at this stage before the MP Concurrency spec is finalized! You can find the API/doc on GitHub here: https://github.com/eclipse/microprofile-concurrency
@aguibert I haven't looked deeply into the MP proposals at all, but just a very quick comment about the above: doesn't that create a single-use executor? Almost by definition, executors (and their pools) are often defined first in a single place, and then re-used in many other places.
@arjantijms I can help answer that. The MP Concurrency spec doesn't assume a one-to-one relation between executors and thread pools (and the EE Concurrency spec doesn't currently state a requirement like this either). So, even though executors can be created to be used in a more limited scope, it doesn't imply that a thread pool backing it is limited to that usage. The same thread pool could be backing any number of executors and can still be defined in a single place.
The MicroProfile configuration mentioned in earlier comments is something that was being considered but was never actually adopted. A simpler, and more flexible, approach with builders was taken by MicroProfile instead. It is documented here: The use of builders allows the application to compute the fields that it configures, allows the user to set up overrides via MicroProfile Config if they so wish, and fits in really well with the CDI producer pattern for controlling life cycle and sharing with other beans.
In MicroProfile, the disposer ends up being optional for ApplicationScoped because MicroProfile requires the server to shut down instances that were created by the application upon application stop, but is shown here anyway for a more complete example. MicroProfile standardized only a small number of configuration attributes (hopefully the most common and useful) so as to not get too far ahead of Java/Jakarta EE and have the best chance of remaining compatible. These deal with listing the types of context to propagate, clear from the executing thread, or ignore (leave unchanged), optionally capping the number of async tasks running at any given point for an executor, and optionally capping the number of async requests that can queue up waiting to start.
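The fluent-builder style described above can be sketched in a self-contained way. The class names below are hypothetical stand-ins (the real API is org.eclipse.microprofile.context.ManagedExecutor.builder() with maxAsync and maxQueued attributes); this sketch only illustrates how chainable, computed configuration can map onto a capped executor:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in for the MicroProfile fluent builder; names are illustrative.
public class ExecutorBuilderSketch {
    static class Builder {
        private int maxAsync = Runtime.getRuntime().availableProcessors();
        private int maxQueued = Integer.MAX_VALUE;

        Builder maxAsync(int max) { this.maxAsync = max; return this; }
        Builder maxQueued(int max) { this.maxQueued = max; return this; }

        ExecutorService build() {
            // maxAsync caps concurrently running tasks; maxQueued caps the
            // backlog, mirroring the two MicroProfile attributes described above.
            return new ThreadPoolExecutor(maxAsync, maxAsync, 0L, TimeUnit.SECONDS,
                    new LinkedBlockingQueue<>(maxQueued));
        }
    }

    // Small demonstration; under CDI this build() call would typically live
    // in a @Produces method so the executor can be shared with other beans.
    static int demo() {
        ExecutorService exec = new Builder().maxAsync(5).maxQueued(10).build();
        try {
            Future<Integer> result = exec.submit(() -> 21 * 2);
            return result.get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            exec.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 42
    }
}
```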
I'd like to pick this up for Jakarta EE 10 |
Here are some links to the source for the fluent builder pattern that is used by MicroProfile to accomplish this:
It should be noted that whether or not this pattern is adopted in Jakarta EE Concurrency is not a prerequisite for Jakarta EE Concurrency being able to adopt other MicroProfile capabilities. I'm mentioning it here to give an overview of what was done in MicroProfile in case Jakarta would like to reuse it.
Resource definition annotations for ContextService, ManagedExecutorService, ManagedScheduledExecutorService, and ManagedThreadFactory were added to Concurrency 3.0 in the pulls referenced by this issue, including common configuration attributes. If additional configuration attributes are needed, issues can be opened for a follow-on release of the Concurrency specification.
It is currently not possible to define concurrency resources (context services, managed thread factories, managed executor services, managed scheduled executor services, etc.) via standard annotations or XML. This is a potential usability issue, especially in cloud environments.
It may be very helpful to have portable annotations or XML for this, just like we now have for data sources and JMS resources.
Do let me know if anything needs to be explained further - I am happy to help.
Please note that these are purely my personal views and certainly not those of Oracle as a company.