Support for CDI injection of JobOperator #17
@scottkurz how should we cope with this one? It's not only my inexperience with CDI, but also the current state of the project, that makes it somewhat difficult to verify this (once completed), I guess. Are there any examples of how this project or the JobOperator interface could be used? |
I guess the risk is almost zero: a CDI extension can observe all beans implementing JobOperator and register the spec one only if none exists. In the very worst case we could add a qualifier, but it would be a pain for no gain IMHO. The more impactful change would be to move from batch components being injected to batch components being CDI beans, since the rules are not the same, but I guess having a fallback of "just" injecting an unmanaged instance when it is not a CDI bean works well. |
Yeah, @thodorisbais, we are not at a place where this could easily be completed. The most that you could do would be to take one of the compatible implementations and experiment, to see if it suggests any interesting points of discussion. Perhaps we could write a TCK test or some spec doc in a post-EE 9 branch. I wouldn't want to merge it into the jbatch or Open Liberty impls at this point (but I can't speak for any of the other impls). It's a tough place to jump into a new project, though, I recognize. |
There is another option: doing a portable CDI extension handling that for all impls. |
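To make the suggestion concrete: the portable-extension approach mentioned above could look roughly like the following sketch. This is a hypothetical illustration, not code from any implementation; the class name `JobOperatorExtension` is made up, and it assumes the `jakarta.*` namespace and CDI 2.0+ (for `AfterBeanDiscovery.addBean()`). The extension would also need to be registered in `META-INF/services/jakarta.enterprise.inject.spi.Extension`.

```java
import jakarta.batch.operations.JobOperator;
import jakarta.batch.runtime.BatchRuntime;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.event.Observes;
import jakarta.enterprise.inject.spi.AfterBeanDiscovery;
import jakarta.enterprise.inject.spi.Extension;
import jakarta.enterprise.inject.spi.ProcessBean;

// Hypothetical portable extension: registers the runtime's JobOperator
// as a CDI bean only if the deployment does not already define one.
public class JobOperatorExtension implements Extension {

    private boolean jobOperatorBeanFound;

    // Observe every bean whose class implements JobOperator;
    // if one exists, remember it so we don't add a duplicate.
    void onJobOperatorBean(@Observes ProcessBean<? extends JobOperator> pb) {
        jobOperatorBeanFound = true;
    }

    // After bean discovery, fall back to the ServiceLoader-based lookup
    // the spec already defines (BatchRuntime.getJobOperator()).
    void addJobOperator(@Observes AfterBeanDiscovery abd) {
        if (!jobOperatorBeanFound) {
            abd.addBean()
               .types(JobOperator.class, Object.class)
               .scope(ApplicationScoped.class)
               .createWith(ctx -> BatchRuntime.getJobOperator());
        }
    }
}
```

This keeps the behavior additive: deployments (or implementations) that already expose a `JobOperator` bean are untouched, which matches the "register the spec one only if not existing" idea.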
I had more troubles with this API. |
See comments from #42 which I closed as dup of this. |
Just to be clear, this issue would only require an implementation to support @Inject JobOperator. |
I don't think "CDI injection of JobOperator" (in the issue title) is very well-defined. One way to scope the issue might be: "given a CDI-managed bean, allow CDI-managed injection of JobOperator". Ultimately we're going to have to decide what is and isn't required (or is possibly optional) for a Jakarta Batch implementation. But without having better defined all that "CDI + Jakarta Batch integration" would entail, in more detail, it's hard to put forth a proposal for us to decide on. So I don't think we have an answer now to what would be required. |
Well, I think we do need to have the discussion on whether Jakarta Batch is going to stay DI-implementation agnostic or not. Currently the spec requires a DI implementation but does not express an opinion on which one it is. I'm happy to have that conversation somewhere other than this issue if someone can point me to a better place to have it. |
I had many troubles with this API. It is really bad design and a bad implementation.
Moreover, CDI is badly implemented: batch artifacts run in a new thread with a managed context, there are problems injecting an EntityManager, and it looks like there is another context root.
As a workaround I had to use EJB and async calls; a very bad solution, but the only one for my company.
I can imagine some fluent design API for Batch, with annotations as well, but not the XML any more.
So it is worth dropping this API and creating a new, modern API where CDI (JSR 330 and JSR 299) is supported.
I don't say that it should be done in the current version, but I do say that it should be done.
|
I guess we just need to stay aligned on what the spec does: Jakarta EE (and optionally standalone) - because that is what Jakarta/Eclipse can control in the end. This means the specification of CDI support is "a bean with type JobOperator is added, if not already existing, during CDI deployment" - that fully defines that injections are supported. For Spring, Guice, etc., it will land as usual in the Spring, Guice, etc. lands, since there is no spec we can rely on (and that is fine, since the integration for them is already done anyway). If it helps, JCache was designed for CDI with interceptors but works very well on Spring and Guice because the APIs are very close, so it generally works well to do so. Side note @Tibor17: the issues you reference are impl issues (context related), not spec issues, and were solved in CDI 2 with context propagation, I guess. It is not a CDI-specific issue but a question of how a thread local can be propagated; EJB does it differently from CDI, but with other pitfalls - no silver bullets, as usual ;). |
That's true; it was in 2018, when I used WildFly 14 and the server did not have CDI 2.0.
At that time I saw several complaints from developers about the design of this API on Stack Overflow.
They required the Java EE API to fully support the CDI context and qualifiers for the annotations in batch processing.
If the CDI context is fixed with CDI 2.0, then there is one last issue with the design, and it is the existence of XML.
Here is a funny situation: we have a framework, after Java 1.5, with annotations, and yet we have to use XML in batch processing.
We could design fluent objects (since annotations cannot carry objects of this API) instead of XML, and maybe it would require creating more annotations.
Today the user's code starts with BatchRuntime and continues with fluent objects. So let's convert the XML to a fluent-object design and remove the XML.
Cheers
Tibor
|
@Tibor17 The XML vs Java-based configuration is really a different topic than what is being discussed here. Full transparency: Spring Batch has a full Java-based API that is much more popular than our XML-based options these days as well. I do think we need to come to an agreement on whether CDI and JSR-299 support is going to be a requirement for this specification. I know there are members of the community pushing for all of the Jakarta EE specs to go that way. Obviously the batch spec was written in a way to allow Spring Batch to be an implementer, given how much of the spec is based on Spring Batch. If CDI and JSR-299 support were a requirement, we probably wouldn't be able to be one going forward. |
The particular JSR listed the voters from the Spring community, and the vote made it obvious to me that the process was driven by the Spring community.
But it was nonsense to accept such a process, and it was strange that we designed it especially for Spring.
Spring is just another brand with some implementation, uncertified.
Even the company, SpringSource, is a different business. I don't see any reason why Java EE and Jakarta EE should look to Spring and continue with a tailor-made API.
We should look at the individual developer and the way he would solve his tasks, that's all. For me it was a big mistake to use this API; today it may be better with CDI 2.0, but it is still not the way of design we used elsewhere.
|
I'm not going to try to defend technical decisions made nearly a decade ago in comparison with today's options. I am only interested in looking forward in how the spec evolves. My question is a simple one: Will CDI (JSR-299) be a requirement for implementers of this specification to support? |
We (the Jakarta Batch community) haven't decided yet. I think of this as, first, defining what Jakarta Batch integration with CDI would mean in more detail. Then, somewhere along the way or at the end of this, we have to decide: does Jakarta Batch + CDI get defined as a required element of Jakarta Batch, or is it optional? Or maybe it's required to claim Jakarta Platform support but not solely Jakarta Batch support. We definitely need to get some opinion from the Jakarta Platform perspective as well. And I'm not trying to redirect this question to some other forum... the details of what Jakarta Batch + CDI integration would mean need to be developed further in order for any of this to move forward. |
@Tibor17 please stay on topic; we have another issue for having a Java configuration as in BatchEE (I think JBeret has something too?), and that does not require any new annotation, but it is not about JobOperator CDI support. Also please don't use Stack Overflow as a source of truth; it is full of wrong "solutions" - you probably know several of them about Surefire. @mminella CDI will be a requirement if running in a CDI container (= it will not be for Spring apps, and the TCK must stay "grouped" - as TestNG groups or JUnit 5 tags - to enable running the part of the spec you support). But as a Jakarta spec it must integrate under the Jakarta umbrella; otherwise it should quit the Jakarta group. Once again, here we are speaking about literally 3 lines of code (including braces) and something not impacting Spring/Guice, so let's just make it. |
NOTE: If you are new to the thread, skip the previous comment history and read this comment alone. (Not that I'm scolding anyone... just trying to save anyone reading this comment some time, since there were some tangents.)
SPEC/IMPL UPDATES: TRIVIAL
TCK UPDATE REQUIRED - PLATFORM TCK?
A non-trivial aspect, though, is TCK verification, and I think this raises the question: do we need a platform-level TCK test here?
To start, I added a simple test to my branch, which now has a few new tests piled up. My test fits well with the standalone Batch TCK, as I simply verify injecting JobOperator into a batch artifact and doing something simple with it.
Why not inject JobOperator into the JUnit test class and start the job with it? Well, at the moment, the JUnit tests in the TCK don't really require being run in any type of application context. In theory the TCK executor could use ServiceLoader to obtain a JobOperator. Now, I also don't want to add a new component type like a servlet into the Batch standalone TCK space. The standalone Batch TCK, OTOH, may rely on a servlet wrapper when running in the context of an Arquillian extension, but that is more of an implementation detail (of the TCK configuration against a particular implementation).
So, back to the point I started with: I guess one could imagine platform-level tests where a servlet gets injected with JobOperator and can be used to start a batch job. The platform already has a place to hold servlet components.
That all said, this is all fairly trivial stuff. It doesn't really seem like there's enough of interest to justify the work adding a platform test would require at this point. The platform doesn't currently have any Batch-specific tests (well, except for the ones we've said are going to live entirely in the standalone Batch TCK).
Thoughts? @scottmarlow am I convincing you? |
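The "inject JobOperator into a batch artifact" test described above could be shaped roughly like the sketch below. This is a hypothetical illustration, not the actual TCK code: the batchlet class name and exit statuses are invented, and it assumes the `jakarta.*` namespace. The JUnit side would start a job containing this batchlet and assert on the resulting exit status.

```java
import jakarta.batch.api.AbstractBatchlet;
import jakarta.batch.operations.JobOperator;
import jakarta.inject.Inject;
import jakarta.inject.Named;

// Hypothetical TCK-style batch artifact: verifies that a JobOperator
// was injected and is usable from within a running batch job.
@Named("jobOperatorInjectionBatchlet")
public class JobOperatorInjectionBatchlet extends AbstractBatchlet {

    @Inject
    JobOperator jobOperator;

    @Override
    public String process() {
        // Sanity-check the injected instance, e.g. that it can list job
        // names; the exit status reports the result back to the test.
        return (jobOperator != null && jobOperator.getJobNames() != null)
                ? "INJECTION_OK" : "INJECTION_FAILED";
    }
}
```

Keeping the verification inside the batch artifact is what lets this fit the standalone Batch TCK: the JUnit class itself never needs to run in a CDI or application context.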
Allow a CDI bean to do:
@Inject JobOperator jobOperator
(JSR 352 Bug 5075).
Possibly an associated task here is to specify that batch artifacts are considered managed beans, whether they have bean-defining annotations (BDA) or not, though that could be done separately or later.
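From the application's point of view, the feature requested in this issue amounts to something like the following sketch. The bean and job names here (`PayrollJobStarter`, `payroll`) are invented for illustration; the point is only that an ordinary CDI bean gets `JobOperator` injected instead of calling `BatchRuntime.getJobOperator()` itself.

```java
import jakarta.batch.operations.JobOperator;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import java.util.Properties;

// Hypothetical application bean relying on container-managed injection
// of JobOperator, as requested by this issue.
@ApplicationScoped
public class PayrollJobStarter {

    @Inject
    JobOperator jobOperator;

    public long startPayrollJob() {
        Properties params = new Properties();
        params.setProperty("payPeriod", "2024-06");
        // "payroll" would refer to META-INF/batch-jobs/payroll.xml
        // in the deployed archive; start() returns the execution id.
        return jobOperator.start("payroll", params);
    }
}
```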