
Support for CDI injection of JobOperator #17

Closed
scottkurz opened this issue Mar 13, 2020 · 19 comments · Fixed by #183
Comments

@scottkurz (Contributor) commented Mar 13, 2020

Allow a CDI bean to do:

@Inject JobOperator jobOperator

(JSR 352 Bug 5075.)

A possibly associated task here is to specify that batch artifacts are considered managed beans, whether they have bean-defining annotations (BDAs) or not, though that could be done separately or later.
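For illustration, the desired usage might look something like the following sketch. The bean and the job name "payrollJob" are hypothetical, and the jakarta.* namespace assumes the post-EE 9 package rename; this only works once the container is required to satisfy the injection point.

```java
import jakarta.batch.operations.JobOperator;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

import java.util.Properties;

// Hypothetical application bean. Today an app must call
// BatchRuntime.getJobOperator(); this issue asks that the CDI
// container be able to satisfy the injection point instead.
@ApplicationScoped
public class PayrollJobStarter {

    @Inject
    JobOperator jobOperator;

    public long startPayroll() {
        // "payrollJob" is a made-up job XML name for illustration
        return jobOperator.start("payrollJob", new Properties());
    }
}
```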

@teobais (Contributor) commented Apr 4, 2020

@scottkurz how should we approach this one? It's not only my inexperience with CDI, but also the current state of the project, that makes it somewhat difficult to verify this (once completed), I guess.

Are there any examples of how this project or the JobOperator interface could be used?

@rmannibucau

I guess the risk is almost zero: a CDI extension can observe all beans implementing JobOperator and register the spec one only if none exists. In the very worst case we could add a qualifier, but that would be a pain for no gain IMHO.

The more impactful change would be moving from batch components being injected to batch components being CDI beans, since the rules are not the same; but I guess falling back to "just" injecting an unmanaged instance when a component is not a CDI bean works well.

@scottkurz (Contributor, Author)

Yeah, @thodorisbais, we are not at a place where this could easily be completed. The most you could do would be to take one of the compatible implementations and experiment, to see if it suggests any interesting points of discussion. Perhaps we could write a TCK test or some spec doc in a post-EE 9 branch. I wouldn't want to merge it into the jbatch or Open Liberty implementations at this point (but I can't speak for any of the other impls).

It's a tough place to jump into a new project though, I recognize.

@rmannibucau

There is another option: writing a portable CDI extension that handles this for all impls.
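A sketch of what such a portable extension might look like, assuming the CDI 2.0 BeanConfigurator API and the jakarta.* namespace; the class name is hypothetical, and a real extension would also need to be registered in META-INF/services/jakarta.enterprise.inject.spi.Extension:

```java
import jakarta.batch.operations.JobOperator;
import jakarta.batch.runtime.BatchRuntime;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.event.Observes;
import jakarta.enterprise.inject.spi.AfterBeanDiscovery;
import jakarta.enterprise.inject.spi.Extension;
import jakarta.enterprise.inject.spi.ProcessBean;

// Hypothetical portable extension: registers a JobOperator bean
// only if the deployment does not already provide one.
public class JobOperatorExtension implements Extension {

    private boolean jobOperatorBeanFound;

    // Note whether any discovered bean already exposes JobOperator
    void onProcessBean(@Observes ProcessBean<?> pb) {
        if (pb.getBean().getTypes().contains(JobOperator.class)) {
            jobOperatorBeanFound = true;
        }
    }

    // Register the spec-provided one only if none was found
    void addJobOperator(@Observes AfterBeanDiscovery abd) {
        if (!jobOperatorBeanFound) {
            abd.addBean()
               .types(JobOperator.class, Object.class)
               .scope(ApplicationScoped.class)
               .createWith(ctx -> BatchRuntime.getJobOperator());
        }
    }
}
```

Delegating creation to BatchRuntime.getJobOperator() keeps the extension implementation-agnostic, which is what makes the "works for all impls" claim plausible.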

@Tibor17 commented Jun 19, 2020

I had more troubles with this API than just CDI support. I also had problems with transactions, and with injecting an EntityManager from a CDI producer. I had problems handling some events from the stages as well. Basically this API looks like an SE API where you register listeners, etc. It would be nice to rework it.

@scottkurz (Contributor, Author) commented Sep 13, 2020

See the comments on #42, which I closed as a duplicate of this.

@mminella

Just to be clear, this issue would only require an implementation to support @Inject (aka JSR 330, something Spring supports) and not the full CDI feature set defined by JSR 299 (something Spring does not), correct?

@scottkurz (Contributor, Author)

I don't think "CDI injection of JobOperator" (in the issue title) is very well-defined.

One way to scope the issue might be: "given a CDI-managed bean, allow CDI-managed injection of @Inject JobOperator" in which case one might ask "where does that leave Spring if it doesn't support CDI-managed beans?"

Ultimately we're going to have to decide what is and isn't required (or is possibly optional) for a Jakarta Batch implementation. But without first defining in more detail all that "CDI + Jakarta Batch integration" would entail, it's hard to put forth a proposal for us to decide on. So I don't think we have an answer yet to what would be required.

@mminella

Well, I think we do need to have the discussion about whether Jakarta Batch is going to stay DI-implementation agnostic or not. Currently the spec requires a DI implementation but does not express an opinion on which one it is. I'm happy to have that conversation somewhere other than this issue if someone can point me to a better place to have it.

@Tibor17 commented Oct 16, 2020 via email

@rmannibucau

I guess we just need to stay aligned with what the specs cover: Jakarta EE (and optionally standalone), because that is what Jakarta/Eclipse can control in the end. This means the specification of CDI support is: "a bean of type JobOperator is added during CDI deployment if one does not already exist"; that fully defines that injection is supported.
For Spring, Guice, etc., it will land as usual in Spring/Guice land, since there is no spec we can rely on (and that is fine, since the integration for them is already done anyway).

If it helps, JCache was designed for CDI with interceptors but works very well on Spring and Guice, because the APIs are very close; so this approach generally works well.

Side note, @Tibor17: the issues you reference are implementation issues (context related), not spec issues, and were solved in CDI 2 with context propagation, I guess. It is not a CDI-specific issue but a question of how a thread local can be propagated; EJB does it differently from CDI but with other pitfalls. No silver bullets, as usual ;).

@Tibor17 commented Oct 16, 2020 via email

@mminella

@Tibor17 The XML- vs. Java-based configuration debate is really a different topic from what is being discussed here. In full transparency, Spring Batch also has a full Java-based API that is much more popular than our XML-based options these days.

I do think we need to come to an agreement on whether CDI and JSR-299 support is going to be a requirement for this specification. I know there are members of the community pushing for all of the Jakarta EE specs to go that way. Obviously the batch spec was written in a way that allows Spring Batch to be an implementer, given how much of the spec is based on Spring Batch. If CDI and JSR-299 support were a requirement, we probably wouldn't be able to be one going forward.

@Tibor17 commented Oct 16, 2020 via email

@mminella

I'm not going to try to defend technical decisions made nearly a decade ago against today's options. I am only interested in looking forward at how the spec evolves.

My question is a simple one: Will CDI (JSR-299) be a requirement for implementers of this specification to support?

@scottkurz (Contributor, Author)

Will CDI (JSR-299) be a requirement for implementers of this specification to support?

We, (the Jakarta Batch community), haven't decided yet.

I think of this as, first, defining in more detail what Jakarta Batch integration with CDI would mean. Then, somewhere along the way or at the end of that, we have to decide: does Jakarta Batch + CDI get defined as a required element of Jakarta Batch, or is it optional? Or maybe it's required to claim Jakarta Platform support but not standalone Jakarta Batch support. We definitely need to get some opinion from the Jakarta Platform perspective as well.

And I'm not trying to redirect this question to some other forum... the details of what Jakarta Batch + CDI integration would mean need to be developed further in order for any of this to move forward.

@rmannibucau

@Tibor17 please stay on topic; we have another issue about providing a Java configuration as in BatchEE (I think JBeret has something too?), and that does not require any new annotation, but it is not about JobOperator CDI support. Also, please don't use Stack Overflow as a source of truth; it is full of wrong "solutions" - you probably know several of them about Surefire.

@mminella CDI will be a requirement when running in a CDI container (i.e., it will not be for Spring apps, and the TCK must stay "grouped" - as TestNG groups or JUnit 5 tags - to enable running just the part of the spec you support). But as a Jakarta spec it must integrate under the Jakarta umbrella; otherwise it should leave the Jakarta group.

Once again, here we are speaking about literally 3 lines of code (including braces) and something that does not impact Spring/Guice, so let's just do it.

@scottkurz added this to the Batch 2.1 (proposed) - EE 10 milestone on Oct 6, 2021
@scottkurz (Contributor, Author) commented Dec 9, 2021

NOTE: If new to the thread, skip the previous comment history and read this comment alone

(Not that I'm scolding anyone; just trying to save readers of this comment some time, since there were some tangents.)

SPEC/IMPL UPDATES TRIVIAL

TCK UPDATE REQUIRED - PLATFORM TCK?

A non-trivial aspect though is TCK verification, and I think this raises the question: do we need a platform-level TCK test here?

To start, I added a simple test here to my branch which now has a few new tests piled up.

My test fits well within the standalone Batch TCK, as I simply verify injecting JobOperator into a batch artifact and call something like getJobExecution(long).
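As a sketch, such a test artifact might look like the following; the class and @Named names are hypothetical, not the actual code from the branch. It uses the injected JobOperator to look up the job's own execution via the execution id from JobContext:

```java
import jakarta.batch.api.AbstractBatchlet;
import jakarta.batch.operations.JobOperator;
import jakarta.batch.runtime.JobExecution;
import jakarta.batch.runtime.context.JobContext;
import jakarta.inject.Inject;
import jakarta.inject.Named;

// Hypothetical TCK-style artifact: passes only if the injected
// JobOperator can resolve the currently running execution.
@Named("jobOperatorInjectionBatchlet")
public class JobOperatorInjectionBatchlet extends AbstractBatchlet {

    @Inject
    JobOperator jobOperator;

    @Inject
    JobContext jobContext;

    @Override
    public String process() throws Exception {
        // Look up this job's own execution through the injected operator
        JobExecution exec = jobOperator.getJobExecution(jobContext.getExecutionId());
        return exec == null ? "FAILED" : "OK";
    }
}
```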

Why not inject JobOperator into the JUnit test class and start the job with it?

Well, at the moment, the JUnit tests in the TCK don't really need to run in any particular type of application context. In theory the TCK executor could use ServiceLoader to have BatchRuntime.getJobOperator() return a JobOperator that launches the TCK jobs remotely, in another process. I don't want to complicate the TCK by, say, injecting JobOperator into the JUnit test class and using it to start a job.

Now, I also don't want to add a new component type like a servlet into the standalone Batch TCK space. The standalone Batch TCK may, on the other hand, rely on a servlet wrapper when run in the context of an Arquillian extension, but that is more of an implementation detail (of the TCK configuration against a particular implementation).

So, back to the point I started with: I guess one could imagine platform-level tests where a servlet gets injected with JobOperator and can be used to start a batch job. The Platform already has a place to hold servlet components.

That all said, this is fairly trivial stuff. It doesn't really seem like there's enough of interest here to justify the work that adding a platform test would require at this point. The Platform doesn't currently have any Batch-specific tests (well, except for the ones we've said are going to live entirely in the standalone Batch TCK).

Thoughts?

@scottmarlow am I convincing you?

@scottkurz (Contributor, Author)

As mentioned in the PR, we're going to checkpoint at the level of spec/TCK function described in #183, which I just merged, within the M1 TCK release. And we're using #190 to come back to this and follow up for the final 2.1 release.
