[JENKINS-28924] Added support for disabling java specifics. #15
Conversation
Thank you for a pull request! Please check this document for how the Jenkins project handles pull requests.
@@ -61,6 +64,10 @@ public String getTestJob() {
     public String getPatternFile() {
         return patternFile;
     }

    public boolean getNoJava() {
Bean getters for `boolean` conventionally use `is` rather than `get`. Also I would recommend inverting the sense so that the visible property is called `java`, though the field could keep the current sense (`noJava`) to make it easier to retain serialization compatibility.
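The pattern suggested above can be sketched in plain Java; the class name here is hypothetical and the Jenkins-specific machinery is omitted, but it shows the inverted getter over the retained field:

```java
// Sketch (assumed class name): keep the serialized field name `noJava`
// so existing persisted data still loads, but expose the positive-sense
// bean property `java` with a conventional `is` getter.
public class ParallelTestExecutorSketch {
    // Field keeps the old name and sense for serialization compatibility.
    private boolean noJava;

    public ParallelTestExecutorSketch(boolean java) {
        this.noJava = !java;
    }

    // Bean convention: boolean getters use `is`, and the visible
    // property has the positive sense.
    public boolean isJava() {
        return !noJava;
    }
}
```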
Missing support for the Workflow step.

A functional test would be in order too. Minimally: creating a fake test result, running the build step, verifying that the includes/excludes match some expected values. More work, but ideally: an addition to …

Also needs merge with …
95c848e to d2b3ff4 (force-pushed)
I had to take over HMS-JJO's work on this pull request. When I looked at it I realized that we needed some additions. HMS-JJO's original change output exclusions as test case name only, which works well with the discussed VUnit tool for VHDL code, where all test case names are unique (it outputs all test cases under a single class name). But for test tools where you can have multiple classes containing test cases of the same name, that doesn't work very well. So I added the option to output either "testCaseName" only or "className.testCaseName". The latter is usable with the nose test framework for Python tests, for example, where you can skip individual test cases based on that information.

I added unit tests in the form that Latombe introduced in his previous change. Did a rebase against master and force-pushed over HMS-JJO's previous commits.
    }

    static public boolean isEnabled(TestCaseCentricFormat format) {
        return (format != null) && (format != TestCaseCentricFormat.DISABLED);
Probably better to have the `format` always be passed around as non-null, and for this to be an instance method.
I did this construct to try and work around (what I thought was) serialization compatibility issues. But I'm not really a Java developer, and this is my first attempt at contributing to a plugin, so I'm lacking some knowledge in this area.

I thought that adding a variable of type TestCaseCentricFormat to ParallelTestExecutor would make it take on a null value if an old version of ParallelTestExecutor was de-serialized. Then it would be impossible to guarantee that the value is always non-null when used in the ParallelTestExecutor code. So by not using the TestCaseCentricFormat directly but always calling isEnabled(TestCaseCentricFormat format) I could skip null checks everywhere else.

Any tips on how I can make sure format is always non-null when taking de-serialization of an older version into account (or perhaps I've just gotten things wrong and it is not really an issue)?
// HMS-LIG (accidentally logged in with other user)
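A common idiom for this in Java serialization (also honored by Jenkins's XStream persistence) is a `readResolve()` hook that defaults the field after deserialization. This is a sketch with assumed class and field names, not the plugin's actual code:

```java
import java.io.Serializable;

// Sketch (assumed names): default a null enum field after deserialization
// so callers never have to null-check it.
public class ExecutorSettings implements Serializable {
    public enum TestMode { JAVA, TESTCASENAME, CLASSANDTESTCASENAME }

    private TestMode testMode;

    public ExecutorSettings(TestMode mode) {
        this.testMode = mode;
    }

    // Invoked after deserialization; data persisted by an old plugin
    // version lacks the field, leaving it null, so it is fixed up here.
    protected Object readResolve() {
        if (testMode == null) {
            testMode = TestMode.JAVA;
        }
        return this;
    }

    public TestMode getTestMode() {
        return testMode;
    }
}
```

With this in place, every other code path can assume the field is non-null, and `isEnabled`-style static null checks become unnecessary.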
Overall seems right, insofar as I understand it. Made some comments about the style of the API (which is also visible to scripted consumers such as Pipeline, formerly known as Workflow).
Changed from TestCaseCentric terminology to TestMode, amongst other things.
By the way, in #17 I have cleaned up the Docker-based demo (still trying to get a code review so I can merge it). Might have merge conflicts here but likely trivial if so. Do you have any easy examples at hand of freely downloadable tools which could take advantage of this change, so that the demo can include a working non-Java job? I do not think this is a requirement but it would certainly be nice.
Our main use case is the VHDL/VUnit testing as mentioned earlier, which doesn't lend itself to an easy-to-set-up demo. We also have some Python tests using the nose test framework, which supports exclusion via file, so I'm investigating if that will work as an easy example. But while experimenting I found some weird behavior that is probably a bug in my implementation, so I need to investigate that further before this PR is finished.
OK finally got that merged, sorry for delay.
That would be great if it works out.
I've investigated the weird behaviour I saw a bit further. Having a pipeline job that uses parallel test executor with count driven parallelism (count 3) with …

The 2nd run constructs exclusions so that slave one runs 1 test, slave two runs 2 tests and slave …

This behavior is not consistent; if I make a copy of the job it may very well produce exclusions …

This problem is nothing I've introduced but is present in older versions of the plugin as well …

So to make a demo with nosetests for the docker demo I may have to either decrease the count for …

I'm going to be out of office for a couple of weeks so I'll revisit this later and figure out what …
Possibly just need to set some flag on the …
What's the status on this? Is @HMS-LIG still working on this? I've been trying to get this plugin to work with …
I have the intention to finish this PR, but have been swamped with other things (still am). Things kind of fell apart for me when I didn't get Docker to work on Windows. We are still using the code internally; we built the plug-in ourselves (an old version of the plug-in, we haven't rebased since things changed). Getting time to fix this up is still at least two months away for me.
Added a test case centric option to be able to work with test cases instead of Java classes/files. A drop-down box has been added where the user can select whether the output shall be "testCaseName" or "className.testCaseName" in the exclusion/inclusion files. If the selected option is left at the default "Disabled" option, the regular behavior (.java/.class output) is used.
What was previously called test case centric behavior (with default setting DISABLED) is now called test mode instead (with default setting JAVA).
Added job that uses the newly added test mode option to output exclusions/inclusions on test case basis instead of class basis. The job pulls a docker image containing python and the pytest framework that can handle exclusions/inclusion of test cases.
8859b13 to 28ce93b (force-pushed)
I've rebased my previous changes on top of master. I have not made any functional changes in the actual plugin besides that. I've also added a second pipeline job to the docker demo showing how to parallelize on test case names using Python and the pytest framework.

The Jenkinsfile in the repo_pytest uses a docker image from Docker Hub under some user's namespace; is this a valid way of implementing it? If that docker image disappears, the pipeline-pytest job will fail, but your original pipeline job will still build correctly.

Another commit will have to be made to add a version of the parallel test executor plugin that actually supports the test mode my new job uses. When trying the demo now, I build the plugin, launch the demo, and then manually update the parallel test executor plugin by uploading the newly built .hpi file from the target folder.

I've noticed a couple of problems. The test result trend graph is not working very well; a similar thing is reported in https://issues.jenkins-ci.org/browse/JENKINS-18696 and https://issues.jenkins-ci.org/browse/JENKINS-39696. The reference build used does not always contain the complete set of test cases it ran. I.e., a new build, 5, uses build 4 as reference, reporting it to have 80 test cases and creating exclusions/inclusions based on these 80 test cases, when in fact 100 test cases were run (confirmed by looking in the junitResult.xml of build 4). This faulty collection of previous test results shows itself more often the shorter the time between new builds. If I restart Jenkins between each new build, the test result from the reference build is always correct.

If you use exclusion patterns only, the above problem means that each of your test slaves will run the missing test cases; but if you work with inclusion patterns, only the single slave using the exclusion split will run the missing test cases.
+1, would this PR support running mocha tests in a parallel way through nodes?
I don't know how mocha works, but if it produces JUnit result XML files and can handle excluding test cases based on testCaseName or className.testCaseName, it should work. It would be nice if you could download the PR code, test it, and report back here or in #21.
I've been working with the parallel-test-executor-plugin for a few days and have seen the problems that HMS-LIG mentions above (I was using release 1.19 of the plugin, not this branch). Since this is the only place I've seen this problem mentioned in relation to running tests in parallel, I'm posting this here.

The problem with the trend graph appeared to be that it only counted what was archived the first time the junit archiver was called, so there was a race condition when run in parallel. But that's just a minor part of it: this also led the parallel-test-executor-plugin to sometimes create incorrect splits (again depending on the race), giving me splits that had only the dummy included and excluded, which ran everything in one split, resulting in one or more of the parallel splits failing because it didn't run anything (setting the junit plugin to accept empty results inexplicably didn't fix it).

I just got the fix in release 1.20 of the junit plugin - including the fix for https://issues.jenkins-ci.org/browse/JENKINS-39696 - and that fixed everything, including my problem with it occasionally failing. It's kind of puzzling to me how this has been working for others up to now.
IMHO the junit archiver in its current state should not be called multiple times within a pipeline. We're using a gathering stash/unstash step to collect results from parallel executions and make a single archive call.
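The gathering pattern described above might look roughly like the following Jenkinsfile sketch. This is an illustration under assumptions, not code from this PR: `splitTests count(3)` is the plugin's count-driven split step as documented in its README, while the stash names, the `reports/*.xml` path, and the test-running step are placeholders.

```groovy
// Hypothetical sketch: each parallel branch stashes its JUnit XML, and a
// single junit call at the end archives everything once, avoiding the
// race from invoking the archiver multiple times in one build.
def splits = splitTests count(3)
def branches = [:]
for (int i = 0; i < splits.size(); i++) {
    def index = i
    branches["split${index}"] = {
        node {
            // ... check out and run the tests selected by splits[index],
            // writing JUnit XML to reports/*.xml (placeholder path) ...
            stash name: "results${index}", includes: 'reports/*.xml'
        }
    }
}
parallel branches
node {
    for (int i = 0; i < splits.size(); i++) {
        unstash "results${i}"
    }
    junit 'reports/*.xml' // single archiver call over the gathered results
}
```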
Works fine AFAIK using current plugin versions.
public enum TestMode {
    JAVA ("Parallelize on test classes (Java)"),
    TESTCASENAME ("Parallelize on test case name (Generic)"),
    CLASSANDTESTCASENAME ("Parallelize on class and test case name (Generic)");
Seems like this is on the right track, but should be refactored into an `AbstractDescribableImpl` whose `Descriptor`s with `@Symbol`s are an `ExtensionPoint`, so that support for Java is not baked into anything except the Java implementation, and support for other languages could be added just by introducing new modes, potentially from other plugins. IIUC the critical bit would be something like

    public abstract Map<String, TestEntity> getData(ClassResult result);
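The direction suggested above can be illustrated with the Jenkins types abstracted away. In this sketch, `TestModeSketch`, its `key` method, and the two implementations are hypothetical names, not the proposed API; the point is that the Java-specific keying lives only in the Java implementation, and a new mode is just another subclass:

```java
// Hypothetical sketch of the suggested extension point: an abstract mode
// whose subclasses decide how test results are keyed for splitting, so
// nothing outside the Java implementation is Java-specific.
public abstract class TestModeSketch {
    // Produce the split key for one (className, testCaseName) pair.
    public abstract String key(String className, String testCaseName);

    // Historical behavior: split on test classes.
    public static class JavaClasses extends TestModeSketch {
        @Override public String key(String className, String testCaseName) {
            return className;
        }
    }

    // Generic behavior: split on individual test cases.
    public static class ClassAndTestCaseNames extends TestModeSketch {
        @Override public String key(String className, String testCaseName) {
            return className + "." + testCaseName;
        }
    }
}
```

In the real plugin these subclasses would each carry a `Descriptor` with an `@Symbol`, so other plugins could contribute new modes without touching this one.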
With the advent of …
Hmm, I cannot say I am actively maintaining this plugin; not sure if anyone is. |
JENKINS-28924
Added an option to disable Java specifics and allow for parallelized testing of code in other languages.