If you are maintaining (or creating) a plugin and wish its features to work smoothly with Pipeline, there are a number of special considerations.
Several common types of plugin features (`@Extension`s) can be invoked from a Pipeline script without any special plugin dependencies so long as you use newer Jenkins core APIs.
Then there is the “metastep” in Pipeline (`step`, `checkout`, `wrap`), which loads the extension by class name and calls it. There are several considerations common to the various metasteps.
First, make sure the baseline Jenkins version in your `pom.xml` is sufficiently new; suggested baselines are given for each metastep below. Moving to a newer core introduces some new API methods, and deprecates some old ones.
If you are nervous about making your plugin depend on a recent Jenkins version, remember that you can always create a branch from your previous release (setting the version to `x.y.1-SNAPSHOT`) that works with older versions of Jenkins and `git cherry-pick -x` trunk changes into it as needed; or merge from one branch to another if that is easier. (`mvn -B release:prepare release:perform` works fine on a branch and knows to increment just the last version component.)
Replace `AbstractBuild.getProject` with `Run.getParent`. `BuildListener` has also been replaced with `TaskListener` in new method overloads. If you need the `Node` where the build is running in order to replace `getBuiltOn`, you can use `FilePath.toComputer`. `TransientProjectActionFactory` can be replaced by `TransientActionFactory<Job>`.
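For instance, a factory that used to contribute an action to every project can be generalized roughly as follows (a sketch only; `MyActionFactory` and `MyProjectAction` are hypothetical names, not real APIs):

```java
import hudson.Extension;
import hudson.model.Action;
import hudson.model.Job;
import java.util.Collection;
import java.util.Collections;
import jenkins.model.TransientActionFactory;

@Extension
public class MyActionFactory extends TransientActionFactory<Job> {
    @Override
    public Class<Job> type() {
        return Job.class; // applies to all job types, including Pipeline jobs
    }

    @Override
    public Collection<? extends Action> createFor(Job target) {
        // MyProjectAction is a hypothetical Action implementation:
        return Collections.singleton(new MyProjectAction(target));
    }
}
```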
There is no equivalent to `AbstractBuild.getBuildVariables()` for `WorkflowRun` (any Groovy local variables are not accessible as such). Also, `WorkflowRun.getEnvironment(TaskListener)` is implemented, but only yields the initial build environment, irrespective of `withEnv` blocks and the like. (To get the contextual environment in a `Step`, you can inject `EnvVars` using `@StepContextParameter`; pending JENKINS-29144 there is no equivalent for a `SimpleBuildStep`. A `SimpleBuildWrapper` does have access to an `initialEnvironment` if required.)
Anyway, code run from Pipeline should take any configuration values as literal strings and make no attempt to perform variable substitution (including via the `token-macro` plugin), since the script author would be using Groovy facilities (`"like ${this}"`) for any desired dynamic behavior. To have a single code fragment support both Pipeline and traditional builds, you can use idioms such as the following:
```java
private final String location;

public String getLocation() {
    return location;
}

@DataBoundSetter
public void setLocation(String location) {
    this.location = location;
}

private String actualLocation(Run<?,?> build, TaskListener listener)
        throws IOException, InterruptedException {
    if (build instanceof AbstractBuild) {
        // Freestyle build: expand environment and build variables.
        EnvVars env = build.getEnvironment(listener);
        env.overrideAll(((AbstractBuild) build).getBuildVariables());
        return env.expand(location);
    } else {
        // Pipeline build: take the configured value literally.
        return location;
    }
}
```
It is a good idea to replace a lengthy `@DataBoundConstructor` with a short one taking just the truly mandatory parameters (such as a server location). For all optional parameters, create a public setter marked with `@DataBoundSetter` (with any non-null default value set in the constructor or field initializer). This allows most parameters to be left at their default values in a Pipeline script, not to mention simplifying ongoing code maintenance because it is much easier to introduce new options this way. For Java-level compatibility, leave any previous constructors in place, but mark them `@Deprecated` and remove `@DataBoundConstructor` from them (there can be only one).
To ensure Snippet Generator enumerates only those options the user has actually customized from their form defaults, ensure that Jelly `default` attributes match the property defaults as seen from the getter. For a cleaner XStream serial form in freestyle projects, it is best for the default value to also be represented as a null in the Java field. So, for example, if you want a textual property which can sensibly default to blank, your configuration form would look like:
```xml
<f:entry field="stuff" title="${%Stuff}">
    <f:textbox/>
</f:entry>
```
and your `Describable` should use:

```java
private @CheckForNull String stuff;

public @CheckForNull String getStuff() {
    return stuff;
}

@DataBoundSetter
public void setStuff(@CheckForNull String stuff) {
    this.stuff = Util.fixEmpty(stuff); // store null when blank
}
```
If you want a nonblank default, it is a little more complicated. If you do not care about XStream hygiene, for example because the `Describable` is a Pipeline `Step` (or is only being used as part of one):

```xml
<f:entry field="stuff" title="${%Stuff}">
    <f:textbox default="${descriptor.defaultStuff}"/>
</f:entry>
```
```java
private @Nonnull String stuff = DescriptorImpl.defaultStuff;

public @Nonnull String getStuff() {
    return stuff;
}

@DataBoundSetter
public void setStuff(@Nonnull String stuff) {
    this.stuff = stuff;
}

@Extension
public static class DescriptorImpl extends Descriptor<Whatever> {
    public static final String defaultStuff = "junk";
    // …
}
```
(The `Descriptor` is the most convenient place to put a constant for use from a Jelly view: `descriptor` is always defined even if `instance` is null, and Jelly/JEXL allows a `static` field to be loaded using instance-field notation. From a Groovy view you could use any syntax supported by Java to refer to a constant, but Jelly in Jenkins is weaker: `getStatic` will not work on classes defined in plugins.)
To make sure the field is omitted from the XStream form when unmodified, you can use the same `Descriptor` and configuration form but null out the default:
```java
private @CheckForNull String stuff;

public @Nonnull String getStuff() {
    return stuff == null ? DescriptorImpl.defaultStuff : stuff;
}

@DataBoundSetter
public void setStuff(@Nonnull String stuff) {
    this.stuff = stuff.equals(DescriptorImpl.defaultStuff) ? null : stuff;
}
```
None of these considerations apply to mandatory parameters with no default, which should be requested in the `@DataBoundConstructor` and have a simple getter. (You could still have a `default` in the configuration form as a hint to new users, as a complement to a full description in `help-stuff.html`, but the value chosen will always be saved.)
See the user documentation for background. The `checkout` metastep uses an `SCM`. As the author of an SCM plugin, there are some changes you should make to ensure your plugin can be used from Pipeline builds. You can use the `mercurial-plugin` as a relatively straightforward code example.
Make sure your Jenkins baseline is at least 1.568 (or 1.580.1, the next LTS). Check your plugin for compilation warnings relating to `hudson.scm.*` classes to see outstanding changes you need to make.
Most importantly, various methods in `SCM` which formerly took an `AbstractBuild` now take a more generic `Run` (i.e., potentially a Pipeline build) plus a `FilePath` (i.e., a workspace). Use the specified workspace rather than the former `build.getWorkspace()`, which only worked for traditional projects with a single workspace. Similarly, some methods formerly taking `AbstractProject` now take the more generic `Job`. Be sure to use `@Override` wherever possible to make sure you are using the right overloads.
Note that `changelogFile` may now be null in `checkout`; if so, just skip changelog generation. `checkout` also now takes an `SCMRevisionState`, so you can know what to compare against without referring back to the build.
`SCMDescriptor.isApplicable` should be switched to the `Job` overload; typically you will unconditionally return `true`. You should override the new `getKey`, which allows a Pipeline job to match up checkouts from build to build so it knows how to look for changes. You may also override the new `guessBrowser`, so that scripts do not need to specify the changelog browser to display.
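Putting the above together, a migrated SCM might look roughly like this (a sketch only; `MySCM` and its field are hypothetical, and most of the real work, such as `createChangeLogParser`, is elided):

```java
import hudson.FilePath;
import hudson.Launcher;
import hudson.model.Job;
import hudson.model.Run;
import hudson.model.TaskListener;
import hudson.scm.SCM;
import hudson.scm.SCMDescriptor;
import hudson.scm.SCMRevisionState;
import java.io.File;
import java.io.IOException;

public class MySCM extends SCM {
    private final String repositoryUrl; // hypothetical configuration

    public MySCM(String repositoryUrl) {
        this.repositoryUrl = repositoryUrl;
    }

    @Override
    public String getKey() {
        // A stable key lets a Pipeline job match up checkouts from build to build:
        return "my-scm " + repositoryUrl;
    }

    @Override
    public void checkout(Run<?,?> build, Launcher launcher, FilePath workspace,
            TaskListener listener, File changelogFile, SCMRevisionState baseline)
            throws IOException, InterruptedException {
        // … check out into the supplied workspace, not build.getWorkspace() …
        if (changelogFile != null) { // may be null when called from a Pipeline
            // … write the changelog, comparing against baseline if non-null …
        }
    }

    // … createChangeLogParser and other required overrides elided …

    public static class DescriptorImpl extends SCMDescriptor<MySCM> {
        public DescriptorImpl() {
            super(null); // no repository browser in this sketch
        }
        @Override
        public boolean isApplicable(Job project) {
            return true; // the new Job overload, usable by Pipeline jobs
        }
    }
}
```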
If you have a commit trigger, generally an `UnprotectedRootAction` which schedules builds, it will need a few changes. Use `SCMTriggerItem` rather than the deprecated `SCMedItem`; use `SCMTriggerItem.SCMTriggerItems.asSCMTriggerItem` rather than checking `instanceof`. Its `getSCMs` method can be used to enumerate configured SCMs, which in the case of a Pipeline job will be those run in the last build. Use its `getSCMTrigger` method to look for a configured trigger (for example to check `isIgnorePostCommitHooks`).
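A post-commit hook handler might then use these idioms roughly as follows (a hedged sketch; `triggerMatchingJobs` is a hypothetical helper, and the surrounding `UnprotectedRootAction` boilerplate is omitted):

```java
import hudson.model.Item;
import hudson.scm.SCM;
import hudson.triggers.SCMTrigger;
import jenkins.model.Jenkins;
import jenkins.triggers.SCMTriggerItem;

static void triggerMatchingJobs(String repositoryUrl) {
    for (Item item : Jenkins.getInstance().getAllItems()) {
        // Works for freestyle and Pipeline jobs alike, without instanceof checks:
        SCMTriggerItem t = SCMTriggerItem.SCMTriggerItems.asSCMTriggerItem(item);
        if (t == null) {
            continue; // not something that can be triggered by SCM changes
        }
        SCMTrigger trigger = t.getSCMTrigger();
        if (trigger == null || trigger.isIgnorePostCommitHooks()) {
            continue; // no trigger configured, or hooks explicitly ignored
        }
        for (SCM scm : t.getSCMs()) {
            // … if scm matches repositoryUrl, schedule polling or a build …
        }
    }
}
```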
Ideally you will already be integrated with the `scm-api` plugin and implementing `SCMSource`; if not, now is a good time to try it. In the future, Pipeline may take advantage of this API to support automatic creation of subprojects for each detected branch.
If you want to provide a smoother experience for Pipeline users than is possible via the generic `scm` step, you can add a (perhaps optional) dependency on `workflow-scm-step` to your plugin. Define an `SCMStep` using `SCMStepDescriptor` and you can define a friendly, script-oriented syntax. You still need to make the aforementioned changes, since at the end you are just preconfiguring an `SCM`.
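A minimal sketch of such a step, assuming a dependency on `workflow-scm-step` (`MySCMStep` and `MySCM` are hypothetical names):

```java
import hudson.Extension;
import hudson.scm.SCM;
import org.jenkinsci.plugins.workflow.steps.scm.*;
import org.kohsuke.stapler.DataBoundConstructor;

public class MySCMStep extends SCMStep {
    private final String repositoryUrl;

    @DataBoundConstructor
    public MySCMStep(String repositoryUrl) {
        this.repositoryUrl = repositoryUrl;
    }

    @Override
    protected SCM createSCM() {
        // At the end you are just preconfiguring an SCM:
        return new MySCM(repositoryUrl);
    }

    @Extension
    public static class DescriptorImpl extends SCMStepDescriptor {
        @Override
        public String getFunctionName() {
            return "mySCM"; // friendly script syntax: mySCM 'some-url'
        }
        @Override
        public String getDisplayName() {
            return "My SCM";
        }
    }
}
```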
See the user documentation for background. The metastep here is `step`. To add support for use of a `Builder` or `Publisher` from a Pipeline script, depend on Jenkins 1.577+, typically 1.580.1. Then implement `SimpleBuildStep`, following the guidelines in its Javadoc. Also prefer `@DataBoundSetter`s to a sprawling `@DataBoundConstructor`, as discussed above.
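A minimal `SimpleBuildStep` sketch (`MyBuilder` is a hypothetical name, and the real logic is elided):

```java
import hudson.Extension;
import hudson.FilePath;
import hudson.Launcher;
import hudson.model.AbstractProject;
import hudson.model.Run;
import hudson.model.TaskListener;
import hudson.tasks.BuildStepDescriptor;
import hudson.tasks.Builder;
import java.io.IOException;
import jenkins.tasks.SimpleBuildStep;
import org.kohsuke.stapler.DataBoundConstructor;

public class MyBuilder extends Builder implements SimpleBuildStep {
    @DataBoundConstructor
    public MyBuilder() {}

    @Override
    public void perform(Run<?,?> run, FilePath workspace, Launcher launcher,
            TaskListener listener) throws InterruptedException, IOException {
        // Called for freestyle (AbstractBuild) and Pipeline (WorkflowRun) alike;
        // note that a workspace is always supplied here.
        listener.getLogger().println("Running in " + workspace.getRemote());
    }

    @Extension
    public static class DescriptorImpl extends BuildStepDescriptor<Builder> {
        @Override
        public boolean isApplicable(Class<? extends AbstractProject> jobType) {
            return true;
        }
        @Override
        public String getDisplayName() {
            return "My builder";
        }
    }
}
```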
Note that a `SimpleBuildStep` is designed to work also in a freestyle project, and thus assumes that a `FilePath` workspace is available (as well as some associated services, like a `Launcher`). That is always true in a freestyle build, but is a potential limitation for use from a Pipeline build. For example, you might legitimately want to take some action outside the context of any workspace:
```groovy
node('win64') {
    bat 'make all'
    archive 'myapp.exe'
}
input 'Ready to tell the world?' // could pause indefinitely, do not tie up a slave
step([$class: 'FunkyNotificationBuilder', artifact: 'myapp.exe']) // ← FAILS!
```
Even if `FunkyNotificationBuilder` implements `SimpleBuildStep`, the above will fail, because the workspace required by `SimpleBuildStep.perform` is missing. You could grab an arbitrary workspace just to run the builder:
```groovy
node('win64') {
    bat 'make all'
    archive 'myapp.exe'
}
input 'Ready to tell the world?'
node {
    step([$class: 'FunkyNotificationBuilder', artifact: 'myapp.exe']) // OK
}
```
but if the workspace is being ignored anyway (in this case because `FunkyNotificationBuilder` only cares about artifacts that have already been archived), it may be better to just write a custom step (described below).
For code which genuinely has to run after the build completes, there is `RunListener`. If the behavior of this hook needs to be customizable at the job level, the usual technique would be to define a `JobProperty`. (One distinction from freestyle projects is that in the case of Pipeline there is no way to introspect the “list of build steps”, “list of publishers”, or “list of build wrappers”, so any decisions based on such metadata are impossible.) In most other cases, you just want some code to run after some portion of the build completes, which is typically handled with a `Publisher` if you wish to share a code base with freestyle projects.
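The `RunListener` route mentioned above can be sketched as follows (`MyReporter` and `MyJobProperty` are hypothetical names):

```java
import hudson.Extension;
import hudson.model.Run;
import hudson.model.TaskListener;
import hudson.model.listeners.RunListener;

@Extension
public class MyReporter extends RunListener<Run<?,?>> {
    @Override
    public void onCompleted(Run<?,?> run, TaskListener listener) {
        // Fires for every kind of Run, including Pipeline builds. For per-job
        // customization, consult a JobProperty, e.g.:
        // MyJobProperty p = run.getParent().getProperty(MyJobProperty.class);
        listener.getLogger().println("Build completed: " + run.getFullDisplayName());
    }
}
```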
For regular `Publisher`s, which are run as part of the build, a Pipeline script would use the `step` metastep. There are two subtypes:

- `Recorder`s generally should be placed inline with other build steps in whatever order makes sense.
- `Notifier`s can be placed in a `finally` block, or you can use the `catchError` step. This document goes into depth.
Here the metastep is `wrap`. To add support for a `BuildWrapper`, depend on Jenkins 1.599+ (typically 1.609.1), and implement `SimpleBuildWrapper`, following the guidelines in its Javadoc. Like `SimpleBuildStep`, wrappers written this way always require a workspace; if that would be constricting, consider writing a custom step instead.
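A minimal sketch, assuming Jenkins 1.609.1+ (`MyWrapper` is a hypothetical name; its `Descriptor` and configuration form are omitted):

```java
import hudson.EnvVars;
import hudson.FilePath;
import hudson.Launcher;
import hudson.model.Run;
import hudson.model.TaskListener;
import java.io.IOException;
import jenkins.tasks.SimpleBuildWrapper;

public class MyWrapper extends SimpleBuildWrapper {
    @Override
    public void setUp(Context context, Run<?,?> build, FilePath workspace,
            Launcher launcher, TaskListener listener, EnvVars initialEnvironment)
            throws IOException, InterruptedException {
        // Contribute an environment variable to the wrapped block; note the
        // initialEnvironment parameter mentioned earlier is available here.
        context.env("MY_VAR", initialEnvironment.expand("$BUILD_TAG"));
        // Optionally register a Disposer on the context for cleanup at block end.
    }
}
```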
Replace `Trigger<AbstractProject>` with `Trigger<X>`, where `X` is `Job` or perhaps `ParameterizedJob` or `SCMTriggerItem`, and implement `TriggerDescriptor.isApplicable` accordingly.
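For example (a sketch; `MyTrigger` is hypothetical and the trigger's actual behavior is omitted):

```java
import hudson.Extension;
import hudson.model.Item;
import hudson.triggers.Trigger;
import hudson.triggers.TriggerDescriptor;
import jenkins.model.ParameterizedJobMixIn.ParameterizedJob;

public class MyTrigger extends Trigger<ParameterizedJob> {
    // … trigger behavior (e.g. run()) elided …

    @Extension
    public static class DescriptorImpl extends TriggerDescriptor {
        @Override
        public boolean isApplicable(Item item) {
            // Rather than item instanceof AbstractProject:
            return item instanceof ParameterizedJob;
        }
        @Override
        public String getDisplayName() {
            return "My trigger";
        }
    }
}
```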
Use `EnvironmentContributor` rather than `RunListener.setUpEnvironment`.
Cloud implementations do not necessarily need any special integration, but are encouraged to use `OnceRetentionStrategy` from `durable-task` to allow Pipeline builds to survive restarts.
Plugins can also implement custom Pipeline steps with specialized behavior. See here for more.
Traditional Jenkins `Job`s are defined in a fairly deep type hierarchy: `FreestyleProject` → `Project` → `AbstractProject` → `Job` → `AbstractItem` → `Item` (as well as paired `Run` types: `FreestyleBuild`, etc.).
In older versions of Jenkins, much of the interesting implementation was in `AbstractProject` (or `AbstractBuild`), which was packed full of assorted features not present in `Job` (or `Run`). Some of these features were also needed by Pipeline, like having a programmatic way to start a build (optionally with parameters), lazy-load build records, or integrate with SCM triggers. Others were not applicable to Pipeline, like declaring a single SCM and a single workspace per build, being tied to a specific label, or running a linear sequence of build steps within the scope of a single Java method call.
`WorkflowJob` directly extends `Job` since it cannot act like an `AbstractProject`. Therefore some refactoring was needed to make the relevant features available to other `Job` types without code or API duplication. Rather than introduce yet another level into the type hierarchy (and freeze for all time the decision about which features are more “generic” than others), mixins were introduced. Each encapsulates a set of related functionality originally tied to `AbstractProject` but now also usable from `WorkflowJob` (and potentially other future `Job` types).
- `ParameterizedJobMixIn` allows a job to be scheduled to the queue (the older `BuildableItem` was inadequate), taking care also of build parameters and the REST build trigger.
- `SCMTriggerItem` integrates with `SCMTrigger`, including a definition of which SCM or SCMs a job is using, and how it should perform polling. It also allows various plugins to interoperate with the Multiple SCMs plugin without needing an explicit dependency. Supersedes and deprecates `SCMedItem`.
- `LazyBuildMixIn` handles the plumbing of lazy-loading build records (a system introduced in Jenkins 1.485).
For Pipeline compatibility, plugins formerly referring to `AbstractProject`/`AbstractBuild` will generally need to start dealing with `Job`/`Run`, but may also need to refer to `ParameterizedJobMixIn` and/or `SCMTriggerItem`. (`LazyBuildMixIn` is rarely needed from outside code, as the methods defined in `Job`/`Run` suffice for typical purposes.)
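For example, scheduling a build of any parameterized job type (freestyle or Pipeline) without referring to `AbstractProject` can be sketched as follows (`BuildScheduler` is a hypothetical helper class; the anonymous-subclass pattern binds the mixin to a particular job):

```java
import hudson.model.Job;
import jenkins.model.ParameterizedJobMixIn;

class BuildScheduler {
    @SuppressWarnings({"rawtypes", "unchecked"})
    static void scheduleNow(final Job<?,?> job) {
        new ParameterizedJobMixIn() {
            @Override
            protected Job asJob() {
                return job;
            }
        }.scheduleBuild2(0); // quiet period 0; pass Actions for parameters, causes, etc.
    }
}
```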
Future improvements to Pipeline may well require yet more implementation code to be extracted from `AbstractProject`/`AbstractBuild`. The main constraint is the need to retain binary compatibility.