---
title: YAML schema
ms.custom: seodec18
description: An overview of all YAML features.
ms.prod: devops
ms.technology: devops-cicd
ms.assetid: 2c586863-078f-4cfe-8158-167080cd08c1
ms.manager: jillfra
ms.author: macoope
author: vtbassmatt
ms.reviewer: macoope
ms.date: 08/29/2019
monikerRange: '>= azure-devops-2019'
---

YAML schema reference

Azure Pipelines

Here's a detailed reference guide to Azure Pipelines YAML pipelines, including a catalog of all supported YAML capabilities and the available options.

::: moniker range="azure-devops"

The best way to get started with YAML pipelines is through the quickstart guide. After that, to learn how to configure your YAML pipeline the way you need it to work, see conceptual topics such as Build variables and Jobs.

::: moniker-end

::: moniker range="< azure-devops"

To learn how to configure your YAML pipeline the way you need it to work, see conceptual topics such as Build variables and Jobs.

::: moniker-end

Pipeline structure

::: moniker range="> azure-devops-2019"

Pipelines are made of one or more stages describing a CI/CD process. Stages are the major divisions in a pipeline: "build this app", "run these tests", and "deploy to pre-production" are good examples of stages.

Stages consist of one or more jobs, which are units of work assignable to a particular machine. Both stages and jobs may be arranged into dependency graphs: "run this stage before that one" or "this job depends on the output of that job".

Jobs consist of a linear series of steps. Steps can be tasks, scripts, or references to external templates.

This hierarchy is reflected in the structure of the YAML file.

  • Pipeline
    • Stage A
      • Job 1
        • Step 1.1
        • Step 1.2
        • ...
      • Job 2
        • Step 2.1
        • Step 2.2
        • ...
    • Stage B
      • ...

For simpler pipelines, not all of these levels are required. For example, in a single-job build, you can omit the containers for "stages" and "jobs" since there are only steps. Also, many options shown here are optional and have good defaults, so your YAML definitions are unlikely to include all of them.
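As a sketch, the full hierarchy maps onto YAML like this (the stage, job, and step names here are placeholders):

```yaml
stages:
- stage: A
  jobs:
  - job: One
    steps:
    - script: echo Step 1.1
    - script: echo Step 1.2
  - job: Two
    steps:
    - script: echo Step 2.1
- stage: B
  jobs:
  - job: Three
    steps:
    - script: echo Step 3.1
```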

::: moniker-end

::: moniker range="azure-devops-2019"

Pipelines are made of one or more jobs describing a CI/CD process. Jobs are units of work assignable to a particular machine. Jobs may be arranged into dependency graphs, for example: "this job depends on the output of that job".

Jobs consist of a linear series of steps. Steps can be tasks, scripts, or references to external templates.

This hierarchy is reflected in the structure of the YAML file.

  • Pipeline
    • Job 1
      • Step 1.1
      • Step 1.2
      • ...
    • Job 2
      • Step 2.1
      • Step 2.2
      • ...

For single-job pipelines, you can omit the container "jobs" since there are only steps. Also, many options shown here are optional and have good defaults, so your YAML definitions are unlikely to include all of them.

::: moniker-end

Conventions

Conventions used in this topic:

  • To the left of : are literal keywords used in pipeline definitions.
  • To the right of : are data types. These can be primitives like string or references to rich structures defined elsewhere in this topic.
  • [ datatype ] indicates an array of the mentioned data type. For instance, [ string ] is an array of strings.
  • { datatype : datatype } indicates a mapping of one data type to another. For instance, { string: string } is a mapping of strings to strings.
  • | indicates there are multiple data types available for the keyword. For instance, job | templateReference means either a job definition or a template reference are allowed.
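For example, applying these conventions, the schema fragment `variables: { string: string }` corresponds to concrete YAML such as the following (the names and values are placeholders):

```yaml
variables:
  configuration: debug
  platform: x64
```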

YAML basics

This document covers the schema of an Azure Pipelines YAML file. To learn the basics of YAML, see Learn YAML in Y Minutes.

[!NOTE] Azure Pipelines doesn't support all features of YAML, such as anchors, complex keys, and sets.
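For instance, a standard YAML anchor/alias construct like the following is valid YAML but is not accepted by Azure Pipelines (a hypothetical fragment, shown only to illustrate the unsupported syntax):

```yaml
# Standard YAML anchors (&) and aliases (*) are not supported by Azure Pipelines
defaults: &defaults
  configuration: debug
job1:
  <<: *defaults   # merge keys are likewise unsupported
```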

Pipeline

Schema

::: moniker range="> azure-devops-2019"

name: string  # build numbering format
resources:
  containers: [ containerResource ]
  repositories: [ repositoryResource ]
variables: { string: string } | [ variable | templateReference ]
trigger: trigger
pr: pr
stages: [ stage | templateReference ]

If you have a single stage, you can omit stages and directly specify jobs:

# ... other pipeline-level keywords
jobs: [ job | templateReference ]

If you have a single stage and a single job, you can omit those keywords and directly specify steps:

# ... other pipeline-level keywords
steps: [ script | bash | pwsh | powershell | checkout | task | templateReference ]

::: moniker-end

::: moniker range="azure-devops-2019"

name: string  # build numbering format
resources:
  containers: [ containerResource ]
  repositories: [ repositoryResource ]
variables: { string: string } | [ variable | templateReference ]
trigger: trigger
pr: pr
jobs: [ job | templateReference ]

If you have a single job, you can omit the jobs keyword and directly specify steps:

# ... other pipeline-level keywords
steps: [ script | bash | pwsh | powershell | checkout | task | templateReference ]

::: moniker-end

Example

name: $(Date:yyyyMMdd)$(Rev:.r)
variables:
  var1: value1
jobs:
- job: One
  steps:
  - script: echo First step!

Learn more about multi-job pipelines, using containers and repositories in pipelines, triggers, variables, and build number formats.

::: moniker range="> azure-devops-2019"

Stage

A stage is a collection of related jobs. By default, stages run sequentially, starting only after the stage ahead of them has completed.

You can manually control when a stage should run using approval checks. This is commonly used to control deployments to production environments. Checks are a mechanism available to the resource owner to control if and when a stage in a pipeline can consume a resource. As an owner of a resource, such as an environment, you can define checks that must be satisfied before a stage consuming that resource can start.

Currently, manual approval checks are supported on environments. For more information, see Approvals.

Schema

stages:
- stage: string  # name of the stage, A-Z, a-z, 0-9, and underscore
  displayName: string  # friendly name to display in the UI
  dependsOn: string | [ string ]
  condition: string
  variables: { string: string } | [ variable | variableReference ] 
  jobs: [ job | templateReference]

Example

This example will run three stages, one after another. The middle stage will run two jobs in parallel.

stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    - script: echo Building!
- stage: Test
  jobs:
  - job: TestOnWindows
    steps:
    - script: echo Testing on Windows!
  - job: TestOnLinux
    steps:
    - script: echo Testing on Linux!
- stage: Deploy
  jobs:
  - job: Deploy
    steps:
    - script: echo Deploying the code!

This example will run two stages in parallel. (For brevity, the jobs and steps have been omitted.)

stages:
- stage: BuildWin
  displayName: Build for Windows
- stage: BuildMac
  displayName: Build for Mac
  dependsOn: [] # by specifying an empty array, this stage doesn't depend on the stage before it

Learn more about stages, conditions, and variables.

::: moniker-end

Job

A job is a collection of steps to be run by an agent or on the server. Jobs can be run conditionally, and they may depend on earlier jobs.

Schema

jobs:
- job: string  # name of the job, A-Z, a-z, 0-9, and underscore
  displayName: string  # friendly name to display in the UI
  dependsOn: string | [ string ]
  condition: string
  strategy:
    parallel: # parallel strategy, see below
    matrix: # matrix strategy, see below
    maxParallel: number # maximum number of matrix jobs to run simultaneously
  continueOnError: boolean  # 'true' if future jobs should run even if this job fails; defaults to 'false'
  pool: pool # see pool schema
  workspace:
    clean: outputs | resources | all # what to clean up before the job runs
  container: containerReference # container to run this job inside
  timeoutInMinutes: number # how long to run the job before automatically cancelling
  cancelTimeoutInMinutes: number # how much time to give 'run always even if cancelled tasks' before killing them
  variables: { string: string } | [ variable | variableReference ] 
  steps: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
  services: { string: string | container } # container resources to run as a service container

Example

jobs:
- job: MyJob
  displayName: My First Job
  continueOnError: true
  workspace:
    clean: outputs
  steps:
  - script: echo My first job

Learn more about variables, steps, pools, and server jobs.

[!NOTE] If you have only one stage and one job, you can use single-job syntax as a shorter way to describe the steps to run.

Container reference

container is supported by jobs.

Schema

container: string # Docker Hub image reference or resource alias
container:
  image: string  # container image name
  options: string  # arguments to pass to container at startup
  endpoint: string  # endpoint for a private container registry
  env: { string: string }  # list of environment variables to add

Example

jobs:
- job: RunsInContainer
  container: ubuntu:16.04 # Docker Hub image reference
jobs:
- job: RunsInContainer
  container: # inline container specification
    image: ubuntu:16.04
    options: --hostname container-test --ip 192.168.0.1
resources:
  containers:
  - container: linux # reusable alias
    image: ubuntu:16.04

jobs:
- job: a
  container: linux # reference

- job: b
  container: linux # reference

Strategies

matrix and parallel are mutually exclusive strategies for duplicating a job.

Matrix

Matrixing generates copies of a job with different inputs. This is useful for testing against different configurations or platform versions.

Schema

strategy:
  matrix: { string1: { string2: string3 } }
  maxParallel: number

For each string1 in the matrix, a copy of the job will be generated. string1 is the copy's name and will be appended to the name of the job. For each string2, a variable called string2 with the value string3 will be available to the job.

[!NOTE] Matrix configuration names must contain only basic Latin alphabet letters (A-Z, a-z), numbers, and underscores (_). They must start with a letter. Also, they must be 100 characters or less.

Optionally, maxParallel specifies the maximum number of simultaneous matrix legs to run at once.

::: moniker range="> azure-devops-2019"

If not specified or set to 0, no limit will be applied.

::: moniker-end

::: moniker range="azure-devops-2019"

If not specified, no limit will be applied.

::: moniker-end

Example

jobs:
- job: Build
  strategy:
    matrix:
      Python35:
        PYTHON_VERSION: '3.5'
      Python36:
        PYTHON_VERSION: '3.6'
      Python37:
        PYTHON_VERSION: '3.7'
    maxParallel: 2

This matrix will create three jobs, "Build Python35", "Build Python36", and "Build Python37". Within each job, a variable PYTHON_VERSION will be available. In "Build Python35", it will be set to "3.5". Likewise, it will be "3.6" in "Build Python36". Only 2 jobs will run simultaneously.


Parallel

The parallel strategy specifies how many duplicates of the job should run. This is useful for slicing up a large test matrix. The VS Test task understands how to divide the test load across the number of jobs scheduled.

Schema

strategy:
  parallel: number

Example

jobs:
- job: SliceItFourWays
  strategy:
    parallel: 4
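Each duplicated job can discover which slice it is through the predefined variables System.JobPositionInPhase and System.TotalJobsInPhase. As a sketch:

```yaml
jobs:
- job: SliceItFourWays
  strategy:
    parallel: 4
  steps:
  # Each of the four copies sees a different JobPositionInPhase (1 through 4)
  - script: echo Running slice $(System.JobPositionInPhase) of $(System.TotalJobsInPhase)
```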

::: moniker range="azure-devops"

Deployment job

A deployment job is a special type of job that is a collection of steps to be run sequentially against the environment. In YAML pipelines, we recommend that you put your deployment steps in a deployment job.

Schema

jobs:
- deployment: string   # name of the deployment job, A-Z, a-z, 0-9, and underscore
  displayName: string  # friendly name to display in the UI
  pool:                # see pool schema
    name: string
    demands: string | [ string ]
  dependsOn: string 
  condition: string 
  continueOnError: boolean                # 'true' if future jobs should run even if this job fails; defaults to 'false'
  timeoutInMinutes: nonEmptyString        # how long to run the job before automatically cancelling
  cancelTimeoutInMinutes: nonEmptyString  # how much time to give 'run always even if cancelled tasks' before killing them
  variables: { string: string } | [ variable | variableReference ]  
  environment: string  # target environment name and optionally a resource-name to record the deployment history; format: <environment-name>.<resource-name>
  strategy:
    runOnce:
      deploy:
        steps:
        - script: [ script | bash | pwsh | powershell | checkout | task | templateReference ]

Example

jobs:
  # track deployments on the environment
- deployment: DeployWeb
  displayName: deploy Web App
  pool:
    vmImage: 'Ubuntu-16.04'
  # creates an environment if it doesn't exist
  environment: 'smarthotel-dev'
  strategy:
    # default deployment strategy, more coming...
    runOnce:
      deploy:
        steps:
        - script: echo my first deployment

::: moniker-end


Steps

Steps are a linear sequence of operations that make up a job. Each step runs in its own process on an agent and has access to the pipeline workspace on disk. This means environment variables are not preserved between steps but filesystem changes are.

Schema

steps: [ script | bash | pwsh | powershell | checkout | task | templateReference ]

Example

steps:
- script: echo This runs in the default shell on any machine
- bash: |
    echo This multiline script always runs in Bash.
    echo Even on Windows machines!
- pwsh: |
    Write-Host "This multiline script always runs in PowerShell Core."
    Write-Host "Even on non-Windows machines!"

See the schema references for script, bash, pwsh, powershell, checkout, task, and step templates for more details about each.

Variables

Hardcoded values can be added directly, or variable groups can be referenced. Variables may be specified at the pipeline, stage, or job level.

Schema

For a simple set of hardcoded variables:

variables: { string: string }

To include variable groups, switch to this list syntax:

variables:
- name: string # name of a variable
  value: any # value of the variable
- group: string # name of a variable group

name/value pairs and groups can be repeated.

Variables may also be included from templates.

Example

::: moniker range="> azure-devops-2019"

variables:      # pipeline-level
  MY_VAR: 'my value'
  ANOTHER_VAR: 'another value'

stages:
- stage: Build
  variables:    # stage-level
    STAGE_VAR: 'that happened'

  jobs:
  - job: FirstJob
    variables:  # job-level
      JOB_VAR: 'a job var'
    steps:
    - script: echo $(MY_VAR) $(STAGE_VAR) $(JOB_VAR)

::: moniker-end

::: moniker range="azure-devops-2019"

variables:      # pipeline-level
  MY_VAR: 'my value'
  ANOTHER_VAR: 'another value'

jobs:
- job: FirstJob
  variables:  # job-level
    JOB_VAR: 'a job var'
  steps:
  - script: echo $(MY_VAR) $(STAGE_VAR) $(JOB_VAR)

::: moniker-end

variables:
- name: MY_VARIABLE           # hardcoded value
  value: some value
- group: my-variable-group-1  # variable group
- group: my-variable-group-2  # another variable group

Template references

[!NOTE] Be sure to see the full template expression syntax (all forms of ${{ }}).

::: moniker range="> azure-devops-2019"

You can export reusable sections of your pipeline to a separate file. These separate files are known as templates. Azure Pipelines supports four kinds of templates:

::: moniker-end

::: moniker range="azure-devops-2019"

You can export reusable sections of your pipeline to a separate file. These separate files are known as templates. Azure DevOps Server 2019 supports two kinds of templates:

::: moniker-end

Templates may themselves include other templates. Azure Pipelines supports a maximum of 50 unique template files in a single pipeline.
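For example, a job template can itself reference a step template. The file names below are hypothetical; template paths are resolved relative to the file that includes them:

```yaml
# File: jobs/build.yml (a job template that includes a step template)
jobs:
- job: Build
  steps:
  - template: ../steps/build.yml  # nested template reference
```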

::: moniker range="> azure-devops-2019"

Stage templates

A set of stages can be defined in one file and used multiple places in other files.

Schema

In the main pipeline:

- template: string # name of template to include
  parameters: { string: any } # provided parameters

And in the included template:

parameters: { string: any } # expected parameters
stages: [ stage ]

Example

In this example, a stage is repeated twice for two different testing regimes. The stage itself is only specified once.

# File: stages/test.yml

parameters:
  name: ''
  testFile: ''

stages:
- stage: Test_${{ parameters.name }}
  jobs:
  - job: ${{ parameters.name }}_Windows
    pool:
      vmImage: vs2017-win2016
    steps:
    - script: npm install
    - script: npm test -- --file=${{ parameters.testFile }}
  - job: ${{ parameters.name }}_Mac
    pool:
      vmImage: macos-10.13
    steps:
    - script: npm install
    - script: npm test -- --file=${{ parameters.testFile }}
# File: azure-pipelines.yml

stages:
- template: stages/test.yml  # Template reference
  parameters:
    name: Mini
    testFile: tests/miniSuite.js

- template: stages/test.yml  # Template reference
  parameters:
    name: Full
    testFile: tests/fullSuite.js

::: moniker-end

Job templates

A set of jobs can be defined in one file and used multiple places in other files.

Schema

In the main pipeline:

- template: string # name of template to include
  parameters: { string: any } # provided parameters

And in the included template:

parameters: { string: any } # expected parameters
jobs: [ job ]

Example

In this example, a single job is repeated on three platforms. The job itself is only specified once.

# File: jobs/build.yml

parameters:
  name: ''
  pool: ''
  sign: false

jobs:
- job: ${{ parameters.name }}
  pool: ${{ parameters.pool }}
  steps:
  - script: npm install
  - script: npm test
  - ${{ if eq(parameters.sign, 'true') }}:
    - script: sign
# File: azure-pipelines.yml

jobs:
- template: jobs/build.yml  # Template reference
  parameters:
    name: macOS
    pool:
      vmImage: 'macOS-10.13'

- template: jobs/build.yml  # Template reference
  parameters:
    name: Linux
    pool:
      vmImage: 'ubuntu-16.04'

- template: jobs/build.yml  # Template reference
  parameters:
    name: Windows
    pool:
      vmImage: 'vs2017-win2016'
    sign: true  # Extra step on Windows only

See templates for more about working with job templates.

Step templates

A set of steps can be defined in one file and used multiple places in another file.

Schema

In the main pipeline:

steps:
- template: string  # reference to template
  parameters: { string: any } # provided parameters

And in the included template:

parameters: { string: any } # expected parameters
steps: [ script | bash | pwsh | powershell | checkout | task | templateReference ]

Example

# File: steps/build.yml

steps:
- script: npm install
- script: npm test
# File: azure-pipelines.yml

jobs:
- job: macOS
  pool:
    vmImage: 'macOS-10.13'
  steps:
  - template: steps/build.yml # Template reference

- job: Linux
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
  - template: steps/build.yml # Template reference

- job: Windows
  pool:
    vmImage: 'vs2017-win2016'
  steps:
  - template: steps/build.yml # Template reference
  - script: sign              # Extra step on Windows only

See templates for more about working with templates.

::: moniker range="> azure-devops-2019"

Variable templates

A set of variables can be defined in one file and referenced several times in other files.

Schema

In the main pipeline:

- template: string            # name of template file to include
  parameters: { string: any } # provided parameters

And in the included template:

parameters: { string: any }   # expected parameters
variables: [ variable ]

Example

In this example, a set of variables is repeated across multiple pipelines. The variables are only specified once.

# File: variables/build.yml
variables:
- name: vmImage
  value: vs2017-win2016
- name: arch
  value: x64
- name: config
  value: debug
# File: component-x-pipeline.yml
variables:
- template: variables/build.yml  # Template reference
pool:
  vmImage: ${{ variables.vmImage }}
steps:
- script: build x ${{ variables.arch }} ${{ variables.config }}
# File: component-y-pipeline.yml
variables:
- template: variables/build.yml  # Template reference
pool:
  vmImage: ${{ variables.vmImage }}
steps:
- script: build y ${{ variables.arch }} ${{ variables.config }}

::: moniker-end

Resources

Container resource

Container jobs let you isolate your tools and dependencies inside a container. The agent will launch an instance of your specified container, then run steps inside it. The container resource lets you specify your container images.

Service containers run alongside a job to provide various dependencies such as databases.

Schema

resources:
  containers:
  - container: string  # identifier (A-Z, a-z, 0-9, and underscore)
    image: string  # container image name
    options: string  # arguments to pass to container at startup
    endpoint: string  # endpoint for a private container registry
    env: { string: string }  # list of environment variables to add
    ports: [ string ] # ports to expose on the container
    volumes: [ string ] # volumes to mount on the container

Example

resources:
  containers:
  - container: linux
    image: ubuntu:16.04
  - container: my_service
    image: my_service:tag
    ports:
    - 8080:80 # bind container port 80 to 8080 on the host machine
    - 6379 # bind container port 6379 to a random available port on the host machine
    volumes:
    - /src/dir:/dst/dir # mount /src/dir on the host into /dst/dir in the container

Repository resource

If your pipeline has templates in another repository, you must let the system know about that repository. The repository resource lets you specify an external repository.

Schema

resources:
  repositories:
  - repository: string  # identifier (A-Z, a-z, 0-9, and underscore)
    type: enum  # see below
    name: string  # repository name (format depends on `type`)
    ref: string  # ref name to use, defaults to 'refs/heads/master'
    endpoint: string  # name of the service connection to use (for non-Azure Repos types)

Example

resources:
  repositories:
  - repository: common
    type: github
    name: Contoso/CommonTools

Type

Pipelines support two types of repositories, git and github. git refers to Azure Repos Git repos. If you choose git as your type, then name refers to another repository in the same project. For example, otherRepo. To refer to a repo in another project within the same organization, prefix the name with that project's name. For example, OtherProject/otherRepo.

If you choose github as your type, then name is the full name of the GitHub repo including the user or organization. For example, Microsoft/vscode. Also, GitHub repos require a service connection for authorization.
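Putting these rules together, a resources block covering all three cases might look like this (the repository, project, and service connection names are illustrative):

```yaml
resources:
  repositories:
  - repository: tools            # Azure Repos Git repo in the same project
    type: git
    name: otherRepo
  - repository: crossProject     # Azure Repos Git repo in another project, same organization
    type: git
    name: OtherProject/otherRepo
  - repository: vscode           # GitHub repo; requires a service connection
    type: github
    name: Microsoft/vscode
    endpoint: my-github-connection
```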

Triggers

Push trigger

A trigger specifies what branches will cause a continuous integration build to run. If left unspecified, pushes to every branch will trigger a build. Learn more about triggers and how to specify them. Also, be sure to see the note about wildcards in triggers.

Schema

There are three distinct options for trigger: a list of branches to include, a way to disable CI triggering, and the full syntax for ultimate control.

List syntax:

trigger: [ string ] # list of branch names

Disable syntax:

trigger: none # will disable CI builds entirely

Full syntax:

::: moniker range="> azure-devops-2019"

trigger:
  batch: boolean # batch changes if true (the default); start a new build for every push if false
  branches:
    include: [ string ] # branch names which will trigger a build
    exclude: [ string ] # branch names which will not
  tags:
    include: [ string ] # tag names which will trigger a build
    exclude: [ string ] # tag names which will not
  paths:
    include: [ string ] # file paths which must match to trigger a build
    exclude: [ string ] # file paths which will not trigger a build

::: moniker-end

::: moniker range="<= azure-devops-2019"

trigger:
  batch: boolean # batch changes if true (the default); start a new build for every push if false
  branches:
    include: [ string ] # branch names which will trigger a build
    exclude: [ string ] # branch names which will not
  paths:
    include: [ string ] # file paths which must match to trigger a build
    exclude: [ string ] # file paths which will not trigger a build

::: moniker-end

[!IMPORTANT] When you specify a trigger, only branches that are explicitly configured to be included will trigger a pipeline. Includes are processed first, and then excludes are removed from that list. If you specify an exclude but don't specify any includes, nothing will trigger.

Example

List syntax:

trigger:
- master
- develop

Disable syntax:

trigger: none # will disable CI builds (but not PR builds)

Full syntax:

trigger:
  batch: true
  branches:
    include:
    - features/*
    exclude:
    - features/experimental/*
  paths:
    exclude:
    - README.md

PR trigger

A pull request trigger specifies what branches will cause a pull request build to run. If left unspecified, pull requests to every branch will trigger a build. Learn more about pull request triggers and how to specify them.

::: moniker range="azure-devops"

[!IMPORTANT] YAML PR triggers are only supported in GitHub and Bitbucket Cloud. If you are using Azure Repos Git, you can configure a branch policy for build validation in order to trigger your build pipeline for validation.

::: moniker-end

::: moniker range="azure-devops-2019"

[!IMPORTANT] YAML PR triggers are only supported in GitHub. If you are using Azure Repos Git, you can configure a branch policy for build validation in order to trigger your build pipeline for validation.

::: moniker-end

Schema

There are three distinct options for pr: a list of branches to include, a way to disable PR triggering, and the full syntax for ultimate control.

List syntax:

pr: [ string ] # list of branch names

Disable syntax:

pr: none # will disable PR builds entirely; will not disable CI triggers

Full syntax:

pr:
  autoCancel: boolean # indicates whether additional pushes to a PR should cancel in-progress runs for the same PR. Defaults to true
  branches:
    include: [ string ] # branch names which will trigger a build
    exclude: [ string ] # branch names which will not
  paths:
    include: [ string ] # file paths which must match to trigger a build
    exclude: [ string ] # file paths which will not trigger a build

[!IMPORTANT] When you specify a pr trigger, only branches that are explicitly configured to be included will trigger a pipeline. Includes are processed first, and then excludes are removed from that list. If you specify an exclude but don't specify any includes, nothing will trigger.

Example

List syntax:

pr:
- master
- develop

Disable syntax:

pr: none # will disable PR builds (but not CI builds)

Full syntax:

pr:
  branches:
    include:
    - features/*
    exclude:
    - features/experimental/*
  paths:
    exclude:
    - README.md

Scheduled trigger

::: moniker range="<= azure-devops-2019"

YAML scheduled triggers are not available in this version of Azure DevOps Server or TFS. You can use scheduled triggers in the classic editor.

::: moniker-end

::: moniker range="azure-devops"

A scheduled trigger specifies a schedule on which branches will be built. If left unspecified, no scheduled builds will occur. Learn more about scheduled triggers and how to specify them.

Schema

schedules:
- cron: string # cron syntax defining a schedule
  displayName: string # friendly name given to a specific schedule
  branches:
    include: [ string ] # which branches the schedule applies to
    exclude: [ string ] # which branches to exclude from the schedule
  always: boolean # whether to always run the pipeline or only if there have been source code changes since the last run. The default is false.

[!IMPORTANT] When you specify a scheduled trigger, only branches that are explicitly configured to be included are scheduled for a build. Includes are processed first, and then excludes are removed from that list. If you specify an exclude but don't specify any includes, no branches will be built.

Example

schedules:
- cron: "0 0 * * *"
  displayName: Daily midnight build
  branches:
    include:
    - master
    - releases/*
    exclude:
    - releases/ancient/*
- cron: "0 12 * * 0"
  displayName: Weekly Sunday build
  branches:
    include:
    - releases/*
  always: true

In this example, two schedules are defined. The first schedule, Daily midnight build, runs a pipeline at midnight every day, but only if the code has changed since the last run, for master and all releases/* branches, except those under releases/ancient/*.

The second schedule, Weekly Sunday build, runs a pipeline at noon on Sundays, whether the code has changed or not since the last run, for all releases/* branches.


::: moniker-end

Pool

pool specifies which pool to use for a job of the pipeline. It also holds information about the job's strategy for running.

Schema

Full syntax:

pool:
  name: string  # name of the pool to run this job in
  demands: string | [ string ]  # see below
  vmImage: string # name of the vm image you want to use, only valid in the Microsoft-hosted pool

If you're using a Microsoft-hosted pool, then choose an available vmImage.

If you're using a private pool and don't need to specify demands, this can be shortened to:

pool: string # name of the private pool to run this job in

Example

To use the Microsoft-hosted pool, omit the name and specify one of the available hosted images.

pool:
  vmImage: ubuntu-16.04

To use a private pool with no demands:

pool: MyPool

Learn more about conditions and timeouts.

Demands

demands is supported by private pools. You can check for existence of a capability or a specific string like this:

Schema

pool:
  demands: [ string ]

Example

pool:
  name: MyPool
  demands:
  - myCustomCapability   # check for existence of capability
  - agent.os -equals Darwin  # check for specific string in capability

::: moniker range="azure-devops"

Environment

environment specifies the environment or its resource that is to be targeted by a deployment job of the pipeline. It also holds information about the deployment strategy for running the steps defined inside the job.

Schema

Full syntax:

environment:                # create environment and(or) record deployments
  name: string              # name of the environment to run this job on.
  resourceName: string      # name of the resource in the environment to record the deployments against
  resourceId: number        # resource identifier
  resourceType: string      # type of the resource you want to target. Supported types - virtualMachine, Kubernetes, appService
  tags: string | [ string ] # tag names to filter the resources in the environment
  strategy:                 # deployment strategy
    runOnce:                # default strategy
      deploy:
        steps:
        - script: echo Hello world

If you're specifying an environment or one of its resources and don't need to specify other properties, you can shorten this to:

environment: environmentName.resourceName
strategy:                 # deployment strategy
    runOnce:              # default strategy
      deploy:
        steps:
        - script: echo Hello world

Example

It is possible to scope down the target of deployment to a particular resource within the environment as shown below.

environment: 'smarthotel-dev.bookings'
strategy:
  runOnce:
    deploy:
      steps:
      - task: KubernetesManifest@0
        displayName: Deploy to Kubernetes cluster
        inputs:
          action: deploy
          namespace: $(k8sNamespace)
          manifests: $(System.ArtifactsDirectory)/manifests/*
          imagePullSecrets: $(imagePullSecret)
          containers: $(containerRegistry)/$(imageRepository):$(tag)
          # value for kubernetesServiceConnection input automatically passed down to task by environment.resource input

::: moniker-end

Server

server specifies a server job. Only server tasks such as manual intervention or invoking an Azure Function can be run in a server job.

Schema

This will make the job run as a server job rather than an agent job.

pool: server

Example

jobs:
- job: RunOnServer
  pool: server

Script

script is a shortcut for the command line task. It will run a script using cmd.exe on Windows and Bash on other platforms.

Schema

steps:
- script: string  # contents of the script to run
  displayName: string  # friendly name displayed in the UI
  name: string  # identifier for this step (A-Z, a-z, 0-9, and underscore)
  workingDirectory: string  # initial working directory for the step
  failOnStderr: boolean  # if the script writes to stderr, should that be treated as the step failing?
  condition: string
  continueOnError: boolean  # 'true' if future steps should run even if this step fails; defaults to 'false'
  enabled: boolean  # whether or not to run this step; defaults to 'true'
  timeoutInMinutes: number
  env: { string: string }  # list of environment variables to add

Example

steps:
- script: echo Hello world!
  displayName: Say hello
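
A multiline script works the same way. In this sketch, the predefined variables are expanded by the pipeline before the script runs, so the snippet behaves the same under cmd.exe and Bash:

steps:
- script: |
    echo Building from $(Build.SourcesDirectory)
    echo Build number is $(Build.BuildNumber)
  displayName: Multiline script
  workingDirectory: $(Build.SourcesDirectory)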

Learn more about conditions and timeouts.

Bash

bash is a shortcut for the shell script task. It will run a script in Bash on Windows, macOS, or Linux.

Schema

steps:
- bash: string  # contents of the script to run
  displayName: string  # friendly name displayed in the UI
  name: string  # identifier for this step (A-Z, a-z, 0-9, and underscore)
  workingDirectory: string  # initial working directory for the step
  failOnStderr: boolean  # if the script writes to stderr, should that be treated as the step failing?
  condition: string
  continueOnError: boolean  # 'true' if future steps should run even if this step fails; defaults to 'false'
  enabled: boolean  # whether or not to run this step; defaults to 'true'
  timeoutInMinutes: number
  env: { string: string }  # list of environment variables to add

Example

steps:
- bash: |
    which bash
    echo Hello $name
  displayName: Multiline Bash script
  env:
    name: Microsoft

Learn more about conditions and timeouts.

Pwsh

pwsh is a shortcut for the PowerShell task with pwsh set to true. It will run a script in PowerShell Core on Windows, macOS, or Linux.

Schema

steps:
- pwsh: string  # contents of the script to run
  displayName: string  # friendly name displayed in the UI
  name: string  # identifier for this step (A-Z, a-z, 0-9, and underscore)
  errorActionPreference: enum  # see below
  ignoreLASTEXITCODE: boolean  # see below
  failOnStderr: boolean  # if the script writes to stderr, should that be treated as the step failing?
  workingDirectory: string  # initial working directory for the step
  condition: string
  continueOnError: boolean  # 'true' if future steps should run even if this step fails; defaults to 'false'
  enabled: boolean  # whether or not to run this step; defaults to 'true'
  timeoutInMinutes: number
  env: { string: string }  # list of environment variables to add

Example

steps:
- pwsh: echo Hello $(name)
  displayName: Say hello
  name: firstStep
  workingDirectory: $(build.sourcesDirectory)
  failOnStderr: true
  env:
    name: Microsoft

Learn more about conditions and timeouts.

PowerShell

powershell is a shortcut for the PowerShell task. It will run a script in PowerShell on Windows.

Schema

steps:
- powershell: string  # contents of the script to run
  displayName: string  # friendly name displayed in the UI
  name: string  # identifier for this step (A-Z, a-z, 0-9, and underscore)
  errorActionPreference: enum  # see below
  ignoreLASTEXITCODE: boolean  # see below
  failOnStderr: boolean  # if the script writes to stderr, should that be treated as the step failing?
  workingDirectory: string  # initial working directory for the step
  condition: string
  continueOnError: boolean  # 'true' if future steps should run even if this step fails; defaults to 'false'
  enabled: boolean  # whether or not to run this step; defaults to 'true'
  timeoutInMinutes: number
  env: { string: string }  # list of environment variables to add

Example

steps:
- powershell: echo Hello $(name)
  displayName: Say hello
  name: firstStep
  workingDirectory: $(build.sourcesDirectory)
  failOnStderr: true
  env:
    name: Microsoft

Learn more about conditions and timeouts.

Error action preference

Unless otherwise specified, the task sets the error action preference to stop: the line $ErrorActionPreference = 'stop' is prepended to the top of your script.

When the error action preference is set to stop, errors will cause PowerShell to terminate and return a non-zero exit code. The task will also be marked as Failed.

Schema

errorActionPreference: stop | continue | silentlyContinue

Example

steps:
- powershell: |
    Write-Error 'Uh oh, an error occurred'
    Write-Host 'Trying again...'
  displayName: Error action preference
  errorActionPreference: continue

Ignore last exit code

By default, the last exit code returned from your script will be checked and, if non-zero, treated as a step failure. The system will append your script with:

if ((Test-Path -LiteralPath variable:\LASTEXITCODE)) { exit $LASTEXITCODE }

If you don't want this behavior, set ignoreLASTEXITCODE to true.

Schema

ignoreLASTEXITCODE: boolean

Example

steps:
- powershell: git nosuchcommand
  displayName: Ignore last exit code
  ignoreLASTEXITCODE: true

Learn more about conditions and timeouts.

::: moniker range="azure-devops"

Publish

publish is a shortcut for the Publish Pipeline Artifact task. It will publish (upload) a file or folder as a pipeline artifact that can be consumed by other jobs and pipelines.

Schema

steps:
- publish: string # path to a file or folder
  artifact: string # artifact name

Example

steps:
- publish: $(Build.SourcesDirectory)/build
  artifact: WebApp

Learn more about publishing artifacts.

Download

download is a shortcut for the Download Pipeline Artifact task. It will download one or more artifacts associated with the current run to $(Pipeline.Workspace). It can also be used to disable automatic downloading of artifacts in classic release and deployment jobs.

Schema

steps:
- download: [ current | none ] # disable automatic download if "none"
  artifact: string # artifact name
  patterns: string # patterns representing files to include

Example

steps:
- download: current
  artifact: WebApp
  patterns: '**/*.js'
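
To suppress the automatic download of artifacts in a deployment job, use none:

steps:
- download: none  # no artifacts are downloaded automatically in this job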

Learn more about downloading artifacts.

::: moniker-end

Checkout

Non-deployment jobs automatically check out source code. You can configure or suppress this behavior with checkout.

Schema

steps:
- checkout: self  # self represents the repo where the initial Pipelines YAML file was found
  clean: boolean  # if true, run `git clean -ffdx && git reset --hard HEAD` before fetching
  fetchDepth: number  # the depth of commits to ask Git to fetch; defaults to no limit
  lfs: boolean  # whether to download Git-LFS files; defaults to false
  submodules: true | recursive  # set to 'true' for a single level of submodules or 'recursive' to get submodules of submodules; defaults to not checking out submodules
  path: string  # path to check out source code, relative to the agent's build directory (e.g. _work\1); defaults to a directory called `s`
  persistCredentials: boolean  # if 'true', leave the OAuth token in the Git config after the initial fetch; defaults to false

Or to avoid syncing sources at all:

steps:
- checkout: none

Example

steps:
- checkout: self  # self represents the repo where the initial Pipelines YAML file was found
  clean: true
  fetchDepth: 5
  lfs: true
  path: PutMyCodeHere

Task

Tasks are the building blocks of a pipeline. There is a catalog of tasks available to choose from.

Schema

steps:
- task: string  # reference to a task and version, e.g. "VSBuild@1"
  displayName: string  # friendly name displayed in the UI
  name: string  # identifier for this step (A-Z, a-z, 0-9, and underscore)
  condition: string
  continueOnError: boolean  # 'true' if future steps should run even if this step fails; defaults to 'false'
  enabled: boolean  # whether or not to run this step; defaults to 'true'
  timeoutInMinutes: number
  inputs: { string: string }  # task-specific inputs
  env: { string: string }  # list of environment variables to add

Example

steps:
- task: VSBuild@1
  displayName: Build
  timeoutInMinutes: 120
  inputs:
    solution: '**\*.sln'

Syntax highlighting

Syntax highlighting is available for the pipeline schema via a VS Code extension. You can download VS Code, install the extension, and check out the project on GitHub.

The extension includes a JSON schema for validation.
