
# Getting started

The OpenShift extension for Azure DevOps allows you to connect and interact with an OpenShift cluster as part of your build or release pipeline. The following paragraphs guide you through the process of using this extension.

## Connect to your OpenShift cluster

To use any of the pipeline tasks, you first need a way to connect to your cluster. In Azure DevOps, access to external and remote services is configured in service connections.

The OpenShift extension for Azure DevOps provides two ways to set up a connection: by creating a dedicated OpenShift service connection, which supports several authentication methods, or by defining the configuration at task level when creating your pipeline.


### Configuring the OpenShift service connection

To configure an OpenShift connection, open the project settings (cogwheel icon). From there choose Service connections, followed by New service connection. Select the OpenShift service connection type and use one of the following methods to configure authentication:

#### Basic Authentication

- **Server URL**: Required. The URL of the OpenShift cluster.
- **Username**: Required. The OpenShift username.
- **Password**: Required. The password for the specified user.
- **Accept untrusted SSL certificates**: Whether it is OK to accept self-signed (untrusted) certificates.
- **Certificate Authority File**: The path where the certificate authority file is stored.
- **Service Connection Name**: Required. The name you will use to refer to this service connection.
- **Grant Access permission to all pipelines**: Allows all pipelines to use this connection. This lets YAML-defined pipelines, which are not automatically authorized for service connections, use this service connection.

#### Token Authentication

- **Server URL**: Required. The URL of the OpenShift cluster.
- **Accept untrusted SSL certificates**: Whether it is OK to accept self-signed (untrusted) certificates.
- **Certificate Authority File**: The path where the certificate authority file is stored.
- **API Token**: Required. The API token used for authentication.
- **Service Connection Name**: Required. The name you will use to refer to this service connection.
- **Grant Access permission to all pipelines**: Allows all pipelines to use this connection. This lets YAML-defined pipelines, which are not automatically authorized for service connections, use this service connection.

#### Kubeconfig

- **Server URL**: Required. The URL of the OpenShift cluster.
- **Kubeconfig**: The contents of the kubectl configuration file.
- **Service Connection Name**: Required. The name you will use to refer to this service connection.
- **Grant Access permission to all pipelines**: Allows all pipelines to use this connection. This lets YAML-defined pipelines, which are not automatically authorized for service connections, use this service connection.

Note: Version 1.* of this extension used the Azure DevOps built-in Kubernetes service connection. If you want to keep using that service connection, select the 1.* version when configuring a task.
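Whichever authentication method you choose, pipeline tasks reference the service connection by the name you gave it. A minimal sketch, reusing the `oc-setup@2` task and the `openshiftService` input from the YAML example later in this document (the connection name is illustrative):

```yaml
steps:
# Install oc and authenticate using the OpenShift service connection created above
- task: oc-setup@2
  inputs:
    openshiftService: 'my_openshift_connection'
```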


### Set up the OpenShift connection at runtime

To set up an OpenShift connection at runtime, select the Set Up Configuration on Runtime option under Service connection type. Two options are displayed: File Path and Inline Configuration.

- **File Path** allows you to specify a path where the agent will find the config file to use during execution.
- **Inline Configuration** expects you to paste the contents of your config file. The extension creates a new config file with the inserted content.
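In a YAML pipeline, the File Path variant can be sketched with the `connectionType` and `configurationPath` inputs shown in the conditional-command example later in this document (the path and command are illustrative):

```yaml
# Run an oc command using a kubeconfig file already present on the agent
- task: oc-cmd@2
  inputs:
    connectionType: 'Runtime Configuration'
    configurationPath: '/path/testconfig'
    cmd: 'get pods'
```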

## Pipeline Tasks

The following paragraphs describe each of the provided pipeline tasks and their use.

Depending on the options used, a task may need cURL to download the requested oc bundle. Each task assumes cURL is already installed on the agent running the build. If cURL is not found on the agent, an error is thrown and the task fails.

### Install and setup oc

The most generic task is the Install and setup oc task. This task allows you to install a specific version of the OpenShift CLI (oc). The installed binary matches the OS of your agent. The task also adds oc to the PATH and creates a kubeconfig file for authentication against the OpenShift cluster.

After adding and configuring an Install and setup oc task in your pipeline, you can use oc directly within a Command Line task.

To add the Install and setup oc task to your pipeline, filter the task list by searching for Install oc. The task has three configuration options.

- **Service Connection Type**: Required. Allows you to set up a connection at runtime or choose an existing service connection. See Connect to your OpenShift cluster.
- **Version of oc to use**: Specifies the version of oc to use for command execution, e.g. v3.10.0. If left blank, the latest stable version is used. You can also specify a direct URL to the oc release bundle. See How the cache works.
- **Proxy**: Specifies a proxy (host:port) to use when downloading the oc CLI.
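As a sketch, pinning a specific oc version in YAML (the `version` input and value are taken from the examples later in this document; the service connection name is illustrative):

```yaml
- task: oc-setup@2
  displayName: Setup oc
  inputs:
    openshiftService: 'my_openshift_connection'
    version: '3.9.103'  # an explicit version also enables caching, see How the cache works
```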

### Executing single oc commands

If you want to execute a single oc command, you can use the Execute OpenShift command task.

To add this task, filter the task list by searching for Execute oc command. The task has six configuration options.

- **Service Connection Type**: Required. Allows you to set up a connection at runtime or choose an existing service connection. See Connect to your OpenShift cluster.
- **Version of oc to use**: Specifies the version of oc to use for command execution, e.g. v3.10.0. If left blank, the latest stable version is used. You can also specify a direct URL to the oc release bundle. See How the cache works.
- **Command to run**: The actual oc command to run, starting with the oc sub-command, e.g. "rollout latest dc/my-app -n production". Check the notes below for more features supported by the extension.
- **Ignore on success return value**: Ignores a non-success return value from the current step and keeps executing the pipeline if the step fails. If you execute a command like create/delete/patch and the resource has already been created/deleted/patched, the pipeline could fail; checking this option skips the error and execution continues.
- **Use local oc executable**: Forces the extension to use the oc CLI found on the agent machine, if present. If no version is specified, the extension uses the local oc CLI regardless of its version. If a version is specified, the extension first checks whether the installed oc CLI matches the requested version; if not, the correct oc CLI is downloaded.
- **Proxy**: Specifies a proxy (host:port) to use when downloading the oc CLI.

Note: It is possible to use variables defined on the agent. For example, to reference a file in the artifact _my_sources you could run:

```
apply -f ${SYSTEM_DEFAULTWORKINGDIRECTORY}/_my_sources/my-openshift-config.yaml
```


Note: The extension supports command interpolation. For example, to execute a command inside another one:

```
oc logs $(oc get pod -l app=test -o name)
```


Note: The extension supports the pipe (|) operator. Due to a limitation of the Azure library, only a single pipe per command is supported. The pipe operator also allows using a ToolRunner other than oc (e.g. grep; the tool must be visible to the extension):

```
oc describe pod/nodejs-ex | grep kubernetes
```


Note: The extension supports the redirect operators (>, >>, 2>). Each redirect operator expects a valid path as its argument:

- `>` (write): creates the file if it does not exist and writes to it. If the file exists, its content is overwritten.
- `>>` (append): appends text to the file.
- `2>` (write stderr): redirects stderr to a file.

```
oc describe pod/nodejs-ex | grep kubernetes > /path/log.txt
```
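Putting these notes together, a single-command pipeline step with a pipe and a redirect might be sketched as follows (the service connection name and output path are illustrative):

```yaml
- task: oc-cmd@2
  displayName: Grep pod description
  inputs:
    openshiftService: 'my_openshift_connection'
    cmd: 'describe pod/nodejs-ex | grep kubernetes > /path/log.txt'
```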


### Executing conditional oc commands

If you want to execute an oc command only when a condition is met, you can use the Execute conditional oc command task.

To add this task, filter the task list by searching for Execute conditional oc command. The task has ten configuration options.

- **Service Connection Type**: Required. Allows you to set up a connection at runtime or choose an existing service connection. See Connect to your OpenShift cluster.
- **Version of oc to use**: Specifies the version of oc to use for command execution, e.g. v3.10.0. If left blank, the latest stable version is used. You can also specify a direct URL to the oc release bundle. See How the cache works.
- **Command to run**: The oc command to run whenever the condition is met, e.g. "rollout latest dc/my-app -n production". Check the additional features supported by the extension.
- **Ignore on success return value**: Ignores a non-success return value from the current step and keeps executing the pipeline if the step fails. If you execute a command like create/delete/patch and the resource has already been created/deleted/patched, the pipeline could fail; checking this option skips the error and execution continues.
- **Condition type**: The condition to check on the specified resource. The condition types supported in the current release are `Exists` and `Not_exists`.
- **Resource on which to verify the condition**: The extension expects a clear name of the resource(s) to be checked (e.g. pods -l app=test). In this example, based on the chosen condition type, the extension checks whether there is at least one pod (`Exists`) or no pods at all (`Not_exists`) with that label.
- **Time (in milliseconds) after which to stop the execution**: How long the extension waits before it stops checking the condition status. If the condition is not met before the timeout elapses, the task fails. N.B.: the default timeout is 5 minutes.
- **Skip timed out error**: If checked, the extension executes the command even if the timeout elapses. In this case the task does not fail on timeout, and the task output depends on the result of the command execution.
- **Use local oc executable**: Forces the extension to use the oc CLI found on the agent machine, if present. If no version is specified, the extension uses the local oc CLI regardless of its version. If a version is specified, the extension first checks whether the installed oc CLI matches the requested version; if not, the correct oc CLI is downloaded.
- **Proxy**: Specifies a proxy (host:port) to use when downloading the oc CLI.

Note: An example of the conditional command task can be found in the YAML configuration section below.



### Updating a ConfigMap

An even more specific task offered by this extension is the Update ConfigMap task. It allows you to update the properties of a given ConfigMap using a grid.

To add this task, select the + to add a task to your pipeline. You can filter the task list by searching for Update ConfigMap. Add the Update ConfigMap task to your pipeline using the Add button.

The Update ConfigMap task has seven configuration options.

- **Service Connection Type**: Required. Allows you to set up a connection at runtime or choose an existing service connection. See Connect to your OpenShift cluster.
- **Version of oc to use**: Specifies the version of oc to use for command execution, e.g. v3.10.0. If left blank, the latest stable version is used. You can also specify a direct URL to the oc release bundle. See How the cache works.
- **Name of ConfigMap**: Required. The name of the ConfigMap to update.
- **Namespace of ConfigMap**: The namespace in which to find the ConfigMap. The current namespace is used if none is specified.
- **ConfigMap Properties**: The properties to set/update. Only the properties that need creating/updating need to be listed. Space-separated values need to be surrounded by quotes (").
- **Use local oc executable**: Forces the extension to use the oc CLI found on the agent machine, if present. If no version is specified, the extension uses the local oc CLI regardless of its version. If a version is specified, the extension first checks whether the installed oc CLI matches the requested version; if not, the correct oc CLI is downloaded.
- **Proxy**: Specifies a proxy (host:port) to use when downloading the oc CLI.

Note: It is possible to use variables defined on the agent. For example, to reference a variable MY_VAR defined in the pipeline configuration, use ${MY_VAR} as the property value.
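For instance, a sketch that fills a ConfigMap property from a pipeline variable (inputs follow the config-map@2 example later in this document; MY_VAR is assumed to be defined in the pipeline configuration):

```yaml
- task: config-map@2
  displayName: Update ConfigMap
  inputs:
    openshiftService: 'my_openshift_connection'
    configMapName: 'my-config'
    namespace: 'my-project'
    properties: '-my-key1 ${MY_VAR}'
```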

## How the cache works in the OpenShift VSTS extension

The OpenShift VSTS extension caches the oc executable by version, to avoid downloading the same bundle repeatedly when executing different pipelines.

The cache is only used when a version is explicitly specified in the task (e.g. 4.1, 3.1.28). If the version is specified as a URL or left blank (to use the latest available oc version), the extension downloads the requested oc version without checking the cache.

The oc executable is cached inside the _work/_tool/oc folder.

## YAML configuration

You can also use the tasks of the OpenShift extension as part of a YAML defined pipeline. The following configuration shows an example for each of the provided tasks:

```yaml
jobs:
- job: myjob
  displayName: MyJob
  pool:
    vmImage: 'vs2017-win2016'
  steps:
  # Install oc so that it can be used within a 'script' or bash 'task'
  - task: oc-setup@2
    displayName: Setup oc
    inputs:
      openshiftService: 'my_openshift_connection'
  # A script task making use of 'oc'
  - script: |
      oc new-project my-project
      oc apply -f ${SYSTEM_DEFAULTWORKINGDIRECTORY}/openshift/config.yaml -n my-project
    displayName: Create project and apply config
  # Single shot 'oc' command
  - task: oc-cmd@2
    displayName: Wait for deployment
    inputs:
      openshiftService: 'my_openshift_connection'
      cmd: 'rollout status -w deployment/my-app'
  # Updating an existing ConfigMap
  - task: config-map@2
    displayName: Update ConfigMap
    inputs:
      openshiftService: 'my_openshift_connection'
      configMapName: 'my-config'
      namespace: 'my-project'
      properties: '-my-key1 my-value1 -my-key2 my-value2'
```

This example shows how to use the conditional command task. In this case an application is deployed, and its build logs are retrieved once the deployment process succeeds.

```yaml
steps:
- task: oc-cmd@2
  inputs:
    connectionType: 'Runtime Configuration'
    configurationPath: '/path/testconfig'
    version: '3.9.103'
    cmd: 'oc new-app https://github.com/sclorg/nodejs-ex -l app=test'
- task: oc-conditional-cmd@2
  inputs:
    connectionType: 'Runtime Configuration'
    configurationPath: '/path/testconfig'
    version: '3.9.103'
    cmd: 'logs $(oc get bc -l app=test -o name)'
    condition: 'not_exists'
    resource: 'pods -l app=test'
```

Note: In Azure DevOps, YAML-defined pipelines are currently only available for build pipelines. Configuration as code for release pipelines is under development. See here and here.