Guide
We'll start by creating a new directory and adding a file called werk.yml to keep things clean.
mkdir playground
cd playground
touch werk.yml
Now, please open the file in your favorite editor, and let's start working on it.
We'll start by defining a job called hello, which prints a message.
version: "1.0"
jobs:
  hello:
    executor: local
    commands:
      - echo "Hello there!"
Now let's run this job. Head to the terminal and execute it using the following command.
werk run hello
Pretty simple, right? Notice that we ran the job using the run command followed by the job name. That's called a target job; keep that in mind. Now that we know how to create jobs, let's add more!
Ok, so we added a job for saying hello; let's add one for saying goodbye. Please note that job names must be unique; you cannot have two jobs with the same name.
version: "1.0"
jobs:
  hello:
    executor: local
    commands:
      - echo "Hello there!"
  goodbye:
    executor: local
    commands:
      - echo "Goodbye!"
Now we can choose between executing the hello or goodbye jobs. We'll follow the same procedure as in the previous step.
werk run hello
werk run goodbye
Okay, running jobs on their own is fine, but sometimes there are dependencies between them. Some jobs need to run before others; for example, you cannot say goodbye to someone you haven't met first. In our case, the goodbye job depends on hello. Let's adjust the configuration a little bit.
version: "1.0"
jobs:
  hello:
    executor: local
    commands:
      - echo "Hello there!"
  goodbye:
    executor: local
    commands:
      - echo "Goodbye!"
    needs:
      - hello
With this small adjustment in place, let's run the goodbye job again. You will notice that it runs the hello job first.
werk run goodbye
That's how dependencies are declared. Be careful of circular dependencies; Werk will detect them automatically and refuse to run.
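Just for illustration (don't add this to your file), here's a configuration with a cycle: hello needs goodbye and goodbye needs hello, so Werk will refuse to run it.

version: "1.0"
jobs:
  hello:
    executor: local
    commands:
      - echo "Hello there!"
    needs:
      - goodbye
  goodbye:
    executor: local
    commands:
      - echo "Goodbye!"
    needs:
      - hello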
Let's customize the pipeline to support a name. We'll start by adding a variables section to each job, containing a variable declaration with a default value, and adjusting the commands to use the variable.
version: "1.0"
jobs:
  hello:
    executor: local
    variables:
      NAME: Peter
    commands:
      - echo "Hello ${NAME}!"
  goodbye:
    executor: local
    variables:
      NAME: Peter
    commands:
      - echo "Goodbye ${NAME}!"
    needs:
      - hello
Ok, let's run it:
werk run goodbye
Ok, we added local variables to each job, but it's a bit repetitive; if we want to change the name, we have to do it in two places, once in the hello job and again in the goodbye job. We can drop the local declarations in favor of a global variable.
version: "1.0"
variables:
  NAME: Peter
jobs:
  hello:
    executor: local
    commands:
      - echo "Hello ${NAME}!"
  goodbye:
    executor: local
    commands:
      - echo "Goodbye ${NAME}!"
    needs:
      - hello
Running it, you'll notice that the pipeline behaves the same, but we're not repeating ourselves.
werk run goodbye
There are cases in which we want to keep the variables separate from the configuration, especially when we have secrets. Werk supports loading variables directly from dotenv files, both globally and locally. Here's an example of how to use dotenv files:
version: "1.0"
dotenv:
  - globals.env
  - secrets.env
jobs:
  hello:
    executor: local
    commands:
      - echo "Hello there!"
      - echo $MY_SECRET_GLOBAL
  goodbye:
    executor: local
    dotenv:
      - locals.env
    commands:
      - echo "Goodbye!"
      - echo $MY_SECRET_GLOBAL
      - echo $MY_SECRET_LOCAL
NOTE: It's recommended NOT to check in these dotenv files.
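For reference, dotenv files are plain KEY=value files. A hypothetical secrets.env and locals.env matching the configuration above might look like this (the values here are just placeholders):

# secrets.env (loaded globally)
MY_SECRET_GLOBAL=some-placeholder-value

# locals.env (loaded only by the goodbye job)
MY_SECRET_LOCAL=another-placeholder-value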
Wow, that's great, but what if we meet Jane instead of Peter, or anybody else for that matter? We need a way of providing the name without editing the configuration every time. We can override a variable from the command line with the -e flag. Let's rerun it:
werk run goodbye -e NAME=Jane
That's it; we've met Jane.
Werk makes some internal variables available for you; let's inspect them by adding another job to the file.
version: "1.0"
variables:
  NAME: Peter
jobs:
  hello:
    executor: local
    commands:
      - echo "Hello ${NAME}!"
  goodbye:
    executor: local
    commands:
      - echo "Goodbye ${NAME}!"
    needs:
      - hello
  env:
    executor: local
    commands:
      - env | grep WERK
Now let's take a look at these variables:
werk run env
When Werk encounters an error, it will stop the execution of the pipeline.
version: "1.0"
variables:
  NAME: Peter
jobs:
  hello:
    executor: local
    commands:
      - echo "Hello ${NAME}!"
      - exit 1
  goodbye:
    executor: local
    commands:
      - echo "Goodbye ${NAME}!"
    needs:
      - hello
  env:
    executor: local
    commands:
      - env | grep WERK
Let's try that out:
werk run goodbye
You'll notice that the pipeline execution stops after the exit command is executed. But what if we want to ignore the error? Yep, we can do that!
version: "1.0"
variables:
  NAME: Peter
jobs:
  hello:
    executor: local
    commands:
      - echo "Hello ${NAME}!"
      - exit 1
    can_fail: true
  goodbye:
    executor: local
    commands:
      - echo "Goodbye ${NAME}!"
    needs:
      - hello
  env:
    executor: local
    commands:
      - env | grep WERK
Now let's try that again:
werk run goodbye
You should see that the execution continues, and the goodbye job gets executed.
If your job output gets too verbose, you can disable it by adding the silent property.
version: "1.0"
variables:
  NAME: Peter
jobs:
  hello:
    executor: local
    commands:
      - echo "Hello ${NAME}!"
      - exit 1
    can_fail: true
  goodbye:
    executor: local
    commands:
      - echo "Goodbye ${NAME}!"
    needs:
      - hello
    silent: true
Werk will optimize execution as much as possible by creating an execution plan and determining which jobs can run in parallel. There is a limit to how many jobs run in parallel; by default it is 32, but it can be adjusted based on the machine's capabilities. The example below changes the limit to 10.
werk run -j 10
One neat trick to force the pipeline to execute sequentially is to set the limit to 1; this causes the pipeline to run one job at a time.
werk run -j 1
As you've probably noticed by now, the output of each job is prefixed with its name. In the case of parallel jobs, the output will be combined.
[TODO] Add a description for the local executor
Werk has support for Docker; this means that jobs can run inside Docker containers. Werk will pull images and manage the container state for you. It will automatically mount the working directory inside the container and run all commands inside of it. This feature does require you to have Docker installed on your machine.
You can use any public image from Docker Hub; for now, we don't support building custom images on the fly or images that require authentication.
version: "1.0"
jobs:
  hello:
    executor: docker
    image: ubuntu:focal
    commands:
      - apt-get update -qq
      - apt-get install -y build-essential
Sometimes Docker images have a predefined entry point, which is not suitable for our workload. Let's take the Kaniko image, in which the shell interpreter is not located on the standard path. We can adjust this using the entrypoint property on the job.
version: "1.0"
jobs:
  kaniko:
    executor: docker
    image: gcr.io/kaniko-project/executor:debug
    entrypoint: ["/busybox/sh"]
    commands:
      - >-
        /kaniko/executor
        --context .
        --dockerfile Dockerfile
        --no-push
[TODO] Add description on how to mount additional volumes
To see what will be run before actually running it, use the plan command. This command will show you how the jobs will execute and which jobs will run in parallel.
werk plan [target_job]
werk run -r [target_job]