Spelling fixes in our doc
Found using `mdspell -a -n --en-us "**/*.md"`. At some point we might want to have a built-in ignore dictionary to avoid false positives.

(cherry picked from commit 7b87070)
skaegi authored and Scott committed Mar 11, 2020
1 parent f39848f commit 8c64289
Showing 7 changed files with 15 additions and 15 deletions.
2 changes: 1 addition & 1 deletion docs/developers/README.md
@@ -163,7 +163,7 @@ TaskRun Pods. Without intervention sidecars will typically run for the entire
lifetime of a Pod but in Tekton's case it's desirable for the sidecars to run
only as long as Steps take to complete. There's also a need for Tekton to
schedule the sidecars to start before a Task's Steps begin, just in case the
- Steps rely on a sidecars behaviour, for example to join an Istio service mesh.
+ Steps rely on a sidecars behavior, for example to join an Istio service mesh.
To handle all of this, Tekton Pipelines implements the following lifecycle
for sidecar containers:
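
The numbered lifecycle itself is cut off by this excerpt. For orientation, a sidecar is declared on a `Task` alongside its `steps`; a minimal sketch (the API version, names, and images here are illustrative assumptions, not taken from this diff):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: task-with-sidecar      # illustrative name
spec:
  sidecars:
    # Scheduled to start before the steps; stopped by Tekton once the steps finish.
    - name: proxy
      image: istio/proxyv2
  steps:
    - name: run-tests
      image: ubuntu
      command: ["echo", "tests would run here"]
```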

2 changes: 1 addition & 1 deletion docs/install.md
@@ -105,7 +105,7 @@ for more information.
```
See the
[OpenShift CLI documentation](https://docs.openshift.com/container-platform/4.3/cli_reference/openshift_cli/getting-started-cli.html)
- for more inforomation on the `oc` command.
+ for more information on the `oc` command.

1. Monitor the installation using the following command until all components show a `Running` status:

4 changes: 2 additions & 2 deletions docs/pipelineruns.md
@@ -59,7 +59,7 @@ following fields:

### Specifying a pipeline

- Since a `PipelineRun` is an invocation of a [`Pipeline`](pipelines.md), you must sepcify
+ Since a `PipelineRun` is an invocation of a [`Pipeline`](pipelines.md), you must specify
what `Pipeline` to invoke.

You can do this by providing a reference to an existing `Pipeline`:
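
The reference example itself falls outside this hunk; a minimal sketch of what such a reference looks like (assumed API version and a made-up pipeline name):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: demo-pipeline-run   # illustrative name
spec:
  pipelineRef:
    name: demo-pipeline     # the existing Pipeline to invoke
```
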
@@ -122,7 +122,7 @@ When running a [`Pipeline`](pipelines.md), you will need to specify the
be run with different `PipelineResources` in cases such as:

- When triggering the run of a `Pipeline` against a pull request, the triggering
-  system must specify the commitish of a git `PipelineResource` to use
+  system must specify the commit-ish of a git `PipelineResource` to use
- When invoking a `Pipeline` manually against one's own setup, one will need to
ensure one's own GitHub fork (via the git `PipelineResource`), image
registry (via the image `PipelineResource`) and Kubernetes cluster (via the
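
The rest of that list is truncated by the diff. For context, a hedged sketch of how per-run `PipelineResources` are bound in a `PipelineRun` spec (the resource names are assumptions for illustration):

```yaml
spec:
  pipelineRef:
    name: demo-pipeline
  resources:
    - name: source-repo          # name the Pipeline expects
      resourceRef:
        name: my-fork-git        # the git PipelineResource for this run
    - name: built-image
      resourceRef:
        name: my-registry-image
```
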
6 changes: 3 additions & 3 deletions docs/pipelines.md
@@ -131,16 +131,16 @@ This will tell Tekton to take whatever workspace is provided by the PipelineRun
with name "pipeline-ws1" and wire it into the "output" workspace expected by
the gen-code task. The same workspace will then also be wired into the "src" workspace
expected by the commit task. If the workspace provided by the PipelineRun is a
- persitent volume claim then we have successfully shared files between the two tasks!
+ persistent volume claim then we have successfully shared files between the two tasks!
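
A sketch of that wiring, abbreviated from what the surrounding pipelines.md example describes (the task and workspace names are the ones mentioned above; the rest is assumed):

```yaml
spec:
  workspaces:
    - name: pipeline-ws1          # supplied by the PipelineRun
  tasks:
    - name: gen-code
      taskRef:
        name: gen-code
      workspaces:
        - name: output            # workspace the gen-code Task expects
          workspace: pipeline-ws1
    - name: commit
      taskRef:
        name: commit
      workspaces:
        - name: src               # workspace the commit Task expects
          workspace: pipeline-ws1
```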

#### Workspaces Don't Imply Task Ordering (Yet)

One usecase for workspaces in `Pipeline`s is to provide a PVC to multiple `Task`s
- and have one or some write to it before the others read from it. This kind of behaviour
+ and have one or some write to it before the others read from it. This kind of behavior
relies on the order of the `Task`s - one writes, the next reads, and so on - but this
ordering is not currently enforced by Tekton. This means that `Task`s which write to a
PVC may be run at the same time as `Task`s expecting to read that data. In the worst case
- this can result in deadlock behaviour where multiple `Task`'s pods are all attempting
+ this can result in deadlock behavior where multiple `Task`'s pods are all attempting
to mount a PVC for writing at the same time.

To avoid this situation `Pipeline` authors can explicitly declare the ordering of `Task`s
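
One way to declare that ordering is the `runAfter` field on a `Pipeline` task; a sketch with assumed task and workspace names:

```yaml
tasks:
  - name: write-to-pvc            # illustrative names throughout
    taskRef:
      name: writer
    workspaces:
      - name: output
        workspace: shared-data
  - name: read-from-pvc
    runAfter:
      - write-to-pvc              # guarantees the writer finishes first
    taskRef:
      name: reader
    workspaces:
      - name: src
        workspace: shared-data
```
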
4 changes: 2 additions & 2 deletions docs/podtemplates.md
@@ -7,7 +7,7 @@ configuration that will be used as the basis for the `Task` pod.
This allows to customize some Pod specific field per `Task` execution, aka `TaskRun`.

Alternatively, you can also define a default pod template in tekton config, see [here](./install.md)
- When a pod template is specified for a `PipelineRun` or `TaskRun`, the default pod template is ignored, ie
+ When a pod template is specified for a `PipelineRun` or `TaskRun`, the default pod template is ignored, i.e.
both templates are **NOT** merged, it's always one or the other.

---
@@ -49,7 +49,7 @@ The current fields supported are:
- `schedulerName` the name of the
[scheduler](https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/)
to use when dispatching the Pod. This can be used when workloads of specific types need specific schedulers,
-  eg: If you are using volcano.sh for Machine Learning Workloads, you can pass the schedulerName and have Tasks be
+  e.g.: If you are using volcano.sh for Machine Learning Workloads, you can pass the schedulerName and have Tasks be
dispatched by the volcano.sh scheduler.
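
A sketch of passing `schedulerName` through a pod template on a `TaskRun` (the scheduler name, task name, and API version are assumptions for illustration):

```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: train-model-run
spec:
  taskRef:
    name: train-model
  podTemplate:
    schedulerName: volcano   # dispatch this TaskRun's pod via the volcano scheduler
```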


6 changes: 3 additions & 3 deletions docs/resources.md
@@ -75,7 +75,7 @@ refer to the local path to the mounted resource.

### Variable substitution

- `Task` and `Condition` specs can refer resource params as well as pre-defined
+ `Task` and `Condition` specs can refer resource params as well as predefined
variables such as `path` using the variable substitution syntax below where
`<name>` is the resource's `name` and `<key>` is one of the resource's `params`:

@@ -99,7 +99,7 @@ $(resources.<name>.<key>)

#### Accessing local path to resource

- The `path` key is pre-defined and refers to the local path to a resource on the
+ The `path` key is predefined and refers to the local path to a resource on the
mounted volume `shell $(resources.inputs.<name>.path)`
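
For example, a step could consume that path like this (a sketch; the resource name `my-repo` and the image are assumptions):

```yaml
steps:
  - name: list-sources
    image: ubuntu
    command: ["ls"]
    args: ["$(resources.inputs.my-repo.path)"]   # local path to the mounted git resource
```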

### Controlling where resources are mounted
@@ -472,7 +472,7 @@ https://godoc.org/github.com/jenkins-x/go-scm/scm#State

#### Pull Request

- The `pullRequest` resource will look for GitHub or Gitlab OAuth authentication
+ The `pullRequest` resource will look for GitHub or GitLab OAuth authentication
tokens in spec secrets with a field name called `authToken`.

URLs should be of the form: https://github.com/tektoncd/pipeline/pull/1
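
A sketch of a `pullRequest` resource supplying the token through a secret field named `authToken` (the secret name and key are assumptions):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: pr-resource          # illustrative name
spec:
  type: pullRequest
  params:
    - name: url
      value: https://github.com/tektoncd/pipeline/pull/1
  secrets:
    - fieldName: authToken   # field the pullRequest resource looks for
      secretName: github-oauth
      secretKey: token
```
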
6 changes: 3 additions & 3 deletions docs/taskruns.md
@@ -197,7 +197,7 @@ allows to customize some Pod specific field per `Task` execution, aka `TaskRun`.

In the following example, the Task is defined with a `volumeMount`
(`my-cache`), that is provided by the TaskRun, using a
- PersistenceVolumeClaim. The SchedulerName has also been provided to define which scheduler should be used to
+ PersistentVolumeClaim. The SchedulerName has also been provided to define which scheduler should be used to
dispatch the Pod. The Pod will also run as a non-root user.

```yaml
@@ -346,7 +346,7 @@ Fields include start and stop times for the `TaskRun` and each `Step` and exit c
For each step we also include the fully-qualified image used, with the digest.

If any pods have been [`OOMKilled`](https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/)
- by Kubernetes, the `Taskrun` will be marked as failed even if the exitcode is 0.
+ by Kubernetes, the `Taskrun` will be marked as failed even if the exit code is 0.

### Steps

@@ -689,7 +689,7 @@ Typical examples of the sidecar pattern are logging daemons, services to
update files on a shared volume, and network proxies.

Tekton will happily work with sidecars injected into a TaskRun's
- pods but the behaviour is a bit nuanced: When TaskRun's steps are complete
+ pods but the behavior is a bit nuanced: When TaskRun's steps are complete
any sidecar containers running inside the Pod will be terminated. In
order to terminate the sidecars they will be restarted with a new
"nop" image that quickly exits. The result will be that your TaskRun's
