
docs: make healthcheck page top level doc #6898

Merged

Conversation

@gsquared94 gsquared94 requested a review from a team as a code owner November 23, 2021 13:23
@google-cla google-cla bot added the cla: yes label Nov 23, 2021
@gsquared94 gsquared94 enabled auto-merge (squash) November 23, 2021 13:45
@gsquared94 gsquared94 changed the title docs: update status-check supported resource types docs: make healthcheck page top level doc Nov 23, 2021
@gsquared94 gsquared94 added the docs-modifications runs the docs preview service on the given PR label Nov 23, 2021
@container-tools-bot

Please visit http://35.236.115.106:1313 to view changes to the docs.

@container-tools-bot container-tools-bot removed the docs-modifications runs the docs preview service on the given PR label Nov 23, 2021

@briandealwis briandealwis left a comment


I know you are just moving an existing section, but we should clean up terminology here: Kubernetes uses "health checks", not the compound form ("healthchecks"). But IMHO we should never have used "healthcheck" at all, as it is very confusing alongside our existing use of `--status-check` on the command line. And since Skaffold doesn't watch liveness, we're not really doing health checks. I think we should use "status checks" throughout.

docs/content/en/docs/pipeline-stages/healthcheck.md — 7 review threads (outdated, resolved)
@gsquared94 gsquared94 added the docs-modifications runs the docs preview service on the given PR label Nov 23, 2021
@container-tools-bot

Please visit http://34.94.139.93:1313 to view changes to the docs.

@container-tools-bot container-tools-bot removed the docs-modifications runs the docs preview service on the given PR label Nov 23, 2021

@briandealwis briandealwis left a comment


It seems odd to use status-check since there is no such command, just an argument. I made suggestions to expand them.

@@ -12,7 +12,7 @@ When Skaffold deploys your application to Kubernetes, it (usually) goes through
* the Skaffold deployer _renders_ the final Kubernetes manifests: Skaffold replaces untagged image names in the Kubernetes manifests with the final tagged image names.
It also might go through the extra intermediate step of expanding templates (for helm) or calculating overlays (for kustomize).
* the Skaffold deployer _deploys_ the final Kubernetes manifests to the cluster
* the Skaffold deployer waits for the deployed resources to stabilize. See [healthchecks]({{< relref "/docs/workflows/ci-cd.md#waiting-for-skaffold-deployments-using-healthcheck" >}}).
* the Skaffold deployer waits for the deployed resources to stabilize. See [status-check]({{< relref "/docs/pipeline-stages/status-check" >}}).

WDYT

Suggested change
* the Skaffold deployer waits for the deployed resources to stabilize. See [status-check]({{< relref "/docs/pipeline-stages/status-check" >}}).
* the Skaffold deployer performs [status checks]({{< relref "/docs/pipeline-stages/status-check" >}}) and waits for the deployed resources to stabilize.

@@ -152,4 +152,4 @@ deploy:
containerName: hooks-example* # use a glob pattern to prefix-match the container name and pod name for deployments, stateful-sets, etc.
podName: hooks-example-deployment*
```
This config snippet defines a simple `echo` command to run inside the containers that match `podName` and `containerName`, before and after each `kubectl` deploy. The `after` container commands are only run post [healthchecks]({{< relref "/docs/workflows/ci-cd.md#waiting-for-skaffold-deployments-using-healthcheck" >}}) on the deployment are complete. Also, unlike the `sync` container hooks, skaffold cannot determine the target container from just the config definition, and needs the `podName` and `containerName`.
This config snippet defines a simple `echo` command to run inside the containers that match `podName` and `containerName`, before and after each `kubectl` deploy. The `after` container commands are only run post [status-check]({{< relref "/docs/pipeline-stages/status-check" >}}) on the deployment are complete. Also, unlike the `sync` container hooks, skaffold cannot determine the target container from just the config definition, and needs the `podName` and `containerName`.

Suggested change
This config snippet defines a simple `echo` command to run inside the containers that match `podName` and `containerName`, before and after each `kubectl` deploy. The `after` container commands are only run post [status-check]({{< relref "/docs/pipeline-stages/status-check" >}}) on the deployment are complete. Also, unlike the `sync` container hooks, skaffold cannot determine the target container from just the config definition, and needs the `podName` and `containerName`.
This config snippet defines a simple `echo` command to run inside the containers that match `podName` and `containerName`, before and after each `kubectl` deploy. The `after` container commands are only run after the [deployment status checks]({{< relref "/docs/pipeline-stages/status-check" >}}) are complete. Also, unlike the `sync` container hooks, skaffold cannot determine the target container from just the config definition, and needs the `podName` and `containerName`.
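The full config snippet under discussion isn't reproduced in this hunk; as a rough sketch, a container-hooks block matching the quoted `podName`/`containerName` fragment might look like the following (field names reflect my reading of the Skaffold hooks schema and the manifest path is illustrative — verify both against the current skaffold.yaml reference):

```yaml
deploy:
  kubectl:
    manifests:
      - k8s/deployment.yaml   # illustrative path, not from the PR
    hooks:
      before:
        - container:
            # runs inside matching containers before each kubectl deploy
            command: ["sh", "-c", "echo pre-deploy hook"]
            containerName: hooks-example*          # glob, prefix-matches the container name
            podName: hooks-example-deployment*     # glob, prefix-matches the pod name
      after:
        - container:
            # runs only after the deployment's status checks complete
            command: ["sh", "-c", "echo post-deploy hook"]
            containerName: hooks-example*
            podName: hooks-example-deployment*
```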

* [`Stateful Sets`](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/): check the output of `kubectl rollout status statefulset` command
* [`Google Cloud Config Connector resources`](https://cloud.google.com/config-connector/docs/overview): check that the resource state has a `Ready` condition set to `True`.

{{<alert title="Note">}}

This doesn't seem like it needs a call-out?

* [`Google Cloud Config Connector resources`](https://cloud.google.com/config-connector/docs/overview): check that the resource state has a `Ready` condition set to `True`.

{{<alert title="Note">}}
* `status-check` is enabled by default; it can be disabled with the `--status-check=false`

Replace "status-check" with "status checking"?

Suggested change
* `status-check` is enabled by default; it can be disabled with the `--status-check=false`
* Status checking is enabled by default; it can be disabled with the `--status-check=false`

Deployments stabilized in 2.168799605s
```

### Configuring timeout for `status-check`

Suggested change
### Configuring timeout for `status-check`
### Configuring timeout for status checking
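For reference, the timeout this section describes is set in `skaffold.yaml`; a minimal sketch (the field name `statusCheckDeadlineSeconds` is my recollection of the Skaffold deploy schema, so confirm it against the current config reference):

```yaml
deploy:
  # fail the status check if deployed resources have not stabilized within 60s
  statusCheckDeadlineSeconds: 60
  kubectl: {}
```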

FATA[0006] 1/1 deployment(s) failed
```

### Configuring `status-check` for multiple deployers or multiple modules

Suggested change
### Configuring `status-check` for multiple deployers or multiple modules
### Configuring status checking for multiple deployers or multiple modules

Comment on lines +106 to +108
If you define multiple deployers, say `kubectl`, `helm`, and `kustomize`, all in the same skaffold config, or compose a multi-config project by importing other configs as dependencies, then the `status-check` can be run in one of two ways:
- _Single status check after all deployers are run_. This is the default and it runs a single `status-check` at the end for resources deployed from all deployers across all skaffold configs.
- _Per-deployer status check_. This can be enabled by using the `--iterative-status-check=true` flag. This will run a `status-check` iteratively after every individual deployer runs. This can be especially useful when there are startup dependencies between services, or you need to strictly enforce the time and order in which resources are deployed.

Suggested change
If you define multiple deployers, say `kubectl`, `helm`, and `kustomize`, all in the same skaffold config, or compose a multi-config project by importing other configs as dependencies, then the `status-check` can be run in one of two ways:
- _Single status check after all deployers are run_. This is the default and it runs a single `status-check` at the end for resources deployed from all deployers across all skaffold configs.
- _Per-deployer status check_. This can be enabled by using the `--iterative-status-check=true` flag. This will run a `status-check` iteratively after every individual deployer runs. This can be especially useful when there are startup dependencies between services, or you need to strictly enforce the time and order in which resources are deployed.
If you define multiple deployers, say `kubectl`, `helm`, and `kustomize`, all in the same skaffold config, or compose a multi-config project by importing other configs as dependencies, then the status check can be run in one of two ways:
- _Single status check after all deployers are run_. This is the default and it runs a single status check at the end for resources deployed from all deployers across all skaffold configs.
- _Per-deployer status check_. This can be enabled by using the `--iterative-status-check=true` flag. This will run a status check after every individual deployer runs. This can be especially useful when there are startup dependencies between services, or you need to strictly enforce the time and order in which resources are deployed.

@@ -175,7 +75,7 @@ Skaffold allows separating the generation of fully-hydrated Kubernetes manifests
`skaffold render` builds all application images from your artifacts, templates the newly-generated image tags into your Kubernetes manifests (based on your project's deployment configuration), and then prints out the final hydrated manifests to a file or your terminal.
This allows you to capture the full, declarative state of your application in configuration, such that _applying_ the changes to your cluster can be done as a separate step.

`skaffold apply` consumes one or more fully-hydrated Kubernetes manifests, and then sends the results directly to the Kubernetes control plane via `kubectl` to create resources on the target cluster. After creating the resources on your cluster, `skaffold apply` uses Skaffold's built-in health checking to monitor the created resources for readiness. See [resource health checks]({{<relref "/docs/workflows/ci-cd#waiting-for-skaffold-deployments-using-healthcheck">}}) for more information on how Skaffold's resource health checking works.
`skaffold apply` consumes one or more fully-hydrated Kubernetes manifests, and then sends the results directly to the Kubernetes control plane via `kubectl` to create resources on the target cluster. After creating the resources on your cluster, `skaffold apply` uses Skaffold's built-in health checking to monitor the created resources for readiness. See [resource health checks]({{<relref "/docs/pipeline-stages/status-check">}}) for more information on how Skaffold's resource health checking works.

Suggested change
`skaffold apply` consumes one or more fully-hydrated Kubernetes manifests, and then sends the results directly to the Kubernetes control plane via `kubectl` to create resources on the target cluster. After creating the resources on your cluster, `skaffold apply` uses Skaffold's built-in health checking to monitor the created resources for readiness. See [resource health checks]({{<relref "/docs/pipeline-stages/status-check">}}) for more information on how Skaffold's resource health checking works.
`skaffold apply` consumes one or more fully-hydrated Kubernetes manifests, and then sends the results directly to the Kubernetes control plane via `kubectl` to create resources on the target cluster. After creating the resources on your cluster, `skaffold apply` uses Skaffold's built-in health checking to monitor the created resources for readiness. See [deployment status checking]({{<relref "/docs/pipeline-stages/status-check">}}) for more information on how Skaffold's resource health checking works.

@gsquared94 gsquared94 merged commit 2e5439d into GoogleContainerTools:main Nov 23, 2021
gsquared94 added a commit to gsquared94/skaffold that referenced this pull request Nov 23, 2021
gsquared94 added a commit that referenced this pull request Nov 24, 2021
* docs: address comments from #6898

* Update docs/content/en/docs/pipeline-stages/deployers/_index.md

Co-authored-by: Brian de Alwis <bsd@acm.org>

Co-authored-by: Brian de Alwis <bsd@acm.org>
3 participants