[Do Not Merge] Release 1.12 (#10292)
* Update docs for fields allowed at root of CRD schema (#9973)

* add plugin docs and examples (#10053)

* docs update to promote TaintNodesByCondition to beta (#9626)

* HPA Specificity Improvements (#8757)

Updated the HPA docs to reference the `autoscaling/v2beta2` API version,
and added documentation about the new fields.

* adjust docs for pod ready++ (#10049)

* Remove --cadvisor-port - has been deprecated since v1.10 (#10023)

Change-Id: Id2a685473a243aef492a98ff450759f39e362557

* Add Documentation for Snapshot Feature (#9948)

* Add documentation for snapshot feature

* Update volume-snapshots.md

* Add dry-run to api-concepts (#10033)

* kubeadm-init: Update the offline support section (#10062)

The update includes the following changes (with Kubernetes 1.12 in mind):

- Remove the 1.8 image versions
- Add the 1.10 image versions that were missing until now
- Include a comment for the missing arch suffixes in 1.12

Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>

* Say bye to `DynamicProvisioningScheduling` (#10157)

The mentioned feature gate is now collapsed into `VolumeScheduling`.

xref: kubernetes/kubernetes#67432

* Update ResourceQuota per PriorityClass state for 1.12 (#10229)

* TokenRequest and TokenRequestProjection now beta (#10161)

xref: kubernetes/kubernetes#67349

* Change feature state for kms provider to beta. (#10230)

KMS Provider will be graduating to beta in v1.12, reflecting this change on the website.

* coredns default (#10200)

* Promote ShareProcessNamespace to beta in docs (#9996)

* Add CoreDNS details to DNS Debug docs (#10201)

* add coredns details

* address nits, add query logging section

* Update docs with topology aware dynamic provisioning (#9939)

* Document topology aware volume binding feature

* update for readability

* Update storage-classes.md

* comma splice

* don't abbreviate

* HPA Algorithm Information Improvements (#9780)

* Update HPA docs with more algorithm details

The HPA docs pointed to an out-of-date document for information on the
algorithm details, which users were finding confusing.  This sticks a
section on the algorithm in the HPA docs instead, documenting both
general behavior and corner cases.

* Add glossary info, HPA docs on quantities

People often ask about the quantity notation when working with the
metrics APIs, so this adds a glossary entry on quantities (since they're
used elsewhere in the system), and a short explanation in the HPA walkthrough.

* Information about HPA readiness and stabilization

This adds information about the new changes to HPA readiness and
stabilization from kubernetes/enhancements#591, and other minor changes that
landed in Kubernetes 1.12.

* Update horizontal-pod-autoscale.md

* Audit 1.12 doc (#9953)

* audit 1.12 document

* remove legacy audit feature

kubernetes/kubernetes#65862

* update feature gate doc

* MountPropagation is now GA (#10090)

* RuntimeClass documentation (#10102)

* RuntimeClass documentation

* Update runtime-class.md

* Add documentation for Scheduler performance tuning (#10048)

* Add documentation for Scheduler performance tuning

* Update scheduler-perf-tuning.md

* TTL controller for cleaning up finished resources (#10064)

* TTL controller for cleaning up finished resources

* Address comments

* Update ttlafterfinished.md

* Bump quota configuration api version (#10217)

* Incremental update from master (#10278)

* fix invalid href of cloud controller manager (#10240)

* fix invalid yaml format (#10238)

* update storage-limits doc with Azure disk part (#10224)

update storage-limits doc with Azure disk part

fix comments

* Update kubelet-config-file.md (#10222)

Update link to KubeletConfiguration struct.

* fix a trivial misspelling (#10244)

* Fix cassandra-statefulset.yaml indent level (#10243)

* Mention minimum etcd versions (#10208)

Source: https://groups.google.com/d/msg/kubernetes-dev/jMPA4JzKiY4/HIx2ugvLBAAJ

* fix 404 error (#10250)

* Small verb tweak (#10190)

Present participle, ftw.

* Add AnchorJS logic for header links (#10155)

* Add AnchorJS JavaScript

* Remove existing inpage_heading logic

* Remove underline from anchor tags

* Use single icon and add touch visibility

* Use paragraph link icon for AnchorJS

* Update Sass to use code formatting in docsContent headers

* Update header size coverage to H3-H6

* fix broken link in kubefed.md (#10254)

* Update the version numbers for the X-Remote-Extra- and Impersonate-Extra- key fixes (#9827)

The fix was cherry picked into 1.11.3, 1.10.7, and 1.9.11:

kubernetes/kubernetes#67162
kubernetes/kubernetes#67163
kubernetes/kubernetes#67164

* fix typo (#10168)

* fix typo

* addressing comments.

* Update setup-ha-etcd-with-kubeadm.md

* fix typos (#10252)

* fix description of contribute guide (#10253)

* describe truncate feature about advanced audit (#10236)

* describe truncate feature about advanced audit

* Update audit.md

* docs update to promote ScheduleDaemonSetPods to beta (#9923)

* Dynamic volume limit updates for 1.12 (#10211)

* add a placeholder commit

* Update docs for csi volume limits

* Update storage-limits.md

* Add "MayRunAs" value among other GroupStrategies (#9888)

* Add CoreDNS details to the customize DNS doc (#10228)

* Add CoreDNS details to the customize DNS doc

Rewrite the document to include more details about CoreDNS, since it's now the default from v1.12

* Address comments

* Improve doc wording

* Fix link

* Update dns-custom-nameservers.md

* Update dns-custom-nameservers.md

* Fix secrets docs in 1.12 branch (#10056)

* Fix secrets docs

* Update secret.md

* Revert CoreDNS Docs (#10319)

* Revert "Add CoreDNS details to DNS Debug docs (#10201)"

This reverts commit 462817a.

* Revert "Add CoreDNS details to the customize DNS doc (#10228)"

This reverts commit e7319ee.

* Revert "coredns default (#10200)"

This reverts commit 698e93b.

* Add CRI installation instructions page

Added cri-installation page with CRI installation instructions.
Referenced it from the kubeadm-init and install-kubeadm pages.

* kubeadm: update API types documentation for 1.12 (#10283)

v1alpha2 -> v1alpha3
MasterConfiguration -> [new-api-types]

* TokenRequest feature documentation (#10295)

* AdvancedAuditing is now GA (#10156)

xref: kubernetes/kubernetes#65862

`AdvancedAuditing` feature is GA in 1.12. This PR adjusts the related
docs.

* update runtime-class.md (#10332)

* update runtime-class.md

* Update runtime-class.md

* Document cross-authorizer permissions for creating RBAC roles (#10015)

* Document cross-authorizer permissions for creating RBAC roles

* Update rbac.md

* kubeadm: update authored content for 1.12 (reference docs and cluster creation) (#10348)

* kubeadm: update authored content in reference docs for 1.12

* kubeadm: add time frame in create-cluster-kubeadm for 1.12

* add AllowedProcMountTypes and ProcMountType to docs (#9911)

Signed-off-by: Jess Frazelle <acidburn@microsoft.com>

* kubeadm: add new command line reference (#10306)

Add:
- placeholder files
- include placeholder files
- include "renew" sub command
- add missing tabs for "alpha phase kubelet"

* Documenting SCTP support in Kubernetes (#10279)

* Documenting SCTP support in Kubernetes Service, Endpoint, NetworkPolicy and Pod

* Updates based on comments on the PR

* kubectl expose update with SCTP support

* Updated according to comments in the PR

* Revert "kubectl expose update with SCTP support"

This reverts commit 0d5a1e6.

* TLS Bootstrap and Server Cert Rotation feature documentation (#10232)

* TokenRequest feature documentation

* line wrapping to make review not insane

* update content for GA without major refactor

* Update kubelet-tls-bootstrapping.md

* Add clarifications for volume snapshots (#10296)

* Update kubadm ha installation for 1.12 (#10264)

* Update kubadm ha installation for 1.12

Signed-off-by: Chuck Ha <ha.chuck@gmail.com>

* update stable version

Signed-off-by: Chuck Ha <ha.chuck@gmail.com>

* Update stacked control plane for v1.12 (#2)

* use v1alpha3

Signed-off-by: Chuck Ha <ha.chuck@gmail.com>

* more v1alpha3 (#4)

* updates

Signed-off-by: Chuck Ha <ha.chuck@gmail.com>

* Document how to run in-tree cloud providers with kubeadm (#10357)

Change-Id: Iab6b996a830503d74a6eb0c507c5f8ca7a39235b

* kubeadm reference doc for release 1.12 (#10359)

* Revert "Revert "Add CoreDNS details to DNS Debug docs (#10201)""

This reverts commit bb30f4d.

* Revert "Revert "Add CoreDNS details to the customize DNS doc (#10228)""

This reverts commit bc23d45.

* Revert "Revert "coredns default (#10200)""

This reverts commit 7f4350d.

* add missing instruction for ha guide (#10374)

Signed-off-by: Chuck Ha <ha.chuck@gmail.com>

* kubeadm - Ha upgrade updates (#10340)

* Update HA upgrade docs

* Adds external etcd HA upgrade guide

Signed-off-by: Chuck Ha <ha.chuck@gmail.com>

* copyedit

* more edits

* add runasgroup in psp (#10076)

* update KubeletPluginsWatcher feature gate (#10205)

* generated 1.12 docs

* Building Multi-arch images with Manifests (#10379)

In 1.12, a variety of images used in a typical Kubernetes installation
have started using manifests to better support environments with arm
or ppc64le architectures. For example, all images used with kubeadm
have manifests by default, as do all the tests in the
conformance test suite. Here we capture the best practices for everyone
to start using manifests in their own workflows.

Change-Id: I5ba4c5fe55ffc9486a8251760f3352be4f2e1494

* Upgrade docs for v1.12 (#10344)

* generated assets and docs

* remove 1.7

* update 1.12

* update plugin documentation under docs>tasks>extend-kubectl (#10259)

* update plugin documentation under docs>tasks>extend-kubectl

* Update kubectl-plugins.md
jimangel authored and k8s-ci-robot committed Sep 27, 2018
1 parent 593b631 commit 786d314
Showing 178 changed files with 8,960 additions and 2,634 deletions.
27 changes: 14 additions & 13 deletions config.toml
@@ -63,10 +63,10 @@ time_format_blog = "Monday, January 02, 2006"
description = "Production-Grade Container Orchestration"
showedit = true

latest = "v1.11"
latest = "v1.12"

fullversion = "v1.11.0"
version = "v1.11"
fullversion = "v1.12.0"
version = "v1.12"
githubbranch = "master"
docsbranch = "master"
deprecated = false
@@ -76,10 +76,10 @@ githubWebsiteRepo = "github.com/kubernetes/website"
githubWebsiteRaw = "raw.githubusercontent.com/kubernetes/website"

[[params.versions]]
fullversion = "v1.11.0"
version = "v1.11"
githubbranch = "v1.11.0"
docsbranch = "release-1.11"
fullversion = "v1.12.0"
version = "v1.12"
githubbranch = "v1.12.0"
docsbranch = "release-1.12"
url = "https://kubernetes.io"

[params.pushAssets]
@@ -93,6 +93,13 @@ js = [
"script"
]

[[params.versions]]
fullversion = "v1.11.3"
version = "v1.11"
githubbranch = "v1.11.3"
docsbranch = "release-1.11"
url = "https://v1-11.docs.kubernetes.io"

[[params.versions]]
fullversion = "v1.10.3"
version = "v1.10"
@@ -114,12 +121,6 @@ githubbranch = "v1.8.4"
docsbranch = "release-1.8"
url = "https://v1-8.docs.kubernetes.io"

[[params.versions]]
fullversion = "v1.7.6"
version = "v1.7"
githubbranch = "v1.7.6"
docsbranch = "release-1.7"
url = "https://v1-7.docs.kubernetes.io"

# Language definitions.

6 changes: 2 additions & 4 deletions content/en/docs/concepts/architecture/nodes.md
@@ -76,11 +76,9 @@ the `Terminating` or `Unknown` state. In cases where Kubernetes cannot deduce fr
permanently left a cluster, the cluster administrator may need to delete the node object by hand. Deleting the node object from
Kubernetes causes all the Pod objects running on the node to be deleted from the apiserver, and frees up their names.

Version 1.8 introduced an alpha feature that automatically creates
In version 1.12, the `TaintNodesByCondition` feature is promoted to beta, so the node lifecycle controller automatically creates
[taints](/docs/concepts/configuration/taint-and-toleration/) that represent conditions.
To enable this behavior, pass an additional feature gate flag `--feature-gates=...,TaintNodesByCondition=true`
to the API server, controller manager, and scheduler.
When `TaintNodesByCondition` is enabled, the scheduler ignores conditions when considering a Node; instead
Similarly, the scheduler ignores conditions when considering a Node; instead
it looks at the Node's taints and a Pod's tolerations.

Now users can choose between the old scheduling model and a new, more flexible scheduling model.
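
For example, a Pod that should still be scheduled onto a node whose network is not yet ready can tolerate the corresponding condition taint. A minimal sketch (the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: network-tolerant-pod   # illustrative name
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1
  tolerations:
  # Tolerate the NoSchedule taint that the node lifecycle controller adds
  # for the NetworkUnavailable node condition.
  - key: "node.kubernetes.io/network-unavailable"
    operator: "Exists"
    effect: "NoSchedule"
```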
40 changes: 40 additions & 0 deletions content/en/docs/concepts/cluster-administration/cloud-providers.md
@@ -9,7 +9,47 @@ This page explains how to manage Kubernetes running on a specific
cloud provider.
{{% /capture %}}

{{< toc >}}

{{% capture body %}}
### kubeadm
[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) is a popular option for creating Kubernetes clusters.
kubeadm has configuration options to specify the configuration information for cloud providers. For example, a typical
in-tree cloud provider can be configured using kubeadm as shown below:

```yaml
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "openstack"
    cloud-config: "/etc/kubernetes/cloud.conf"
---
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1alpha3
kubernetesVersion: v1.12.0
apiServerExtraArgs:
  cloud-provider: "openstack"
  cloud-config: "/etc/kubernetes/cloud.conf"
apiServerExtraVolumes:
- name: cloud
  hostPath: "/etc/kubernetes/cloud.conf"
  mountPath: "/etc/kubernetes/cloud.conf"
controllerManagerExtraArgs:
  cloud-provider: "openstack"
  cloud-config: "/etc/kubernetes/cloud.conf"
controllerManagerExtraVolumes:
- name: cloud
  hostPath: "/etc/kubernetes/cloud.conf"
  mountPath: "/etc/kubernetes/cloud.conf"
```

The in-tree cloud providers typically need both `--cloud-provider` and `--cloud-config` specified in the command lines
for the [kube-apiserver](/docs/admin/kube-apiserver/), [kube-controller-manager](/docs/admin/kube-controller-manager/) and the
[kubelet](/docs/admin/kubelet/). The contents of the file specified in `--cloud-config` for each provider are documented below as well.
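
For instance, outside of kubeadm the same settings are passed directly on the component command lines. A rough sketch for the kubelet, reusing the illustrative OpenStack cloud config from above:

```shell
# Illustrative flags only; all other kubelet flags are omitted.
kubelet --cloud-provider=openstack \
  --cloud-config=/etc/kubernetes/cloud.conf
```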

For all external cloud providers, please follow the instructions on the individual repositories.

## AWS
This section describes all the possible configurations which can
be used when running Kubernetes on Amazon Web Services.
5 changes: 3 additions & 2 deletions content/en/docs/concepts/cluster-administration/proxies.md
@@ -36,7 +36,7 @@ There are several different proxies you may encounter when using Kubernetes:
1. The [kube proxy](/docs/concepts/services-networking/service/#ips-and-vips):

- runs on each node
- proxies UDP and TCP
- proxies UDP, TCP and SCTP
- does not understand HTTP
- provides load balancing
- is just used to reach services
@@ -51,7 +51,8 @@

- are provided by some cloud providers (e.g. AWS ELB, Google Cloud Load Balancer)
- are created automatically when the Kubernetes service has type `LoadBalancer`
- use UDP/TCP only
- usually support UDP/TCP only
- SCTP support is up to the load balancer implementation of the cloud provider (see the example Service after this list)
- implementation varies by cloud provider.
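
A minimal sketch of a Service that causes such a cloud load balancer to be created (the name, selector and ports are illustrative; SCTP would additionally require the alpha `SCTPSupport` feature gate and a load balancer that supports it):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service          # illustrative name
spec:
  type: LoadBalancer        # asks the cloud provider for an external load balancer
  selector:
    app: my-app
  ports:
  - protocol: TCP           # most cloud load balancers support TCP/UDP only
    port: 80
    targetPort: 8080
```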

Kubernetes users will typically not need to worry about anything other than the first two types. The cluster admin
@@ -42,7 +42,7 @@ other pods to be evicted/not get scheduled. To resolve this issue,
[ResourceQuota](https://kubernetes.io/docs/concepts/policy/resource-quotas/) is
augmented to support Pod priority. An admin can create ResourceQuota for users
at specific priority levels, preventing them from creating pods at high
priorities. However, this feature is in alpha as of Kubernetes 1.11.
priorities. This feature is in beta as of Kubernetes 1.12.
{{< /warning >}}
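
As a minimal sketch (the quota name, the `hard` limit, and the `high` priority class are illustrative), such a quota can be scoped to a priority class with `scopeSelector`:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pods-high-priority      # illustrative name
spec:
  hard:
    pods: "10"                  # at most 10 Pods of this priority in the namespace
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values: ["high"]          # applies only to Pods using the "high" PriorityClass
```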

{{% /capture %}}
112 changes: 112 additions & 0 deletions content/en/docs/concepts/configuration/scheduler-perf-tuning.md
@@ -0,0 +1,112 @@
---
reviewers:
- bsalamat
title: Scheduler Performance Tuning
content_template: templates/concept
weight: 70
---

{{% capture overview %}}

{{< feature-state for_k8s_version="1.12" >}}

Kube-scheduler is the Kubernetes default scheduler. It is responsible for
placement of Pods on Nodes in a cluster. Nodes in a cluster that meet the
scheduling requirements of a Pod are called "feasible" Nodes for the Pod. The
scheduler finds feasible Nodes for a Pod, runs a set of functions to
score the feasible Nodes, and picks the Node with the highest score among them
to run the Pod. The scheduler then notifies the API server about this
decision in a process called "Binding".

{{% /capture %}}

{{% capture body %}}

## Percentage of Nodes to Score

Before Kubernetes 1.12, Kube-scheduler used to check the feasibility of all the
nodes in a cluster and then scored the feasible ones. Kubernetes 1.12 has a new
feature that allows the scheduler to stop looking for more feasible nodes once
it finds a certain number of them. This improves the scheduler's performance in
large clusters. The number is specified as a percentage of the cluster size and
is controlled by a configuration option called `percentageOfNodesToScore`. The
value should be between 1 and 100; values outside that range are treated as 100%. The
default value of this option is 50%. A cluster administrator can change it in the
scheduler configuration, although doing so is usually not necessary.

```yaml
apiVersion: componentconfig/v1alpha1
kind: KubeSchedulerConfiguration
algorithmSource:
  provider: DefaultProvider

...

percentageOfNodesToScore: 50
```

{{< note >}} **Note**: In clusters with fewer than 50 feasible nodes, the
scheduler still checks all the nodes, simply because there are not enough
feasible nodes to stop the scheduler's search early. {{< /note >}}

**To disable this feature**, you can set `percentageOfNodesToScore` to 100.

### Tuning percentageOfNodesToScore

`percentageOfNodesToScore` must be a value between 1 and 100
with the default value of 50. There is also a hardcoded minimum value of 50
nodes which is applied internally. The scheduler tries to find at
least 50 nodes regardless of the value of `percentageOfNodesToScore`. This means
that changing this option to lower values in clusters with several hundred nodes
will not have much impact on the number of feasible nodes that the scheduler
tries to find. This is intentional as this option is unlikely to improve
performance noticeably in smaller clusters. In large clusters with over 1000
nodes, setting this value to lower numbers may show a noticeable performance
improvement.

An important note to consider when setting this value is that when a smaller
number of nodes in a cluster are checked for feasibility, some nodes are not
sent to be scored for a given Pod. As a result, a Node which could possibly
score a higher value for running the given Pod might not even be passed to the
scoring phase. This would result in a less than ideal placement of the Pod. For
this reason, the value should not be set to very low percentages. A general rule
of thumb is to never set the value to anything lower than 30. Lower values
should be used only when the scheduler's throughput is critical for your
application and the score of nodes is not important. In other words, you prefer
to run the Pod on any Node as long as it is feasible.

It is not recommended to lower this value from its default if your cluster has
only several hundred Nodes. It is unlikely to improve the scheduler's
performance significantly.

### How the scheduler iterates over Nodes

This section is intended for those who want to understand the internal details
of this feature.

In order to give all the Nodes in a cluster a fair chance of being considered
for running Pods, the scheduler iterates over the nodes in a round-robin
fashion. You can imagine that the Nodes are in an array. The scheduler starts at
the beginning of the array and checks the feasibility of the nodes until it finds enough
Nodes as specified by `percentageOfNodesToScore`. For the next Pod, the
scheduler continues from the point in the Node array that it stopped at when checking
feasibility of Nodes for the previous Pod.

If Nodes are in multiple zones, the scheduler iterates over Nodes in various
zones to ensure that Nodes from different zones are considered in the
feasibility checks. As an example, consider six nodes in two zones:

```
Zone 1: Node 1, Node 2, Node 3, Node 4
Zone 2: Node 5, Node 6
```

The Scheduler evaluates feasibility of the nodes in this order:

```
Node 1, Node 5, Node 2, Node 6, Node 3, Node 4
```

After going over all the Nodes, it goes back to Node 1.

{{% /capture %}}
12 changes: 9 additions & 3 deletions content/en/docs/concepts/configuration/secret.md
@@ -343,9 +343,15 @@ files.

When a secret being already consumed in a volume is updated, projected keys are eventually updated as well.
Kubelet is checking whether the mounted secret is fresh on every periodic sync.
However, it is using its local ttl-based cache for getting the current value of the secret.
As a result, the total delay from the moment when the secret is updated to the moment when new keys are
projected to the pod can be as long as kubelet sync period + ttl of secrets cache in kubelet.
However, it uses its local cache for getting the current value of the Secret.
The type of the cache is configurable using the `ConfigMapAndSecretChangeDetectionStrategy` field in the
[KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/pkg/kubelet/apis/kubeletconfig/v1beta1/types.go).
The cache can either be propagated via watch (the default), be ttl-based, or simply redirect
all requests directly to the kube-apiserver.
As a result, the total delay from the moment when the Secret is updated to the moment
when new keys are projected to the Pod can be as long as the kubelet sync period plus the cache
propagation delay, where the cache propagation delay depends on the chosen cache type
(it equals the watch propagation delay, the TTL of the cache, or zero, respectively).
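
A minimal sketch of selecting the cache type in the kubelet configuration file, assuming the v1beta1 field is spelled `configMapAndSecretChangeDetectionStrategy` and accepts the values `Get`, `Cache`, and `Watch`:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Watch (the default) propagates updates via watch, Cache uses the ttl-based cache,
# and Get redirects every request directly to the kube-apiserver.
configMapAndSecretChangeDetectionStrategy: "Watch"
```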

{{< note >}}
**Note:** A container using a Secret as a
@@ -279,9 +279,10 @@ which matches the behavior when this feature is disabled.

## Taint Nodes by Condition

Version 1.8 introduces an alpha feature that causes the node controller to create taints corresponding to
Node conditions. When this feature is enabled (you can do this by including `TaintNodesByCondition=true` in the `--feature-gates` command line flag to the scheduler, such as
`--feature-gates=FooBar=true,TaintNodesByCondition=true`), the scheduler does not check Node conditions; instead the scheduler checks taints. This assures that Node conditions don't affect what's scheduled onto the Node. The user can choose to ignore some of the Node's problems (represented as Node conditions) by adding appropriate Pod tolerations.
In version 1.12, the `TaintNodesByCondition` feature is promoted to beta, so the node lifecycle controller automatically creates taints corresponding to
Node conditions.
Similarly, the scheduler does not check Node conditions; instead it checks taints. This ensures that Node conditions don't affect what's scheduled onto the Node. The user can choose to ignore some of the Node's problems (represented as Node conditions) by adding appropriate Pod tolerations.
Note that `TaintNodesByCondition` only taints nodes with the `NoSchedule` effect. The `NoExecute` effect is controlled by `TaintBasedEviction`, which is an alpha feature and disabled by default.

Starting in Kubernetes 1.8, the DaemonSet controller automatically adds the
following `NoSchedule` tolerations to all daemons, to prevent DaemonSets from
20 changes: 20 additions & 0 deletions content/en/docs/concepts/containers/images.md
@@ -32,6 +32,26 @@ you can do one of the following:

Note that you should avoid using `:latest` tag, see [Best Practices for Configuration](/docs/concepts/configuration/overview/#container-images) for more information.

## Building Multi-architecture Images with Manifests

The Docker CLI now supports the `docker manifest` command with subcommands such as `create`, `annotate` and `push`. These commands can be used to build and push manifests. You can use `docker manifest inspect` to view a manifest.

Please see the Docker documentation here:
https://docs.docker.com/edge/engine/reference/commandline/manifest/

See examples of how we use this in our build harness:
https://cs.k8s.io/?q=docker%20manifest%20(create%7Cpush%7Cannotate)&i=nope&files=&repos=

These commands rely on, and are implemented purely in, the Docker CLI. You will need to either edit `$HOME/.docker/config.json` and set the `experimental` key to `enabled`, or simply set the `DOCKER_CLI_EXPERIMENTAL` environment variable to `enabled` when you call the CLI commands.

{{< note >}}
**Note:** Please use Docker *18.06 or above*; versions below that either have bugs or do not support the experimental command-line option. For example, https://github.com/docker/cli/issues/1135 causes problems under containerd.
{{< /note >}}

If you run into trouble with uploading stale manifests, just clean up the older manifests in `$HOME/.docker/manifests` to start fresh.

For Kubernetes, we have typically used images with the suffix `-$(ARCH)`. For backward compatibility, please generate the older images with these suffixes. The idea is to generate, say, a `pause` image that has the manifest for all the architectures, and, say, a `pause-amd64` image that stays backwards compatible with older configurations or YAML files that may have hard-coded the images with suffixes.
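
A rough sketch of the workflow described above (the image names are illustrative):

```shell
# Enable the experimental CLI features for this shell session.
export DOCKER_CLI_EXPERIMENTAL=enabled

# Combine per-architecture images into a single manifest list.
docker manifest create example.com/myapp:1.0 \
  example.com/myapp-amd64:1.0 \
  example.com/myapp-arm64:1.0

# Record the architecture of an entry, then push and inspect the list.
docker manifest annotate example.com/myapp:1.0 example.com/myapp-arm64:1.0 --arch arm64
docker manifest push example.com/myapp:1.0
docker manifest inspect example.com/myapp:1.0
```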

## Using a Private Registry

Private registries may require keys to read images from them.