From f905adf6178783f888b0235828f7c91b1c085180 Mon Sep 17 00:00:00 2001 From: "Mr. Erlison" Date: Thu, 21 Apr 2022 12:03:36 -0300 Subject: [PATCH 001/292] Add content/pt-br/docs/concepts/architecture/nodes.md --- .../pt-br/docs/concepts/architecture/nodes.md | 392 ++++++++++++++++++ 1 file changed, 392 insertions(+) create mode 100644 content/pt-br/docs/concepts/architecture/nodes.md diff --git a/content/pt-br/docs/concepts/architecture/nodes.md b/content/pt-br/docs/concepts/architecture/nodes.md new file mode 100644 index 0000000000000..23ee94bcb46b4 --- /dev/null +++ b/content/pt-br/docs/concepts/architecture/nodes.md @@ -0,0 +1,392 @@ +--- +reviewers: +- caesarxuchao +- dchen1107 +title: Nós +content_type: conceito +weight: 10 +--- + + + +O Kubernetes executa sua carga de trabalho colocando contêineres em Pods para serem executados em _Nós_. Um nó pode ser uma máquina virtual ou física, dependendo do cluster. Cada nó é gerenciado pelo {{< glossary_tooltip text="plano de controle" term_id="control-plane" >}} e contém os serviços necessários para executar {{< glossary_tooltip text="Pods" term_id="pod" >}}. + +Normalmente, você tem vários nós em um cluster; em um ambiente de aprendizado ou limitado por recursos, você pode ter apenas um nó. + +Os [componentes](/docs/concepts/overview/components/#node-components) em um nó incluem o {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}, um {{< glossary_tooltip text="contêiner runtime" term_id="container-runtime" >}}, e o {{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}}. + + + +## Administração + +Existem duas maneiras principais de adicionar Nós ao {{< glossary_tooltip text="servidor API" term_id="kube-apiserver" >}}: + +1. O kubelet em um nó se registra automaticamente no plano de controle +2. Você (ou outro usuário humano) adiciona manualmente um objeto Nó + +Depois de criar um {{< glossary_tooltip text="objeto" term_id="object" >}} Nó, ou o kubelet em um nó se registra automaticamente, o plano de controle verifica se o novo objeto Nó é válido. Por exemplo, se você tentar criar um nó a partir do seguinte manifesto JSON: + +```json +{ + "kind": "Node", + "apiVersion": "v1", + "metadata": { + "name": "10.240.79.157", + "labels": { + "name": "my-first-k8s-node" + } + } +} +``` + +O Kubernetes cria um objeto nó internamente (a representação). O Kubernetes verifica se um kubelet se registrou no servidor API que corresponde ao campo `metadata.name` do Nó. Se o nó estiver saudável (ou seja, todos os serviços necessários estiverem em execução), ele será elegível para executar um Pod. Caso contrário, esse nó é ignorado para qualquer atividade de cluster até que se torne saudável. + +{{< note >}} +O Kubernetes mantém o objeto nó inválido e continua verificando se ele se torna saudável. + +Você, ou um {{< glossary_tooltip term_id="controller" text="controlador">}}, deve excluir explicitamente o objeto Nó para interromper essa verificação de integridade. +{{< /note >}} + +O nome de um objeto nó deve ser um nome de [subdomínio válido de DNS](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). + +### Singularidade de nome do nó + +O [nome](/docs/concepts/overview/working-with-objects/names#names) identifica um nó. Dois nós não podem ter o mesmo nome ao mesmo tempo. O Kubernetes também assume que um recurso com o mesmo nome é o mesmo objeto. 
No caso de um nó, assume-se implicitamente que uma instância usando o mesmo nome terá o mesmo estado (por exemplo, configurações de rede, conteúdo do disco raiz) e atributos como rótulos de nó. Isso pode levar a inconsistências se uma instância for modificada sem alterar seu nome. Se o nó precisar ser substituído ou atualizado significativamente, o objeto Nó existente precisa ser removido do servidor API primeiro e adicionado novamente após a atualização.

### Auto-registro de Nós

Quando a opção `--register-node` do kubelet for verdadeira (padrão), o kubelet tentará se registrar no servidor API. Este é o padrão preferido, usado pela maioria das distribuições.

Para auto-registro, o kubelet é iniciado com as seguintes opções:

- `--kubeconfig` - O caminho das credenciais para se autenticar no servidor API.
- `--cloud-provider` - Como falar com um {{< glossary_tooltip text="provedor de nuvem" term_id="cloud-provider" >}} para ler metadados sobre si mesmo.
- `--register-node` - Registrar automaticamente no servidor API.
- `--register-with-taints` - Registra o nó com a lista fornecida de {{< glossary_tooltip text="taints" term_id="taint" >}} (separadas por vírgula, no formato `<chave>=<valor>:<efeito>`). Não tem efeito se `register-node` for falso.
- `--node-ip` - Endereço IP do nó.
- `--node-labels` - {{< glossary_tooltip text="Labels" term_id="label" >}} a serem adicionados ao registrar o nó no cluster (consulte as restrições de label impostas pelo [plug-in de admissão NodeRestriction](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)).
- `--node-status-update-frequency` - Especifica com que frequência o kubelet publica o status do nó no servidor da API.

Quando o [modo de autorização do nó](/docs/reference/access-authn-authz/node/) e o [plug-in de admissão NodeRestriction](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) estão ativados, os kubelets só estarão autorizados a criar/modificar seu próprio recurso do nó.

{{< note >}}
Como mencionado na seção de [singularidade do nome do nó](#singularidade-de-nome-do-no), quando a configuração do nó precisa ser atualizada, é uma boa prática registrar novamente o nó no servidor da API. Por exemplo, se o kubelet estiver sendo reiniciado com um novo conjunto de `--node-labels`, mas o mesmo nome de nó for usado, a alteração não entrará em vigor, pois os labels são definidos apenas no momento do registro do Nó.

Pods já agendados no Nó podem se comportar de forma inesperada ou causar problemas se a configuração do Nó for alterada na reinicialização do kubelet. Por exemplo, um Pod já em execução pode entrar em conflito com os novos rótulos atribuídos ao Nó, enquanto outros Pods, incompatíveis com esse Pod, serão agendados com base nesse novo rótulo. O novo registro do nó garante que todos os Pods sejam drenados e devidamente reagendados.
{{< /note >}}

### Administração manual de nós

Você pode criar e modificar objetos Nó usando o {{< glossary_tooltip text="kubectl" term_id="kubectl" >}}.

Quando você quiser criar objetos Nó manualmente, defina a opção do kubelet `--register-node=false`.

Você pode modificar objetos Nó, independentemente da configuração de `--register-node`. Por exemplo, você pode definir labels em um nó existente ou marcá-lo como não programável.

Você pode usar labels nos Nós em conjunto com seletores de nós nos Pods para controlar o agendamento. Por exemplo, você pode restringir um Pod a ser elegível apenas para ser executado em um subconjunto dos nós disponíveis.
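A título de ilustração, um esboço mínimo de um Pod que usa um seletor de nós (`nodeSelector`) para ser elegível apenas em nós que possuam um determinado label (o label `disktype: ssd` abaixo é apenas um exemplo hipotético):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  # O Pod só será elegível para nós que tenham o label disktype=ssd
  nodeSelector:
    disktype: ssd
```

O label correspondente poderia ser aplicado a um nó existente com `kubectl label nodes $NODENAME disktype=ssd`.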
+ +Marcar um nó como não programável impede que o agendador coloque novos pods nesse nó, mas não afeta os Pods existentes no nó. Isso é útil como uma etapa preparatória antes da reinicialização de um nó ou outra manutenção. + +Para marcar um nó como não programado, execute: + +```shell +kubectl cordon $NODENAME +``` + +Consulte [Drenar um nó com segurança](/docs/tasks/administer-cluster/safely-drain-node/) para obter mais detalhes. + +{{< note >}} +Os Pods que fazem parte de um {{< glossary_tooltip term_id="daemonset" >}} toleram ser executados em um nó não programável. Os DaemonSets geralmente fornecem serviços locais de nós que devem ser executados em um Nó, mesmo que ele esteja sendo drenado de aplicativos de carga de trabalho. +{{< /note >}} + +## Situação do Nó + +O status de um nó contém as seguintes informações: + +* [Endereços](#addresses) +* [Condições](#condition) +* [Capacidade](#capacity) +* [Informação](#info) + +Você pode usar o `kubectl` para visualizar o status de um nó e outros detalhes: + +```shell +kubectl describe node +``` + +Cada seção da saída está descrita abaixo. + +### Endereços + +O uso desses campos pode mudar dependendo do seu provedor de nuvem ou configuração `bare metal`. + +* HostName: O nome do host relatado pelo `kernel` do nó. Pode ser substituído através do parâmetro kubelet `--hostname-override`. +* ExternalIP: Geralmente, o endereço IP do nó que é roteável externamente (disponível fora do `cluster`). +* InternalIP: Geralmente, o endereço IP do nó que é roteável somente dentro do `cluster`. + +### Condições {#conditions} + +O campo `conditions` descreve o status de todos os nós em execução. Exemplos de condições incluem: + +{{< table caption = "Node conditions, and a description of when each condition applies." >}} +| Node Condition | Description | +|----------------------|-------------| +| `Ready` | `True` Se o nó estiver saudável e pronto para aceitar pods, `False` se o nó não estiver saudável e não estiver aceitando pods, e desconhecido `Unknown` se o controlador do nó tiver sem notícias do nó no último `node-monitor-grace-period` (o padrão é de 40 segundos) | +| `DiskPressure` | `True` Se houver pressão sobre o tamanho do disco, ou seja, se a capacidade do disco for baixa; caso contrário `False` | +| `MemoryPressure` | `True` Se houver pressão na memória do nó, ou seja, se a memória do nó estiver baixa; caso contrário `False` | +| `PIDPressure` | `True` Se houver pressão sobre os processos, ou seja, se houver muitos processos no nó; caso contrário `False` | +| `NetworkUnavailable` | `True` Se a rede do nó não estiver configurada corretamente, caso contrário `False` | +{{< /table >}} + +{{< note >}} +Se você usar as ferramentas de linha de comando para mostrar os detalhes de um nó isolado, a `Condition` inclui `SchedulingDisabled`. `SchedulingDisabled` não é uma condição na API do Kubernetes; em vez disso, os nós isolados são marcados como `Unschedulable` em suas especificações. +{{< /note >}} + +Na API do Kubernetes, a condição de um nó é representada como parte do `.status` do recurso do nó. 
Por exemplo, a seguinte estrutura JSON descreve um nó saudável: + +```json +"conditions": [ + { + "type": "Ready", + "status": "True", + "reason": "KubeletReady", + "message": "kubelet is posting ready status", + "lastHeartbeatTime": "2019-06-05T18:38:35Z", + "lastTransitionTime": "2019-06-05T11:41:27Z" + } +] +``` + +Se o status da condição `Ready` permanecer desconhecido (`Unknown`) ou falso (`False`) por mais tempo do que o limite de despejo do pod (`pod-eviction-timeout`) (um argumento passado para o {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager">}}), o [controlador de nó](#node-controller) acionará o {{< glossary_tooltip text="despejo iniciado pela API" term_id="api-eviction" >}} para todos os Pods atribuídos a esse nó. A duração padrão do tempo limite de despejo é de **cinco minutos**. Em alguns casos, quando o nó está inacessível, o servidor API não consegue se comunicar com o kubelet no nó. A decisão de excluir os pods não pode ser comunicada ao kubelet até que a comunicação com o servidor API seja restabelecida. Enquanto isso, os pods agendados para exclusão podem continuar a ser executados no nó particionado. + +O controlador de nós não força a exclusão dos pods até que seja confirmado que eles pararam de ser executados no cluster. Você pode ver os pods que podem estar sendo executados em um nó inacessível como estando no estado de terminando (`Terminating`) ou desconhecido (`Unknown`). Nos casos em que o Kubernetes não retirar da infraestrutura subjacente se um nó tiver deixado permanentemente um cluster, o administrador do cluster pode precisar excluir o objeto do nó manualmente. Excluir o objeto do nó do Kubernetes faz com que todos os objetos Pod em execução no nó sejam excluídos do servidor da API e libera seus nomes. + +Quando ocorrem problemas nos nós, o plano de controle do Kubernetes cria automaticamente [`taints`](/docs/concepts/scheduling-eviction/taint-and-toleration/) que correspondem às condições que afetam o nó. O agendador leva em consideração as `taints` do Nó ao atribuir um Pod a um Nó. Os Pods também podem ter {{< glossary_tooltip text="tolerations" term_id="toleration" >}} que os permitem funcionar em um nó, mesmo que tenha uma `taint` específica. + +Consulte [Nó Taint Nodes por Condição](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition) +para mais detalhes. + +### Capacidade e Alocável {#capacity} + +Descreve os recursos disponíveis no nó: CPU, memória e o número máximo de pods que podem ser agendados no nó. + +Os campos no bloco de capacidade indicam a quantidade total de recursos que um nó possui. O bloco alocado indica a quantidade de recursos em um nó que está disponível para ser consumido por Pods normais. + +Você pode ler mais sobre capacidade e recursos alocados enquanto aprende a [reservar recursos de computação](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable) em um nó. + +### Info + +Descreve informações gerais sobre o nó, como a versão do kernel, a versão do Kubernetes (versão do kubelet e kube-proxy), detalhes do tempo de execução do contêiner e qual sistema operacional o nó usa. O kubelet coleta essas informações do nó e as publica na API do Kubernetes. + +## Heartbeats + +Os `Heartbeats`, enviados pelos nós do Kubernetes, ajudam seu cluster a determinar a disponibilidade de cada nó e a agir quando as falhas forem detectadas. 
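Como esboço ilustrativo (assumindo um cluster em funcionamento e o `kubectl` já configurado), as duas formas de heartbeat descritas a seguir podem ser observadas assim:

```shell
# As condições e o campo lastHeartbeatTime fazem parte do .status do Nó
kubectl get node $NODENAME -o yaml

# Cada nó possui um objeto Lease correspondente no namespace kube-node-lease
kubectl get leases --namespace kube-node-lease
```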
+ +Para nós, existem duas formas de `heartbeats`: + +* atualizações para o `.status` de um Nó +* Objetos [Lease](/docs/reference/kubernetes-api/cluster-resources/lease-v1/) dentro do {{< glossary_tooltip term_id="namespace" text="namespace">}} `kube-node-lease`. Cada nó tem um objeto de `Lease` associado. + +Em comparação com as atualizações no `.status` de um nó, um Lease é um recurso mais leve. O uso de Leases para `heartbeats` reduz o impacto no desempenho dessas atualizações para grandes clusters. + +O kubelet é responsável por criar e atualizar o `.status` dos Nós e por atualizar suas Leases relacionadas. + +- O kubelet atualiza o .status do nó quando há mudança de status ou se não houve atualização para um intervalo configurado. O intervalo padrão para atualizações .status para Nós é de 5 minutos, o que é muito maior do que o tempo limite padrão de 40 segundos para nós inacessíveis. +- O kubelet cria e atualiza seu objeto `Lease` a cada 10 segundos (o intervalo de atualização padrão). As atualizações de Lease ocorrem independentemente das atualizações no `.status` do Nó. Se a atualização do `Lease` falhar, o kubelet voltará a tentativas, usando um recuo exponencial que começa em 200 milissegundos e limitado a 7 segundos. + +## Controlador de Nós + +O {{< glossary_tooltip text="controlador" term_id="controller" >}} de nós é um componente do plano de controle do Kubernetes que gerencia vários aspectos dos nós. + +O controlador de nó tem várias funções na vida útil de um nó. O primeiro é atribuir um bloco CIDR ao nó quando ele é registrado (se a atribuição CIDR estiver ativada). + +O segundo é manter a lista interna de nós do controlador de nós atualizada com a lista de máquinas disponíveis do provedor de nuvem. Ao ser executado em um ambiente de nuvem e sempre que um nó não é saudável, o controlador de nó pergunta ao provedor de nuvem se a VM desse nó ainda está disponível. Caso contrário, o controlador de nós exclui o nó de sua lista de nós. + +O terceiro é monitorar a saúde dos nós. O controlador do nó é responsável por: + +- No caso de um nó se tornar inacessível, atualizando a condição NodeReady de dentro do `.status` do nó. Nesse caso, o controlador do nó define a condição de pronto (`NodeReady`) como condição desconhecida (`ConditionUnknown`). +- Se um nó permanecer inacessível: será iniciado o [despejo pela API](/docs/concepts/scheduling-eviction/api-eviction/) para todos os Pods no nó inacessível. Por padrão, o controlador do nó espera 5 minutos entre marcar o nó como condição desconhecida (`ConditionUnknown`) e enviar a primeira solicitação de despejo. + +O controlador de nó verifica o estado de cada nó a cada `--node-monitor-period` segundos. + +### Limites de taxa de despejo + +Na maioria dos casos, o controlador de nós limita a taxa de despejo a `--node-eviction-rate` (0,1 por padrão) por segundo, o que significa que ele não despejará pods de mais de 1 nó por 10 segundos. + +O comportamento de despejo do nó muda quando um nó em uma determinada zona de disponibilidade se torna não saudável. O controlador de nós verifica qual porcentagem de nós na zona não são saudáveis (a condição `NodeReady` é desconhecida `ConditionUnknown` ou falsa `ConditionFalse`) ao mesmo tempo: + +- Se a fração de nós não saudáveis for ao menos `--unhealthy-zone-threshold` (padrão 0,55), então a taxa de despejo será reduzida. +- Se o cluster for pequeno (ou seja, tiver menos ou igual a nós `--large-cluster-size-threshold` - padrão 50), então os despejos serão interrompidos. 
- Caso contrário, a taxa de despejo é reduzida para `--secondary-node-eviction-rate` (padrão 0,01) por segundo.

A razão pela qual essas políticas são implementadas por zona de disponibilidade é que uma zona de disponibilidade pode ficar particionada do plano de controle enquanto as outras permanecem conectadas. Se o seu cluster não abranger várias zonas de disponibilidade de provedores de nuvem, o mecanismo de despejo não levará em conta a indisponibilidade por zona.

Uma das principais razões para espalhar seus nós pelas zonas de disponibilidade é para que a carga de trabalho possa ser transferida para zonas saudáveis quando uma zona inteira cair. Portanto, se todos os nós em uma zona não forem saudáveis, o controlador do nó despeja na taxa normal de `--node-eviction-rate`. O caso extremo ocorre quando todas as zonas estão completamente insalubres (nenhum dos nós do cluster é saudável). Nesse caso, o controlador do nó assume que há algum problema com a conectividade entre o plano de controle e os nós e não realiza nenhum despejo. (Se houver uma interrupção e alguns nós reaparecerem, o controlador do nó expulsa pods dos nós restantes que são insalubres ou inacessíveis).

O controlador de nós também é responsável por despejar pods em execução nos nós com taints `NoExecute`, a menos que esses pods tolerem essa taint. O controlador de nó também adiciona as {{< glossary_tooltip text="taints" term_id="taint" >}} correspondentes aos problemas de nó, como nó inacessível ou não pronto. Isso significa que o agendador não colocará Pods em nós não saudáveis.

## Rastreamento de capacidade de recursos {#node-capacity}

Os objetos do nó rastreiam informações sobre a capacidade de recursos do nó: por exemplo, a quantidade de memória disponível e o número de CPUs. Os nós que se [auto-registram](#self-registration-of-nodes) relatam sua capacidade durante o registro. Se você adicionar [manualmente](#manual-node-administration) um nó, precisará definir as informações de capacidade do nó ao adicioná-lo.

O {{< glossary_tooltip text="agendador" term_id="kube-scheduler" >}} do Kubernetes garante que haja recursos suficientes para todos os Pods em um nó. O agendador verifica se a soma das solicitações de contêineres no nó não é maior do que a capacidade do nó. Essa soma de solicitações inclui todos os contêineres gerenciados pelo kubelet, mas exclui quaisquer contêineres iniciados diretamente pelo agente de execução de contêiner e também exclui quaisquer processos executados fora do controle do kubelet.

{{< note >}}
Se você quiser reservar explicitamente recursos para processos que não sejam do Pod, consulte [reserva de recursos para daemons do sistema](/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved).
{{< /note >}}

## Topologia do Nó

{{< feature-state state="alpha" for_k8s_version="v1.16" >}}

Se você ativou o [recurso](/docs/reference/command-line-tools-reference/feature-gates/) `TopologyManager`, o kubelet pode usar dicas de topologia ao tomar decisões de atribuição de recursos. Consulte [Controle das Políticas de Gerenciamento de Topologia em um Nó](/docs/tasks/administer-cluster/topology-manager/) para obter mais informações.

## Desligamento gracioso do nó {#graceful-node-shutdown}

{{< feature-state state="beta" for_k8s_version="v1.21" >}}

O kubelet tenta detectar o desligamento do sistema do nó e encerra os pods em execução no nó.

O kubelet garante que os pods sigam o processo normal de [término do pod](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) durante o desligamento do nó.

O recurso de desligamento gradual do nó depende do systemd, pois aproveita os [bloqueios de inibidor do systemd](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) para atrasar o desligamento do nó por uma determinada duração.

O desligamento gradual do nó é controlado com o [recurso](/docs/reference/command-line-tools-reference/feature-gates/) `GracefulNodeShutdown`, que é ativado por padrão na versão 1.21.

Observe que, por padrão, ambas as opções de configuração descritas abaixo, `shutdownGracePeriod` e `shutdownGracePeriodCriticalPods`, estão definidas como zero, não ativando assim a funcionalidade de desligamento gradual do nó. Para ativar o recurso, as duas configurações do kubelet devem ser configuradas adequadamente e definidas com valores diferentes de zero.

Durante um desligamento gradual, o kubelet encerra os pods em duas fases:

1. Encerra os pods regulares em execução no nó.
2. Encerra os [pods críticos](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) em execução no nó.

O recurso de desligamento gradual do nó é configurado com duas opções da [`KubeletConfiguration`](/docs/tasks/administer-cluster/kubelet-config-file/):

* `shutdownGracePeriod`:
  * Especifica a duração total pela qual o nó deve atrasar o desligamento. Este é o período de carência total para o término dos pods regulares e dos [críticos](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).

* `shutdownGracePeriodCriticalPods`:
  * Especifica a duração utilizada para encerrar [pods críticos](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) durante um desligamento de nó. Este valor deve ser menor que `shutdownGracePeriod`.

Por exemplo, se `shutdownGracePeriod=30s` e `shutdownGracePeriodCriticalPods=10s`, o kubelet atrasará o desligamento do nó em 30 segundos. Durante o desligamento, os primeiros 20 (30-10) segundos seriam reservados para encerrar gradualmente os pods normais, e os últimos 10 segundos seriam reservados para encerrar os [pods críticos](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).

{{< note >}}
Quando os pods são despejados durante o desligamento gradual do nó, eles são marcados como encerrados. Executar `kubectl get pods` mostra o status dos pods despejados como `Terminated`. E `kubectl describe pod` indica que o pod foi despejado por causa do desligamento do nó:

```
Reason: Terminated
Message: Pod was terminated in response to imminent node shutdown.
```
{{< /note >}}

### Desligamento gradual do nó baseado em prioridade do Pod {#pod-priority-graceful-node-shutdown}

{{< feature-state state="alpha" for_k8s_version="v1.23" >}}

Para fornecer mais flexibilidade durante o desligamento gradual do nó em torno da ordem de pods durante o desligamento, o desligamento gradual do nó respeita a PriorityClass dos Pods, desde que você tenha ativado esse recurso em seu cluster.
O recurso permite que o cluster defina explicitamente a ordem dos pods durante o desligamento gradual do nó com base em [classes de prioridade](/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass). + +O recurso [Desligamento Gradual do Nó](#graceful-node-shutdown), conforme descrito acima, desliga pods em duas fases, pods não críticos, seguidos por pods críticos. Se for necessária flexibilidade adicional para definir explicitamente a ordem dos pods durante o desligamento de uma maneira mais granular, o desligamento gradual baseado na prioridade do pod pode ser usado. + +Quando o desligamento gradual do nó respeita as prioridades do pod, isso torna possível fazer o desligamento gradual do nó em várias fases, cada fase encerrando uma classe de prioridade específica de pods. O kubelet pode ser configurado com as fases exatas e o tempo de desligamento por fase. + +Assumindo as seguintes classes de prioridade de pod personalizadas em um cluster, + +|Nome das classes de prioridade|Valor das classes de prioridade| +|-------------------------|------------------------| +|`custom-class-a` | 100000 | +|`custom-class-b` | 10000 | +|`custom-class-c` | 1000 | +|`regular/unset` | 0 | + +Na [configuração do kubelet](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration), as configurações para `shutdownGracePeriodByPodPriority` podem se parecer com: + +|Valor das classes de prioridade|Tempo de desligamento| +|------------------------|---------------| +| 100000 |10 segundos | +| 10000 |180 segundos | +| 1000 |120 segundos | +| 0 |60 segundos | + +A configuração correspondente do kubelet YAML seria: + +```yaml +shutdownGracePeriodByPodPriority: + - priority: 100000 + shutdownGracePeriodSeconds: 10 + - priority: 10000 + shutdownGracePeriodSeconds: 180 + - priority: 1000 + shutdownGracePeriodSeconds: 120 + - priority: 0 + shutdownGracePeriodSeconds: 60 +``` + +A tabela acima implica que qualquer pod com valor `priority` >= 100000 terá apenas 10 segundos para parar qualquer pod com valor >= 10000 e < 100000 e terá 180 segundos para parar, qualquer pod com valor >= 1000 e < 10000 terá 120 segundos para parar. Finalmente, todos os outros pods terão 60 segundos para parar. + +Não é preciso especificar valores correspondentes para todas as classes. Por exemplo, você pode usar estas configurações: + + +|Valor das classes de prioridade|Tempo de desligamento| +|------------------------|---------------| +| 100000 |300 segundos | +| 1000 |120 segundos | +| 0 |60 segundos | + + +No caso acima, os pods com `custom-class-b` irão para o mesmo bucket que `custom-class-c` para desligamento. + +Se não houver pods em um intervalo específico, o kubelet não irá espera por pods nesse intervalo de prioridades. Em vez disso, o kubelet pula imediatamente para o próximo intervalo de valores da classe de prioridade. + +Se esse recurso estiver ativado e nenhuma configuração for fornecida, nenhuma ação de pedido será tomada. + +O uso desse recurso requer ativar os recursos `GracefulNodeShutdownBasedOnPodPriority` e definir o `ShutdownGracePeriodByPodPriority` da configuração do kubelet para a configuração desejada, contendo os valores da classe de prioridade do pod e seus respectivos períodos de desligamento. + +## Gerenciamento da memória swap {#swap-memory} + +{{< feature-state state="alpha" for_k8s_version="v1.22" >}} + +Antes do Kubernetes 1.22, os nós não suportavam o uso de memória swap, e um kubelet, por padrão, não iniciaria se a troca fosse detectada em um nó. 
A partir de 1.22, o suporte a memória swap pode ser ativado por nó. + +Para ativar a troca em um nó, o recursos `NodeSwap` deve estar ativado no kubelet, e a [configuração](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) de comando de linha `--fail-swap-on` ou `failSwapOn` deve ser definida como falsa. + +{{< warning >}} +Quando o recurso de memória swap está ativado, os dados do Kubernetes, como o conteúdo de objetos `Secret` que foram gravados no `tmpfs`, agora podem ser trocados para o disco. +{{< /warning >}} + +Opcionalmente, um usuário também pode configurar `memorySwap.swapBehavior` para especificar como um nó usará memória swap. Por exemplo, + +```yaml +memorySwap: + swapBehavior: LimitedSwap +``` + +As opções de configuração disponíveis para `swapBehavior` são: + +- `LimitedSwap`: As cargas de trabalho do Kubernetes são limitadas na quantidade de troca que podem usar. Cargas de trabalho no nó não gerenciadas pelo Kubernetes ainda podem ser trocadas. +- `UnlimitedSwap`: As cargas de trabalho do Kubernetes podem usar tanta memória de swap quanto solicitarem, até o limite do sistema. + +Se a configuração do `memorySwap` não for especificada e o recurso estiver ativado, por padrão, o kubelet aplicará o mesmo comportamento que a configuração `LimitedSwap`. + +O comportamento da configuração `LimitedSwap` depende se o nó estiver sendo executado com v1 ou v2 de grupos de controle (também conhecidos como "cgroups"): + +- **cgroupsv1**: As cargas de trabalho do Kubernetes podem usar qualquer combinação de memória e swap, até o limite de memória do pod, se definido. +- **cgroupsv2**: As cargas de trabalho do Kubernetes não podem usar memória swap. + +Para obter mais informações e para ajudar nos testes e fornecer feedback, consulte [KEP-2400](https://github.com/kubernetes/enhancements/issues/2400) e sua [proposta de design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md). + +## {{% heading "whatsnext" %}} + +* Saiba mais sobre [componentes](/docs/concepts/overview/components/#node-components) que compõem um nó. +* Leia a [definição da API para um Nó](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core). +* Leia a seção [Nó](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node) do documento de design de arquitetura. +* Leia sobre [taints e tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/). From 5037dc9bdd46902e7905008e367e8ea911903d74 Mon Sep 17 00:00:00 2001 From: "Mr. Erlison" Date: Thu, 21 Apr 2022 14:15:44 -0300 Subject: [PATCH 002/292] Add content/pt-br/docs/reference/glossary/userns.md --- .../pt-br/docs/reference/glossary/userns.md | 24 +++++++++++++++++++ 1 file changed, 24 insertions(+) create mode 100644 content/pt-br/docs/reference/glossary/userns.md diff --git a/content/pt-br/docs/reference/glossary/userns.md b/content/pt-br/docs/reference/glossary/userns.md new file mode 100644 index 0000000000000..3fa8dc5031d19 --- /dev/null +++ b/content/pt-br/docs/reference/glossary/userns.md @@ -0,0 +1,24 @@ +--- +title: Namespace do usuário +id: userns +date: 2021-07-13 +full_link: https://man7.org/linux/man-pages/man7/user_namespaces.7.html +short_description: > + Um recurso do kernel Linux para emular privilégios de superusuário para usuários sem privilégios. + +aka: +tags: +- security +--- + +Um recurso do kernel para emular o root. Usado para "contêineres sem root". 
+ + + +Os namespaces do usuário são um recurso do kernel Linux que permite que um usuário não root emule privilégios de superusuário ("root"), por exemplo, para executar contêineres sem ser um superusuário fora do contêiner. + +O namespace do usuário é eficaz para mitigar os danos de possíveis ataques fora de contêineres. + +No contexto de namespaces de usuário, o namespace é um recurso do kernel Linux, e não um {{< glossary_tooltip text="namespace" term_id="namespace" >}} no sentido do termo Kubernetes. + + From d353f8ceb332dbc7f10182a77bed7fa5261c0e23 Mon Sep 17 00:00:00 2001 From: "Mr. Erlison" Date: Thu, 21 Apr 2022 14:39:00 -0300 Subject: [PATCH 003/292] Add content/pt-br/docs/reference/glossary/sig.md --- content/pt-br/docs/reference/glossary/sig.md | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+) create mode 100644 content/pt-br/docs/reference/glossary/sig.md diff --git a/content/pt-br/docs/reference/glossary/sig.md b/content/pt-br/docs/reference/glossary/sig.md new file mode 100644 index 0000000000000..8d8770610076d --- /dev/null +++ b/content/pt-br/docs/reference/glossary/sig.md @@ -0,0 +1,19 @@ +--- +title: SIG (grupo de interesse especial) +id: sig +date: 2018-04-12 +full_link: https://github.com/kubernetes/community/blob/master/sig-list.md#master-sig-list +short_description: > + Membros da comunidade que gerenciam coletivamente uma parte ou continuamente um projeto maior de código aberto do Kubernetes. + +aka: +tags: +- community +--- + {{< glossary_tooltip text="Membros da comunidade" term_id="member" >}} que gerenciam coletivamente uma parte ou continuamente um projeto maior de código aberto do Kubernetes. + + + +Os membros dentro de um grupo de interesse especial (do inglês - Special Interest Group, SIG) têm um interesse comum em avançar em uma área específica, como arquitetura, API ou documentação. Os SIGs devem seguir as [diretrizes de governança](https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance.md) do SIG, mas podem ter sua própria política de contribuição e canais de comunicação. + +Para mais informações, consulte o repositório [kubernetes/community](https://github.com/kubernetes/community) e a lista atual de [SIGs e Grupos de Trabalho](https://github.com/kubernetes/community/blob/master/sig-list.md). From 1dcbae36ea3a0e7a856144bf4da7434088801356 Mon Sep 17 00:00:00 2001 From: "Mr. Erlison" Date: Fri, 22 Apr 2022 07:52:36 -0300 Subject: [PATCH 004/292] Add content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-version.md --- .../setup-tools/kubeadm/kubeadm-version.md | 13 +++++++++++++ 1 file changed, 13 insertions(+) create mode 100644 content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-version.md diff --git a/content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-version.md b/content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-version.md new file mode 100644 index 0000000000000..5bf2ed0e31b6d --- /dev/null +++ b/content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-version.md @@ -0,0 +1,13 @@ +--- +reviewers: +- luxas +- jbeda +title: kubeadm version +content_type: conceito +weight: 80 +--- + +Este comando exibe a versão do kubeadm. + + +{{< include "generated/kubeadm_version.md" >}} From a54d1d047719e85d127e2463f655ff028b26ff5c Mon Sep 17 00:00:00 2001 From: "Mr. 
Erlison" Date: Fri, 22 Apr 2022 08:44:07 -0300 Subject: [PATCH 005/292] Add pt-br/docs/reference/setup-tools/kubeadm/kubeadm-token.md --- .../setup-tools/kubeadm/kubeadm-token.md | 27 +++++++++++++++++++ 1 file changed, 27 insertions(+) create mode 100644 content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-token.md diff --git a/content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-token.md b/content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-token.md new file mode 100644 index 0000000000000..406d3b2983d85 --- /dev/null +++ b/content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-token.md @@ -0,0 +1,27 @@ +--- +title: kubeadm token +content_type: conceito +weight: 70 +--- + + +Os Bootstrap tokens são usados para estabelecer uma relação de confiança bidirecional entre um nó que se junta ao cluster e um nó do plano de controle, conforme descrito na [autenticação com tokens de inicialização](/docs/reference/access-authn-authz/bootstrap-tokens/). + +O `kubeadm init` cria um token inicial com um TTL de 24 horas. Os comandos a seguir permitem que você gerencie esse token e também crie e gerencie os novos. + + +## Criar um token kubeadm {#cmd-token-create} +{{< include "generated/kubeadm_token_create.md" >}} + +## Excluir um token kubeadm {#cmd-token-delete} +{{< include "generated/kubeadm_token_delete.md" >}} + +## Gerar um token kubeadm {#cmd-token-generate} +{{< include "generated/kubeadm_token_generate.md" >}} + +## Listar um token kubeadm {#cmd-token-list} +{{< include "generated/kubeadm_token_list.md" >}} + +## {{% heading "O que vem a seguir?" %}} + +* [kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join) para inicializar um nó `worker` do Kubernetes e associá-lo ao cluster \ No newline at end of file From ca7ff4b5c9de61a6c10a8dfadf80c09ac88733a2 Mon Sep 17 00:00:00 2001 From: "Mr. Erlison" Date: Fri, 22 Apr 2022 09:19:49 -0300 Subject: [PATCH 006/292] Add folder setup-tools/kubeadm/generated/ --- .../docs/reference/setup-tools/kubeadm/generated/README.md | 1 + .../docs/reference/setup-tools/kubeadm/generated/_index.md | 6 ++++++ 2 files changed, 7 insertions(+) create mode 100644 content/pt-br/docs/reference/setup-tools/kubeadm/generated/README.md create mode 100644 content/pt-br/docs/reference/setup-tools/kubeadm/generated/_index.md diff --git a/content/pt-br/docs/reference/setup-tools/kubeadm/generated/README.md b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/README.md new file mode 100644 index 0000000000000..020bc76f624cd --- /dev/null +++ b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/README.md @@ -0,0 +1 @@ +All files in this directory are auto-generated from other repos. **Do not edit them manually. You must edit them in their upstream repo.** \ No newline at end of file diff --git a/content/pt-br/docs/reference/setup-tools/kubeadm/generated/_index.md b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/_index.md new file mode 100644 index 0000000000000..7ebf753ae9d46 --- /dev/null +++ b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/_index.md @@ -0,0 +1,6 @@ +--- +title: "Kubeadm Generated" +weight: 10 +toc_hide: true +--- + From 08a1fa5f13b29ed36d5f807f74cf758d4d8c7f5f Mon Sep 17 00:00:00 2001 From: "Mr. 
Erlison" Date: Sat, 23 Apr 2022 08:08:00 -0300 Subject: [PATCH 007/292] Remove folder generated --- .../docs/reference/setup-tools/kubeadm/generated/README.md | 1 - .../docs/reference/setup-tools/kubeadm/generated/_index.md | 6 ------ 2 files changed, 7 deletions(-) delete mode 100644 content/pt-br/docs/reference/setup-tools/kubeadm/generated/README.md delete mode 100644 content/pt-br/docs/reference/setup-tools/kubeadm/generated/_index.md diff --git a/content/pt-br/docs/reference/setup-tools/kubeadm/generated/README.md b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/README.md deleted file mode 100644 index 020bc76f624cd..0000000000000 --- a/content/pt-br/docs/reference/setup-tools/kubeadm/generated/README.md +++ /dev/null @@ -1 +0,0 @@ -All files in this directory are auto-generated from other repos. **Do not edit them manually. You must edit them in their upstream repo.** \ No newline at end of file diff --git a/content/pt-br/docs/reference/setup-tools/kubeadm/generated/_index.md b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/_index.md deleted file mode 100644 index 7ebf753ae9d46..0000000000000 --- a/content/pt-br/docs/reference/setup-tools/kubeadm/generated/_index.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -title: "Kubeadm Generated" -weight: 10 -toc_hide: true ---- - From 5fab2d1d28f5f8a68d86b681f63b2d717b77815d Mon Sep 17 00:00:00 2001 From: "Mr. Erlison" Date: Sat, 23 Apr 2022 13:03:18 -0300 Subject: [PATCH 008/292] Add pt-br/docs/reference/glossary/kubectl.md --- .../pt-br/docs/reference/glossary/kubectl.md | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) create mode 100644 content/pt-br/docs/reference/glossary/kubectl.md diff --git a/content/pt-br/docs/reference/glossary/kubectl.md b/content/pt-br/docs/reference/glossary/kubectl.md new file mode 100644 index 0000000000000..d3136c44230c4 --- /dev/null +++ b/content/pt-br/docs/reference/glossary/kubectl.md @@ -0,0 +1,18 @@ +--- +title: Kubectl +id: kubectl +date: 2018-04-12 +full_link: /docs/user-guide/kubectl-overview/ +short_description: > + Uma ferramenta de linha de comando para se comunicar com um cluster Kubernetes. + +aka: +- kubectl +tags: +- tool +- fundamental +--- +Ferramenta de linha de comando para se comunicar com o {{< glossary_tooltip text="plano de controle" term_id="control-plane" >}} de um cluster Kubernetes usando a API do Kubernetes. + + +Você pode usar `kubectl` para criar, inspecionar, atualizar e excluir objetos Kubernetes. From e0143344e7deb4fa872920b43241817cbc5ed307 Mon Sep 17 00:00:00 2001 From: "Mr. Erlison" Date: Thu, 28 Apr 2022 10:30:54 -0300 Subject: [PATCH 009/292] Update sentence adjustment --- content/pt-br/docs/reference/glossary/sig.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/pt-br/docs/reference/glossary/sig.md b/content/pt-br/docs/reference/glossary/sig.md index 8d8770610076d..37d70a85f1852 100644 --- a/content/pt-br/docs/reference/glossary/sig.md +++ b/content/pt-br/docs/reference/glossary/sig.md @@ -4,13 +4,13 @@ id: sig date: 2018-04-12 full_link: https://github.com/kubernetes/community/blob/master/sig-list.md#master-sig-list short_description: > - Membros da comunidade que gerenciam coletivamente uma parte ou continuamente um projeto maior de código aberto do Kubernetes. + Membros da comunidade que gerenciam coletivamente e continuamente uma parte ou um projeto maior de código aberto do Kubernetes. 
aka: tags: - community --- - {{< glossary_tooltip text="Membros da comunidade" term_id="member" >}} que gerenciam coletivamente uma parte ou continuamente um projeto maior de código aberto do Kubernetes. + {{< glossary_tooltip text="Membros da comunidade" term_id="member" >}} que gerenciam coletivamente e continuamente uma parte ou um projeto maior de código aberto do Kubernetes. From c753d58337f4add22dc6a616fb6d4e6bfe427a7f Mon Sep 17 00:00:00 2001 From: "Mr. Erlison" Date: Sat, 4 Jun 2022 12:07:53 -0300 Subject: [PATCH 010/292] Improvements done --- .../pt-br/docs/concepts/architecture/nodes.md | 100 ++++++++---------- 1 file changed, 47 insertions(+), 53 deletions(-) diff --git a/content/pt-br/docs/concepts/architecture/nodes.md b/content/pt-br/docs/concepts/architecture/nodes.md index 23ee94bcb46b4..a26aaa932a531 100644 --- a/content/pt-br/docs/concepts/architecture/nodes.md +++ b/content/pt-br/docs/concepts/architecture/nodes.md @@ -1,7 +1,5 @@ --- reviewers: -- caesarxuchao -- dchen1107 title: Nós content_type: conceito weight: 10 @@ -9,22 +7,22 @@ weight: 10 -O Kubernetes executa sua carga de trabalho colocando contêineres em Pods para serem executados em _Nós_. Um nó pode ser uma máquina virtual ou física, dependendo do cluster. Cada nó é gerenciado pelo {{< glossary_tooltip text="plano de controle" term_id="control-plane" >}} e contém os serviços necessários para executar {{< glossary_tooltip text="Pods" term_id="pod" >}}. +O Kubernetes executa sua carga de trabalho colocando contêineres em Pods para serem executados em _Nós_. Um nó pode ser uma máquina virtual ou física, dependendo do cluster. Cada nó é gerenciado pela {{< glossary_tooltip text="camada de gerenciamento" term_id="control-plane" >}} e contém os serviços necessários para executar {{< glossary_tooltip text="Pods" term_id="pod" >}}. Normalmente, você tem vários nós em um cluster; em um ambiente de aprendizado ou limitado por recursos, você pode ter apenas um nó. -Os [componentes](/docs/concepts/overview/components/#node-components) em um nó incluem o {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}, um {{< glossary_tooltip text="contêiner runtime" term_id="container-runtime" >}}, e o {{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}}. +Os [componentes](/docs/concepts/overview/components/#node-components) em um nó incluem o {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}, um {{< glossary_tooltip text="agente de execução de contêiner" term_id="container-runtime" >}}, e o {{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}}. ## Administração -Existem duas maneiras principais de adicionar Nós ao {{< glossary_tooltip text="servidor API" term_id="kube-apiserver" >}}: +Existem duas maneiras principais de adicionar Nós ao {{< glossary_tooltip text="Servidor da API" term_id="kube-apiserver" >}}: -1. O kubelet em um nó se registra automaticamente no plano de controle +1. O kubelet em um nó se registra automaticamente na camada de gerenciamento 2. Você (ou outro usuário humano) adiciona manualmente um objeto Nó -Depois de criar um {{< glossary_tooltip text="objeto" term_id="object" >}} Nó, ou o kubelet em um nó se registra automaticamente, o plano de controle verifica se o novo objeto Nó é válido. Por exemplo, se você tentar criar um nó a partir do seguinte manifesto JSON: +Depois de criar um {{< glossary_tooltip text="objeto" term_id="object" >}} Nó, ou o kubelet em um nó se registra automaticamente, a camada de gerenciamento verifica se o novo objeto Nó é válido. 
Por exemplo, se você tentar criar um nó a partir do seguinte manifesto JSON: ```json { @@ -39,10 +37,10 @@ Depois de criar um {{< glossary_tooltip text="objeto" term_id="object" >}} Nó, } ``` -O Kubernetes cria um objeto nó internamente (a representação). O Kubernetes verifica se um kubelet se registrou no servidor API que corresponde ao campo `metadata.name` do Nó. Se o nó estiver saudável (ou seja, todos os serviços necessários estiverem em execução), ele será elegível para executar um Pod. Caso contrário, esse nó é ignorado para qualquer atividade de cluster até que se torne saudável. +O Kubernetes cria um objeto nó internamente (a representação). O Kubernetes verifica se um kubelet se registrou no servidor da API que corresponde ao campo `metadata.name` do Nó. Se o nó estiver íntegro (ou seja, todos os serviços necessários estiverem em execução), ele será elegível para executar um Pod. Caso contrário, esse nó é ignorado para qualquer atividade de cluster até que se torne íntegro. {{< note >}} -O Kubernetes mantém o objeto nó inválido e continua verificando se ele se torna saudável. +O Kubernetes mantém o objeto nó inválido e continua verificando se ele se torna íntegro. Você, ou um {{< glossary_tooltip term_id="controller" text="controlador">}}, deve excluir explicitamente o objeto Nó para interromper essa verificação de integridade. {{< /note >}} @@ -51,18 +49,18 @@ O nome de um objeto nó deve ser um nome de [subdomínio válido de DNS](/docs/c ### Singularidade de nome do nó -O [nome](/docs/concepts/overview/working-with-objects/names#names) identifica um nó. Dois nós não podem ter o mesmo nome ao mesmo tempo. O Kubernetes também assume que um recurso com o mesmo nome é o mesmo objeto. No caso de um nó, assume-se implicitamente que uma instância usando o mesmo nome terá o mesmo estado (por exemplo, configurações de rede, conteúdo do disco raiz) e atributos como rótulos de nó. Isso pode levar a inconsistências se uma instância for modificada sem alterar seu nome. Se o nó precisar ser substituído ou atualizado significativamente, o objeto Nó existente precisa ser removido do servidor API primeiro e adicionado novamente após a atualização. +O [nome](/docs/concepts/overview/working-with-objects/names#names) identifica um nó. Dois nós não podem ter o mesmo nome ao mesmo tempo. O Kubernetes também assume que um recurso com o mesmo nome é o mesmo objeto. No caso de um nó, assume-se implicitamente que uma instância usando o mesmo nome terá o mesmo estado (por exemplo, configurações de rede, conteúdo do disco raiz) e atributos como label de nó. Isso pode levar a inconsistências se uma instância for modificada sem alterar seu nome. Se o nó precisar ser substituído ou atualizado significativamente, o objeto Nó existente precisa ser removido do servidor da API primeiro e adicionado novamente após a atualização. ### Auto-registro de Nós -Quando a opção `--register-node` do kubelet for verdadeira (padrão), o kubelet tentará se registrar no servidor API. Este é o padrão preferido, usado pela maioria das distribuições. +Quando a opção `--register-node` do kubelet for verdadeira (padrão), o kubelet tentará se registrar no servidor da API. Este é o padrão preferido, usado pela maioria das distribuições. Para auto-registro, o kubelet é iniciado com as seguintes opções: -- `--kubeconfig` - O caminho das credenciais para se autenticar no servidor API. 
-- `--cloud-provider` - Como falar com um {{< glossary_tooltip text="provedor de nuvem" term_id="cloud-provider" >}} +- `--kubeconfig` - O caminho das credenciais para se autenticar no servidor da API. +- `--cloud-provider` - Como comunicar com um {{< glossary_tooltip text="provedor de nuvem" term_id="cloud-provider" >}} para ler metadados sobre si mesmo. -- `--register-node` - Registrar automaticamente no servidor API. +- `--register-node` - Registrar automaticamente no servidor da API. - `--register-with-taints` - Registra o nó com a lista fornecida de {{< glossary_tooltip text="taints" term_id="taint" >}} (separadas por vírgula `=:`). Não funciona se o `register-node` for falso. @@ -72,28 +70,28 @@ Não funciona se o `register-node` for falso. no cluster (consulte as restrições de label impostas pelo [plug-in de admissão NodeRestriction](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)). - `--node-status-update-frequency` - Especifica com que frequência o kubelet publica o status do nó no servidor da API. -Quando o [modo de autorização do nó](/docs/reference/access-authn-authz/node/) e o [plug-in de admissão NodeRestriction](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) estão ativados, os kubelets só estrão autorizados a criar/modificar seu próprio recurso do nó. +Quando o [modo de autorização do nó](/docs/reference/access-authn-authz/node/) e o [plug-in de admissão NodeRestriction](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) estão ativados, os kubelets somente estarão autorizados a criar/modificar seu próprio recurso do nó. {{< note >}} Como mencionado na seção de [singularidade do nome do nó](#singularidade-de-nome-do-no), quando a configuração do nó precisa ser atualizada, é uma boa prática registrar novamente o nó no servidor da API. Por exemplo, se o kubelet estiver sendo reiniciado com o novo conjunto de `--node-labels`, mas o mesmo nome de nó for usado, a alteração não entrará em vigor, pois os labels estão sendo definidos no registro do Nó. -Pods já agendados no Nó podem se comportar mal ou causar problemas se a configuração do Nó for alterada na reinicialização do kubelet. Por exemplo, o Pod já em execução pode estar contaminado contra os novos rótulos atribuídos ao Nó, enquanto outros Pods, que são incompatíveis com esse Pod, serão agendados com base nesse novo rótulo. O novo registro do nó garante que todos os Pods sejam drenados e devidamente reprogramados. +Pods já agendados no Nó podem ter um comportamento anormal ou causar problemas se a configuração do Nó for alterada na reinicialização do kubelet. Por exemplo, o Pod já em execução pode estar marcado diferente dos labels atribuídas ao Nó, enquanto outros Pods, que são incompatíveis com esse Pod, serão agendados com base nesse novo label. O novo registro do nó garante que todos os Pods sejam drenados e devidamente reiniciados. {{< /note >}} ### Administração manual de nós -Você pode criar e modificar objetos Nó usando o {{< glossary_tooltip text="kubectl" term_id="kubectl" >}}.. +Você pode criar e modificar objetos Nó usando o {{< glossary_tooltip text="kubectl" term_id="kubectl" >}}. -Quando você quiser criar objetos Nó manualmente, defina a opção do kubelet `--register-node=false`. +Quando você quiser manualmente criar objetos Nó, defina a opção do kubelet `--register-node=false`. -Você pode modificar objetos Nó, independentemente da configuração de `--register-node`. 
Por exemplo, você pode definir labels em um nó existente ou marcá-lo como não programado. +Você pode modificar os objetos Nó, independentemente da configuração de `--register-node`. Por exemplo, você pode definir labels em um nó existente ou marcá-lo como não disponível. -Você pode usar labels nos Nós em conjunto com seletores de nós nos Pods para controlar o agendamento. Por exemplo, você pode restringir um Pod a ser elegível apenas para ser executado em um subconjunto dos nós disponíveis. +Você pode usar labels nos Nós em conjunto com seletores de nós nos Pods para controlar a disponibilidade. Por exemplo, você pode restringir um Pod a ser elegível apenas para ser executado em um subconjunto dos nós disponíveis. -Marcar um nó como não programável impede que o agendador coloque novos pods nesse nó, mas não afeta os Pods existentes no nó. Isso é útil como uma etapa preparatória antes da reinicialização de um nó ou outra manutenção. +Marcar um nó como não disponível impede que o escalonador coloque novos pods nesse nó, mas não afeta os Pods existentes no nó. Isso é útil como uma etapa preparatória antes da reinicialização de um nó ou outra manutenção. -Para marcar um nó como não programado, execute: +Para marcar um nó como não disponível, execute: ```shell kubectl cordon $NODENAME @@ -102,10 +100,10 @@ kubectl cordon $NODENAME Consulte [Drenar um nó com segurança](/docs/tasks/administer-cluster/safely-drain-node/) para obter mais detalhes. {{< note >}} -Os Pods que fazem parte de um {{< glossary_tooltip term_id="daemonset" >}} toleram ser executados em um nó não programável. Os DaemonSets geralmente fornecem serviços locais de nós que devem ser executados em um Nó, mesmo que ele esteja sendo drenado de aplicativos de carga de trabalho. +Os Pods que fazem parte de um {{< glossary_tooltip term_id="daemonset" >}} toleram ser executados em um nó não disponível. Os DaemonSets geralmente fornecem serviços locais de nós que devem ser executados em um Nó, mesmo que ele esteja sendo drenado de aplicativos de carga de trabalho. {{< /note >}} -## Situação do Nó +## Status do Nó O status de um nó contém as seguintes informações: @@ -124,7 +122,7 @@ Cada seção da saída está descrita abaixo. ### Endereços -O uso desses campos pode mudar dependendo do seu provedor de nuvem ou configuração `bare metal`. +O uso desses campos pode mudar dependendo do seu provedor de nuvem ou configuração `configuração dedicada`. * HostName: O nome do host relatado pelo `kernel` do nó. Pode ser substituído através do parâmetro kubelet `--hostname-override`. * ExternalIP: Geralmente, o endereço IP do nó que é roteável externamente (disponível fora do `cluster`). @@ -137,7 +135,7 @@ O campo `conditions` descreve o status de todos os nós em execução. Exemplos {{< table caption = "Node conditions, and a description of when each condition applies." 
>}} | Node Condition | Description | |----------------------|-------------| -| `Ready` | `True` Se o nó estiver saudável e pronto para aceitar pods, `False` se o nó não estiver saudável e não estiver aceitando pods, e desconhecido `Unknown` se o controlador do nó tiver sem notícias do nó no último `node-monitor-grace-period` (o padrão é de 40 segundos) | +| `Ready` | `True` Se o nó estiver íntegro e pronto para aceitar pods, `False` se o nó não estiver íntegro e não estiver aceitando pods, e desconhecido `Unknown` se o controlador do nó tiver sem notícias do nó no último `node-monitor-grace-period` (o padrão é de 40 segundos) | | `DiskPressure` | `True` Se houver pressão sobre o tamanho do disco, ou seja, se a capacidade do disco for baixa; caso contrário `False` | | `MemoryPressure` | `True` Se houver pressão na memória do nó, ou seja, se a memória do nó estiver baixa; caso contrário `False` | | `PIDPressure` | `True` Se houver pressão sobre os processos, ou seja, se houver muitos processos no nó; caso contrário `False` | @@ -148,7 +146,7 @@ O campo `conditions` descreve o status de todos os nós em execução. Exemplos Se você usar as ferramentas de linha de comando para mostrar os detalhes de um nó isolado, a `Condition` inclui `SchedulingDisabled`. `SchedulingDisabled` não é uma condição na API do Kubernetes; em vez disso, os nós isolados são marcados como `Unschedulable` em suas especificações. {{< /note >}} -Na API do Kubernetes, a condição de um nó é representada como parte do `.status` do recurso do nó. Por exemplo, a seguinte estrutura JSON descreve um nó saudável: +Na API do Kubernetes, a condição de um nó é representada como parte do `.status` do recurso do nó. Por exemplo, a seguinte estrutura JSON descreve um nó íntegro: ```json "conditions": [ @@ -163,13 +161,13 @@ Na API do Kubernetes, a condição de um nó é representada como parte do `.sta ] ``` -Se o status da condição `Ready` permanecer desconhecido (`Unknown`) ou falso (`False`) por mais tempo do que o limite de despejo do pod (`pod-eviction-timeout`) (um argumento passado para o {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager">}}), o [controlador de nó](#node-controller) acionará o {{< glossary_tooltip text="despejo iniciado pela API" term_id="api-eviction" >}} para todos os Pods atribuídos a esse nó. A duração padrão do tempo limite de despejo é de **cinco minutos**. Em alguns casos, quando o nó está inacessível, o servidor API não consegue se comunicar com o kubelet no nó. A decisão de excluir os pods não pode ser comunicada ao kubelet até que a comunicação com o servidor API seja restabelecida. Enquanto isso, os pods agendados para exclusão podem continuar a ser executados no nó particionado. +Se o status da condição `Ready` permanecer desconhecido (`Unknown`) ou falso (`False`) por mais tempo do que o limite de remoção do pod (`pod-eviction-timeout`) (um argumento passado para o {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager">}}), o [controlador de nó](#node-controller) acionará o {{< glossary_tooltip text="remoção iniciado pela API" term_id="api-eviction" >}} para todos os Pods atribuídos a esse nó. A duração padrão do tempo limite de remoção é de **cinco minutos**. Em alguns casos, quando o nó está inacessível, o servidor da API não consegue se comunicar com o kubelet no nó. A decisão de excluir os pods não pode ser comunicada ao kubelet até que a comunicação com o servidor da API seja restabelecida. 
Enquanto isso, os pods agendados para exclusão podem continuar a ser executados no nó particionado. O controlador de nós não força a exclusão dos pods até que seja confirmado que eles pararam de ser executados no cluster. Você pode ver os pods que podem estar sendo executados em um nó inacessível como estando no estado de terminando (`Terminating`) ou desconhecido (`Unknown`). Nos casos em que o Kubernetes não retirar da infraestrutura subjacente se um nó tiver deixado permanentemente um cluster, o administrador do cluster pode precisar excluir o objeto do nó manualmente. Excluir o objeto do nó do Kubernetes faz com que todos os objetos Pod em execução no nó sejam excluídos do servidor da API e libera seus nomes. -Quando ocorrem problemas nos nós, o plano de controle do Kubernetes cria automaticamente [`taints`](/docs/concepts/scheduling-eviction/taint-and-toleration/) que correspondem às condições que afetam o nó. O agendador leva em consideração as `taints` do Nó ao atribuir um Pod a um Nó. Os Pods também podem ter {{< glossary_tooltip text="tolerations" term_id="toleration" >}} que os permitem funcionar em um nó, mesmo que tenha uma `taint` específica. +Quando ocorrem problemas nos nós, a camada de gerenciamento do Kubernetes cria automaticamente [`taints`](/docs/concepts/scheduling-eviction/taint-and-toleration/) que correspondem às condições que afetam o nó. O escalonador leva em consideração as `taints` do Nó ao atribuir um Pod a um Nó. Os Pods também podem ter {{< glossary_tooltip text="tolerations" term_id="toleration" >}} que os permitem funcionar em um nó, mesmo que tenha uma `taint` específica. -Consulte [Nó Taint Nodes por Condição](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition) +Consulte [Nó Taint por Condição](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition) para mais detalhes. ### Capacidade e Alocável {#capacity} @@ -202,40 +200,40 @@ O kubelet é responsável por criar e atualizar o `.status` dos Nós e por atual ## Controlador de Nós -O {{< glossary_tooltip text="controlador" term_id="controller" >}} de nós é um componente do plano de controle do Kubernetes que gerencia vários aspectos dos nós. +O {{< glossary_tooltip text="controlador" term_id="controller" >}} de nós é um componente da camada de gerenciamento do Kubernetes que gerencia vários aspectos dos nós. O controlador de nó tem várias funções na vida útil de um nó. O primeiro é atribuir um bloco CIDR ao nó quando ele é registrado (se a atribuição CIDR estiver ativada). -O segundo é manter a lista interna de nós do controlador de nós atualizada com a lista de máquinas disponíveis do provedor de nuvem. Ao ser executado em um ambiente de nuvem e sempre que um nó não é saudável, o controlador de nó pergunta ao provedor de nuvem se a VM desse nó ainda está disponível. Caso contrário, o controlador de nós exclui o nó de sua lista de nós. +O segundo é manter a lista interna de nós do controlador de nós atualizada com a lista de máquinas disponíveis do provedor de nuvem. Ao ser executado em um ambiente de nuvem e sempre que um nó não é íntegro, o controlador de nó pergunta ao provedor de nuvem se a VM desse nó ainda está disponível. Caso contrário, o controlador de nós exclui o nó de sua lista de nós. O terceiro é monitorar a saúde dos nós. O controlador do nó é responsável por: - No caso de um nó se tornar inacessível, atualizando a condição NodeReady de dentro do `.status` do nó. 
Nesse caso, o controlador do nó define a condição de pronto (`NodeReady`) como condição desconhecida (`ConditionUnknown`). -- Se um nó permanecer inacessível: será iniciado o [despejo pela API](/docs/concepts/scheduling-eviction/api-eviction/) para todos os Pods no nó inacessível. Por padrão, o controlador do nó espera 5 minutos entre marcar o nó como condição desconhecida (`ConditionUnknown`) e enviar a primeira solicitação de despejo. +- Se um nó permanecer inacessível: será iniciado a [remoção pela API](/docs/concepts/scheduling-eviction/api-eviction/) para todos os Pods no nó inacessível. Por padrão, o controlador do nó espera 5 minutos entre marcar o nó como condição desconhecida (`ConditionUnknown`) e enviar a primeira solicitação de remoção. O controlador de nó verifica o estado de cada nó a cada `--node-monitor-period` segundos. -### Limites de taxa de despejo +### Limites de taxa de remoção -Na maioria dos casos, o controlador de nós limita a taxa de despejo a `--node-eviction-rate` (0,1 por padrão) por segundo, o que significa que ele não despejará pods de mais de 1 nó por 10 segundos. +Na maioria dos casos, o controlador de nós limita a taxa de remoção a `--node-eviction-rate` (0,1 por padrão) por segundo, o que significa que ele não despejará pods de mais de 1 nó por 10 segundos. -O comportamento de despejo do nó muda quando um nó em uma determinada zona de disponibilidade se torna não saudável. O controlador de nós verifica qual porcentagem de nós na zona não são saudáveis (a condição `NodeReady` é desconhecida `ConditionUnknown` ou falsa `ConditionFalse`) ao mesmo tempo: +O comportamento de remoção do nó muda quando um nó em uma determinada zona de disponibilidade se torna não íntegro. O controlador de nós verifica qual porcentagem de nós na zona não são íntegras (a condição `NodeReady` é desconhecida `ConditionUnknown` ou falsa `ConditionFalse`) ao mesmo tempo: -- Se a fração de nós não saudáveis for ao menos `--unhealthy-zone-threshold` (padrão 0,55), então a taxa de despejo será reduzida. -- Se o cluster for pequeno (ou seja, tiver menos ou igual a nós `--large-cluster-size-threshold` - padrão 50), então os despejos serão interrompidos. -- Caso contrário, a taxa de despejo é reduzida para `--secondary-node-eviction-rate` de despejo de nós secundários (padrão 0,01) por segundo. +- Se a fração de nós não íntegros for ao menos `--unhealthy-zone-threshold` (padrão 0,55), então a taxa de remoção será reduzida. +- Se o cluster for pequeno (ou seja, tiver número de nós menor ou igual ao valor da opção `--large-cluster-size-threshold` - padrão 50), então as remoções serão interrompidas. +- Caso contrário, a taxa de remoção é reduzida para `--secondary-node-eviction-rate` de nós secundários (padrão 0,01) por segundo. -A razão pela qual essas políticas são implementadas por zona de disponibilidade é porque uma zona de disponibilidade pode ser particionada a iniciar do plano de controle, enquanto as outras permanecem conectadas. Se o seu cluster não abranger várias zonas de disponibilidade de provedores de nuvem, o mecanismo de despejo não levará em conta a indisponibilidade por zona. +A razão pela qual essas políticas são implementadas por zona de disponibilidade é porque uma zona de disponibilidade pode ser particionada a iniciar da camada de gerenciamento, enquanto as outras permanecem conectadas. Se o seu cluster não abranger várias zonas de disponibilidade de provedores de nuvem, o mecanismo de remoção não levará em conta a indisponibilidade por zona. 
-Uma das principais razões para espalhar seus nós pelas zonas de disponibilidade é para que a carga de trabalho possa ser transferida para zonas saudáveis quando uma zona inteira cair. Portanto, se todos os nós em uma zona não forem saudáveis, o controlador do nó despeja na taxa normal de `--node-eviction-rate`. O caso é quando todas as zonas são completamente insalubres (nenhum dos nós do cluster será saudável). Nesse caso, o controlador do nó assume que há algum problema com a conectividade entre o plano de controle e os nós e não realiza nenhum despejo. (Se houver uma interrupção e alguns nós reaparecerem, o controlador do nó expulsa pods dos nós restantes que são insalubres ou inacessíveis). +Uma das principais razões para espalhar seus nós pelas zonas de disponibilidade é para que a carga de trabalho possa ser transferida para zonas íntegras quando uma zona inteira cair. Portanto, se todos os nós em uma zona não forem íntegros, o controlador do nó despeja na taxa normal de `--node-eviction-rate`. O caso especial é quando todas as zonas são completamente insalubres (nenhum dos nós do cluster será íntegro). Nesse caso, o controlador do nó assume que há algum problema com a conectividade entre a camada de gerenciamento e os nós e não realiza nenhuma remoção. (Se houver uma interrupção e alguns nós reaparecerem, o controlador do nó expulsa pods dos nós restantes que são insalubres ou inacessíveis). -O controlador de nós também é responsável por despejar pods em execução nos nós com `NoExecute` taints, a menos que esses pods tolerem essa taint. O controlador de nó também adiciona as {{< glossary_tooltip text="taints" term_id="taint" >}} correspondentes aos problemas de nó, como nó inacessível ou não pronto. Isso significa que o agendador não colocará Pods em nós não saudáveis. +O controlador de nós também é responsável por despejar pods em execução nos nós com `NoExecute` taints, a menos que esses pods tolerem essa taint. O controlador de nó também adiciona as {{< glossary_tooltip text="taints" term_id="taint" >}} correspondentes aos problemas de nó, como nó inacessível ou não pronto. Isso significa que o escalonador não colocará Pods em nós não íntegros. ## Rastreamento de capacidade de recursos {#node-capacity} Os objetos do nó rastreiam informações sobre a capacidade de recursos do nó: por exemplo, a quantidade de memória disponível e o número de CPUs. Os nós que se [auto-registram](#self-registration-of-nodes) relatam sua capacidade durante o registro. Se você adicionar [manualmente](#manual-node-administration) um nó, precisará definir as informações de capacidade do nó ao adicioná-lo. -O {{< glossary_tooltip text="agendador" term_id="kube-scheduler" >}} do Kubernetes garante que haja recursos suficientes para todos os Pods em um nó. O agendador verifica se a soma das solicitações de contêineres no nó não é maior do que a capacidade do nó. Essa soma de solicitações inclui todos os contêineres gerenciados pelo kubelet, mas exclui quaisquer contêineres iniciados diretamente pelo tempo de execução do contêiner e também exclui quaisquer processos executados fora do controle do kubelet. +O {{< glossary_tooltip text="escalonador" term_id="kube-scheduler" >}} do Kubernetes garante que haja recursos suficientes para todos os Pods em um nó. O escalonador verifica se a soma das solicitações de contêineres no nó não é maior do que a capacidade do nó. 
Essa soma de solicitações inclui todos os contêineres gerenciados pelo kubelet, mas exclui quaisquer contêineres iniciados diretamente pelo agente de execução de contêiner e também exclui quaisquer processos executados fora do controle do kubelet. {{< note >}} Se você quiser reservar explicitamente recursos para processos que não sejam do Pod, consulte [reserva de recursos para daemons do sistema](/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved). @@ -255,7 +253,7 @@ O kubelet tenta detectar o desligamento do sistema do nó e encerra os pods em e O Kubelet garante que os pods sigam o processo normal de [término do pod](/docs/concepts/workloads/pods/)pod-lifecycle/#pod-termination) durante o desligamento do nó. -O recurso de desligamento gradual do nó depende do systemd, pois aproveita os [bloqueios do inibidor do systemd(https://www.freedesktop.org/wiki/Software/systemd/inhibit/) para atrasar o desligamento do nó com uma determinada duração. +O recurso de desligamento gradual do nó depende do systemd, pois aproveita os [bloqueios do inibidor do systemd](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) para atrasar o desligamento do nó com uma determinada duração. O desligamento gradual do nó é controlado com [recursos](/docs/reference/command-line-tools-reference/feature-gates/) `GracefulNodeShutdown`, que é ativado por padrão na versão 1.21. @@ -269,7 +267,7 @@ Durante um desligamento gradual, o kubelet encerra os pods em duas fases: O recurso de desligamento gradual do nó é configurado com duas opções [`KubeletConfiguration`](/docs/tasks/administer-cluster/kubelet-config-file/): * `shutdownGracePeriod`: - * Especifica a duração total pela qual o nó deve atrasar o desligamento. Este é o período de carência total para o térmido dos pods regulares e os [críticos](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical). + * Especifica a duração total pela qual o nó deve atrasar o desligamento. Este é o período de carência total para o término dos pods regulares e os [críticos](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical). * `shutdownGracePeriodCriticalPods`: * Especifica a duração utlizada para encerrar [pods críticos](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) durante um desligamento de nó. Este valor deve ser menor que `shutdownGracePeriod`. @@ -289,11 +287,7 @@ Message: Pod was terminated in response to imminent node shutdown. {{< feature-state state="alpha" for_k8s_version="v1.23" >}} -Assuming the following custom pod -[priority classes](/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass) -in a cluster, - -Para fornecer mais flexibilidade durante o desligamento gradual do nó em torno da ordem de pods durante o desligamento, o desligamento gradual do nó respeita a PriorityClass for Pods, desde que você tenha ativado esse recurso em seu cluster. O recurso permite que o cluster defina explicitamente a ordem dos pods durante o desligamento gradual do nó com base em [classes de prioridade](/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass). 
+Assumindo as seguintes [classes de prioridade](/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass) do pod em um cluster, para fornecer mais flexibilidade durante o desligamento gradual do nó em torno da ordem de pods durante o desligamento, o desligamento gradual do nó respeita a PriorityClass dos Pods, desde que você tenha ativado esse recurso em seu cluster. O recurso permite que o cluster defina explicitamente a ordem dos pods durante o desligamento gradual do nó com base em [classes de prioridade](/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass). O recurso [Desligamento Gradual do Nó](#graceful-node-shutdown), conforme descrito acima, desliga pods em duas fases, pods não críticos, seguidos por pods críticos. Se for necessária flexibilidade adicional para definir explicitamente a ordem dos pods durante o desligamento de uma maneira mais granular, o desligamento gradual baseado na prioridade do pod pode ser usado. @@ -308,7 +302,7 @@ Assumindo as seguintes classes de prioridade de pod personalizadas em um cluster |`custom-class-c` | 1000 | |`regular/unset` | 0 | -Na [configuração do kubelet](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration), as configurações para `shutdownGracePeriodByPodPriority` podem se parecer com: +Na [configuração do kubelet](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration), as configurações para `shutdownGracePeriodByPodPriority` são semelhantes a: |Valor das classes de prioridade|Tempo de desligamento| |------------------------|---------------| @@ -317,7 +311,7 @@ Na [configuração do kubelet](/docs/reference/config-api/kubelet-config.v1beta1 | 1000 |120 segundos | | 0 |60 segundos | -A configuração correspondente do kubelet YAML seria: +A configuração correspondente do YAML do kubelet seria: ```yaml shutdownGracePeriodByPodPriority: @@ -389,4 +383,4 @@ Para obter mais informações e para ajudar nos testes e fornecer feedback, cons * Saiba mais sobre [componentes](/docs/concepts/overview/components/#node-components) que compõem um nó. * Leia a [definição da API para um Nó](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core). * Leia a seção [Nó](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node) do documento de design de arquitetura. -* Leia sobre [taints e tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/). +* Leia sobre [taints e tolerâncias](/docs/concepts/scheduling-eviction/taint-and-toleration/). From f6e466c275c8af4a161468cd74da3464b384e272 Mon Sep 17 00:00:00 2001 From: "Mr. Erlison" Date: Sat, 4 Jun 2022 13:04:25 -0300 Subject: [PATCH 011/292] Add generated/kubeadm-version.md --- .../kubeadm/generated/kubeadm_version.md | 72 +++++++++++++++++++ 1 file changed, 72 insertions(+) create mode 100644 content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_version.md diff --git a/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_version.md b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_version.md new file mode 100644 index 0000000000000..b86c7259774d3 --- /dev/null +++ b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_version.md @@ -0,0 +1,72 @@ + + + +Print the version of kubeadm + +### Synopsis + + +Print the version of kubeadm + +``` +kubeadm version [flags] +``` + +### Options + + ++++ + + + + + + + + + + + + + + + + + +
-h, --help

help for version

-o, --output string

Output format; available options are 'yaml', 'json' and 'short'

+ + + +### Options inherited from parent commands + + ++++ + + + + + + + + + + +
--rootfs string

[EXPERIMENTAL] The path to the 'real' host root filesystem.

+ + + From f852e54b24b086d1319d3f370dd8ed8edd085d5f Mon Sep 17 00:00:00 2001 From: "Mr. Erlison" Date: Sat, 4 Jun 2022 13:35:34 -0300 Subject: [PATCH 012/292] Add generated/kubeadm-token*.md --- .../kubeadm/generated/kubeadm_token.md | 96 +++++++++++++ .../kubeadm/generated/kubeadm_token_create.md | 135 ++++++++++++++++++ .../kubeadm/generated/kubeadm_token_delete.md | 84 +++++++++++ .../generated/kubeadm_token_generate.md | 89 ++++++++++++ .../kubeadm/generated/kubeadm_token_list.md | 102 +++++++++++++ 5 files changed, 506 insertions(+) create mode 100644 content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token.md create mode 100644 content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_create.md create mode 100644 content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_delete.md create mode 100644 content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_generate.md create mode 100644 content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_list.md diff --git a/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token.md b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token.md new file mode 100644 index 0000000000000..5384fc4d6cce2 --- /dev/null +++ b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token.md @@ -0,0 +1,96 @@ + + + +Manage bootstrap tokens + +### Synopsis + + + +This command manages bootstrap tokens. It is optional and needed only for advanced use cases. + +In short, bootstrap tokens are used for establishing bidirectional trust between a client and a server. +A bootstrap token can be used when a client (for example a node that is about to join the cluster) needs +to trust the server it is talking to. Then a bootstrap token with the "signing" usage can be used. +bootstrap tokens can also function as a way to allow short-lived authentication to the API Server +(the token serves as a way for the API Server to trust the client), for example for doing the TLS Bootstrap. + +What is a bootstrap token more exactly? + - It is a Secret in the kube-system namespace of type "bootstrap.kubernetes.io/token". + - A bootstrap token must be of the form "[a-z0-9]{6}.[a-z0-9]{16}". The former part is the public token ID, + while the latter is the Token Secret and it must be kept private at all circumstances! + - The name of the Secret must be named "bootstrap-token-(token-id)". + +You can read more about bootstrap tokens here: + https://kubernetes.io/docs/admin/bootstrap-tokens/ + + +``` +kubeadm token [flags] +``` + +### Options + + ++++ + + + + + + + + + + + + + + + + + + + + + + + + +
--dry-run

Whether to enable dry-run mode or not

-h, --help

help for token

--kubeconfig string     Default: "/etc/kubernetes/admin.conf"

The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.

+ + + +### Options inherited from parent commands + + ++++ + + + + + + + + + + +
--rootfs string

[EXPERIMENTAL] The path to the 'real' host root filesystem.

+ + + diff --git a/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_create.md b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_create.md new file mode 100644 index 0000000000000..a2a217033c88b --- /dev/null +++ b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_create.md @@ -0,0 +1,135 @@ + + + +Create bootstrap tokens on the server + +### Synopsis + + + +This command will create a bootstrap token for you. +You can specify the usages for this token, the "time to live" and an optional human friendly description. + +The [token] is the actual token to write. +This should be a securely generated random token of the form "[a-z0-9]{6}.[a-z0-9]{16}". +If no [token] is given, kubeadm will generate a random token instead. + + +``` +kubeadm token create [token] +``` + +### Options + + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
--certificate-key string

When used together with '--print-join-command', print the full 'kubeadm join' flag needed to join the cluster as a control-plane. To create a new certificate key you must use 'kubeadm init phase upload-certs --upload-certs'.

--config string

Path to a kubeadm configuration file.

--description string

A human friendly description of how this token is used.

--groups strings     Default: "system:bootstrappers:kubeadm:default-node-token"

Extra groups that this token will authenticate as when used for authentication. Must match "\Asystem:bootstrappers:[a-z0-9:-]{0,255}[a-z0-9]\z"

-h, --help

help for create

--print-join-command

Instead of printing only the token, print the full 'kubeadm join' flag needed to join the cluster using the token.

--ttl duration     Default: 24h0m0s

The duration before the token is automatically deleted (e.g. 1s, 2m, 3h). If set to '0', the token will never expire

--usages strings     Default: "signing,authentication"

Describes the ways in which this token can be used. You can pass --usages multiple times or provide a comma separated list of options. Valid options: [signing,authentication]

+ + + +### Options inherited from parent commands + + ++++ + + + + + + + + + + + + + + + + + + + + + + + + +
--dry-run

Whether to enable dry-run mode or not

--kubeconfig string     Default: "/etc/kubernetes/admin.conf"

The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.

--rootfs string

[EXPERIMENTAL] The path to the 'real' host root filesystem.

+ + + diff --git a/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_delete.md b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_delete.md new file mode 100644 index 0000000000000..2040bd3f94ac1 --- /dev/null +++ b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_delete.md @@ -0,0 +1,84 @@ + + + +Delete bootstrap tokens on the server + +### Synopsis + + + +This command will delete a list of bootstrap tokens for you. + +The [token-value] is the full Token of the form "[a-z0-9]{6}.[a-z0-9]{16}" or the +Token ID of the form "[a-z0-9]{6}" to delete. + + +``` +kubeadm token delete [token-value] ... +``` + +### Options + + ++++ + + + + + + + + + + +
-h, --help

help for delete

+ + + +### Options inherited from parent commands + + ++++ + + + + + + + + + + + + + + + + + + + + + + + + +
--dry-run

Whether to enable dry-run mode or not

--kubeconfig string     Default: "/etc/kubernetes/admin.conf"

The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.

--rootfs string

[EXPERIMENTAL] The path to the 'real' host root filesystem.

+ + + diff --git a/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_generate.md b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_generate.md new file mode 100644 index 0000000000000..60de389d6c07f --- /dev/null +++ b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_generate.md @@ -0,0 +1,89 @@ + + + +Generate and print a bootstrap token, but do not create it on the server + +### Synopsis + + + +This command will print out a randomly-generated bootstrap token that can be used with +the "init" and "join" commands. + +You don't have to use this command in order to generate a token. You can do so +yourself as long as it is in the format "[a-z0-9]{6}.[a-z0-9]{16}". This +command is provided for convenience to generate tokens in the given format. + +You can also use "kubeadm init" without specifying a token and it will +generate and print one for you. + + +``` +kubeadm token generate [flags] +``` + +### Options + + ++++ + + + + + + + + + + +
-h, --help

help for generate

+ + + +### Options inherited from parent commands + + ++++ + + + + + + + + + + + + + + + + + + + + + + + + +
--dry-run

Whether to enable dry-run mode or not

--kubeconfig string     Default: "/etc/kubernetes/admin.conf"

The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.

--rootfs string

[EXPERIMENTAL] The path to the 'real' host root filesystem.

+ + + diff --git a/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_list.md b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_list.md new file mode 100644 index 0000000000000..089424492e90d --- /dev/null +++ b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_list.md @@ -0,0 +1,102 @@ + + + +List bootstrap tokens on the server + +### Synopsis + + + +This command will list all bootstrap tokens for you. + + +``` +kubeadm token list [flags] +``` + +### Options + + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
--allow-missing-template-keys     Default: true

If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.

-o, --experimental-output string     Default: "text"

Output format. One of: text|json|yaml|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-as-json|jsonpath-file.

-h, --help

help for list

--show-managed-fields

If true, keep the managedFields when printing objects in JSON or YAML format.

+ + + +### Options inherited from parent commands + + ++++ + + + + + + + + + + + + + + + + + + + + + + + + +
--dry-run

Whether to enable dry-run mode or not

--kubeconfig string     Default: "/etc/kubernetes/admin.conf"

The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.

--rootfs string

[EXPERIMENTAL] The path to the 'real' host root filesystem.

+ + + From 1b90f44da6e6ea4b5f8a14db6bf0bebeda8f2a48 Mon Sep 17 00:00:00 2001 From: Paszymaja <36695377+Paszymaja@users.noreply.github.com> Date: Fri, 10 Jun 2022 12:40:02 +0200 Subject: [PATCH 013/292] Fixed typos Fixed some typos and improved grammar. --- .../docs/concepts/security/rbac-good-practices.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/content/en/docs/concepts/security/rbac-good-practices.md b/content/en/docs/concepts/security/rbac-good-practices.md index 7361c3a163177..0bf9440b42645 100644 --- a/content/en/docs/concepts/security/rbac-good-practices.md +++ b/content/en/docs/concepts/security/rbac-good-practices.md @@ -22,15 +22,15 @@ The good practices laid out here should be read in conjunction with the general ### Least privilege -Ideally minimal RBAC rights should be assigned to users and service accounts. Only permissions -explicitly required for their operation should be used. Whilst each cluster will be different, +Ideally, minimal RBAC rights should be assigned to users and service accounts. Only permissions +explicitly required for their operation should be used. While each cluster will be different, some general rules that can be applied are : - Assign permissions at the namespace level where possible. Use RoleBindings as opposed to ClusterRoleBindings to give users rights only within a specific namespace. - Avoid providing wildcard permissions when possible, especially to all resources. As Kubernetes is an extensible system, providing wildcard access gives rights - not just to all object types presently in the cluster, but also to all future object types + not just to all object types present in the cluster, but also to all future object types which are created in the future. - Administrators should not use `cluster-admin` accounts except where specifically needed. Providing a low privileged account with [impersonation rights](/docs/reference/access-authn-authz/authentication/#user-impersonation) @@ -61,7 +61,7 @@ the RBAC rights provided by default can provide opportunities for security harde In general, changes should not be made to rights provided to `system:` accounts some options to harden cluster rights exist: -- Review bindings for the `system:unauthenticated` group and remove where possible, as this gives +- Review bindings for the `system:unauthenticated` group and remove them where possible, as this gives access to anyone who can contact the API server at a network level. - Avoid the default auto-mounting of service account tokens by setting `automountServiceAccountToken: false`. For more details, see @@ -122,19 +122,19 @@ PersistentVolumes, and constrained users should use PersistentVolumeClaims to ac ### Access to `proxy` subresource of Nodes Users with access to the proxy sub-resource of node objects have rights to the Kubelet API, -which allows for command execution on every pod on the node(s) which they have rights to. +which allows for command execution on every pod on the node(s) to which they have rights. This access bypasses audit logging and admission control, so care should be taken before granting rights to this resource. ### Escalate verb -Generally the RBAC system prevents users from creating clusterroles with more rights than +Generally, the RBAC system prevents users from creating clusterroles with more rights than they possess. The exception to this is the `escalate` verb. 
As noted in the [RBAC documentation](/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update), users with this right can effectively escalate their privileges. ### Bind verb -Similar to the `escalate` verb, granting users this right allows for bypass of Kubernetes +Similar to the `escalate` verb, granting users this right allows for the bypass of Kubernetes in-built protections against privilege escalation, allowing users to create bindings to roles with rights they do not already have. From 72c763f653170e5f05b454fe5393ccdbdfac92d4 Mon Sep 17 00:00:00 2001 From: "Mr. Erlison" Date: Sat, 11 Jun 2022 07:54:19 -0300 Subject: [PATCH 014/292] Translate include files --- .../kubeadm/generated/kubeadm_token.md | 96 ------------------- .../kubeadm/generated/kubeadm_token_create.md | 48 ++++------ .../kubeadm/generated/kubeadm_token_delete.md | 26 +++-- .../generated/kubeadm_token_generate.md | 31 +++--- .../kubeadm/generated/kubeadm_token_list.md | 31 +++--- 5 files changed, 58 insertions(+), 174 deletions(-) delete mode 100644 content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token.md diff --git a/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token.md b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token.md deleted file mode 100644 index 5384fc4d6cce2..0000000000000 --- a/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token.md +++ /dev/null @@ -1,96 +0,0 @@ - - - -Manage bootstrap tokens - -### Synopsis - - - -This command manages bootstrap tokens. It is optional and needed only for advanced use cases. - -In short, bootstrap tokens are used for establishing bidirectional trust between a client and a server. -A bootstrap token can be used when a client (for example a node that is about to join the cluster) needs -to trust the server it is talking to. Then a bootstrap token with the "signing" usage can be used. -bootstrap tokens can also function as a way to allow short-lived authentication to the API Server -(the token serves as a way for the API Server to trust the client), for example for doing the TLS Bootstrap. - -What is a bootstrap token more exactly? - - It is a Secret in the kube-system namespace of type "bootstrap.kubernetes.io/token". - - A bootstrap token must be of the form "[a-z0-9]{6}.[a-z0-9]{16}". The former part is the public token ID, - while the latter is the Token Secret and it must be kept private at all circumstances! - - The name of the Secret must be named "bootstrap-token-(token-id)". - -You can read more about bootstrap tokens here: - https://kubernetes.io/docs/admin/bootstrap-tokens/ - - -``` -kubeadm token [flags] -``` - -### Options - - ---- - - - - - - - - - - - - - - - - - - - - - - - - -
--dry-run

Whether to enable dry-run mode or not

-h, --help

help for token

--kubeconfig string     Default: "/etc/kubernetes/admin.conf"

The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.

- - - -### Options inherited from parent commands - - ---- - - - - - - - - - - -
--rootfs string

[EXPERIMENTAL] The path to the 'real' host root filesystem.

- - - diff --git a/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_create.md b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_create.md index a2a217033c88b..e2449ac886e6f 100644 --- a/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_create.md +++ b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_create.md @@ -10,25 +10,19 @@ guide. You can file document formatting bugs against the --> -Create bootstrap tokens on the server +Crie tokens de inicialização no servidor -### Synopsis +### Sinopse +Este comando criará um token de inicialização. Você pode especificar os usos para este token, o "tempo de vida" e uma descrição amigável, que é opcional. - -This command will create a bootstrap token for you. -You can specify the usages for this token, the "time to live" and an optional human friendly description. - -The [token] is the actual token to write. -This should be a securely generated random token of the form "[a-z0-9]{6}.[a-z0-9]{16}". -If no [token] is given, kubeadm will generate a random token instead. - +O [token] é o token real para gravar. Este deve ser um token aleatório gerado com segurança da forma "[a-z0-9]{6}.[a-z0-9]{16}". Se nenhum [token] for fornecido, o kubeadm gerará um token aleatório. ``` kubeadm token create [token] ``` -### Options +### Opções @@ -41,56 +35,56 @@ kubeadm token create [token] - + - + - + - + - + - + - + - + - + - + - + @@ -98,7 +92,7 @@ kubeadm token create [token] -### Options inherited from parent commands +### Opções herdadas do comando superior
--certificate-key string

When used together with '--print-join-command', print the full 'kubeadm join' flag needed to join the cluster as a control-plane. To create a new certificate key you must use 'kubeadm init phase upload-certs --upload-certs'.

Quando usado em conjunto com '--print-join-command', exibe a flag completa 'kubeadm join' necessária para se unir ao cluster como uma camada de gerenciamento. Para criar uma nova chave de certificado, você deve usar 'kubeadm init phase upload-certs --upload-certs'.

--config string

Path to a kubeadm configuration file.

Caminho para o arquivo de configuração kubeadm.

--description string

A human friendly description of how this token is used.

Uma descrição amigável de como esse token é usado.

--groups strings     Default: "system:bootstrappers:kubeadm:default-node-token"--groups strings     Padrão: "system:bootstrappers:kubeadm:default-node-token"

Extra groups that this token will authenticate as when used for authentication. Must match "\Asystem:bootstrappers:[a-z0-9:-]{0,255}[a-z0-9]\z"

Grupos extras que este token autenticará quando usado para autenticação. Deve corresponder "\Asystem:bootstrappers:[a-z0-9:-]{0,255}[a-z0-9]\z"

-h, --help

help for create

ajuda para create

--print-join-command

Instead of printing only the token, print the full 'kubeadm join' flag needed to join the cluster using the token.

Em vez de exibir apenas o token, exibe a flag completa 'kubeadm join' necessária para se associar ao cluster usando o token.

--ttl duration     Default: 24h0m0s--ttl duração     Padrão: 24h0m0s

The duration before the token is automatically deleted (e.g. 1s, 2m, 3h). If set to '0', the token will never expire

A duração antes do token ser excluído automaticamente (por exemplo, 1s, 2m, 3h). Se definido como '0', o token nunca expirará

--usages strings     Default: "signing,authentication"--usages strings     Padrão: "signing,authentication"

Describes the ways in which this token can be used. You can pass --usages multiple times or provide a comma separated list of options. Valid options: [signing,authentication]

Descreve as maneiras pelas quais esse token pode ser usado. Você pode passar --usages várias vezes ou fornecer uma lista de opções separada por vírgulas. Opções válidas: [signing,authentication]

@@ -111,21 +105,21 @@ kubeadm token create [token] - + - + - + - + diff --git a/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_delete.md b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_delete.md index 2040bd3f94ac1..b8cff9cb318ec 100644 --- a/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_delete.md +++ b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_delete.md @@ -10,23 +10,19 @@ guide. You can file document formatting bugs against the --> -Delete bootstrap tokens on the server +Excluir tokens de inicialização no servidor -### Synopsis +### Sinopse +Este comando excluirá uma lista de tokens de inicialização para você. - -This command will delete a list of bootstrap tokens for you. - -The [token-value] is the full Token of the form "[a-z0-9]{6}.[a-z0-9]{16}" or the -Token ID of the form "[a-z0-9]{6}" to delete. - +O [token-value] é um Token completo na forma "[a-z0-9]{6}.[a-z0-9]{16}" ou o ID do Token na forma "[a-z0-9]{6}" a ser excluído. ``` kubeadm token delete [token-value] ... ``` -### Options +### Opções
--dry-run

Whether to enable dry-run mode or not

Ativar ou não o modo de execução dry-run

--kubeconfig string     Default: "/etc/kubernetes/admin.conf"--kubeconfig string     Padrão: "/etc/kubernetes/admin.conf"

The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.

O arquivo kubeconfig a ser usado para se comunicar com o cluster. Se a flag não estiver definida, um conjunto de locais padrão pode ser pesquisado em busca de um arquivo kubeconfig existente.

--rootfs string

[EXPERIMENTAL] The path to the 'real' host root filesystem.

[EXPERIMENTAL] O caminho para o 'real' sistema de arquivos raiz do host.

@@ -39,7 +35,7 @@ kubeadm token delete [token-value] ... - + @@ -47,7 +43,7 @@ kubeadm token delete [token-value] ... -### Options inherited from parent commands +### Opções herdadas do comando superior
-h, --help

help for delete

ajuda para delete

@@ -60,21 +56,21 @@ kubeadm token delete [token-value] ... - + - + - + - + diff --git a/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_generate.md b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_generate.md index 60de389d6c07f..45b784623c11c 100644 --- a/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_generate.md +++ b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_generate.md @@ -10,28 +10,21 @@ guide. You can file document formatting bugs against the --> -Generate and print a bootstrap token, but do not create it on the server +Gere e exiba um token de inicialização, mas não o crie no servidor -### Synopsis +### Sinopse +Este comando exibirá um token de inicialização gerado aleatoriamente que pode ser usado com os comandos "init" e "join". +Você não precisa usar este comando para gerar um token. Você pode fazer isso sozinho, desde que esteja no formato "[a-z0-9]{6}.[a-z0-9]{16}". Este comando é fornecido por conveniência para gerar tokens no formato fornecido. -This command will print out a randomly-generated bootstrap token that can be used with -the "init" and "join" commands. - -You don't have to use this command in order to generate a token. You can do so -yourself as long as it is in the format "[a-z0-9]{6}.[a-z0-9]{16}". This -command is provided for convenience to generate tokens in the given format. - -You can also use "kubeadm init" without specifying a token and it will -generate and print one for you. - +Você também pode usar "kubeadm init" sem especificar um token e ele gerará e exibirá um para você. ``` kubeadm token generate [flags] ``` -### Options +### Opções
--dry-run

Whether to enable dry-run mode or not

Ativar ou não o modo de execução dry-run

--kubeconfig string     Default: "/etc/kubernetes/admin.conf"--kubeconfig string     Padrão: "/etc/kubernetes/admin.conf"

The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.

O arquivo kubeconfig a ser usado para se comunicar com o cluster. Se a flag não estiver definida, um conjunto de locais padrão pode ser pesquisado em busca de um arquivo kubeconfig existente.

--rootfs string

[EXPERIMENTAL] The path to the 'real' host root filesystem.

[EXPERIMENTAL] O caminho para o 'real' sistema de arquivos raiz do host.

@@ -44,7 +37,7 @@ kubeadm token generate [flags] - + @@ -52,7 +45,7 @@ kubeadm token generate [flags] -### Options inherited from parent commands +### Opções herdadas do comando superior
-h, --help

help for generate

ajuda para generate

@@ -65,21 +58,21 @@ kubeadm token generate [flags] - + - + - + - + diff --git a/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_list.md b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_list.md index 089424492e90d..13ea5260ae058 100644 --- a/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_list.md +++ b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_list.md @@ -10,20 +10,17 @@ guide. You can file document formatting bugs against the --> -List bootstrap tokens on the server +Liste tokens de inicialização no servidor -### Synopsis - - - -This command will list all bootstrap tokens for you. +### Sinopse +Este comando listará todos os tokens de inicialização para você ``` kubeadm token list [flags] ``` -### Options +### Opções
--dry-run

Whether to enable dry-run mode or not

Ativar ou não o modo de execução dry-run

--kubeconfig string     Default: "/etc/kubernetes/admin.conf"--kubeconfig string     Padrão: "/etc/kubernetes/admin.conf"

The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.

O arquivo kubeconfig a ser usado para se comunicar com o cluster. Se a flag não estiver definida, um conjunto de locais padrão pode ser pesquisado em busca de um arquivo kubeconfig existente.

--rootfs string

[EXPERIMENTAL] The path to the 'real' host root filesystem.

[EXPERIMENTAL] O caminho para o 'real' sistema de arquivos raiz do host.

@@ -33,31 +30,31 @@ kubeadm token list [flags] - + - + - + - + - + - + @@ -78,21 +75,21 @@ kubeadm token list [flags] - + - + - + - + From d46b0c823adcee9abc0d265e43450152652b4229 Mon Sep 17 00:00:00 2001 From: "Mr. Erlison" Date: Sat, 11 Jun 2022 08:37:54 -0300 Subject: [PATCH 015/292] Translate include files --- .../kubeadm/generated/kubeadm_version.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_version.md b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_version.md index b86c7259774d3..b248ffcd05640 100644 --- a/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_version.md +++ b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_version.md @@ -10,18 +10,18 @@ guide. You can file document formatting bugs against the --> -Print the version of kubeadm +Exibe a versão do kubeadm -### Synopsis +### Sinopse -Print the version of kubeadm +Exibe a versão do kubeadm ``` kubeadm version [flags] ``` -### Options +### Opções
--allow-missing-template-keys     Default: true--allow-missing-template-keys     Padrão: true

If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.

Se verdadeiro (true), ignora quaisquer erros nos modelos quando um campo ou chave de mapa estiver faltando no modelo. Aplica-se apenas aos formatos de saída golang e jsonpath.

-o, --experimental-output string     Default: "text"-o, --experimental-output string     Padrão: "text"

Output format. One of: text|json|yaml|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-as-json|jsonpath-file.

Formato de saída. Um dos: text|json|yaml|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-as-json|jsonpath-file.

-h, --help

help for list

ajuda para list

--show-managed-fields

If true, keep the managedFields when printing objects in JSON or YAML format.

Se verdadeiro (true), mantem os managedFields ao exibir os objetos no formato JSON ou YAML.

--dry-run

Whether to enable dry-run mode or not

Ativar ou não o modo de execução dry-run

--kubeconfig string     Default: "/etc/kubernetes/admin.conf"--kubeconfig string     Padrão: "/etc/kubernetes/admin.conf"

The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.

O arquivo kubeconfig a ser usado para se comunicar com o cluster. Se a flag não estiver definida, um conjunto de locais padrão pode ser pesquisado em busca de um arquivo kubeconfig existente.

--rootfs string

[EXPERIMENTAL] The path to the 'real' host root filesystem.

[EXPERIMENTAL] O caminho para o 'real' sistema de arquivos raiz do host.

@@ -34,14 +34,14 @@ kubeadm version [flags] - + - + @@ -49,7 +49,7 @@ kubeadm version [flags] -### Options inherited from parent commands +### Opção herdada do comando superior
-h, --help

help for version

ajuda para version

-o, --output string

Output format; available options are 'yaml', 'json' and 'short'

Formato de saída; as opções disponíveis são 'yaml', 'json' e 'short'

@@ -62,7 +62,7 @@ kubeadm version [flags] - + From 3eb9334ee21e9a0949834902840fb4dee2081ba4 Mon Sep 17 00:00:00 2001 From: SzymonPrzepiora Date: Wed, 15 Jun 2022 14:04:18 +0200 Subject: [PATCH 016/292] suggested changes --- content/en/docs/concepts/security/rbac-good-practices.md | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/content/en/docs/concepts/security/rbac-good-practices.md b/content/en/docs/concepts/security/rbac-good-practices.md index 0bf9440b42645..58da33c10aeb2 100644 --- a/content/en/docs/concepts/security/rbac-good-practices.md +++ b/content/en/docs/concepts/security/rbac-good-practices.md @@ -30,7 +30,7 @@ some general rules that can be applied are : ClusterRoleBindings to give users rights only within a specific namespace. - Avoid providing wildcard permissions when possible, especially to all resources. As Kubernetes is an extensible system, providing wildcard access gives rights - not just to all object types present in the cluster, but also to all future object types + not just to all object types that currently exist in the cluster, but also to all future object types which are created in the future. - Administrators should not use `cluster-admin` accounts except where specifically needed. Providing a low privileged account with [impersonation rights](/docs/reference/access-authn-authz/authentication/#user-impersonation) @@ -128,8 +128,7 @@ granting rights to this resource. ### Escalate verb -Generally, the RBAC system prevents users from creating clusterroles with more rights than -they possess. The exception to this is the `escalate` verb. As noted in the [RBAC documentation](/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update), +Generally, the RBAC system prevents users from creating clusterroles with more rights than the user possesses. The exception to this is the `escalate` verb. As noted in the [RBAC documentation](/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update), users with this right can effectively escalate their privileges. ### Bind verb From 8f460f7860d2cdfe1c544d0940a0f01d86ebcbc8 Mon Sep 17 00:00:00 2001 From: "Mr. Erlison" <98214640+MrErlison@users.noreply.github.com> Date: Wed, 15 Jun 2022 09:51:50 -0300 Subject: [PATCH 017/292] Update content/pt-br/docs/concepts/architecture/nodes.md Co-authored-by: Diego W. Antunes --- content/pt-br/docs/concepts/architecture/nodes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/pt-br/docs/concepts/architecture/nodes.md b/content/pt-br/docs/concepts/architecture/nodes.md index a26aaa932a531..0e49d907e61fb 100644 --- a/content/pt-br/docs/concepts/architecture/nodes.md +++ b/content/pt-br/docs/concepts/architecture/nodes.md @@ -208,7 +208,7 @@ O segundo é manter a lista interna de nós do controlador de nós atualizada co O terceiro é monitorar a saúde dos nós. O controlador do nó é responsável por: -- No caso de um nó se tornar inacessível, atualizando a condição NodeReady de dentro do `.status` do nó. Nesse caso, o controlador do nó define a condição de pronto (`NodeReady`) como condição desconhecida (`ConditionUnknown`). +- No caso de um nó se tornar inacessível, atualizar a condição NodeReady dentro do campo `.status` do nó. Nesse caso, o controlador do nó define a condição de pronto (`NodeReady`) como condição desconhecida (`ConditionUnknown`). - Se um nó permanecer inacessível: será iniciado a [remoção pela API](/docs/concepts/scheduling-eviction/api-eviction/) para todos os Pods no nó inacessível. 
Por padrão, o controlador do nó espera 5 minutos entre marcar o nó como condição desconhecida (`ConditionUnknown`) e enviar a primeira solicitação de remoção. O controlador de nó verifica o estado de cada nó a cada `--node-monitor-period` segundos. From 443f20914e4784d1e2c1540db44039d473fc3b23 Mon Sep 17 00:00:00 2001 From: "Mr. Erlison" Date: Wed, 15 Jun 2022 09:56:17 -0300 Subject: [PATCH 018/292] Major updates adjusted --- .../pt-br/docs/concepts/architecture/nodes.md | 29 +++++++++---------- 1 file changed, 14 insertions(+), 15 deletions(-) diff --git a/content/pt-br/docs/concepts/architecture/nodes.md b/content/pt-br/docs/concepts/architecture/nodes.md index a26aaa932a531..9f3fa920f944f 100644 --- a/content/pt-br/docs/concepts/architecture/nodes.md +++ b/content/pt-br/docs/concepts/architecture/nodes.md @@ -1,5 +1,4 @@ --- -reviewers: title: Nós content_type: conceito weight: 10 @@ -76,7 +75,7 @@ Quando o [modo de autorização do nó](/docs/reference/access-authn-authz/node/ {{< note >}} Como mencionado na seção de [singularidade do nome do nó](#singularidade-de-nome-do-no), quando a configuração do nó precisa ser atualizada, é uma boa prática registrar novamente o nó no servidor da API. Por exemplo, se o kubelet estiver sendo reiniciado com o novo conjunto de `--node-labels`, mas o mesmo nome de nó for usado, a alteração não entrará em vigor, pois os labels estão sendo definidos no registro do Nó. -Pods já agendados no Nó podem ter um comportamento anormal ou causar problemas se a configuração do Nó for alterada na reinicialização do kubelet. Por exemplo, o Pod já em execução pode estar marcado diferente dos labels atribuídas ao Nó, enquanto outros Pods, que são incompatíveis com esse Pod, serão agendados com base nesse novo label. O novo registro do nó garante que todos os Pods sejam drenados e devidamente reiniciados. +Pods já agendados no Nó podem ter um comportamento anormal ou causar problemas se a configuração do Nó for alterada na reinicialização do kubelet. Por exemplo, o Pod já em execução pode estar marcado diferente dos labels atribuídos ao Nó, enquanto outros Pods, que são incompatíveis com esse Pod, serão agendados com base nesse novo label. O novo registro do nó garante que todos os Pods sejam drenados e devidamente reiniciados. {{< /note >}} ### Administração manual de nós @@ -115,14 +114,14 @@ O status de um nó contém as seguintes informações: Você pode usar o `kubectl` para visualizar o status de um nó e outros detalhes: ```shell -kubectl describe node +kubectl describe node ``` Cada seção da saída está descrita abaixo. ### Endereços -O uso desses campos pode mudar dependendo do seu provedor de nuvem ou configuração `configuração dedicada`. +O uso desses campos pode mudar dependendo do seu provedor de nuvem ou configuração dedicada. * HostName: O nome do host relatado pelo `kernel` do nó. Pode ser substituído através do parâmetro kubelet `--hostname-override`. * ExternalIP: Geralmente, o endereço IP do nó que é roteável externamente (disponível fora do `cluster`). @@ -132,8 +131,8 @@ O uso desses campos pode mudar dependendo do seu provedor de nuvem ou configura O campo `conditions` descreve o status de todos os nós em execução. Exemplos de condições incluem: -{{< table caption = "Node conditions, and a description of when each condition applies." >}} -| Node Condition | Description | +{{< table caption = "Condições do nó e uma descrição de quando cada condição se aplica." 
>}} +| Condições do nó | Descrição | |----------------------|-------------| | `Ready` | `True` Se o nó estiver íntegro e pronto para aceitar pods, `False` se o nó não estiver íntegro e não estiver aceitando pods, e desconhecido `Unknown` se o controlador do nó tiver sem notícias do nó no último `node-monitor-grace-period` (o padrão é de 40 segundos) | | `DiskPressure` | `True` Se houver pressão sobre o tamanho do disco, ou seja, se a capacidade do disco for baixa; caso contrário `False` | @@ -161,13 +160,13 @@ Na API do Kubernetes, a condição de um nó é representada como parte do `.sta ] ``` -Se o status da condição `Ready` permanecer desconhecido (`Unknown`) ou falso (`False`) por mais tempo do que o limite de remoção do pod (`pod-eviction-timeout`) (um argumento passado para o {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager">}}), o [controlador de nó](#node-controller) acionará o {{< glossary_tooltip text="remoção iniciado pela API" term_id="api-eviction" >}} para todos os Pods atribuídos a esse nó. A duração padrão do tempo limite de remoção é de **cinco minutos**. Em alguns casos, quando o nó está inacessível, o servidor da API não consegue se comunicar com o kubelet no nó. A decisão de excluir os pods não pode ser comunicada ao kubelet até que a comunicação com o servidor da API seja restabelecida. Enquanto isso, os pods agendados para exclusão podem continuar a ser executados no nó particionado. +Se o status da condição `Ready` permanecer desconhecido (`Unknown`) ou falso (`False`) por mais tempo do que o limite da remoção do pod (`pod-eviction-timeout`) (um argumento passado para o {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager">}}), o [controlador de nó](#node-controller) acionará o {{< glossary_tooltip text="remoção iniciado pela API" term_id="api-eviction" >}} para todos os Pods atribuídos a esse nó. A duração padrão do tempo limite da remoção é de **cinco minutos**. Em alguns casos, quando o nó está inacessível, o servidor da API não consegue se comunicar com o kubelet no nó. A decisão de excluir os pods não pode ser comunicada ao kubelet até que a comunicação com o servidor da API seja restabelecida. Enquanto isso, os pods agendados para exclusão podem continuar a ser executados no nó particionado. O controlador de nós não força a exclusão dos pods até que seja confirmado que eles pararam de ser executados no cluster. Você pode ver os pods que podem estar sendo executados em um nó inacessível como estando no estado de terminando (`Terminating`) ou desconhecido (`Unknown`). Nos casos em que o Kubernetes não retirar da infraestrutura subjacente se um nó tiver deixado permanentemente um cluster, o administrador do cluster pode precisar excluir o objeto do nó manualmente. Excluir o objeto do nó do Kubernetes faz com que todos os objetos Pod em execução no nó sejam excluídos do servidor da API e libera seus nomes. Quando ocorrem problemas nos nós, a camada de gerenciamento do Kubernetes cria automaticamente [`taints`](/docs/concepts/scheduling-eviction/taint-and-toleration/) que correspondem às condições que afetam o nó. O escalonador leva em consideração as `taints` do Nó ao atribuir um Pod a um Nó. Os Pods também podem ter {{< glossary_tooltip text="tolerations" term_id="toleration" >}} que os permitem funcionar em um nó, mesmo que tenha uma `taint` específica. 
-Consulte [Nó Taint por Condição](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition) +Consulte [Nó Taint por Condição](/pt-br/docs/concepts/scheduling-eviction/taint-and-toleration/#taints-por-condições-de-nó) para mais detalhes. ### Capacidade e Alocável {#capacity} @@ -209,13 +208,13 @@ O segundo é manter a lista interna de nós do controlador de nós atualizada co O terceiro é monitorar a saúde dos nós. O controlador do nó é responsável por: - No caso de um nó se tornar inacessível, atualizando a condição NodeReady de dentro do `.status` do nó. Nesse caso, o controlador do nó define a condição de pronto (`NodeReady`) como condição desconhecida (`ConditionUnknown`). -- Se um nó permanecer inacessível: será iniciado a [remoção pela API](/docs/concepts/scheduling-eviction/api-eviction/) para todos os Pods no nó inacessível. Por padrão, o controlador do nó espera 5 minutos entre marcar o nó como condição desconhecida (`ConditionUnknown`) e enviar a primeira solicitação de remoção. +- Se um nó permanecer inacessível: será iniciada a [remoção pela API](/docs/concepts/scheduling-eviction/api-eviction/) para todos os Pods no nó inacessível. Por padrão, o controlador do nó espera 5 minutos entre marcar o nó como condição desconhecida (`ConditionUnknown`) e enviar a primeira solicitação de remoção. O controlador de nó verifica o estado de cada nó a cada `--node-monitor-period` segundos. ### Limites de taxa de remoção -Na maioria dos casos, o controlador de nós limita a taxa de remoção a `--node-eviction-rate` (0,1 por padrão) por segundo, o que significa que ele não despejará pods de mais de 1 nó por 10 segundos. +Na maioria dos casos, o controlador de nós limita a taxa de remoção a `--node-eviction-rate` (0,1 por padrão) por segundo, o que significa que ele não removerá pods de mais de 1 nó por 10 segundos. O comportamento de remoção do nó muda quando um nó em uma determinada zona de disponibilidade se torna não íntegro. O controlador de nós verifica qual porcentagem de nós na zona não são íntegras (a condição `NodeReady` é desconhecida `ConditionUnknown` ou falsa `ConditionFalse`) ao mesmo tempo: @@ -223,11 +222,11 @@ O comportamento de remoção do nó muda quando um nó em uma determinada zona d - Se o cluster for pequeno (ou seja, tiver número de nós menor ou igual ao valor da opção `--large-cluster-size-threshold` - padrão 50), então as remoções serão interrompidas. - Caso contrário, a taxa de remoção é reduzida para `--secondary-node-eviction-rate` de nós secundários (padrão 0,01) por segundo. -A razão pela qual essas políticas são implementadas por zona de disponibilidade é porque uma zona de disponibilidade pode ser particionada a iniciar da camada de gerenciamento, enquanto as outras permanecem conectadas. Se o seu cluster não abranger várias zonas de disponibilidade de provedores de nuvem, o mecanismo de remoção não levará em conta a indisponibilidade por zona. +A razão pela qual essas políticas são implementadas por zona de disponibilidade é porque a camada de gerenciamento pode perder conexão com uma zona de disponibilidade, enquanto as outras permanecem conectadas. Se o seu cluster não abranger várias zonas de disponibilidade de provedores de nuvem, o mecanismo de remoção não levará em conta a indisponibilidade por zona. -Uma das principais razões para espalhar seus nós pelas zonas de disponibilidade é para que a carga de trabalho possa ser transferida para zonas íntegras quando uma zona inteira cair. 
Portanto, se todos os nós em uma zona não forem íntegros, o controlador do nó despeja na taxa normal de `--node-eviction-rate`. O caso especial é quando todas as zonas são completamente insalubres (nenhum dos nós do cluster será íntegro). Nesse caso, o controlador do nó assume que há algum problema com a conectividade entre a camada de gerenciamento e os nós e não realiza nenhuma remoção. (Se houver uma interrupção e alguns nós reaparecerem, o controlador do nó expulsa pods dos nós restantes que são insalubres ou inacessíveis). +Uma das principais razões para espalhar seus nós pelas zonas de disponibilidade é para que a carga de trabalho possa ser transferida para zonas íntegras quando uma zona inteira cair. Portanto, se todos os nós em uma zona não forem íntegros, o controlador do nó remova na taxa normal de `--node-eviction-rate`. O caso especial é quando todas as zonas são completamente insalubres (nenhum dos nós do cluster será íntegro). Nesse caso, o controlador do nó assume que há algum problema com a conectividade entre a camada de gerenciamento e os nós e não realiza nenhuma remoção. (Se houver uma interrupção e alguns nós reaparecerem, o controlador do nó expulsa pods dos nós restantes que são insalubres ou inacessíveis). -O controlador de nós também é responsável por despejar pods em execução nos nós com `NoExecute` taints, a menos que esses pods tolerem essa taint. O controlador de nó também adiciona as {{< glossary_tooltip text="taints" term_id="taint" >}} correspondentes aos problemas de nó, como nó inacessível ou não pronto. Isso significa que o escalonador não colocará Pods em nós não íntegros. +O controlador de nós também é responsável por remover pods em execução nos nós com `NoExecute` taints, a menos que esses pods tolerem essa taint. O controlador de nó também adiciona as {{< glossary_tooltip text="taints" term_id="taint" >}} correspondentes aos problemas de nó, como nó inacessível ou não pronto. Isso significa que o escalonador não colocará Pods em nós não íntegros. ## Rastreamento de capacidade de recursos {#node-capacity} @@ -275,7 +274,7 @@ O recurso de desligamento gradual do nó é configurado com duas opções [`Kube Por exemplo, se `shutdownGracePeriod=30s` e `shutdownGracePeriodCriticalPods=10s`, o kubelet atrasará o desligamento do nó em 30 segundos. Durante o desligamento, os primeiros 20 (30-10) segundos seriam reservados para encerrar gradualmente os pods normais, e os últimos 10 segundos seriam reservados para encerrar [pods críticos](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical). {{< note >}} -Quando os pods forem despejados durante o desligamento gradual do nó, eles serão marcados como desligados. Executar o `kubectl get pods` para mostrar o status dos pods despejados como `Terminated`. E o `kubectl describe pod` indica que o pod foi despejado por causa do desligamento do nó: +Quando os pods forem removidos durante o desligamento gradual do nó, eles serão marcados como desligados. Executar o `kubectl get pods` para mostrar o status dos pods removidos como `Terminated`. E o `kubectl describe pod` indica que o pod foi removido por causa do desligamento do nó: ``` Reason: Terminated @@ -287,7 +286,7 @@ Message: Pod was terminated in response to imminent node shutdown. 
{{< feature-state state="alpha" for_k8s_version="v1.23" >}} -Assumindo as seguintes [classes de prioridade](/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass) do pod em um cluster, para fornecer mais flexibilidade durante o desligamento gradual do nó em torno da ordem de pods durante o desligamento, o desligamento gradual do nó respeita a PriorityClass dos Pods, desde que você tenha ativado esse recurso em seu cluster. O recurso permite que o cluster defina explicitamente a ordem dos pods durante o desligamento gradual do nó com base em [classes de prioridade](/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass). +Para fornecer mais flexibilidade durante o desligamento gradual do nó em torno da ordem de pods durante o desligamento, o desligamento gradual do nó respeita a PriorityClass dos Pods, desde que você tenha ativado esse recurso em seu cluster. O recurso permite que o cluster defina explicitamente a ordem dos pods durante o desligamento gradual do nó com base em [classes de prioridade](/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass). O recurso [Desligamento Gradual do Nó](#graceful-node-shutdown), conforme descrito acima, desliga pods em duas fases, pods não críticos, seguidos por pods críticos. Se for necessária flexibilidade adicional para definir explicitamente a ordem dos pods durante o desligamento de uma maneira mais granular, o desligamento gradual baseado na prioridade do pod pode ser usado. From e37693785832747bb00f83f06f03d1e17bfa61e9 Mon Sep 17 00:00:00 2001 From: "Mr. Erlison" Date: Wed, 15 Jun 2022 10:36:48 -0300 Subject: [PATCH 019/292] Adjusting context, spelling and grammar --- content/pt-br/docs/concepts/architecture/nodes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/pt-br/docs/concepts/architecture/nodes.md b/content/pt-br/docs/concepts/architecture/nodes.md index 266815d8d431e..b28544f40bc8d 100644 --- a/content/pt-br/docs/concepts/architecture/nodes.md +++ b/content/pt-br/docs/concepts/architecture/nodes.md @@ -224,7 +224,7 @@ O comportamento de remoção do nó muda quando um nó em uma determinada zona d A razão pela qual essas políticas são implementadas por zona de disponibilidade é porque a camada de gerenciamento pode perder conexão com uma zona de disponibilidade, enquanto as outras permanecem conectadas. Se o seu cluster não abranger várias zonas de disponibilidade de provedores de nuvem, o mecanismo de remoção não levará em conta a indisponibilidade por zona. -Uma das principais razões para espalhar seus nós pelas zonas de disponibilidade é para que a carga de trabalho possa ser transferida para zonas íntegras quando uma zona inteira cair. Portanto, se todos os nós em uma zona não forem íntegros, o controlador do nó remova na taxa normal de `--node-eviction-rate`. O caso especial é quando todas as zonas são completamente insalubres (nenhum dos nós do cluster será íntegro). Nesse caso, o controlador do nó assume que há algum problema com a conectividade entre a camada de gerenciamento e os nós e não realiza nenhuma remoção. (Se houver uma interrupção e alguns nós reaparecerem, o controlador do nó expulsa pods dos nós restantes que são insalubres ou inacessíveis). +Uma das principais razões para espalhar seus nós pelas zonas de disponibilidade é para que a carga de trabalho possa ser transferida para zonas íntegras quando uma zona inteira cair. 
Portanto, se todos os nós em uma zona não estiverem íntegros, o controlador do nó removerá na taxa normal de `--node-eviction-rate`. O caso especial é quando todas as zonas estiverem completamente insalubres (nenhum dos nós do cluster será íntegro). Nesse caso, o controlador do nó assume que há algum problema com a conectividade entre a camada de gerenciamento e os nós e não realizará nenhuma remoção. (Se houver uma interrupção e alguns nós reaparecerem, o controlador do nó expulsará os pods dos nós restantes que estiverem insalubres ou inacessíveis).

O controlador de nós também é responsável por remover pods em execução nos nós com `NoExecute` taints, a menos que esses pods tolerem essa taint. O controlador de nó também adiciona as {{< glossary_tooltip text="taints" term_id="taint" >}} correspondentes aos problemas de nó, como nó inacessível ou não pronto. Isso significa que o escalonador não colocará Pods em nós não íntegros.

From 4e6d0f70d640d2c9874daaa4d9620b5b1f68b992 Mon Sep 17 00:00:00 2001
From: "Mr. Erlison"
Date: Wed, 15 Jun 2022 10:47:03 -0300
Subject: [PATCH 020/292] Updated the explanation of the concept

---
 content/pt-br/docs/reference/glossary/userns.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/pt-br/docs/reference/glossary/userns.md b/content/pt-br/docs/reference/glossary/userns.md
index 3fa8dc5031d19..f8599f0743855 100644
--- a/content/pt-br/docs/reference/glossary/userns.md
+++ b/content/pt-br/docs/reference/glossary/userns.md
@@ -17,7 +17,7 @@ Um recurso do kernel para emular o root. Usado para "contêineres sem root".

 Os namespaces do usuário são um recurso do kernel Linux que permite que um usuário não root emule privilégios de superusuário ("root"), por exemplo, para executar contêineres sem ser um superusuário fora do contêiner.

-O namespace do usuário é eficaz para mitigar os danos de possíveis ataques fora de contêineres.
+O namespace do usuário é eficaz para mitigar os danos de um potencial ataque em que o adversário escapa dos limites do contêiner.

 No contexto de namespaces de usuário, o namespace é um recurso do kernel Linux, e não um {{< glossary_tooltip text="namespace" term_id="namespace" >}} no sentido do termo Kubernetes.

From 3e31bd465b56939b37bc6c9074fdcbe1b81d97ea Mon Sep 17 00:00:00 2001
From: "Mr. Erlison"
Date: Wed, 15 Jun 2022 11:04:08 -0300
Subject: [PATCH 021/292] Update term and sentence

---
 content/pt-br/docs/reference/glossary/sig.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/content/pt-br/docs/reference/glossary/sig.md b/content/pt-br/docs/reference/glossary/sig.md
index 37d70a85f1852..cde20f5451f56 100644
--- a/content/pt-br/docs/reference/glossary/sig.md
+++ b/content/pt-br/docs/reference/glossary/sig.md
@@ -4,16 +4,16 @@ id: sig
 date: 2018-04-12
 full_link: https://github.com/kubernetes/community/blob/master/sig-list.md#master-sig-list
 short_description: >
- Membros da comunidade que gerenciam coletivamente e continuamente uma parte ou um projeto maior de código aberto do Kubernetes.
+ Membros da comunidade que gerenciam coletivamente e continuamente uma parte ou projeto maior do código aberto do Kubernetes.

 aka:
 tags:
 - community
 ---

- {{< glossary_tooltip text="Membros da comunidade" term_id="member" >}} que gerenciam coletivamente e continuamente uma parte ou um projeto maior de código aberto do Kubernetes.
+ {{< glossary_tooltip text="Membros da comunidade" term_id="member" >}} que gerenciam coletivamente e continuamente uma parte ou projeto maior do código aberto do Kubernetes.
-Os membros dentro de um grupo de interesse especial (do inglês - Special Interest Group, SIG) têm um interesse comum em avançar em uma área específica, como arquitetura, API ou documentação. Os SIGs devem seguir as [diretrizes de governança](https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance.md) do SIG, mas podem ter sua própria política de contribuição e canais de comunicação. +Os membros dentro de um grupo de interesse especial (do inglês - Special Interest Group, SIG) têm um interesse comum em contribuir em uma área específica, como arquitetura, API ou documentação. Os SIGs devem seguir as [diretrizes de governança](https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance.md) do SIG, mas podem ter sua própria política de contribuição e canais de comunicação. Para mais informações, consulte o repositório [kubernetes/community](https://github.com/kubernetes/community) e a lista atual de [SIGs e Grupos de Trabalho](https://github.com/kubernetes/community/blob/master/sig-list.md). From c161b83d727eb352e642abfa4cf72a5054bbe619 Mon Sep 17 00:00:00 2001 From: "Mr. Erlison" Date: Wed, 15 Jun 2022 11:09:36 -0300 Subject: [PATCH 022/292] Update term --- content/pt-br/docs/reference/glossary/kubectl.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/pt-br/docs/reference/glossary/kubectl.md b/content/pt-br/docs/reference/glossary/kubectl.md index d3136c44230c4..c865afb14f11e 100644 --- a/content/pt-br/docs/reference/glossary/kubectl.md +++ b/content/pt-br/docs/reference/glossary/kubectl.md @@ -12,7 +12,7 @@ tags: - tool - fundamental --- -Ferramenta de linha de comando para se comunicar com o {{< glossary_tooltip text="plano de controle" term_id="control-plane" >}} de um cluster Kubernetes usando a API do Kubernetes. +Ferramenta de linha de comando para se comunicar com o {{< glossary_tooltip text="camada de gerenciamento" term_id="control-plane" >}} de um cluster Kubernetes usando a API do Kubernetes. Você pode usar `kubectl` para criar, inspecionar, atualizar e excluir objetos Kubernetes. From f5f79fde59eaf6d1e82c63bd0fb8ed03dfb4e841 Mon Sep 17 00:00:00 2001 From: "Mr. Erlison" Date: Fri, 17 Jun 2022 19:16:08 -0300 Subject: [PATCH 023/292] Update the (o/a) article --- content/pt-br/docs/reference/glossary/kubectl.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/pt-br/docs/reference/glossary/kubectl.md b/content/pt-br/docs/reference/glossary/kubectl.md index c865afb14f11e..31f1d2e8efee3 100644 --- a/content/pt-br/docs/reference/glossary/kubectl.md +++ b/content/pt-br/docs/reference/glossary/kubectl.md @@ -12,7 +12,7 @@ tags: - tool - fundamental --- -Ferramenta de linha de comando para se comunicar com o {{< glossary_tooltip text="camada de gerenciamento" term_id="control-plane" >}} de um cluster Kubernetes usando a API do Kubernetes. +Ferramenta de linha de comando para se comunicar com a {{< glossary_tooltip text="camada de gerenciamento" term_id="control-plane" >}} de um cluster Kubernetes usando a API do Kubernetes. Você pode usar `kubectl` para criar, inspecionar, atualizar e excluir objetos Kubernetes. From caac7988b201e6c303f0bb1bb05f79d58c2d43b2 Mon Sep 17 00:00:00 2001 From: "Mr. 
Erlison" <98214640+MrErlison@users.noreply.github.com> Date: Fri, 17 Jun 2022 19:40:15 -0300 Subject: [PATCH 024/292] Update content/pt-br/docs/reference/glossary/kubectl.md Co-authored-by: Sean --- content/pt-br/docs/reference/glossary/kubectl.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/pt-br/docs/reference/glossary/kubectl.md b/content/pt-br/docs/reference/glossary/kubectl.md index 31f1d2e8efee3..d0a5e1970feeb 100644 --- a/content/pt-br/docs/reference/glossary/kubectl.md +++ b/content/pt-br/docs/reference/glossary/kubectl.md @@ -2,7 +2,7 @@ title: Kubectl id: kubectl date: 2018-04-12 -full_link: /docs/user-guide/kubectl-overview/ +full_link: /pt-br/docs/user-guide/kubectl-overview/ short_description: > Uma ferramenta de linha de comando para se comunicar com um cluster Kubernetes. From 85090b4e20a5840cb65dcd6e49136d992be0759d Mon Sep 17 00:00:00 2001 From: Arhell Date: Mon, 20 Jun 2022 11:27:44 +0300 Subject: [PATCH 025/292] [pt] update bootstrap-tokens.md link --- .../pt-br/docs/reference/access-authn-authz/bootstrap-tokens.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/pt-br/docs/reference/access-authn-authz/bootstrap-tokens.md b/content/pt-br/docs/reference/access-authn-authz/bootstrap-tokens.md index b7455e5765e00..e3d63272563db 100644 --- a/content/pt-br/docs/reference/access-authn-authz/bootstrap-tokens.md +++ b/content/pt-br/docs/reference/access-authn-authz/bootstrap-tokens.md @@ -57,7 +57,7 @@ do gerenciador de controle - kube-controller-manager. ## Formato do _secret_ dos tokens de inicialização Cada token válido possui um _secret_ no namespace `kube-system`. Você pode -encontrar a documentação completa [aqui](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md). +encontrar a documentação completa [aqui](https://github.com/kubernetes/design-proposals-archive/blob/main/cluster-lifecycle/bootstrap-discovery.md). Um _secret_ de token se parece com o exemplo abaixo: From 254b29b5e51e6ce368164161747be5f970168438 Mon Sep 17 00:00:00 2001 From: Mauren Berti Date: Fri, 15 Apr 2022 19:52:33 -0400 Subject: [PATCH 026/292] Update pt-br/secrets to reflect latest en version. * Update the Brazilian Portuguese version of the Secrets page to keep it up-to-date with the English version. --- .../docs/concepts/configuration/secret.md | 1215 +++++++++-------- 1 file changed, 643 insertions(+), 572 deletions(-) diff --git a/content/pt-br/docs/concepts/configuration/secret.md b/content/pt-br/docs/concepts/configuration/secret.md index 0fc63bc2e2d19..e7ec5d7d3941e 100644 --- a/content/pt-br/docs/concepts/configuration/secret.md +++ b/content/pt-br/docs/concepts/configuration/secret.md @@ -47,16 +47,18 @@ existentes. {{< /caution >}} +Consulte [Segurança da informação para Secrets](#information-security-for-secrets) +para mais detalhes. + -## Visão Geral de Secrets +## Usos para Secrets -Para utilizar um Secret, um Pod precisa referenciar o Secret. -Um Secret pode ser utilizado em um Pod de três maneiras diferentes: -- Como um [arquivo](#using-secrets-as-files-from-a-pod) em um +Existem três formas principais para um Pod utilizar um Secret: +- Como [arquivos](#using-secrets-as-files-from-a-pod) em um {{< glossary_tooltip text="volume" term_id="volume" >}} montado em um ou mais de seus contêineres. -- Como uma [variável de ambiente](#using-secrets-as-environment-variables) em um +- Como uma [variável de ambiente](#using-secrets-as-environment-variables) de um contêiner. 
- Pelo [kubelet ao baixar imagens de contêiner](#using-imagepullsecrets) para o Pod. @@ -65,7 +67,54 @@ A camada de gerenciamento do Kubernetes também utiliza Secrets. Por exemplo, os [Secrets de tokens de autoinicialização](#bootstrap-token-secrets) são um mecanismo que auxilia a automação do registro de nós. +### Alternativas a Secrets + +Ao invés de utilizar um Secret para proteger dados confidenciais, você pode +escolher uma maneira alternativa. Algumas das opções são: + +- se o seu componente cloud native precisa autenticar-se a outra aplicação que +está rodando no mesmo cluster Kubernetes, você pode utilizar uma +[ServiceAccount](/pt-br/docs/reference/access-authn-authz/authentication/#tokens-de-contas-de-serviço) +e seus tokens para identificar seu cliente. +- existem ferramentas fornecidas por terceiros que você pode rodar, no seu +cluster ou externamente, que providenciam gerenciamento de Secrets. Por exemplo, +um serviço que Pods accessam via HTTPS, que revelam um Secret se o cliente +autenticar-se corretamente (por exemplo, utilizando um token de ServiceAccount). +- para autenticação, você pode implementar um serviço de assinatura de +certificados X.509 personalizado, e utilizar +[CertificateSigningRequests](/docs/reference/access-authn-authz/certificate-signing-requests/) +para permitir ao serviço personalizado emitir certificados a pods que os +necessitam. +- você pode utilizar um [plugin de dispositivo](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) +para expor a um Pod específico um hardware de encriptação conectado a um nó. Por +exemplo, você pode agendar Pods confiáveis em nós que oferecem um _Trusted +Platform Module_, configurado em um fluxo de dados independente. + +Você pode também combinar duas ou mais destas opções, incluindo a opção de +utilizar objetos do tipo Secret. + +Por exemplo: implemente (ou instale) um +{{< glossary_tooltip text="operador" term_id="operator-pattern" >}} +que solicite tokens de sessão de curta duração a um serviço externo, e crie +Secrets baseado nestes tokens. Pods rodando no seu cluster podem fazer uso de +tokens de sessão, e o operador garante que estes permanecem válidos. Esta +separação significa que você pode rodar Pods que não precisam ter conhecimento +do mecanismo exato para geração e atualização de tais tokens de sessão. + +## Trabalhando com Secrets + +### Criando um Secret + +Existem diversas formas de criar um Secret: + +- [crie um Secret utilizando o comando `kubectl`](/pt-br/docs/tasks/configmap-secret/managing-secret-using-kubectl/) +- [crie um Secret a partir de um arquivo de configuração](/pt-br/docs/tasks/configmap-secret/managing-secret-using-config-file/) +- [crie um Secret utilizando a ferramenta kustomize](/pt-br/docs/tasks/configmap-secret/managing-secret-using-kustomize/) + +#### Restrições de nomes de Secret e dados {#restriction-names-data} + O nome de um Secret deve ser um [subdomínio DNS válido](/pt-br/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). + Você pode especificar o campo `data` e/ou o campo `stringData` na criação de um arquivo de configuração de um Secret. Ambos os campos `data` e `stringData` são opcionais. Os valores das chaves no campo `data` devem ser strings codificadas @@ -78,642 +127,297 @@ alfanuméricos, `-`, `_`, ou `.`. Todos os pares chave-valor no campo `stringDat são internamente combinados com os dados do campo `data`. Se uma chave aparece em ambos os campos, o valor informado no campo `stringData` toma a precedência. 
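Apenas para ilustrar essa regra de precedência, segue um esboço mínimo e hipotético de um Secret em que a mesma chave `username` aparece em `data` e em `stringData`; neste caso, o valor em texto simples informado em `stringData` é o que prevalece:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: exemplo-precedencia # nome hipotético, apenas para ilustração
type: Opaque
data:
  # "administrador" codificado em base64
  username: YWRtaW5pc3RyYWRvcg==
stringData:
  # mesma chave declarada em stringData: este valor toma a precedência
  username: operador
```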
-## Tipos de Secrets {#secret-types} - -Ao criar um Secret, você pode especificar o seu tipo utilizando o campo `type` -do objeto Secret, ou algumas opções de linha de comando equivalentes no comando -`kubectl`, quando disponíveis. O campo `type` de um Secret é utilizado para -facilitar a manipulação programática de diferentes tipos de dados confidenciais. - -O Kubernetes oferece vários tipos embutidos de Secret para casos de uso comuns. -Estes tipos variam em termos de validações efetuadas e limitações que o -Kubernetes impõe neles. +#### Limite de tamanho {#restriction-data-size} -| Tipo embutido | Caso de uso | -|----------------------------------------|----------------------------------------------------| -| `Opaque` | dados arbitrários definidos pelo usuário | -| `kubernetes.io/service-account-token` | token de service account (conta de serviço) | -| `kubernetes.io/dockercfg` | arquivo `~/.dockercfg` serializado | -| `kubernetes.io/dockerconfigjson` | arquivo `~/.docker/config.json` serializado | -| `kubernetes.io/basic-auth` | credenciais para autenticação básica (basic auth) | -| `kubernetes.io/ssh-auth` | credenciais para autenticação SSH | -| `kubernetes.io/tls` | dados para um cliente ou servidor TLS | -| `bootstrap.kubernetes.io/token` | dados de token de autoinicialização | +Secrets individuais são limitados a 1MiB em tamanho. Esta limitação tem por +objetivo desencorajar a criação de Secrets muito grandes que possam exaurir o +servidor da API e a memória do kubelet. No entanto, a criação de vários Secrets +pequenos pode também exaurir a memória. Você pode utilizar uma +[quota de recurso](/docs/concepts/policy/resource-quotas/) a fim de limitar o +número de Secrets (ou outros recursos) em um namespace. -Você pode definir e utilizar seu próprio tipo de Secret definindo o valor do -campo `type` como uma string não-nula em um objeto Secret. Uma string em branco -é tratada como o tipo `Opaque`. O Kubernetes não restringe nomes de tipos. No -entanto, quando tipos embutidos são utilizados, você precisa atender a todos os -requisitos daquele tipo. +### Editando um Secret -### Secrets tipo Opaque +Você pode editar um Secret existente utilizando kubectl: -`Opaque` é o tipo predefinido de Secret quando o campo `type` não é informado -em um arquivo de configuração. Quando um Secret é criado usando o comando -`kubectl`, você deve usar o subcomando `generic` para indicar que um Secret é -do tipo `Opaque`. Por exemplo, o comando a seguir cria um Secret vazio do tipo -`Opaque`: ```shell -kubectl create secret generic empty-secret -kubectl get secret empty-secret -``` - -O resultado será semelhante ao abaixo: - -``` -NAME TYPE DATA AGE -empty-secret Opaque 0 2m6s +kubectl edit secrets mysecret ``` -A coluna `DATA` demonstra a quantidade de dados armazenados no Secret. Neste -caso, `0` significa que este objeto Secret está vazio. - -### Secrets de token de service account (conta de serviço) - -Secrets do tipo `kubernetes.io/service-account-token` são utilizados para -armazenar um token que identifica uma service account (conta de serviço). Ao -utilizar este tipo de Secret, você deve garantir que a anotação -`kubernetes.io/service-account.name` contém um nome de uma service account -existente. Um controlador do Kubernetes preenche outros campos, como por exemplo -a anotação `kubernetes.io/service-account.uid` e a chave `token` no campo `data` -com o conteúdo do token. 
- -O exemplo de configuração abaixo declara um Secret de token de service account: +Este comando abre o seu editor padrão configurado e permite a modificação dos +valores do Secret codificados em base64 no campo `data`. Por exemplo: ```yaml +# Please edit the object below. Lines beginning with a '#' will be ignored, +# and an empty file will abort the edit. If an error occurs while saving this file, it will be +# reopened with the relevant failures. +# apiVersion: v1 +data: + username: YWRtaW4= + password: MWYyZDFlMmU2N2Rm kind: Secret metadata: - name: secret-sa-sample annotations: - kubernetes.io/service-account-name: "sa-name" -type: kubernetes.io/service-account-token -data: - # Você pode incluir pares chave-valor adicionais, da mesma forma que faria com - # Secrets do tipo Opaque - extra: YmFyCg== + kubectl.kubernetes.io/last-applied-configuration: { ... } + creationTimestamp: 2016-01-22T18:41:56Z + name: mysecret + namespace: default + resourceVersion: "164619" + uid: cfee02d6-c137-11e5-8d73-42010af00002 +type: Opaque ``` -Ao criar um {{< glossary_tooltip text="Pod" term_id="pod" >}}, o Kubernetes -automaticamente cria um Secret de service account e automaticamente atualiza o -seu Pod para utilizar este Secret. O Secret de token de service account contém -credenciais para acessar a API. +Este manifesto de exemplo define um Secret com duas chaves no campo `data`: +`username` and `password`. +Os valores são strings codificadas em formato base64. No entanto, quando um +Secret é utilizado em um Pod, o kubelet fornece os dados _decodificados_ ao Pod +e seus contêineres. -A criação automática e o uso de credenciais de API podem ser desativados se -desejado. Porém, se tudo que você necessita é poder acessar o servidor da API -de forma segura, este é o processo recomendado. +Você pode especificar muitas chaves e valores em um Secret só, ou utilizar +muitos Secrets. Escolha a opção que for mais conveniente para o caso de uso. -Veja a documentação de -[ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/) -para mais informações sobre o funcionamento de service accounts. Você pode -verificar também os campos `automountServiceAccountToken` e `serviceAccountName` -do [`Pod`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core) -para mais informações sobre como referenciar service accounts em Pods. +### Utilizando Secrets -### Secrets de configuração do Docker +Secrets podem ser montados como volumes de dados ou expostos como +{{< glossary_tooltip text="variáveis de ambiente" term_id="container-env-variables" >}} +para serem utilizados num container de um Pod. Secrets também podem ser +utilizados por outras partes do sistema, sem serem diretamente expostos ao Pod. +Por exemplo, Secrets podem conter credenciais que outras partes do sistema devem +utilizar para interagir com sistemas externos no lugar do usuário. -Você pode utilizar um dos tipos abaixo para criar um Secret que armazena -credenciais para accesso a um registro de contêineres compatível com Docker -para busca de imagens: -- `kubernetes.io/dockercfg` -- `kubernetes.io/dockerconfigjson` +Secrets montados como volumes são verificados para garantir que o nome +referenciado realmente é um objeto do tipo Secret. Portanto, um Secret deve ser +criado antes de quaisquer Pods que o referenciam. -O tipo `kubernetes.io/dockercfg` é reservado para armazenamento de um arquivo -`~/.dockercfg` serializado. Este arquivo é o formato legado para configuração -do utilitário de linha de comando do Docker. 
Ao utilizar este tipo de Secret, -é preciso garantir que o campo `data` contém uma chave `.dockercfg` cujo valor -é o conteúdo do arquivo `~/.dockercfg` codificado no formato base64. +Se um Secret não puder ser encontrado (porque não existe, ou devido a um problema +de conectividade com o servidor da API) o kubelet tenta periodicamente reiniciar +aquele Pod. O kubelet também relata um evento para aquele Pod, incluindo detalhes +do problema ao buscar o Secret. -O tipo `kubernetes.io/dockerconfigjson` foi projetado para armazenamento de um -conteúdo JSON serializado que obedece às mesmas regras de formato que o arquivo -`~/.docker/config.json`. Este arquivo é um formato mais moderno para o conteúdo -do arquivo `~/.dockercfg`. Ao utilizar este tipo de Secret, o conteúdo do campo -`data` deve conter uma chave `.dockerconfigjson` em que o conteúdo do arquivo -`~/.docker/config.json` é fornecido codificado no formato base64. +### Utilizando Secrets como arquivos em um Pod {#using-secrets-as-files-from-a-pod} -Um exemplo de um Secret do tipo `kubernetes.io/dockercfg`: +Para consumir um Secret em um volume em um Pod: +1. Crie um Secret ou utilize um previamente existente. Múltiplos Pods podem +referenciar o mesmo secret. +1. Modifique sua definição de Pod para adicionar um volume na lista +`.spec.volumes[]`. Escolha um nome qualquer para o seu volume e adicione um +campo `.spec.volumes[].secret.secretName` com o mesmo valor do seu objeto +Secret. +1. Adicione um ponto de montagem de volume à lista +`.spec.containers[].volumeMounts[]` de cada contêiner que requer o Secret. +Especifique `.spec.containers[].volumeMounts[].readOnly = true` e especifique o +valor do campo `.spec.containers[].volumeMounts[].mountPath` com o nome de um +diretório não utilizado onde você deseja que os Secrets apareçam. +1. Modifique sua imagem ou linha de comando de modo que o programa procure por +arquivos naquele diretório. Cada chave no campo `data` se torna um nome de +arquivo no diretório especificado em `mountPath`. + +Este é um exemplo de Pod que monta um Secret em um volume: ```yaml apiVersion: v1 -kind: Secret +kind: Pod metadata: - name: secret-dockercfg -type: kubernetes.io/dockercfg -data: - .dockercfg: | - "" + name: mypod +spec: + containers: + - name: mypod + image: redis + volumeMounts: + - name: foo + mountPath: "/etc/foo" + readOnly: true + volumes: + - name: foo + secret: + secretName: mysecret ``` -{{< note >}} -Se você não desejar fazer a codificação em formato base64, você pode utilizar o -campo `stringData` como alternativa. -{{< /note >}} +Cada Secret que você deseja utilizar deve ser referenciado na lista +`.spec.volumes`. -Ao criar estes tipos de Secret utilizando um manifesto (arquivo YAML), o servidor -da API verifica se a chave esperada existe no campo `data` e se o valor fornecido -pode ser interpretado como um conteúdo JSON válido. O servidor da API não verifica -se o conteúdo informado é realmente um arquivo de configuração do Docker. +Se existirem múltiplos contêineres em um Pod, cada um dos contêineres necessitará +seu próprio bloco `volumeMounts`, mas somente um volume na lista `.spec.volumes` +é necessário por Secret. 
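Como ilustração, o esboço hipotético a seguir mostra um Pod com dois contêineres que compartilham o mesmo Secret: cada contêiner declara o seu próprio bloco `volumeMounts`, mas há um único volume `foo` em `.spec.volumes`; os nomes dos contêineres e as imagens são apenas exemplos:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod-dois-conteineres # nome hipotético, apenas para ilustração
spec:
  containers:
  - name: app
    image: redis
    volumeMounts: # cada contêiner precisa do seu próprio bloco volumeMounts
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  - name: sidecar
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes: # um único volume em .spec.volumes é suficiente para o Secret
  - name: foo
    secret:
      secretName: mysecret
```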
-Quando você não tem um arquivo de configuração do Docker, ou quer utilizar o -comando `kubectl` para criar um Secret de registro de contêineres compatível -com o Docker, você pode executar: -```shell -kubectl create secret docker-registry secret-tiger-docker \ - --docker-username=tiger \ - --docker-password=pass113 \ - --docker-email=tiger@acme.com \ - --docker-server=my-registry.example:5000 -``` +Você pode armazenar vários arquivos em um Secret ou utilizar vários Secrets +distintos, o que for mais conveniente. -Esse comando cria um secret do tipo `kubernetes.io/dockerconfigjson`, cujo -conteúdo é semelhante ao exemplo abaixo: +#### Projeção de chaves de Secrets a caminhos específicos -```json -{ - "apiVersion": "v1", - "data": { - ".dockerconfigjson": "eyJhdXRocyI6eyJteS1yZWdpc3RyeTo1MDAwIjp7InVzZXJuYW1lIjoidGlnZXIiLCJwYXNzd29yZCI6InBhc3MxMTMiLCJlbWFpbCI6InRpZ2VyQGFjbWUuY29tIiwiYXV0aCI6ImRHbG5aWEk2Y0dGemN6RXhNdz09In19fQ==" - }, - "kind": "Secret", - "metadata": { - "creationTimestamp": "2021-07-01T07:30:59Z", - "name": "secret-tiger-docker", - "namespace": "default", - "resourceVersion": "566718", - "uid": "e15c1d7b-9071-4100-8681-f3a7a2ce89ca" - }, - "type": "kubernetes.io/dockerconfigjson" -} +Você pode também controlar os caminhos dentro do volume onde as chaves do Secret +são projetadas. Você pode utilizar o campo `.spec.volumes[].secret.items` para +mudar o caminho de destino de cada chave: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: mypod +spec: + containers: + - name: mypod + image: redis + volumeMounts: + - name: foo + mountPath: "/etc/foo" + readOnly: true + volumes: + - name: foo + secret: + secretName: mysecret + items: + - key: username + path: my-group/my-username ``` -Se você extrair o conteúdo da chave `.dockerconfigjson`, presente no campo -`data`, e decodificá-lo do formato base64, você irá obter o objeto JSON abaixo, -que é uma configuração válida do Docker criada automaticamente: +Neste caso: +* O valor da chave `username` é armazenado no arquivo +`/etc/foo/my-group/my-username` ao invés de `/etc/foo/username`. +* O valor da chave `password` não é projetado no sistema de arquivos. -```json -{ - "auths":{ - "my-registry:5000":{ - "username":"tiger", - "password":"pass113", - "email":"tiger@acme.com", - "auth":"dGlnZXI6cGFzczExMw==" - } - } -} -``` +Se `.spec.volumes[].secret.items` for utilizado, somente chaves especificadas +na lista `items` são projetadas. Para consumir todas as chaves do Secret, deve +haver um item para cada chave no campo `items`. Todas as chaves listadas precisam +existir no Secret correspondente. Caso contrário, o volume não é criado. -### Secret de autenticação básica +#### Permissões de arquivos de Secret -O tipo `kubernetes.io/basic-auth` é fornecido para armazenar credenciais -necessárias para autenticação básica. Ao utilizar este tipo de Secret, o campo -`data` do Secret deve conter as duas chaves abaixo: -- `username`: o usuário utilizado para autenticação; -- `password`: a senha ou token para autenticação. - -Ambos os valores para estas duas chaves são textos codificados em formato base64. -Você pode fornecer os valores como texto simples utilizando o campo `stringData` -na criação do Secret. +Você pode trocar os bits de permissão de uma chave avulsa de Secret. +Se nenhuma permissão for especificada, `0644` é utilizado por padrão. +Você pode também especificar uma permissão padrão para o volume inteiro de +Secret e sobrescrever esta permissão por chave, se necessário. 
-O arquivo YAML abaixo é um exemplo de configuração para um Secret de autenticação -básica: +Por exemplo, você pode especificar uma permissão padrão da seguinte maneira: ```yaml apiVersion: v1 -kind: Secret +kind: Pod metadata: - name: secret-basic-auth -type: kubernetes.io/basic-auth -stringData: - username: admin - password: t0p-Secret + name: mypod +spec: + containers: + - name: mypod + image: redis + volumeMounts: + - name: foo + mountPath: "/etc/foo" + volumes: + - name: foo + secret: + secretName: mysecret + defaultMode: 0400 ``` -O tipo de autenticação básica é fornecido unicamente por conveniência. Você pode -criar um Secret do tipo `Opaque` utilizado para autenticação básica. No entanto, -utilizar o tipo embutido de Secret auxilia a unificação dos formatos das suas -credenciais. O tipo embutido também fornece verificação de presença das chaves -requeridas pelo servidor da API. - -### Secret de autenticação SSH +Dessa forma, o Secret será montado em `/etc/foo` e todos os arquivos criados +no volume terão a permissão `0400`. -O tipo embutido `kubernetes.io/ssh-auth` é fornecido para armazenamento de dados -utilizados em autenticação SSH. Ao utilizar este tipo de Secret, você deve -especificar um par de chave-valor `ssh-privatekey` no campo `data` ou no campo -`stringData` com a credencial SSH a ser utilizada. +Note que a especificação JSON não suporta notação octal. Neste caso, utilize o +valor 256 para permissões equivalentes a 0400. Se você utilizar YAML ao invés +de JSON para o Pod, você pode utilizar notação octal para especificar permissões +de uma forma mais natural. -O YAML abaixo é um exemplo de configuração para um Secret de autenticação SSH: +Perceba que se você acessar o Pod com `kubectl exec`, você precisará seguir o +vínculo simbólico para encontrar a permissão esperada. Por exemplo, -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: secret-ssh-auth -type: kubernetes.io/ssh-auth -data: - # os dados estão abreviados neste exemplo - ssh-privatekey: | - MIIEpQIBAAKCAQEAulqb/Y ... +Verifique as permissões do arquivo de Secret no pod. ``` +kubectl exec mypod -it sh -O Secret de autenticação SSH é fornecido apenas para a conveniência do usuário. -Você pode criar um Secret do tipo `Opaque` para credentials utilizadas para -autenticação SSH. No entanto, a utilização do tipo embutido auxilia na -unificação dos formatos das suas credenciais e o servidor da API fornece -verificação dos campos requeridos em uma configuração de Secret. +cd /etc/foo +ls -l +``` -{{< caution >}} -Chaves privadas SSH não estabelecem, por si só, uma comunicação confiável -entre um cliente SSH e um servidor. Uma forma secundária de estabelecer -confiança é necessária para mitigar ataques "machine-in-the-middle", como -por exemplo um arquivo `known_hosts` adicionado a um ConfigMap. -{{< /caution >}} +O resultado é semelhante ao abaixo: +``` +total 0 +lrwxrwxrwx 1 root root 15 May 18 00:18 password -> ..data/password +lrwxrwxrwx 1 root root 15 May 18 00:18 username -> ..data/username +``` -### Secrets TLS +Siga o vínculo simbólico para encontrar a permissão correta do arquivo. +``` +cd /etc/foo/..data +ls -l +``` -O Kubernetes fornece o tipo embutido de Secret `kubernetes.io/tls` para -armazenamento de um certificado e sua chave associada que são tipicamente -utilizados para TLS. Estes dados são utilizados primariamente para a -finalização TLS do recurso Ingress, mas podem ser utilizados com outros -recursos ou diretamente por uma carga de trabalho. 
Ao utilizar este tipo de -Secret, as chaves `tls.key` e `tls.crt` devem ser informadas no campo `data` -(ou `stringData`) da configuração do Secret, embora o servidor da API não -valide o conteúdo de cada uma destas chaves. +O resultado é semelhante ao abaixo: +``` +total 8 +-r-------- 1 root root 12 May 18 00:18 password +-r-------- 1 root root 5 May 18 00:18 username +``` -O YAML a seguir tem um exemplo de configuração para um Secret TLS: +Você pode também utilizar mapeamento, como no exemplo anterior, e especificar +permissões diferentes para arquivos diferentes conforme abaixo: ```yaml apiVersion: v1 -kind: Secret +kind: Pod metadata: - name: secret-tls -type: kubernetes.io/tls -data: - # os dados estão abreviados neste exemplo - tls.crt: | - MIIC2DCCAcCgAwIBAgIBATANBgkqh ... - tls.key: | - MIIEpgIBAAKCAQEA7yn3bRHQ5FHMQ ... + name: mypod +spec: + containers: + - name: mypod + image: redis + volumeMounts: + - name: foo + mountPath: "/etc/foo" + volumes: + - name: foo + secret: + secretName: mysecret + items: + - key: username + path: my-group/my-username + mode: 0777 ``` -O tipo TLS é fornecido para a conveniência do usuário. Você pode criar um -Secret do tipo `Opaque` para credenciais utilizadas para o servidor e/ou -cliente TLS. No entanto, a utilização do tipo embutido auxilia a manter a -consistência dos formatos de Secret no seu projeto; o servidor da API -valida se os campos requeridos estão presentes na configuração do Secret. - -Ao criar um Secret TLS utilizando a ferramenta de linha de comando `kubectl`, -você pode utilizar o subcomando `tls` conforme demonstrado no exemplo abaixo: -```shell -kubectl create secret tls my-tls-secret \ - --cert=path/to/cert/file \ - --key=path/to/key/file -``` +Neste caso, o arquivo resultante em `/etc/foo/my-group/my-username` terá as +permissões `0777`. Se você utilizar JSON, devido às limitações do formato, +você precisará informar as permissões em base decimal, ou o valor `511` neste +exemplo. -O par de chaves pública/privada deve ser criado separadamente. O certificado -de chave pública a ser utilizado no argumento `--cert` deve ser codificado em -formato .PEM (formato DER codificado em texto base64) e deve corresponder à -chave privada fornecida no argumento `--key`. -A chave privada deve estar no formato de chave privada PEM não-encriptado. Em -ambos os casos, as linhas inicial e final do formato PEM (por exemplo, -`--------BEGIN CERTIFICATE-----` e `-------END CERTIFICATE----` para um -certificado) *não* são incluídas. +Note que os valores de permissões podem ser exibidos em formato decimal se você +ler essa informação posteriormente. -### Secret de token de autoinicialização {#bootstrap-token-secrets} +#### Consumindo valores de Secrets em volumes -Um Secret de token de autoinicialização pode ser criado especificando o tipo de -um Secret explicitamente com o valor `bootstrap.kubernetes.io/token`. Este tipo -de Secret é projetado para tokens utilizados durante o processo de inicialização -de nós. Este tipo de Secret armazena tokens utilizados para assinar ConfigMaps -conhecidos. +Dentro do contêiner que monta um volume de Secret, as chaves deste Secret +aparecem como arquivos e os valores dos Secrets são decodificados do formato +base64 e armazenados dentro destes arquivos. 
Ao executar comandos dentro do +contêiner do exemplo anterior, obteremos os seguintes resultados: -Um Secret de token de autoinicialização é normalmente criado no namespace -`kube-system` e nomeado na forma `bootstrap-token-`, onde -`` é um texto com 6 caracteres contendo a identificação do token. +```shell +ls /etc/foo +``` -No formato de manifesto do Kubernetes, um Secret de token de autoinicialização -se assemelha ao exemplo abaixo: -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: bootstrap-token-5emitj - namespace: kube-system -type: bootstrap.kubernetes.io/token -data: - auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4= - expiration: MjAyMC0wOS0xM1QwNDozOToxMFo= - token-id: NWVtaXRq - token-secret: a3E0Z2lodnN6emduMXAwcg== - usage-bootstrap-authentication: dHJ1ZQ== - usage-bootstrap-signing: dHJ1ZQ== +O resultado é semelhante a: +``` +username +password ``` -Um Secret do tipo token de autoinicialização possui as seguintes chaves no campo -`data`: -- `token-id`: Uma string com 6 caracteres aleatórios como identificador do - token. Requerido. -- `token-secret`: Uma string de 16 caracteres aleatórios como o conteúdo do - token. Requerido. -- `description`: Uma string contendo uma descrição do propósito para o qual este - token é utilizado. Opcional. -- `expiration`: Um horário absoluto UTC no formato RFC3339 especificando quando - o token deve expirar. Opcional. -- `usage-bootstrap-`: Um conjunto de flags booleanas indicando outros - usos para este token de autoinicialização. -- `auth-extra-groups`: Uma lista separada por vírgulas de nomes de grupos que - serão autenticados adicionalmente, além do grupo `system:bootstrappers`. +```shell +cat /etc/foo/username +``` -O YAML acima pode parecer confuso, já que os valores estão todos codificados em -formato base64. Você pode criar o mesmo Secret utilizando este YAML: -```yaml -apiVersion: v1 -kind: Secret -metadata: - # Observe como o Secret é nomeado - name: bootstrap-token-5emitj - # Um Secret de token de inicialização geralmente fica armazenado no namespace - # kube-system - namespace: kube-system -type: bootstrap.kubernetes.io/token -stringData: - auth-extra-groups: "system:bootstrappers:kubeadm:default-node-token" - expiration: "2020-09-13T04:39:10Z" - # Esta identificação de token é utilizada no nome - token-id: "5emitj" - token-secret: "kq4gihvszzgn1p0r" - # Este token pode ser utilizado para autenticação. - usage-bootstrap-authentication: "true" - # e pode ser utilizado para assinaturas - usage-bootstrap-signing: "true" +O resultado é semelhante a: +``` +admin ``` -## Criando um Secret +```shell +cat /etc/foo/password +``` -Há várias formas diferentes de criar um Secret: -- [criar um Secret utilizando o comando `kubectl`](/pt-br/docs/tasks/configmap-secret/managing-secret-using-kubectl/) -- [criar um Secret a partir de um arquivo de configuração](/pt-br/docs/tasks/configmap-secret/managing-secret-using-config-file/) -- [criar um Secret utilizando a ferramenta kustomize](/pt-br/docs/tasks/configmap-secret/managing-secret-using-kustomize/) +O resultado é semelhante a: +``` +1f2d1e2e67df +``` -## Editando um Secret +A aplicação rodando dentro do contêiner é responsável pela leitura dos Secrets +dentro dos arquivos. 
-Um Secret existente no cluster pode ser editado com o seguinte comando: -```shell -kubectl edit secrets mysecret -``` - -Este comando abrirá o editor padrão configurado e permitirá a modificação dos -valores codificados em base64 no campo `data`: -```yaml -# Please edit the object below. Lines beginning with a '#' will be ignored, -# and an empty file will abort the edit. If an error occurs while saving this file will be -# reopened with the relevant failures. -# -apiVersion: v1 -data: - username: YWRtaW4= - password: MWYyZDFlMmU2N2Rm -kind: Secret -metadata: - annotations: - kubectl.kubernetes.io/last-applied-configuration: { ... } - creationTimestamp: 2016-01-22T18:41:56Z - name: mysecret - namespace: default - resourceVersion: "164619" - uid: cfee02d6-c137-11e5-8d73-42010af00002 -type: Opaque -``` - -## Utilizando Secrets - -Secrets podem ser montados como volumes de dados ou expostos como -{{< glossary_tooltip text="variáveis de ambiente" term_id="container-env-variables" >}} -para serem utilizados num container de um Pod. Secrets também podem ser -utilizados por outras partes do sistema, sem serem diretamente expostos ao Pod. -Por exemplo, Secrets podem conter credenciais que outras partes do sistema devem -utilizar para interagir com sistemas externos no lugar do usuário. - -### Utilizando Secrets como arquivos em um Pod {#using-secrets-as-files-from-a-pod} - -Para consumir um Secret em um volume em um Pod: -1. Crie um Secret ou utilize um previamente existente. Múltiplos Pods podem -referenciar o mesmo secret. -1. Modifique sua definição de Pod para adicionar um volume na lista -`.spec.volumes[]`. Escolha um nome qualquer para o seu volume e adicione um -campo `.spec.volumes[].secret.secretName` com o mesmo valor do seu objeto -Secret. -1. Adicione um ponto de montagem de volume à lista -`.spec.containers[].volumeMounts[]` de cada contêiner que requer o Secret. -Especifique `.spec.containers[].volumeMounts[].readOnly = true` e especifique o -valor do campo `.spec.containers[].volumeMounts[].mountPath` com o nome de um -diretório não utilizado onde você deseja que os Secrets apareçam. -1. Modifique sua imagem ou linha de comando de modo que o programa procure por -arquivos naquele diretório. Cada chave no campo `data` se torna um nome de -arquivo no diretório especificado em `mountPath`. - -Este é um exemplo de Pod que monta um Secret em um volume: -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: mypod -spec: - containers: - - name: mypod - image: redis - volumeMounts: - - name: foo - mountPath: "/etc/foo" - readOnly: true - volumes: - - name: foo - secret: - secretName: mysecret -``` - -Cada Secret que você deseja utilizar deve ser referenciado na lista -`.spec.volumes`. - -Se existirem múltiplos contêineres em um Pod, cada um dos contêineres necessitará -seu próprio bloco `volumeMounts`, mas somente um volume na lista `.spec.volumes` -é necessário por Secret. - -Você pode armazenar vários arquivos em um Secret ou utilizar vários Secrets -distintos, o que for mais conveniente. - -#### Projeção de chaves de Secrets a caminhos específicos - -Você pode também controlar os caminhos dentro do volume onde as chaves do Secret -são projetadas. 
Você pode utilizar o campo `.spec.volumes[].secret.items` para -mudar o caminho de destino de cada chave: - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: mypod -spec: - containers: - - name: mypod - image: redis - volumeMounts: - - name: foo - mountPath: "/etc/foo" - readOnly: true - volumes: - - name: foo - secret: - secretName: mysecret - items: - - key: username - path: my-group/my-username -``` - -Neste caso: -* O valor da chave `username` é armazenado no arquivo -`/etc/foo/my-group/my-username` ao invés de `/etc/foo/username`. -* O valor da chave `password` não é projetado no sistema de arquivos. - -Se `.spec.volumes[].secret.items` for utilizado, somente chaves especificadas -na lista `items` são projetadas. Para consumir todas as chaves do Secret, deve -haver um item para cada chave no campo `items`. Todas as chaves listadas precisam -existir no Secret correspondente. Caso contrário, o volume não é criado. - -#### Permissões de arquivos de Secret - -Você pode trocar os bits de permissão de uma chave avulsa de Secret. -Se nenhuma permissão for especificada, `0644` é utilizado por padrão. -Você pode também especificar uma permissão padrão para o volume inteiro de -Secret e sobrescrever esta permissão por chave, se necessário. - -Por exemplo, você pode especificar uma permissão padrão da seguinte maneira: -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: mypod -spec: - containers: - - name: mypod - image: redis - volumeMounts: - - name: foo - mountPath: "/etc/foo" - volumes: - - name: foo - secret: - secretName: mysecret - defaultMode: 0400 -``` - -Dessa forma, o Secret será montado em `/etc/foo` e todos os arquivos criados -no volume terão a permissão `0400`. - -Note que a especificação JSON não suporta notação octal. Neste caso, utilize o -valor 256 para permissões equivalentes a 0400. Se você utilizar YAML ao invés -de JSON para o Pod, você pode utilizar notação octal para especificar permissões -de uma forma mais natural. - -Perceba que se você acessar o Pod com `kubectl exec`, você precisará seguir o -vínculo simbólico para encontrar a permissão esperada. Por exemplo, - -Verifique as permissões do arquivo de Secret no pod. -``` -kubectl exec mypod -it sh - -cd /etc/foo -ls -l -``` - -O resultado é semelhante ao abaixo: -``` -total 0 -lrwxrwxrwx 1 root root 15 May 18 00:18 password -> ..data/password -lrwxrwxrwx 1 root root 15 May 18 00:18 username -> ..data/username -``` - -Siga o vínculo simbólico para encontrar a permissão correta do arquivo. -``` -cd /etc/foo/..data -ls -l -``` - -O resultado é semelhante ao abaixo: -``` -total 8 --r-------- 1 root root 12 May 18 00:18 password --r-------- 1 root root 5 May 18 00:18 username -``` - -Você pode também utilizar mapeamento, como no exemplo anterior, e especificar -permissões diferentes para arquivos diferentes conforme abaixo: -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: mypod -spec: - containers: - - name: mypod - image: redis - volumeMounts: - - name: foo - mountPath: "/etc/foo" - volumes: - - name: foo - secret: - secretName: mysecret - items: - - key: username - path: my-group/my-username - mode: 0777 -``` - -Neste caso, o arquivo resultante em `/etc/foo/my-group/my-username` terá as -permissões `0777`. Se você utilizar JSON, devido às limitações do formato, -você precisará informar as permissões em base decimal, ou o valor `511` neste -exemplo. - -Note que os valores de permissões podem ser exibidos em formato decimal se você -ler essa informação posteriormente. 
- -#### Consumindo valores de Secrets em volumes - -Dentro do contêiner que monta um volume de Secret, as chaves deste Secret -aparecem como arquivos e os valores dos Secrets são decodificados do formato -base64 e armazenados dentro destes arquivos. Ao executar comandos dentro do -contêiner do exemplo anterior, obteremos os seguintes resultados: - -```shell -ls /etc/foo -``` - -O resultado é semelhante a: -``` -username -password -``` - -```shell -cat /etc/foo/username -``` - -O resultado é semelhante a: -``` -admin -``` - -```shell -cat /etc/foo/password -``` - -O resultado é semelhante a: -``` -1f2d1e2e67df -``` - -A aplicação rodando dentro do contêiner é responsável pela leitura dos Secrets -dentro dos arquivos. - -#### Secrets montados são atualizados automaticamente +#### Secrets montados são atualizados automaticamente Quando um Secret que está sendo consumido a partir de um volume é atualizado, as chaves projetadas são atualizadas após algum tempo também. O kubelet verifica @@ -815,6 +519,373 @@ seja reiniciado. Existem ferramentas de terceiros que oferecem reinicializações automáticas quando Secrets são atualizados. +## Tipos de Secrets {#secret-types} + +Ao criar um Secret, você pode especificar o seu tipo utilizando o campo `type` +do objeto Secret, ou algumas opções de linha de comando equivalentes no comando +`kubectl`, quando disponíveis. O campo `type` de um Secret é utilizado para +facilitar a manipulação programática de diferentes tipos de dados confidenciais. + +O Kubernetes oferece vários tipos embutidos de Secret para casos de uso comuns. +Estes tipos variam em termos de validações efetuadas e limitações que o +Kubernetes impõe neles. + +| Tipo embutido | Caso de uso | +|----------------------------------------|----------------------------------------------------| +| `Opaque` | dados arbitrários definidos pelo usuário | +| `kubernetes.io/service-account-token` | token de service account (conta de serviço) | +| `kubernetes.io/dockercfg` | arquivo `~/.dockercfg` serializado | +| `kubernetes.io/dockerconfigjson` | arquivo `~/.docker/config.json` serializado | +| `kubernetes.io/basic-auth` | credenciais para autenticação básica (basic auth) | +| `kubernetes.io/ssh-auth` | credenciais para autenticação SSH | +| `kubernetes.io/tls` | dados para um cliente ou servidor TLS | +| `bootstrap.kubernetes.io/token` | dados de token de autoinicialização | + +Você pode definir e utilizar seu próprio tipo de Secret definindo o valor do +campo `type` como uma string não-nula em um objeto Secret. Uma string em branco +é tratada como o tipo `Opaque`. O Kubernetes não restringe nomes de tipos. No +entanto, quando tipos embutidos são utilizados, você precisa atender a todos os +requisitos daquele tipo. + +### Secrets tipo Opaque + +`Opaque` é o tipo predefinido de Secret quando o campo `type` não é informado +em um arquivo de configuração. Quando um Secret é criado usando o comando +`kubectl`, você deve usar o subcomando `generic` para indicar que um Secret é +do tipo `Opaque`. Por exemplo, o comando a seguir cria um Secret vazio do tipo +`Opaque`: +```shell +kubectl create secret generic empty-secret +kubectl get secret empty-secret +``` + +O resultado será semelhante ao abaixo: + +``` +NAME TYPE DATA AGE +empty-secret Opaque 0 2m6s +``` + +A coluna `DATA` demonstra a quantidade de dados armazenados no Secret. Neste +caso, `0` significa que este objeto Secret está vazio. 
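A título de comparação com o Secret vazio acima, segue um esboço hipotético de um Secret `Opaque` não vazio definido por manifesto, utilizando `stringData` com valores meramente ilustrativos:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: exemplo-opaque # nome hipotético, apenas para ilustração
type: Opaque
stringData:
  # valores meramente ilustrativos; serão armazenados codificados em base64 no campo data
  api-url: "https://exemplo.local/api"
  api-key: "nao-use-este-valor-em-producao"
```

Para um Secret como esse, a coluna `DATA` exibiria `2`, refletindo o número de chaves armazenadas.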
+ +### Secrets de token de service account (conta de serviço) + +Secrets do tipo `kubernetes.io/service-account-token` são utilizados para +armazenar um token que identifica uma service account (conta de serviço). Ao +utilizar este tipo de Secret, você deve garantir que a anotação +`kubernetes.io/service-account.name` contém um nome de uma service account +existente. Um controlador do Kubernetes preenche outros campos, como por exemplo +a anotação `kubernetes.io/service-account.uid` e a chave `token` no campo `data` +com o conteúdo do token. + +O exemplo de configuração abaixo declara um Secret de token de service account: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: secret-sa-sample + annotations: + kubernetes.io/service-account-name: "sa-name" +type: kubernetes.io/service-account-token +data: + # Você pode incluir pares chave-valor adicionais, da mesma forma que faria com + # Secrets do tipo Opaque + extra: YmFyCg== +``` + +Ao criar um {{< glossary_tooltip text="Pod" term_id="pod" >}}, o Kubernetes +automaticamente cria um Secret de service account e automaticamente atualiza o +seu Pod para utilizar este Secret. O Secret de token de service account contém +credenciais para acessar a API. + +A criação automática e o uso de credenciais de API podem ser desativados se +desejado. Porém, se tudo que você necessita é poder acessar o servidor da API +de forma segura, este é o processo recomendado. + +Veja a documentação de +[ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/) +para mais informações sobre o funcionamento de service accounts. Você pode +verificar também os campos `automountServiceAccountToken` e `serviceAccountName` +do [`Pod`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core) +para mais informações sobre como referenciar service accounts em Pods. + +### Secrets de configuração do Docker + +Você pode utilizar um dos tipos abaixo para criar um Secret que armazena +credenciais para accesso a um registro de contêineres compatível com Docker +para busca de imagens: +- `kubernetes.io/dockercfg` +- `kubernetes.io/dockerconfigjson` + +O tipo `kubernetes.io/dockercfg` é reservado para armazenamento de um arquivo +`~/.dockercfg` serializado. Este arquivo é o formato legado para configuração +do utilitário de linha de comando do Docker. Ao utilizar este tipo de Secret, +é preciso garantir que o campo `data` contém uma chave `.dockercfg` cujo valor +é o conteúdo do arquivo `~/.dockercfg` codificado no formato base64. + +O tipo `kubernetes.io/dockerconfigjson` foi projetado para armazenamento de um +conteúdo JSON serializado que obedece às mesmas regras de formato que o arquivo +`~/.docker/config.json`. Este arquivo é um formato mais moderno para o conteúdo +do arquivo `~/.dockercfg`. Ao utilizar este tipo de Secret, o conteúdo do campo +`data` deve conter uma chave `.dockerconfigjson` em que o conteúdo do arquivo +`~/.docker/config.json` é fornecido codificado no formato base64. + +Um exemplo de um Secret do tipo `kubernetes.io/dockercfg`: +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: secret-dockercfg +type: kubernetes.io/dockercfg +data: + .dockercfg: | + "" +``` + +{{< note >}} +Se você não desejar fazer a codificação em formato base64, você pode utilizar o +campo `stringData` como alternativa. 
+{{< /note >}} + +Ao criar estes tipos de Secret utilizando um manifesto (arquivo YAML), o servidor +da API verifica se a chave esperada existe no campo `data` e se o valor fornecido +pode ser interpretado como um conteúdo JSON válido. O servidor da API não verifica +se o conteúdo informado é realmente um arquivo de configuração do Docker. + +Quando você não tem um arquivo de configuração do Docker, ou quer utilizar o +comando `kubectl` para criar um Secret de registro de contêineres compatível +com o Docker, você pode executar: +```shell +kubectl create secret docker-registry secret-tiger-docker \ + --docker-username=tiger \ + --docker-password=pass113 \ + --docker-email=tiger@acme.com \ + --docker-server=my-registry.example:5000 +``` + +Esse comando cria um secret do tipo `kubernetes.io/dockerconfigjson`, cujo +conteúdo é semelhante ao exemplo abaixo: + +```json +{ + "apiVersion": "v1", + "data": { + ".dockerconfigjson": "eyJhdXRocyI6eyJteS1yZWdpc3RyeTo1MDAwIjp7InVzZXJuYW1lIjoidGlnZXIiLCJwYXNzd29yZCI6InBhc3MxMTMiLCJlbWFpbCI6InRpZ2VyQGFjbWUuY29tIiwiYXV0aCI6ImRHbG5aWEk2Y0dGemN6RXhNdz09In19fQ==" + }, + "kind": "Secret", + "metadata": { + "creationTimestamp": "2021-07-01T07:30:59Z", + "name": "secret-tiger-docker", + "namespace": "default", + "resourceVersion": "566718", + "uid": "e15c1d7b-9071-4100-8681-f3a7a2ce89ca" + }, + "type": "kubernetes.io/dockerconfigjson" +} +``` + +Se você extrair o conteúdo da chave `.dockerconfigjson`, presente no campo +`data`, e decodificá-lo do formato base64, você irá obter o objeto JSON abaixo, +que é uma configuração válida do Docker criada automaticamente: + +```json +{ + "auths":{ + "my-registry:5000":{ + "username":"tiger", + "password":"pass113", + "email":"tiger@acme.com", + "auth":"dGlnZXI6cGFzczExMw==" + } + } +} +``` + +### Secret de autenticação básica + +O tipo `kubernetes.io/basic-auth` é fornecido para armazenar credenciais +necessárias para autenticação básica. Ao utilizar este tipo de Secret, o campo +`data` do Secret deve conter as duas chaves abaixo: +- `username`: o usuário utilizado para autenticação; +- `password`: a senha ou token para autenticação. + +Ambos os valores para estas duas chaves são textos codificados em formato base64. +Você pode fornecer os valores como texto simples utilizando o campo `stringData` +na criação do Secret. + +O arquivo YAML abaixo é um exemplo de configuração para um Secret de autenticação +básica: +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: secret-basic-auth +type: kubernetes.io/basic-auth +stringData: + username: admin + password: t0p-Secret +``` + +O tipo de autenticação básica é fornecido unicamente por conveniência. Você pode +criar um Secret do tipo `Opaque` utilizado para autenticação básica. No entanto, +utilizar o tipo embutido de Secret auxilia a unificação dos formatos das suas +credenciais. O tipo embutido também fornece verificação de presença das chaves +requeridas pelo servidor da API. + +### Secret de autenticação SSH + +O tipo embutido `kubernetes.io/ssh-auth` é fornecido para armazenamento de dados +utilizados em autenticação SSH. Ao utilizar este tipo de Secret, você deve +especificar um par de chave-valor `ssh-privatekey` no campo `data` ou no campo +`stringData` com a credencial SSH a ser utilizada. 
+ +O YAML abaixo é um exemplo de configuração para um Secret de autenticação SSH: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: secret-ssh-auth +type: kubernetes.io/ssh-auth +data: + # os dados estão abreviados neste exemplo + ssh-privatekey: | + MIIEpQIBAAKCAQEAulqb/Y ... +``` + +O Secret de autenticação SSH é fornecido apenas para a conveniência do usuário. +Você pode criar um Secret do tipo `Opaque` para credentials utilizadas para +autenticação SSH. No entanto, a utilização do tipo embutido auxilia na +unificação dos formatos das suas credenciais e o servidor da API fornece +verificação dos campos requeridos em uma configuração de Secret. + +{{< caution >}} +Chaves privadas SSH não estabelecem, por si só, uma comunicação confiável +entre um cliente SSH e um servidor. Uma forma secundária de estabelecer +confiança é necessária para mitigar ataques "machine-in-the-middle", como +por exemplo um arquivo `known_hosts` adicionado a um ConfigMap. +{{< /caution >}} + +### Secrets TLS + +O Kubernetes fornece o tipo embutido de Secret `kubernetes.io/tls` para +armazenamento de um certificado e sua chave associada que são tipicamente +utilizados para TLS. Estes dados são utilizados primariamente para a +finalização TLS do recurso Ingress, mas podem ser utilizados com outros +recursos ou diretamente por uma carga de trabalho. Ao utilizar este tipo de +Secret, as chaves `tls.key` e `tls.crt` devem ser informadas no campo `data` +(ou `stringData`) da configuração do Secret, embora o servidor da API não +valide o conteúdo de cada uma destas chaves. + +O YAML a seguir tem um exemplo de configuração para um Secret TLS: +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: secret-tls +type: kubernetes.io/tls +data: + # os dados estão abreviados neste exemplo + tls.crt: | + MIIC2DCCAcCgAwIBAgIBATANBgkqh ... + tls.key: | + MIIEpgIBAAKCAQEA7yn3bRHQ5FHMQ ... +``` + +O tipo TLS é fornecido para a conveniência do usuário. Você pode criar um +Secret do tipo `Opaque` para credenciais utilizadas para o servidor e/ou +cliente TLS. No entanto, a utilização do tipo embutido auxilia a manter a +consistência dos formatos de Secret no seu projeto; o servidor da API +valida se os campos requeridos estão presentes na configuração do Secret. + +Ao criar um Secret TLS utilizando a ferramenta de linha de comando `kubectl`, +você pode utilizar o subcomando `tls` conforme demonstrado no exemplo abaixo: +```shell +kubectl create secret tls my-tls-secret \ + --cert=path/to/cert/file \ + --key=path/to/key/file +``` + +O par de chaves pública/privada deve ser criado separadamente. O certificado +de chave pública a ser utilizado no argumento `--cert` deve ser codificado em +formato .PEM (formato DER codificado em texto base64) e deve corresponder à +chave privada fornecida no argumento `--key`. +A chave privada deve estar no formato de chave privada PEM não-encriptado. Em +ambos os casos, as linhas inicial e final do formato PEM (por exemplo, +`--------BEGIN CERTIFICATE-----` e `-------END CERTIFICATE----` para um +certificado) *não* são incluídas. + +### Secret de token de autoinicialização {#bootstrap-token-secrets} + +Um Secret de token de autoinicialização pode ser criado especificando o tipo de +um Secret explicitamente com o valor `bootstrap.kubernetes.io/token`. Este tipo +de Secret é projetado para tokens utilizados durante o processo de inicialização +de nós. Este tipo de Secret armazena tokens utilizados para assinar ConfigMaps +conhecidos. 
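Em clusters criados com o kubeadm, por exemplo, tokens deste tipo normalmente são gerenciados pela linha de comando, e não por manifestos escritos manualmente. O esboço abaixo assume que o kubeadm está instalado e que você possui acesso administrativo ao cluster:

```shell
# Cria um novo token de autoinicialização (e o Secret correspondente no
# namespace kube-system), com expiração e descrição definidas.
kubeadm token create --ttl 24h --description "token para ingresso de novos nós"

# Lista os tokens de autoinicialização existentes no cluster.
kubeadm token list
```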
+ +Um Secret de token de autoinicialização é normalmente criado no namespace +`kube-system` e nomeado na forma `bootstrap-token-`, onde +`` é um texto com 6 caracteres contendo a identificação do token. + +No formato de manifesto do Kubernetes, um Secret de token de autoinicialização +se assemelha ao exemplo abaixo: +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: bootstrap-token-5emitj + namespace: kube-system +type: bootstrap.kubernetes.io/token +data: + auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4= + expiration: MjAyMC0wOS0xM1QwNDozOToxMFo= + token-id: NWVtaXRq + token-secret: a3E0Z2lodnN6emduMXAwcg== + usage-bootstrap-authentication: dHJ1ZQ== + usage-bootstrap-signing: dHJ1ZQ== +``` + +Um Secret do tipo token de autoinicialização possui as seguintes chaves no campo +`data`: +- `token-id`: Uma string com 6 caracteres aleatórios como identificador do + token. Requerido. +- `token-secret`: Uma string de 16 caracteres aleatórios como o conteúdo do + token. Requerido. +- `description`: Uma string contendo uma descrição do propósito para o qual este + token é utilizado. Opcional. +- `expiration`: Um horário absoluto UTC no formato RFC3339 especificando quando + o token deve expirar. Opcional. +- `usage-bootstrap-`: Um conjunto de flags booleanas indicando outros + usos para este token de autoinicialização. +- `auth-extra-groups`: Uma lista separada por vírgulas de nomes de grupos que + serão autenticados adicionalmente, além do grupo `system:bootstrappers`. + +O YAML acima pode parecer confuso, já que os valores estão todos codificados em +formato base64. Você pode criar o mesmo Secret utilizando este YAML: +```yaml +apiVersion: v1 +kind: Secret +metadata: + # Observe como o Secret é nomeado + name: bootstrap-token-5emitj + # Um Secret de token de inicialização geralmente fica armazenado no namespace + # kube-system + namespace: kube-system +type: bootstrap.kubernetes.io/token +stringData: + auth-extra-groups: "system:bootstrappers:kubeadm:default-node-token" + expiration: "2020-09-13T04:39:10Z" + # Esta identificação de token é utilizada no nome + token-id: "5emitj" + token-secret: "kq4gihvszzgn1p0r" + # Este token pode ser utilizado para autenticação. + usage-bootstrap-authentication: "true" + # e pode ser utilizado para assinaturas + usage-bootstrap-signing: "true" +``` + ## Secrets imutáveis {#secret-immutable} {{< feature-state for_k8s_version="v1.21" state="stable" >}} From c8e83be64cb88e2e249da43c1b1226cde0e19fab Mon Sep 17 00:00:00 2001 From: Mauren Berti Date: Mon, 23 May 2022 14:01:17 -0400 Subject: [PATCH 027/292] Synchronize pt-BR translation to en version. --- .../docs/concepts/configuration/secret.md | 1629 ++++++++--------- 1 file changed, 812 insertions(+), 817 deletions(-) diff --git a/content/pt-br/docs/concepts/configuration/secret.md b/content/pt-br/docs/concepts/configuration/secret.md index e7ec5d7d3941e..8b7b591ec3807 100644 --- a/content/pt-br/docs/concepts/configuration/secret.md +++ b/content/pt-br/docs/concepts/configuration/secret.md @@ -125,16 +125,16 @@ como valores. As chaves dos campos `data` e `stringData` devem consistir de caracteres alfanuméricos, `-`, `_`, ou `.`. Todos os pares chave-valor no campo `stringData` são internamente combinados com os dados do campo `data`. Se uma chave aparece -em ambos os campos, o valor informado no campo `stringData` toma a precedência. +em ambos os campos, o valor informado no campo `stringData` tem a precedência. 
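Como ilustração do comportamento descrito acima, considere o rascunho de manifesto a seguir (nome e valores hipotéticos), em que a chave `username` aparece tanto em `data` quanto em `stringData`:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: exemplo-precedencia
type: Opaque
data:
  # "administrador" codificado em base64
  username: YWRtaW5pc3RyYWRvcg==
stringData:
  # valor em texto simples; tem precedência sobre o valor acima
  username: admin
```

Ao consultar o Secret criado a partir deste manifesto, o campo `data` conterá apenas `YWRtaW4=`, ou seja, `admin` codificado em base64.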
#### Limite de tamanho {#restriction-data-size} Secrets individuais são limitados a 1MiB em tamanho. Esta limitação tem por -objetivo desencorajar a criação de Secrets muito grandes que possam exaurir o -servidor da API e a memória do kubelet. No entanto, a criação de vários Secrets -pequenos pode também exaurir a memória. Você pode utilizar uma -[quota de recurso](/docs/concepts/policy/resource-quotas/) a fim de limitar o -número de Secrets (ou outros recursos) em um namespace. +objetivo desencorajar a criação de Secrets muito grandes que possam exaurir a +memória do servidor da API e do kubelet. No entanto, a criação de vários Secrets +pequenos também pode exaurir a memória. Você pode utilizar uma +[cota de recurso](/pt-br/docs/concepts/policy/resource-quotas/) a fim de limitar +o número de Secrets (ou outros recursos) em um namespace. ### Editando um Secret @@ -188,16 +188,34 @@ utilizar para interagir com sistemas externos no lugar do usuário. Secrets montados como volumes são verificados para garantir que o nome referenciado realmente é um objeto do tipo Secret. Portanto, um Secret deve ser -criado antes de quaisquer Pods que o referenciam. +criado antes de quaisquer Pods que dependem deste Secret. Se um Secret não puder ser encontrado (porque não existe, ou devido a um problema de conectividade com o servidor da API) o kubelet tenta periodicamente reiniciar aquele Pod. O kubelet também relata um evento para aquele Pod, incluindo detalhes do problema ao buscar o Secret. +#### Secrets Opcionais {#restriction-secret-must-exist} + +Quando você define uma variável de ambiente em um contêiner baseada em um Secret, +você pode especificar que o Secret em questão será _opcional_. O padrão é o +Secret ser requerido. + +Nenhum dos contêineres de um Pod irão inicializar até que todos os Secrets +requeridos estejam disponíveis. + +Se um Pod referencia uma chave específica em um Secret e o Secret existe, mas +não possui a chave com o nome referenciado, o Pod falha durante a inicialização. + ### Utilizando Secrets como arquivos em um Pod {#using-secrets-as-files-from-a-pod} -Para consumir um Secret em um volume em um Pod: +Se você deseja acessar dados de um Secret em um Pod, uma das formas de consumir +esta informação é fazer com que o Kubernetes deixe o valor daquele Secret +disponível como um arquivo dentro do sistema de arquivos de um ou mais dos +contêineres daquele Pod. + +Para configurar isso: + 1. Crie um Secret ou utilize um previamente existente. Múltiplos Pods podem referenciar o mesmo secret. 1. Modifique sua definição de Pod para adicionar um volume na lista @@ -213,7 +231,7 @@ diretório não utilizado onde você deseja que os Secrets apareçam. arquivos naquele diretório. Cada chave no campo `data` se torna um nome de arquivo no diretório especificado em `mountPath`. -Este é um exemplo de Pod que monta um Secret em um volume: +Este é um exemplo de Pod que monta um Secret de nome `mysecret` em um volume: ```yaml apiVersion: v1 kind: Pod @@ -230,20 +248,38 @@ spec: volumes: - name: foo secret: - secretName: mysecret + secretName: mysecret # configuração padrão; "mysecret" precisa existir ``` Cada Secret que você deseja utilizar deve ser referenciado na lista `.spec.volumes`. -Se existirem múltiplos contêineres em um Pod, cada um dos contêineres necessitará -seu próprio bloco `volumeMounts`, mas somente um volume na lista `.spec.volumes` -é necessário por Secret. 
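Por exemplo, um esboço hipotético de Pod que consome dois Secrets distintos (supondo que `mysecret` e `outro-secret` existam) declara uma entrada em `.spec.volumes` e um `volumeMounts` para cada um deles:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: redis
      volumeMounts:
        - name: foo
          mountPath: "/etc/foo"
          readOnly: true
        - name: bar
          mountPath: "/etc/bar"
          readOnly: true
  volumes:
    - name: foo
      secret:
        secretName: mysecret
    - name: bar
      secret:
        # nome hipotético; o Secret precisa existir no mesmo namespace
        secretName: outro-secret
```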
+Se existirem múltiplos contêineres em um Pod, cada um dos contêineres +necessitará seu próprio bloco `volumeMounts`, mas somente um volume na lista +`.spec.volumes` é necessário por Secret. -Você pode armazenar vários arquivos em um Secret ou utilizar vários Secrets -distintos, o que for mais conveniente. +{{< note >}} +Versões do Kubernetes anteriores a v1.22 criavam automaticamente credenciais +para acesso à API do Kubernetes. Este mecanismo antigo era baseado na criação de +Secrets com tokens que podiam então ser montados em Pods em execução. +Em versões mais recentes, incluindo o Kubernetes v{{< skew currentVersion >}}, +credenciais para acesso à API são obtidas diretamente através da API +[TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) +e são montadas em Pods utilizando um +[volume projetado](/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-token-volume). +Os tokens obtidos através deste método possuem tempo de vida limitado e são +automaticamente invalidados quando o Pod em que estão montados é removido. + +Você ainda pode +[criar manualmente](/docs/tasks/configure-pod-container/configure-service-account/#manually-create-a-service-account-api-token) +um Secret de token de service account se você precisa de um token que não expire, +por exemplo. No entanto, o uso do subrecurso +[TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) +é recomendado para obtenção de um token para acesso à API ao invés do uso de +Secrets de token de service account. +{{< /note >}} -#### Projeção de chaves de Secrets a caminhos específicos +#### Projeção de chaves de Secrets em caminhos específicos Você pode também controlar os caminhos dentro do volume onde as chaves do Secret são projetadas. Você pode utilizar o campo `.spec.volumes[].secret.items` para @@ -272,18 +308,21 @@ spec: ``` Neste caso: + * O valor da chave `username` é armazenado no arquivo -`/etc/foo/my-group/my-username` ao invés de `/etc/foo/username`. + `/etc/foo/my-group/my-username` ao invés de `/etc/foo/username`. * O valor da chave `password` não é projetado no sistema de arquivos. Se `.spec.volumes[].secret.items` for utilizado, somente chaves especificadas na lista `items` são projetadas. Para consumir todas as chaves do Secret, deve -haver um item para cada chave no campo `items`. Todas as chaves listadas precisam +haver um item para cada chave no campo `items`. + +Se você listar as chaves explicitamente, então todas as chaves listadas precisam existir no Secret correspondente. Caso contrário, o volume não é criado. #### Permissões de arquivos de Secret -Você pode trocar os bits de permissão de uma chave avulsa de Secret. +Você pode trocar os bits de permissão POSIX de uma chave avulsa de Secret. Se nenhuma permissão for especificada, `0644` é utilizado por padrão. Você pode também especificar uma permissão padrão para o volume inteiro de Secret e sobrescrever esta permissão por chave, se necessário. @@ -311,86 +350,30 @@ spec: Dessa forma, o Secret será montado em `/etc/foo` e todos os arquivos criados no volume terão a permissão `0400`. -Note que a especificação JSON não suporta notação octal. Neste caso, utilize o -valor 256 para permissões equivalentes a 0400. Se você utilizar YAML ao invés -de JSON para o Pod, você pode utilizar notação octal para especificar permissões -de uma forma mais natural. 
- -Perceba que se você acessar o Pod com `kubectl exec`, você precisará seguir o -vínculo simbólico para encontrar a permissão esperada. Por exemplo, - -Verifique as permissões do arquivo de Secret no pod. -``` -kubectl exec mypod -it sh - -cd /etc/foo -ls -l -``` - -O resultado é semelhante ao abaixo: -``` -total 0 -lrwxrwxrwx 1 root root 15 May 18 00:18 password -> ..data/password -lrwxrwxrwx 1 root root 15 May 18 00:18 username -> ..data/username -``` - -Siga o vínculo simbólico para encontrar a permissão correta do arquivo. -``` -cd /etc/foo/..data -ls -l -``` - -O resultado é semelhante ao abaixo: -``` -total 8 --r-------- 1 root root 12 May 18 00:18 password --r-------- 1 root root 5 May 18 00:18 username -``` - -Você pode também utilizar mapeamento, como no exemplo anterior, e especificar -permissões diferentes para arquivos diferentes conforme abaixo: -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: mypod -spec: - containers: - - name: mypod - image: redis - volumeMounts: - - name: foo - mountPath: "/etc/foo" - volumes: - - name: foo - secret: - secretName: mysecret - items: - - key: username - path: my-group/my-username - mode: 0777 -``` - -Neste caso, o arquivo resultante em `/etc/foo/my-group/my-username` terá as -permissões `0777`. Se você utilizar JSON, devido às limitações do formato, -você precisará informar as permissões em base decimal, ou o valor `511` neste -exemplo. - -Note que os valores de permissões podem ser exibidos em formato decimal se você -ler essa informação posteriormente. +{{< note >}} +Se você estiver definindo um Pod ou um template de Pod utilizando JSON, observe +que a especificação JSON não suporta a notação octal. Você pode utilizar o valor +decimal para o campo `defaultMode` (por exemplo, 0400 em base octal equivale a +256 na base decimal). +Se você estiver escrevendo YAML, você pode escrever o valor para `defaultMode` +em octal. +{{< /note >}} #### Consumindo valores de Secrets em volumes Dentro do contêiner que monta um volume de Secret, as chaves deste Secret aparecem como arquivos e os valores dos Secrets são decodificados do formato -base64 e armazenados dentro destes arquivos. Ao executar comandos dentro do -contêiner do exemplo anterior, obteremos os seguintes resultados: +base64 e armazenados dentro destes arquivos. + +Ao executar comandos dentro do contêiner do exemplo anterior, obteremos os +seguintes resultados: ```shell ls /etc/foo ``` O resultado é semelhante a: + ``` username password @@ -401,6 +384,7 @@ cat /etc/foo/username ``` O resultado é semelhante a: + ``` admin ``` @@ -410,6 +394,7 @@ cat /etc/foo/password ``` O resultado é semelhante a: + ``` 1f2d1e2e67df ``` @@ -419,46 +404,52 @@ dentro dos arquivos. #### Secrets montados são atualizados automaticamente -Quando um Secret que está sendo consumido a partir de um volume é atualizado, as -chaves projetadas são atualizadas após algum tempo também. O kubelet verifica -se o Secret montado está atualizado a cada sincronização periódica. No entanto, -o kubelet utiliza seu cache local para buscar o valor corrente de um Secret. O -tipo do cache é configurável utilizando o campo `ConfigMapAndSecretChangeDetectionStrategy` -na estrutura [KubeletConfiguration](/docs/reference/config-api/kubelet-config.v1beta1/). 
-Um Secret pode ser propagado através de um _watch_ (comportamento padrão), que -é o sistema de propagação de mudanças incrementais em objetos do Kubernetes; -baseado em TTL (_time to live_, ou tempo de expiração); ou redirecionando todas -as requisições diretamente para o servidor da API. - -Como resultado, o tempo decorrido total entre o momento em que o Secret foi -atualizado até o momento em que as novas chaves são projetadas nos Pods pode -ser tão longo quanto o tempo de sincronização do kubelet somado ao tempo de -propagação do cache, onde o tempo de propagação do cache depende do tipo de -cache escolhido: o tempo de propagação pode ser igual ao tempo de propagação -do _watch_, TTL do cache, ou zero, de acordo com cada um dos tipos de cache. +Quando um volume contém dados de um Secret, e o Secret referenciado é atualizado, +o Kubernetes rastreia a atualização e atualiza os dados no volume, utilizando +uma abordagem de consistência eventual. {{< note >}} -Um contêiner que utiliza Secrets através de um ponto de montagem com a -propriedade -[subPath](/docs/concepts/storage/volumes#using-subpath) não recebe atualizações -deste Secret. +Um contêiner que utiliza Secrets através de um volume montado com a propriedade +[`subPath`](/docs/concepts/storage/volumes#using-subpath) não recebe +atualizações automatizadas para este Secret. {{< /note >}} +O kubelet mantém um cache das chaves e valores atuais dos Secrets que são +utilizados em volumes de Pods daquele nó. Você pode configurar a forma que o +kubelet detecta diferenças dos valores armazenados em cache. O campo +`configMapAndSecretDetectionStrategy` na +[configuração do kubelet](/docs/reference/config-api/kubelet-config.v1beta1/) +controla qual estratégia o kubelet usa. A estratégia padrão é `Watch`. + +Atualizações em Secrets podem ser propagadas por um mecanismo de observação da +API (estratégia padrão), baseado em cache com um tempo de expiração definido +(_time-to-live_), ou solicitado diretamente ao servidor da API do cluster a cada +iteração do ciclo de sincronização do kubelet. + +Como resultado, o atraso total entre o momento em que o Secret foi atualizado +até o momento em que as novas chaves são projetadas no Pod pode ser tão longo +quanto a soma do tempo de sincronização do kubelet somado ao tempo de atraso de +propagação do cache, onde o atraso de propagação do cache depende do tipo de +cache escolhido. Seguindo a mesma ordem listada no parágrafo anterior, estes +valores são: atraso de propagação via _watch_, tempo de expiração configurado no +cache (_time-to-live_, ou TTL), ou zero para solicitação direta ao servidor da +API. + ### Utilizando Secrets como variáveis de ambiente {#using-secrets-as-environment-variables} Para utilizar um secret em uma {{< glossary_tooltip text="variável de ambiente" term_id="container-env-variables" >}} em um Pod: 1. Crie um Secret ou utilize um já existente. Múltiplos Pods podem referenciar o -mesmo Secret. + mesmo Secret. 1. Modifique a definição de cada contêiner do Pod em que desejar consumir o -Secret, adicionando uma variável de ambiente para cada uma das chaves que deseja -consumir. -A variável de ambiente que consumir o valor da chave em questão deverá popular o -nome do Secret e a sua chave correspondente no campo -`env[].valueFrom.secretKeyRef`. + Secret, adicionando uma variável de ambiente para cada uma das chaves que + deseja consumir. 
+ A variável de ambiente que consumir o valor da chave em questão deverá + popular o nome do Secret e a sua chave correspondente no campo + `env[].valueFrom.secretKeyRef`. 1. Modifique sua imagem de contêiner ou linha de comando de forma que o programa -busque os valores nas variáveis de ambiente especificadas. + busque os valores nas variáveis de ambiente especificadas. Este é um exemplo de um Pod que utiliza Secrets em variáveis de ambiente: ```yaml @@ -476,18 +467,45 @@ spec: secretKeyRef: name: mysecret key: username + optional: false # valor padrão; "mysecret" deve existir + # e incluir uma chave com o nome "username" - name: SECRET_PASSWORD valueFrom: secretKeyRef: name: mysecret key: password + optional: false # valor padrão; "mysecret" deve existir + # e incluir uma chave com o nome "password" restartPolicy: Never ``` +#### Variáveis de ambiente inválidas {#restriction-env-from-invalid} + +Secrets utilizados para popular variáveis de ambiente através do campo `envFrom` +que possuem chaves consideradas inválidas para nomes de variáveis de ambiente +têm tais chaves ignoradas. O Pod irá iniciar normalmente. + +Se você definir um Pod contendo um nome de variável de ambiente inválido, os +eventos de inicialização do Pod incluirão um evento com a razão +`InvalidVariableNames` e uma mensagem que lista as chaves inválidas ignoradas. +O exemplo abaixo demonstra um Pod que referencia um Secret chamado `mysecret`, +onde `mysecret` contém duas chaves inválidas: `1badkey` and `2alsobad`. + +```shell +kubectl get events +``` + +O resultado é semelhante a: + +``` +LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON +0s 0s 1 dapi-test-pod Pod Warning InvalidEnvironmentVariableNames kubelet, 127.0.0.1 Keys [1badkey, 2alsobad] from the EnvFrom secret default/mysecret were skipped since they are considered invalid environment variable names. +``` + #### Consumindo valores de Secret em variáveis de ambiente -Dentro de um contêiner que consome um Secret em variáveis de ambiente, a chave -do Secret aparece como uma variável de ambiente comum, contendo os dados do +Dentro de um contêiner que consome um Secret em variáveis de ambiente, as chaves +do Secret aparecem como variáveis de ambiente comuns, contendo os dados do Secret decodificados do formato base64. Ao executar comandos no contêiner do exemplo anterior, obteremos os resultados abaixo: @@ -511,598 +529,154 @@ O resultado é semelhante a: 1f2d1e2e67df ``` -#### Variáveis de ambiente não são atualizadas após uma atualização no Secret +{{< note >}} +Se um contêiner já consome um Secret em uma variável de ambiente, uma +atualização do Secret não será detectada pelo contêiner a menos que este seja +reiniciado. Há soluções de terceiros que fornecem a funcionalidade de +reinicialização automática de Pods quando o valor dos Secrets mudam. +{{< /note >}} -Se um contêiner já consome um Secret em uma variável de ambiente, uma atualização -dos valores do Secret não será refletida no contêiner a menos que o contêiner -seja reiniciado. -Existem ferramentas de terceiros que oferecem reinicializações automáticas -quando Secrets são atualizados. +### Secrets para obtenção de imagens de contêiner {#using-imagepullsecrets} -## Tipos de Secrets {#secret-types} +Se você deseja obter imagens de contêiner de um repositório privado, você +precisa fornecer ao kubelet uma maneira de se autenticar a este repositório. +Você pode configurar o campo `imagePullSecrets` para esta finalidade. Estes +Secrets são configurados a nível de Pod. 
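A título de ilustração, um esboço de Pod que referencia um Secret de registro previamente criado (por exemplo, o `secret-tiger-docker` demonstrado neste documento) ficaria assim; o nome do Pod e a imagem são hipotéticos:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-imagem-privada
spec:
  containers:
    - name: app
      # imagem hipotética hospedada em um registro privado
      image: my-registry.example:5000/app:v1
  imagePullSecrets:
    - name: secret-tiger-docker
```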
-Ao criar um Secret, você pode especificar o seu tipo utilizando o campo `type` -do objeto Secret, ou algumas opções de linha de comando equivalentes no comando -`kubectl`, quando disponíveis. O campo `type` de um Secret é utilizado para -facilitar a manipulação programática de diferentes tipos de dados confidenciais. +O campo `imagePullSecrets` de um Pod é uma lista de referências a Secrets +no mesmo namespace que o Pod. +Você pode utilizar `imagePullSecrets` para enviar credenciais para acesso a um +registro de contêineres ao kubelet. O kubelet utiliza essa informação para +baixar uma imagem privada no lugar do seu Pod. +Veja o campo `PodSpec` na +[referência da API de Pods](/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec) +para maiores detalhes sobre o campo `imagePullSecrets`. -O Kubernetes oferece vários tipos embutidos de Secret para casos de uso comuns. -Estes tipos variam em termos de validações efetuadas e limitações que o -Kubernetes impõe neles. +#### Usando `imagePullSecrets` -| Tipo embutido | Caso de uso | -|----------------------------------------|----------------------------------------------------| -| `Opaque` | dados arbitrários definidos pelo usuário | -| `kubernetes.io/service-account-token` | token de service account (conta de serviço) | -| `kubernetes.io/dockercfg` | arquivo `~/.dockercfg` serializado | -| `kubernetes.io/dockerconfigjson` | arquivo `~/.docker/config.json` serializado | -| `kubernetes.io/basic-auth` | credenciais para autenticação básica (basic auth) | -| `kubernetes.io/ssh-auth` | credenciais para autenticação SSH | -| `kubernetes.io/tls` | dados para um cliente ou servidor TLS | -| `bootstrap.kubernetes.io/token` | dados de token de autoinicialização | +O campo `imagePullSecrets` é uma lista de referências a Secrets no mesmo +namespace. +Você pode utilizar o campo `imagePullSecrets` para enviar um Secret que contém +uma senha para um registro de imagens de contêiner do Docker (ou outro registro +de imagens de contêiner). O kubelet utiliza essa informação para baixar uma +imagem privada no lugar do seu Pod. +Veja a [API `PodSpec`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core) +para mais informações sobre o campo `imagePullSecrets`. -Você pode definir e utilizar seu próprio tipo de Secret definindo o valor do -campo `type` como uma string não-nula em um objeto Secret. Uma string em branco -é tratada como o tipo `Opaque`. O Kubernetes não restringe nomes de tipos. No -entanto, quando tipos embutidos são utilizados, você precisa atender a todos os -requisitos daquele tipo. +##### Especificando `imagePullSecrets` manualmente -### Secrets tipo Opaque +Você pode ler sobre como especificar `imagePullSecrets` em um Pod na +[documentação de imagens de contêiner](/pt-br/docs/concepts/containers/images/#especificando-imagepullsecrets-em-um-pod). -`Opaque` é o tipo predefinido de Secret quando o campo `type` não é informado -em um arquivo de configuração. Quando um Secret é criado usando o comando -`kubectl`, você deve usar o subcomando `generic` para indicar que um Secret é -do tipo `Opaque`. Por exemplo, o comando a seguir cria um Secret vazio do tipo -`Opaque`: -```shell -kubectl create secret generic empty-secret -kubectl get secret empty-secret -``` +##### Configurando `imagePullSecrets` para serem adicionados automaticamente -O resultado será semelhante ao abaixo: +Você pode criar manualmente `imagePullSecrets` e referenciá-los em uma +ServiceAccount. 
Quaisquer Pods criados com esta ServiceAccount, especificada +explicitamente ou por padrão, têm o campo `imagePullSecrets` populado com os +mesmos valores existentes na service account. +Veja [adicionando `imagePullSecrets` a uma service account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account) +para uma explicação detalhada do processo. -``` -NAME TYPE DATA AGE -empty-secret Opaque 0 2m6s -``` +### Utilizando Secrets com pods estáticos {#restriction-static-pod} -A coluna `DATA` demonstra a quantidade de dados armazenados no Secret. Neste -caso, `0` significa que este objeto Secret está vazio. +Você não pode utilizar ConfigMaps ou Secrets em +{{< glossary_tooltip text="Pods estáticos" term_id="static-pod" >}}. -### Secrets de token de service account (conta de serviço) +## Casos de uso -Secrets do tipo `kubernetes.io/service-account-token` são utilizados para -armazenar um token que identifica uma service account (conta de serviço). Ao -utilizar este tipo de Secret, você deve garantir que a anotação -`kubernetes.io/service-account.name` contém um nome de uma service account -existente. Um controlador do Kubernetes preenche outros campos, como por exemplo -a anotação `kubernetes.io/service-account.uid` e a chave `token` no campo `data` -com o conteúdo do token. +### Caso de uso: Como variáveis de ambiente em um contêiner -O exemplo de configuração abaixo declara um Secret de token de service account: +Crie um manifesto de Secret ```yaml apiVersion: v1 kind: Secret metadata: - name: secret-sa-sample - annotations: - kubernetes.io/service-account-name: "sa-name" -type: kubernetes.io/service-account-token + name: mysecret +type: Opaque data: - # Você pode incluir pares chave-valor adicionais, da mesma forma que faria com - # Secrets do tipo Opaque - extra: YmFyCg== + USER_NAME: YWRtaW4= + PASSWORD: MWYyZDFlMmU2N2Rm ``` -Ao criar um {{< glossary_tooltip text="Pod" term_id="pod" >}}, o Kubernetes -automaticamente cria um Secret de service account e automaticamente atualiza o -seu Pod para utilizar este Secret. O Secret de token de service account contém -credenciais para acessar a API. - -A criação automática e o uso de credenciais de API podem ser desativados se -desejado. Porém, se tudo que você necessita é poder acessar o servidor da API -de forma segura, este é o processo recomendado. - -Veja a documentação de -[ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/) -para mais informações sobre o funcionamento de service accounts. Você pode -verificar também os campos `automountServiceAccountToken` e `serviceAccountName` -do [`Pod`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core) -para mais informações sobre como referenciar service accounts em Pods. - -### Secrets de configuração do Docker - -Você pode utilizar um dos tipos abaixo para criar um Secret que armazena -credenciais para accesso a um registro de contêineres compatível com Docker -para busca de imagens: -- `kubernetes.io/dockercfg` -- `kubernetes.io/dockerconfigjson` +Crie o Secret no seu cluster: -O tipo `kubernetes.io/dockercfg` é reservado para armazenamento de um arquivo -`~/.dockercfg` serializado. Este arquivo é o formato legado para configuração -do utilitário de linha de comando do Docker. Ao utilizar este tipo de Secret, -é preciso garantir que o campo `data` contém uma chave `.dockercfg` cujo valor -é o conteúdo do arquivo `~/.dockercfg` codificado no formato base64. 
+```shell +kubectl apply -f mysecret.yaml +``` -O tipo `kubernetes.io/dockerconfigjson` foi projetado para armazenamento de um -conteúdo JSON serializado que obedece às mesmas regras de formato que o arquivo -`~/.docker/config.json`. Este arquivo é um formato mais moderno para o conteúdo -do arquivo `~/.dockercfg`. Ao utilizar este tipo de Secret, o conteúdo do campo -`data` deve conter uma chave `.dockerconfigjson` em que o conteúdo do arquivo -`~/.docker/config.json` é fornecido codificado no formato base64. +Utilize `envFrom` para definir todos os dados do Secret como variáveis de +ambiente do contêiner. Cada chave do Secret se torna o nome de uma variável de +ambiente no Pod. -Um exemplo de um Secret do tipo `kubernetes.io/dockercfg`: ```yaml apiVersion: v1 -kind: Secret +kind: Pod metadata: - name: secret-dockercfg -type: kubernetes.io/dockercfg -data: - .dockercfg: | - "" + name: secret-test-pod +spec: + containers: + - name: test-container + image: k8s.gcr.io/busybox + command: [ "/bin/sh", "-c", "env" ] + envFrom: + - secretRef: + name: mysecret + restartPolicy: Never ``` -{{< note >}} -Se você não desejar fazer a codificação em formato base64, você pode utilizar o -campo `stringData` como alternativa. -{{< /note >}} +### Caso de uso: Pod com chaves SSH -Ao criar estes tipos de Secret utilizando um manifesto (arquivo YAML), o servidor -da API verifica se a chave esperada existe no campo `data` e se o valor fornecido -pode ser interpretado como um conteúdo JSON válido. O servidor da API não verifica -se o conteúdo informado é realmente um arquivo de configuração do Docker. +Crie um Secret contendo chaves SSH: -Quando você não tem um arquivo de configuração do Docker, ou quer utilizar o -comando `kubectl` para criar um Secret de registro de contêineres compatível -com o Docker, você pode executar: ```shell -kubectl create secret docker-registry secret-tiger-docker \ - --docker-username=tiger \ - --docker-password=pass113 \ - --docker-email=tiger@acme.com \ - --docker-server=my-registry.example:5000 +kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub ``` -Esse comando cria um secret do tipo `kubernetes.io/dockerconfigjson`, cujo -conteúdo é semelhante ao exemplo abaixo: +O resultado é semelhante a: -```json -{ - "apiVersion": "v1", - "data": { - ".dockerconfigjson": "eyJhdXRocyI6eyJteS1yZWdpc3RyeTo1MDAwIjp7InVzZXJuYW1lIjoidGlnZXIiLCJwYXNzd29yZCI6InBhc3MxMTMiLCJlbWFpbCI6InRpZ2VyQGFjbWUuY29tIiwiYXV0aCI6ImRHbG5aWEk2Y0dGemN6RXhNdz09In19fQ==" - }, - "kind": "Secret", - "metadata": { - "creationTimestamp": "2021-07-01T07:30:59Z", - "name": "secret-tiger-docker", - "namespace": "default", - "resourceVersion": "566718", - "uid": "e15c1d7b-9071-4100-8681-f3a7a2ce89ca" - }, - "type": "kubernetes.io/dockerconfigjson" -} ``` - -Se você extrair o conteúdo da chave `.dockerconfigjson`, presente no campo -`data`, e decodificá-lo do formato base64, você irá obter o objeto JSON abaixo, -que é uma configuração válida do Docker criada automaticamente: - -```json -{ - "auths":{ - "my-registry:5000":{ - "username":"tiger", - "password":"pass113", - "email":"tiger@acme.com", - "auth":"dGlnZXI6cGFzczExMw==" - } - } -} +secret "ssh-key-secret" created ``` -### Secret de autenticação básica +Você também pode criar um manifesto `kustomization.yaml` com um campo +`secretGenerator` contendo chaves SSH. -O tipo `kubernetes.io/basic-auth` é fornecido para armazenar credenciais -necessárias para autenticação básica. 
Ao utilizar este tipo de Secret, o campo -`data` do Secret deve conter as duas chaves abaixo: -- `username`: o usuário utilizado para autenticação; -- `password`: a senha ou token para autenticação. +{{< caution >}} +Analise cuidadosamente antes de enviar suas próprias chaves SSH: outros usuários +do cluster podem ter acesso a este Secret. -Ambos os valores para estas duas chaves são textos codificados em formato base64. -Você pode fornecer os valores como texto simples utilizando o campo `stringData` -na criação do Secret. +Como alternativa, você pode criar uma chave SSH privada representando a +identidade de um serviço que você deseja que seja acessível a todos os usuários +com os quais você compartilha o cluster do Kubernetes em questão. Desse modo, +você pode revogar esta credencial em caso de comprometimento. +{{< /caution >}} + +Agora você pode criar um Pod que referencia o Secret com a chave SSH e consome-o +em um volume: -O arquivo YAML abaixo é um exemplo de configuração para um Secret de autenticação -básica: ```yaml apiVersion: v1 -kind: Secret +kind: Pod metadata: - name: secret-basic-auth -type: kubernetes.io/basic-auth -stringData: - username: admin - password: t0p-Secret + name: secret-test-pod + labels: + name: secret-test +spec: + volumes: + - name: secret-volume + secret: + secretName: ssh-key-secret + containers: + - name: ssh-test-container + image: mySshImage + volumeMounts: + - name: secret-volume + readOnly: true + mountPath: "/etc/secret-volume" ``` -O tipo de autenticação básica é fornecido unicamente por conveniência. Você pode -criar um Secret do tipo `Opaque` utilizado para autenticação básica. No entanto, -utilizar o tipo embutido de Secret auxilia a unificação dos formatos das suas -credenciais. O tipo embutido também fornece verificação de presença das chaves -requeridas pelo servidor da API. - -### Secret de autenticação SSH - -O tipo embutido `kubernetes.io/ssh-auth` é fornecido para armazenamento de dados -utilizados em autenticação SSH. Ao utilizar este tipo de Secret, você deve -especificar um par de chave-valor `ssh-privatekey` no campo `data` ou no campo -`stringData` com a credencial SSH a ser utilizada. - -O YAML abaixo é um exemplo de configuração para um Secret de autenticação SSH: - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: secret-ssh-auth -type: kubernetes.io/ssh-auth -data: - # os dados estão abreviados neste exemplo - ssh-privatekey: | - MIIEpQIBAAKCAQEAulqb/Y ... -``` - -O Secret de autenticação SSH é fornecido apenas para a conveniência do usuário. -Você pode criar um Secret do tipo `Opaque` para credentials utilizadas para -autenticação SSH. No entanto, a utilização do tipo embutido auxilia na -unificação dos formatos das suas credenciais e o servidor da API fornece -verificação dos campos requeridos em uma configuração de Secret. - -{{< caution >}} -Chaves privadas SSH não estabelecem, por si só, uma comunicação confiável -entre um cliente SSH e um servidor. Uma forma secundária de estabelecer -confiança é necessária para mitigar ataques "machine-in-the-middle", como -por exemplo um arquivo `known_hosts` adicionado a um ConfigMap. -{{< /caution >}} - -### Secrets TLS - -O Kubernetes fornece o tipo embutido de Secret `kubernetes.io/tls` para -armazenamento de um certificado e sua chave associada que são tipicamente -utilizados para TLS. Estes dados são utilizados primariamente para a -finalização TLS do recurso Ingress, mas podem ser utilizados com outros -recursos ou diretamente por uma carga de trabalho. 
Ao utilizar este tipo de -Secret, as chaves `tls.key` e `tls.crt` devem ser informadas no campo `data` -(ou `stringData`) da configuração do Secret, embora o servidor da API não -valide o conteúdo de cada uma destas chaves. - -O YAML a seguir tem um exemplo de configuração para um Secret TLS: -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: secret-tls -type: kubernetes.io/tls -data: - # os dados estão abreviados neste exemplo - tls.crt: | - MIIC2DCCAcCgAwIBAgIBATANBgkqh ... - tls.key: | - MIIEpgIBAAKCAQEA7yn3bRHQ5FHMQ ... -``` - -O tipo TLS é fornecido para a conveniência do usuário. Você pode criar um -Secret do tipo `Opaque` para credenciais utilizadas para o servidor e/ou -cliente TLS. No entanto, a utilização do tipo embutido auxilia a manter a -consistência dos formatos de Secret no seu projeto; o servidor da API -valida se os campos requeridos estão presentes na configuração do Secret. - -Ao criar um Secret TLS utilizando a ferramenta de linha de comando `kubectl`, -você pode utilizar o subcomando `tls` conforme demonstrado no exemplo abaixo: -```shell -kubectl create secret tls my-tls-secret \ - --cert=path/to/cert/file \ - --key=path/to/key/file -``` - -O par de chaves pública/privada deve ser criado separadamente. O certificado -de chave pública a ser utilizado no argumento `--cert` deve ser codificado em -formato .PEM (formato DER codificado em texto base64) e deve corresponder à -chave privada fornecida no argumento `--key`. -A chave privada deve estar no formato de chave privada PEM não-encriptado. Em -ambos os casos, as linhas inicial e final do formato PEM (por exemplo, -`--------BEGIN CERTIFICATE-----` e `-------END CERTIFICATE----` para um -certificado) *não* são incluídas. - -### Secret de token de autoinicialização {#bootstrap-token-secrets} - -Um Secret de token de autoinicialização pode ser criado especificando o tipo de -um Secret explicitamente com o valor `bootstrap.kubernetes.io/token`. Este tipo -de Secret é projetado para tokens utilizados durante o processo de inicialização -de nós. Este tipo de Secret armazena tokens utilizados para assinar ConfigMaps -conhecidos. - -Um Secret de token de autoinicialização é normalmente criado no namespace -`kube-system` e nomeado na forma `bootstrap-token-`, onde -`` é um texto com 6 caracteres contendo a identificação do token. - -No formato de manifesto do Kubernetes, um Secret de token de autoinicialização -se assemelha ao exemplo abaixo: -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: bootstrap-token-5emitj - namespace: kube-system -type: bootstrap.kubernetes.io/token -data: - auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4= - expiration: MjAyMC0wOS0xM1QwNDozOToxMFo= - token-id: NWVtaXRq - token-secret: a3E0Z2lodnN6emduMXAwcg== - usage-bootstrap-authentication: dHJ1ZQ== - usage-bootstrap-signing: dHJ1ZQ== -``` - -Um Secret do tipo token de autoinicialização possui as seguintes chaves no campo -`data`: -- `token-id`: Uma string com 6 caracteres aleatórios como identificador do - token. Requerido. -- `token-secret`: Uma string de 16 caracteres aleatórios como o conteúdo do - token. Requerido. -- `description`: Uma string contendo uma descrição do propósito para o qual este - token é utilizado. Opcional. -- `expiration`: Um horário absoluto UTC no formato RFC3339 especificando quando - o token deve expirar. Opcional. -- `usage-bootstrap-`: Um conjunto de flags booleanas indicando outros - usos para este token de autoinicialização. 
-- `auth-extra-groups`: Uma lista separada por vírgulas de nomes de grupos que - serão autenticados adicionalmente, além do grupo `system:bootstrappers`. - -O YAML acima pode parecer confuso, já que os valores estão todos codificados em -formato base64. Você pode criar o mesmo Secret utilizando este YAML: -```yaml -apiVersion: v1 -kind: Secret -metadata: - # Observe como o Secret é nomeado - name: bootstrap-token-5emitj - # Um Secret de token de inicialização geralmente fica armazenado no namespace - # kube-system - namespace: kube-system -type: bootstrap.kubernetes.io/token -stringData: - auth-extra-groups: "system:bootstrappers:kubeadm:default-node-token" - expiration: "2020-09-13T04:39:10Z" - # Esta identificação de token é utilizada no nome - token-id: "5emitj" - token-secret: "kq4gihvszzgn1p0r" - # Este token pode ser utilizado para autenticação. - usage-bootstrap-authentication: "true" - # e pode ser utilizado para assinaturas - usage-bootstrap-signing: "true" -``` - -## Secrets imutáveis {#secret-immutable} - -{{< feature-state for_k8s_version="v1.21" state="stable" >}} - -A funcionalidade do Kubernetes _Secrets e ConfigMaps imutáveis_ fornece uma -opção para marcar Secrets e ConfigMaps individuais como imutáveis. Em clusters -que fazem uso extensivo de Secrets (pelo menos dezenas de milhares de montagens -únicas de Secrets em Pods), prevenir alterações aos dados dos Secrets traz as -seguintes vantagens: -- protege você de alterações acidentais ou indesejadas que poderiam provocar -disrupções na execução de aplicações; -- melhora a performance do seu cluster através da redução significativa de carga -no kube-apiserver, devido ao fechamento de _watches_ de Secrets marcados como -imutáveis. - -Esta funcionalidade é controlada pelo -[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) -`ImmutableEphemeralVolumes`, que está habilitado por padrão desde a versão -v1.19. Você pode criar um Secret imutável adicionando o campo `immutable` com -o valor `true`. Por exemplo: -```yaml -apiVersion: v1 -kind: Secret -metadata: - ... -data: - ... -immutable: true -``` - -{{< note >}} -Uma vez que um Secret ou ConfigMap seja marcado como imutável, _não_ é mais -possível reverter esta mudança, nem alterar os conteúdos do campo `data`. Você -pode somente apagar e recriar o Secret. Pods existentes mantém um ponto de -montagem referenciando o Secret removido - é recomendado recriar tais Pods. -{{< /note >}} - -### Usando `imagePullSecrets` {#using-imagepullsecrets} - -O campo `imagePullSecrets` é uma lista de referências para Secrets no mesmo -namespace. Você pode utilizar a lista `imagePullSecrets` para enviar Secrets -que contém uma senha para acesso a um registro de contêineres do Docker (ou -outros registros de contêineres) ao kubelet. O kubelet utiliza essa informação -para baixar uma imagem privada no lugar do seu Pod. -Veja a [API PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core) -para maiores detalhes sobre o campo `imagePullSecrets`. - -#### Especificando `imagePullSecrets` manualmente - -Você pode ler sobre como especificar `imagePullSecrets` em um Pod na -[documentação de imagens de contêiner](/pt-br/docs/concepts/containers/images/#especificando-imagepullsecrets-em-um-pod). - -### Configurando `imagePullSecrets` para serem vinculados automaticamente - -Você pode criar manualmente `imagePullSecrets` e referenciá-los em uma -ServiceAccount. 
Quaisquer Pods criados com esta ServiceAccount, especificada -explicitamente ou por padrão, têm o campo `imagePullSecrets` populado com os -mesmos valores existentes na service account. -Veja [adicionando `imagePullSecrets` a uma service account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account) -para uma explicação detalhada do processo. - -## Detalhes - -### Restrições - -Referências a Secrets em volumes são validadas para garantir que o objeto -especificado realmente existe e é um objeto do tipo Secret. Portanto, um Secret -precisa ser criado antes de quaisquer Pods que dependam deste. - -Objetos Secret residem em um {{< glossary_tooltip text="namespace" term_id="namespace" >}}. -Secrets podem ser referenciados somente por Pods no mesmo namespace. - -Secrets individuais são limitados ao tamanho de 1MiB. Esta limitação ter por -objetivo desencorajar a criação de Secrets muito grandes que poderiam exaurir -a memória do servidor da API e do kubelet. No entanto, a criação de muitos -Secrets pequenos também pode exaurir a memória. Limites mais completos de uso -de memória em função de Secrets é uma funcionalidade prevista para o futuro. - -O kubelet suporta apenas o uso de Secrets em Pods onde os Secrets são obtidos -do servidor da API. Isso inclui quaisquer Pods criados usando o comando -`kubectl`, ou indiretamente através de um controlador de replicação, mas não -inclui Pods criados como resultado das flags `--manifest-url` e `--config` do -kubelet, ou a sua API REST (estas são formas incomuns de criar um Pod). -A `spec` de um {{< glossary_tooltip text="Pod estático" term_id="static-pod" >}} -não pode se referir a um Secret ou a qualquer outro objeto da API. - -Secrets precisam ser criados antes de serem consumidos em Pods como variáveis de -ambiente, exceto quando são marcados como opcionais. Referências a Secrets que -não existem provocam falhas na inicialização do Pod. - -Referências (campo `secretKeyRef`) a chaves que não existem em um Secret nomeado -provocam falhas na inicialização do Pod. - -Secrets utilizados para popular variáveis de ambiente através do campo `envFrom` -que contém chaves inválidas para utilização como nome de uma variável de ambiente -terão tais chaves ignoradas. O Pod inicializará normalmente. Porém, um evento -será gerado com a razão `InvalidVariableNames` e a mensagem gerada conterá a lista -de chaves inválidas que foram ignoradas. O exemplo abaixo demonstra um Pod que se -refere ao Secret default/mysecret, contendo duas chaves inválidas: `1badkey` e -`2alsobad`. - -```shell -kubectl get events -``` - -O resultado é semelhante a: - -``` -LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON -0s 0s 1 dapi-test-pod Pod Warning InvalidEnvironmentVariableNames kubelet, 127.0.0.1 Keys [1badkey, 2alsobad] from the EnvFrom secret default/mysecret were skipped since they are considered invalid environment variable names. -``` - -### Interações do ciclo de vida entre Secrets e Pods - -Quando um Pod é criado através de chamadas à API do Kubernetes, não há validação -da existência de um Secret referenciado. Uma vez que um Pod seja agendado, o -kubelet tentará buscar o valor do Secret. Se o Secret não puder ser encontrado -porque não existe ou porque houve uma falha de comunicação temporária entre o -kubelet e o servidor da API, o kubelet fará novas tentativas periodicamente. -O kubelet irá gerar um evento sobre o Pod, explicando a razão pela qual o Pod -ainda não foi inicializado. 
Uma vez que o Secret tenha sido encontrado, o -kubelet irá criar e montar um volume contendo este Secret. Nenhum dos contêineres -do Pod irá iniciar até que todos os volumes estejam montados. - -## Casos de uso - -### Caso de uso: Como variáveis de ambiente em um contêiner - -Crie um manifesto de Secret - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: mysecret -type: Opaque -data: - USER_NAME: YWRtaW4= - PASSWORD: MWYyZDFlMmU2N2Rm -``` - -Crie o Secret no seu cluster: - -```shell -kubectl apply -f mysecret.yaml -``` - -Utilize `envFrom` para definir todos os dados do Secret como variáveis de -ambiente do contêiner. Cada chave do Secret se torna o nome de uma variável de -ambiente no Pod. - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: secret-test-pod -spec: - containers: - - name: test-container - image: k8s.gcr.io/busybox - command: [ "/bin/sh", "-c", "env" ] - envFrom: - - secretRef: - name: mysecret - restartPolicy: Never -``` - -### Caso de uso: Pod com chaves SSH - -Crie um Secret contendo chaves SSH: - -```shell -kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub -``` - -O resultado é semelhante a: - -``` -secret "ssh-key-secret" created -``` - -Você também pode criar um manifesto `kustomization.yaml` com um campo -`secretGenerator` contendo chaves SSH. - -{{< caution >}} -Analise cuidadosamente antes de enviar suas próprias chaves SSH: outros usuários -do cluster podem ter acesso a este Secret. Utilize uma service account que você -deseje que seja acessível a todos os usuários com os quais você compartilha o -cluster do Kubernetes em questão. Desse modo, você pode revogar esta service -account caso os usuários sejam comprometidos. -{{< /caution >}} - -Agora você pode criar um Pod que referencia o Secret com a chave SSH e consome-o -em um volume: - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: secret-test-pod - labels: - name: secret-test -spec: - volumes: - - name: secret-volume - secret: - secretName: ssh-key-secret - containers: - - name: ssh-test-container - image: mySshImage - volumeMounts: - - name: secret-volume - readOnly: true - mountPath: "/etc/secret-volume" -``` - -Ao rodar o comando do contêiner, as partes da chave estarão disponíveis em: +Ao rodar o comando do contêiner, as partes da chave estarão disponíveis em: ``` /etc/secret-volume/ssh-publickey @@ -1145,11 +719,12 @@ secret "test-db-secret" created {{< note >}} Caracteres especiais como `$`, `\`, `*`, `+` e `!` serão interpretados pelo seu -[shell](https://pt.wikipedia.org/wiki/Shell_(computa%C3%A7%C3%A3o)) e precisam de -sequências de escape. Na maioria dos shells, a forma mais fácil de gerar sequências -de escape para suas senhas é escrevê-las entre aspas simples (`'`). Por exemplo, -se a sua senha for `S!B\*d$zDsb=`, você deve executar o comando da seguinte -forma: +[shell](https://pt.wikipedia.org/wiki/Shell_(computa%C3%A7%C3%A3o)) e precisam +de sequências de escape. + +Na maioria dos shells, a forma mais fácil de gerar sequências de escape para +suas senhas é escrevê-las entre aspas simples (`'`). 
Por exemplo, se a sua senha +for `S!B\*d$zDsb=`, você deve executar o comando da seguinte forma: ```shell kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb=' @@ -1205,216 +780,636 @@ items: EOF ``` -Adicione os Pods a um manifesto `kustomization.yaml`: +Adicione os Pods a um manifesto `kustomization.yaml`: + +```shell +cat <> kustomization.yaml +resources: +- pod.yaml +EOF +``` + +Crie todos estes objetos no servidor da API rodando o comando: + +```shell +kubectl apply -k . +``` + +Ambos os contêineres terão os seguintes arquivos presentes nos seus sistemas de +arquivos, com valores para cada um dos ambientes dos contêineres: + +``` +/etc/secret-volume/username +/etc/secret-volume/password +``` + +Observe como as `spec`s para cada um dos Pods diverge somente em um campo. Isso +facilita a criação de Pods com capacidades diferentes a partir de um template +mais genérico. + +Você pode simplificar ainda mais a definição básica do Pod através da utilização +de duas service accounts diferentes: + +1. `prod-user` com o Secret `prod-db-secret` +1. `test-user` com o Secret `test-db-secret` + +A especificação do Pod é reduzida para: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: prod-db-client-pod + labels: + name: prod-db-client +spec: + serviceAccount: prod-db-client + containers: + - name: db-client-container + image: myClientImage +``` + +### Caso de uso: _dotfiles_ em um volume de Secret + +Você pode fazer com que seus dados fiquem "ocultos" definindo uma chave que se +inicia com um ponto (`.`). Este tipo de chave representa um _dotfile_, ou +arquivo "oculto". Por exemplo, quando o Secret abaixo é montado em um volume, +`secret-volume`: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: dotfile-secret +data: + .secret-file: dmFsdWUtMg0KDQo= +--- +apiVersion: v1 +kind: Pod +metadata: + name: secret-dotfiles-pod +spec: + volumes: + - name: secret-volume + secret: + secretName: dotfile-secret + containers: + - name: dotfile-test-container + image: k8s.gcr.io/busybox + command: + - ls + - "-l" + - "/etc/secret-volume" + volumeMounts: + - name: secret-volume + readOnly: true + mountPath: "/etc/secret-volume" +``` + +Este volume irá conter um único arquivo, chamado `.secret-file`, e o contêiner +`dotfile-test-container` terá este arquivo presente no caminho +`/etc/secret-volume/.secret-file`. + +{{< note >}} +Arquivos com nomes iniciados por um caractere de ponto são ocultados do +resultado do comando `ls -l`. Você precisa utilizar `ls -la` para vê-los ao +listar o conteúdo de um diretório. +{{< /note >}} + +### Caso de uso: Secret visível somente em um dos contêineres de um pod {#use-case-secret-visible-to-one-container-in-a-pod} + +Suponha que um programa necessita manipular requisições HTTP, executar regras +de negócio complexas e então assinar mensagens com HMAC. Devido à natureza +complexa da aplicação, pode haver um _exploit_ despercebido que lê arquivos +remotos no servidor e que poderia expor a chave privada para um invasor. + +Esta aplicação poderia ser dividida em dois processos, separados em dois +contêineres distintos: um contêiner de _front-end_, que manipula as interações +com o usuário e a lógica de negócio, mas não consegue ver a chave privada; e +um contêiner assinador, que vê a chave privada e responde a requisições simples +de assinatura do _front-end_ (por exemplo, através de rede local). 
+ +Com essa abordagem particionada, um invasor agora precisa forçar o servidor de +aplicação a rodar comandos arbitrários, o que é mais difícil de ser feito do que +apenas ler um arquivo presente no disco. + +## Tipos de Secrets {#secret-types} + +Ao criar um Secret, você pode especificar o seu tipo utilizando o campo `type` +do objeto Secret, ou algumas opções de linha de comando equivalentes no comando +`kubectl`, quando disponíveis. O campo `type` de um Secret é utilizado para +facilitar a manipulação programática de diferentes tipos de dados confidenciais. + +O Kubernetes oferece vários tipos embutidos de Secret para casos de uso comuns. +Estes tipos variam em termos de validações efetuadas e limitações que o +Kubernetes impõe neles. + +| Tipo embutido | Caso de uso | +|----------------------------------------|----------------------------------------------------| +| `Opaque` | dados arbitrários definidos pelo usuário | +| `kubernetes.io/service-account-token` | token de service account (conta de serviço) | +| `kubernetes.io/dockercfg` | arquivo `~/.dockercfg` serializado | +| `kubernetes.io/dockerconfigjson` | arquivo `~/.docker/config.json` serializado | +| `kubernetes.io/basic-auth` | credenciais para autenticação básica (basic auth) | +| `kubernetes.io/ssh-auth` | credenciais para autenticação SSH | +| `kubernetes.io/tls` | dados para um cliente ou servidor TLS | +| `bootstrap.kubernetes.io/token` | dados de token de autoinicialização | + +Você pode definir e utilizar seu próprio tipo de Secret definindo o valor do +campo `type` como uma string não-nula em um objeto Secret (uma string em branco +é tratada como o tipo `Opaque`). + +O Kubernetes não restringe nomes de tipos. No entanto, quando tipos embutidos +são utilizados, você precisa atender a todos os requisitos daquele tipo. + +Se você estiver definindo um tipo de Secret que seja para uso público, siga a +convenção e estruture o tipo de Secret para conter o seu domínio antes do nome, +separado por uma barra (`/`). +Por exemplo: `cloud-hosting.example.net/cloud-api-credentials`. + +### Secrets tipo `Opaque` + +`Opaque` é o tipo predefinido de Secret quando o campo `type` é omitido em um +arquivo de configuração de Secret. Quando um Secret é criado usando o comando +`kubectl`, você deve usar o subcomando `generic` para indicar que um Secret é +do tipo `Opaque`. Por exemplo, o comando a seguir cria um Secret vazio do tipo +`Opaque`: +```shell +kubectl create secret generic empty-secret +kubectl get secret empty-secret +``` + +O resultado será semelhante ao abaixo: + +``` +NAME TYPE DATA AGE +empty-secret Opaque 0 2m6s +``` + +A coluna `DATA` demonstra a quantidade de dados armazenados no Secret. Neste +caso, `0` significa que este objeto Secret está vazio. + +### Secrets de token de service account (conta de serviço) + +Secrets do tipo `kubernetes.io/service-account-token` são utilizados para +armazenar um token que identifica uma service account (conta de serviço). Ao +utilizar este tipo de Secret, você deve garantir que a anotação +`kubernetes.io/service-account.name` contém um nome de uma service account +existente. Um controlador do Kubernetes preenche outros campos, como por exemplo +a anotação `kubernetes.io/service-account.uid` e a chave `token` no campo `data` +com o conteúdo do token. 
+ +O exemplo de configuração abaixo declara um Secret de token de service account: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: secret-sa-sample + annotations: + kubernetes.io/service-account-name: "sa-name" +type: kubernetes.io/service-account-token +data: + # Você pode incluir pares chave-valor adicionais, da mesma forma que faria com + # Secrets do tipo Opaque + extra: YmFyCg== +``` + +Ao criar um {{< glossary_tooltip text="Pod" term_id="pod" >}}, o Kubernetes +automaticamente cria um Secret de service account e automaticamente atualiza o +seu Pod para utilizar este Secret. O Secret de token de service account contém +credenciais para acessar a API. + +A criação automática e o uso de credenciais de API podem ser desativados ou +substituídos se desejado. Porém, se tudo que você necessita é poder acessar o +servidor da API de forma segura, este é o processo recomendado. + +Veja a documentação de +[ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/) +para mais informações sobre o funcionamento de service accounts. Você pode +verificar também os campos `automountServiceAccountToken` e `serviceAccountName` +do [`Pod`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core) +para mais informações sobre como referenciar service accounts em Pods. + +### Secrets de configuração do Docker + +Você pode utilizar um dos tipos abaixo para criar um Secret que armazena +credenciais para accesso a um registro de contêineres para busca de imagens: + +- `kubernetes.io/dockercfg` +- `kubernetes.io/dockerconfigjson` + +O tipo `kubernetes.io/dockercfg` é reservado para armazenamento de um arquivo +`~/.dockercfg` serializado. Este arquivo é o formato legado para configuração +do utilitário de linha de comando do Docker. Ao utilizar este tipo de Secret, +é preciso garantir que o campo `data` contém uma chave `.dockercfg` cujo valor +é o conteúdo do arquivo `~/.dockercfg` codificado no formato base64. + +O tipo `kubernetes.io/dockerconfigjson` foi projetado para armazenamento de um +conteúdo JSON serializado que obedece às mesmas regras de formato que o arquivo +`~/.docker/config.json`. Este arquivo é um formato mais moderno para o conteúdo +do arquivo `~/.dockercfg`. Ao utilizar este tipo de Secret, o conteúdo do campo +`data` deve conter uma chave `.dockerconfigjson` em que o conteúdo do arquivo +`~/.docker/config.json` é fornecido codificado no formato base64. + +Um exemplo de um Secret do tipo `kubernetes.io/dockercfg`: +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: secret-dockercfg +type: kubernetes.io/dockercfg +data: + .dockercfg: | + "" +``` + +{{< note >}} +Se você não desejar fazer a codificação em formato base64, você pode utilizar o +campo `stringData` como alternativa. +{{< /note >}} + +Ao criar estes tipos de Secret utilizando um manifesto (arquivo YAML), o +servidor da API verifica se a chave esperada existe no campo `data` e se o valor +fornecido pode ser interpretado como um conteúdo JSON válido. O servidor da API +não verifica se o conteúdo informado é realmente um arquivo de configuração do +Docker. 
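+
+A título de ilustração, o esboço abaixo (reutilizando as credenciais fictícias
+`tiger` / `pass1234` e o registro `my-registry.example:5000` que aparecem mais
+adiante nesta página) mostra como um Secret do tipo
+`kubernetes.io/dockerconfigjson` poderia ser declarado usando o campo
+`stringData`, deixando a codificação em base64 a cargo do servidor da API:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: secret-dockerconfigjson
+type: kubernetes.io/dockerconfigjson
+stringData:
+  # a chave .dockerconfigjson é a chave esperada para este tipo de Secret
+  .dockerconfigjson: |
+    {
+      "auths": {
+        "my-registry.example:5000": {
+          "username": "tiger",
+          "password": "pass1234",
+          "auth": "dGlnZXI6cGFzczEyMzQ="
+        }
+      }
+    }
+```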
+ +Quando você não tem um arquivo de configuração do Docker, ou quer utilizar o +comando `kubectl` para criar um Secret de registro de contêineres, você pode +rodar o comando: ```shell -cat <> kustomization.yaml -resources: -- pod.yaml -EOF +kubectl create secret docker-registry secret-tiger-docker \ + --docker-email=tiger@acme.example \ + --docker-username=tiger \ + --docker-password=pass1234 \ + --docker-server=my-registry.example:5000 ``` -Crie todos estes objetos no servidor da API rodando o comando: +Esse comando cria um secret do tipo `kubernetes.io/dockerconfigjson`. Se você +obtiver o conteúdo do campo `.data.dockerconfigjson` deste novo Secret e +decodificá-lo do formato base64: ```shell -kubectl apply -k . +kubectl get secret secret-tiger-docker -o jsonpath='{.data.*}' | base64 -d ``` -Ambos os contêineres terão os seguintes arquivos presentes nos seus sistemas de -arquivos, com valores para cada um dos ambientes dos contêineres: +o resultado será equivalente a este documento JSON (que também é um arquivo de +configuração válido do Docker): -``` -/etc/secret-volume/username -/etc/secret-volume/password +```json +{ + "auths": { + "my-registry.example:5000": { + "username": "tiger", + "password": "pass1234", + "email": "tiger@acme.example", + "auth": "dGlnZXI6cGFzczEyMzQ=" + } + } +} ``` -Observe como as `spec`s para cada um dos Pods diverge somente em um campo. Isso -facilita a criação de Pods com capacidades diferentes a partir de um template -mais genérico. +{{< note >}} +O valor do campo `auth` no exemplo acima é codificado em base64; ele está +ofuscado mas não criptografado. Qualquer pessoa com acesso a este Secret pode +ler o conteúdo do token _bearer_. +{{< /note >}} -Você pode simplificar ainda mais a definição básica do Pod através da utilização -de duas service accounts diferentes: +### Secret de autenticação básica -1. `prod-user` com o Secret `prod-db-secret` -1. `test-user` com o Secret `test-db-secret` +O tipo `kubernetes.io/basic-auth` é fornecido para armazenar credenciais +necessárias para autenticação básica. Ao utilizar este tipo de Secret, o campo +`data` do Secret deve conter as duas chaves abaixo: -A especificação do Pod é reduzida para: +- `username`: o usuário utilizado para autenticação; +- `password`: a senha ou token para autenticação. + +Ambos os valores para estas duas chaves são textos codificados em formato base64. +Você pode fornecer os valores como texto simples utilizando o campo `stringData` +na criação do Secret. +O arquivo YAML abaixo é um exemplo de configuração para um Secret de autenticação +básica: ```yaml apiVersion: v1 -kind: Pod +kind: Secret metadata: - name: prod-db-client-pod - labels: - name: prod-db-client -spec: - serviceAccount: prod-db-client - containers: - - name: db-client-container - image: myClientImage + name: secret-basic-auth +type: kubernetes.io/basic-auth +stringData: + username: admin # required field for kubernetes.io/basic-auth + password: t0p-Secret # required field for kubernetes.io/basic-auth ``` -### Caso de uso: _dotfiles_ em um volume de Secret +O tipo de autenticação básica é fornecido unicamente por conveniência. Você pode +criar um Secret do tipo `Opaque` utilizado para autenticação básica. No entanto, +utilizar o tipo embutido e público de Secret (`kubernetes.io/basic-auth`) +auxilia outras pessoas a compreenderem o propósito do seu Secret, e define uma +convenção de expectativa de nomes de chaves +O tipo embutido também fornece verificação dos campos requeridos pelo servidor +da API. 
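+
+Apenas como um esboço equivalente ao manifesto acima (os valores `admin` e
+`t0p-Secret` são meramente ilustrativos), o mesmo Secret poderia ser criado de
+forma imperativa com o `kubectl`, informando o tipo explicitamente:
+
+```shell
+kubectl create secret generic secret-basic-auth \
+  --type=kubernetes.io/basic-auth \
+  --from-literal=username=admin \
+  --from-literal=password='t0p-Secret'
+```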
-Você pode fazer com que seus dados fiquem "ocultos" definindo uma chave que se -inicia com um ponto (`.`). Este tipo de chave representa um _dotfile_, ou -arquivo "oculto". Por exemplo, quando o Secret abaixo é montado em um volume, -`secret-volume`: +### Secret de autenticação SSH + +O tipo embutido `kubernetes.io/ssh-auth` é fornecido para armazenamento de dados +utilizados em autenticação SSH. Ao utilizar este tipo de Secret, você deve +especificar um par de chave-valor `ssh-privatekey` no campo `data` (ou no campo +`stringData`) com a credencial SSH a ser utilizada. + +O manifesto abaixo é um exemplo de configuração para um Secret de autenticação +SSH com um par de chaves pública/privada: ```yaml apiVersion: v1 kind: Secret metadata: - name: dotfile-secret + name: secret-ssh-auth +type: kubernetes.io/ssh-auth data: - .secret-file: dmFsdWUtMg0KDQo= ---- + # os dados estão abreviados neste exemplo + ssh-privatekey: | + MIIEpQIBAAKCAQEAulqb/Y ... +``` + +O Secret de autenticação SSH é fornecido apenas para a conveniência do usuário. +Você pode criar um Secret do tipo `Opaque` para credentials utilizadas para +autenticação SSH. No entanto, a utilização do tipo embutido e público de Secret +(`kubernetes.io/ssh-auth`) auxilia outras pessoas a compreenderem o propósito do +seu Secret, e define uma convenção de quais chaves podem ser esperadas. +O tipo embutido também fornece verificação dos campos requeridos em uma +configuração de Secret. + +{{< caution >}} +Chaves privadas SSH não estabelecem, por si só, uma comunicação confiável +entre um cliente SSH e um servidor. Uma forma secundária de estabelecer +confiança é necessária para mitigar ataques _man-in-the-middle_ (MITM), como por +exemplo um arquivo `known_hosts` adicionado a um ConfigMap. +{{< /caution >}} + +### Secrets TLS + +O Kubernetes fornece o tipo embutido de Secret `kubernetes.io/tls` para +armazenamento de um certificado e sua chave associada que são tipicamente +utilizados para TLS. + +Uma utilização comum de Secrets TLS é a configuração de encriptação em trânsito +para um recurso [Ingress](/docs/concepts/services-networking/ingress/), mas +este tipo de secret pode também ser utilizado com outros recursos ou diretamente +por uma carga de trabalho. + +Ao utilizar este tipo de Secret, as chaves `tls.key` e `tls.crt` devem ser +informadas no campo `data` (ou `stringData`) da configuração do Secret, embora o +servidor da API não valide o conteúdo de cada uma destas chaves. + +O YAML a seguir tem um exemplo de configuração para um Secret TLS: + +```yaml apiVersion: v1 -kind: Pod +kind: Secret metadata: - name: secret-dotfiles-pod -spec: - volumes: - - name: secret-volume - secret: - secretName: dotfile-secret - containers: - - name: dotfile-test-container - image: k8s.gcr.io/busybox - command: - - ls - - "-l" - - "/etc/secret-volume" - volumeMounts: - - name: secret-volume - readOnly: true - mountPath: "/etc/secret-volume" + name: secret-tls +type: kubernetes.io/tls +data: + # os dados estão abreviados neste exemplo + tls.crt: | + MIIC2DCCAcCgAwIBAgIBATANBgkqh ... + tls.key: | + MIIEpgIBAAKCAQEA7yn3bRHQ5FHMQ ... ``` -Este volume irá conter um único arquivo, chamado `.secret-file`, e o contêiner -`dotfile-test-container` terá este arquivo presente no caminho -`/etc/secret-volume/.secret-file`. +O tipo TLS é fornecido para a conveniência do usuário. Você pode criar um +Secret do tipo `Opaque` para credenciais utilizadas para o servidor e/ou +cliente TLS. 
No entanto, a utilização do tipo embutido auxilia a manter a +consistência dos formatos de Secret no seu projeto; o servidor da API +valida se os campos requeridos estão presentes na configuração do Secret. + +Ao criar um Secret TLS utilizando a ferramenta de linha de comando `kubectl`, +você pode utilizar o subcomando `tls` conforme demonstrado no exemplo abaixo: +```shell +kubectl create secret tls my-tls-secret \ + --cert=path/to/cert/file \ + --key=path/to/key/file +``` + +O par de chaves pública/privada deve ser criado previamente. O certificado +de chave pública a ser utilizado no argumento `--cert` deve ser codificado em +formato DER conforme especificado na +[seção 5.1 da RFC 7468](https://datatracker.ietf.org/doc/html/rfc7468#section-5.1) +e deve corresponder à chave privada fornecida no argumento `--key` +(PKCS #8 no formato DER; +[seção 11 da RFC 7468](https://datatracker.ietf.org/doc/html/rfc7468#section-11)). {{< note >}} -Arquivos com nomes iniciados por um caractere de ponto são ocultos do resultado -do comando `ls -l`. Você precisa utilizar `ls -la` para vê-los ao listar o -conteúdo de um diretório. +Um Secret kubernetes.io/tls armazena o conteúdo de chaves e certificados em +formato DER codificado em base64. Se você tem familiaridade com o formato PEM +para chaves privadas e certificados, o conteúdo é o mesmo do formato PEM, +excluindo-se a primeira e a última linhas. + +Por exemplo, para um certificado, você **não** inclui as linhas +`--------BEGIN CERTIFICATE-----` e `-------END CERTIFICATE----`. {{< /note >}} -### Caso de uso: Secret visível somente em um dos contêineres de um pod {#use-case-secret-visible-to-one-container-in-a-pod} +### Secret de token de autoinicialização {#bootstrap-token-secrets} -Suponha que um programa necessita manipular requisições HTTP, executar regras -de negócio complexas e então assinar mensagens com HMAC. Devido à natureza -complexa da aplicação, pode haver um _exploit_ despercebido que lê arquivos -remotos no servidor e que poderia expor a chave privada para um invasor. +Um Secret de token de autoinicialização pode ser criado especificando o tipo de +um Secret explicitamente com o valor `bootstrap.kubernetes.io/token`. Este tipo +de Secret é projetado para tokens utilizados durante o processo de inicialização +de nós. Este tipo de Secret armazena tokens utilizados para assinar ConfigMaps +conhecidos. -Esta aplicação poderia ser dividida em dois processos, separados em dois -contêineres distintos: um contêiner de _front-end_, que manipula as interações -com o usuário e a lógica de negócio, mas não consegue ver a chave privada; e -um contêiner assinador, que vê a chave privada e responde a requisições simples -de assinatura do _front-end_ (por exemplo, através de rede local). +Um Secret de token de autoinicialização é normalmente criado no namespace +`kube-system` e nomeado na forma `bootstrap-token-`, onde +`` é um texto com 6 caracteres contendo a identificação do token. -Com essa abordagem particionada, um invasor agora precisa forçar o servidor de -aplicação a rodar comandos arbitrários, o que é mais difícil de ser feito do que -apenas ler um arquivo presente no disco. 
+No formato de manifesto do Kubernetes, um Secret de token de autoinicialização +se assemelha ao exemplo abaixo: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: bootstrap-token-5emitj + namespace: kube-system +type: bootstrap.kubernetes.io/token +data: + auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4= + expiration: MjAyMC0wOS0xM1QwNDozOToxMFo= + token-id: NWVtaXRq + token-secret: a3E0Z2lodnN6emduMXAwcg== + usage-bootstrap-authentication: dHJ1ZQ== + usage-bootstrap-signing: dHJ1ZQ== +``` - +Um Secret do tipo token de autoinicialização possui as seguintes chaves no campo +`data`: + +- `token-id`: Uma string com 6 caracteres aleatórios como identificador do + token. Requerido. +- `token-secret`: Uma string de 16 caracteres aleatórios como o conteúdo secreto + do token. Requerido. +- `description`: Uma string contendo uma descrição do propósito para o qual este + token é utilizado. Opcional. +- `expiration`: Um horário absoluto UTC no formato RFC3339 especificando quando + o token deve expirar. Opcional. +- `usage-bootstrap-`: Um conjunto de flags booleanas indicando outros + usos para este token de autoinicialização. +- `auth-extra-groups`: Uma lista separada por vírgulas de nomes de grupos que + serão autenticados adicionalmente, além do grupo `system:bootstrappers`. -## Melhores práticas +O YAML acima pode parecer confuso, já que os valores estão todos codificados em +formato base64. Você pode criar o mesmo Secret utilizando este YAML: +```yaml +apiVersion: v1 +kind: Secret +metadata: + # Observe como o Secret é nomeado + name: bootstrap-token-5emitj + # Um Secret de token de inicialização geralmente fica armazenado no namespace + # kube-system + namespace: kube-system +type: bootstrap.kubernetes.io/token +stringData: + auth-extra-groups: "system:bootstrappers:kubeadm:default-node-token" + expiration: "2020-09-13T04:39:10Z" + # Esta identificação de token é utilizada no nome + token-id: "5emitj" + token-secret: "kq4gihvszzgn1p0r" + # Este token pode ser utilizado para autenticação + usage-bootstrap-authentication: "true" + # e pode ser utilizado para assinaturas + usage-bootstrap-signing: "true" +``` -### Clientes que utilizam a API de Secrets +## Secrets imutáveis {#secret-immutable} -Ao instalar aplicações que interajam com a API de Secrets, você deve limitar o -acesso utilizando [políticas de autorização](/docs/reference/access-authn-authz/authorization/) -como [RBAC](/docs/reference/access-authn-authz/rbac/). +{{< feature-state for_k8s_version="v1.21" state="stable" >}} -Secrets frequentemente contém valores com um espectro de importância, muitos dos -quais podem causar escalações dentro do Kubernetes (por exemplo, tokens de service -account) e de sistemas externos. Mesmo que um aplicativo individual possa -avaliar o poder do Secret com o qual espera interagir, outras aplicações dentro -do mesmo namespace podem tornar estas suposições inválidas. +O Kubernetes permite que você marque Secrets (e ConfigMaps) específicos como +_imutáveis_. Prevenir mudanças nos dados de um Secret existente tem os seguintes +benefícios: -Por estas razões, as requisições `watch` (observar) e `list` (listar) de -Secrets dentro de um namespace são permissões extremamente poderosas e devem -ser evitadas, pois a listagem de Secrets permite a clientes inspecionar os -valores de todos os Secrets presentes naquele namespace. 
A habilidade de listar -e observar todos os Secrets em um cluster deve ser reservada somente para os -componentes mais privilegiados, que fazem parte do nível de aplicações de sistema. +- protege você de alterações acidentais (ou indesejadas) que poderiam provocar + disrupções em aplicações. +- em clusters com uso extensivo de Secrets (pelo menos dezenas de milhares de + montagens únicas de Secrets a Pods), utilizar Secrets imutáveis melhora o + desempenho do seu cluster através da redução significativa de carga no + kube-apiserver. O kubelet não precisa manter um _watch_ em Secrets que são + marcados como imutáveis. -Aplicações que necessitam acessar a API de Secret devem realizar uma requisição -`get` nos Secrets que precisam. Isto permite que administradores restrinjam o -acesso a todos os Secrets, enquanto -[utilizam uma lista de autorização a instâncias individuais](/docs/reference/access-authn-authz/rbac/#referring-to-resources) -que a aplicação precise. +### Marcando um Secret como imutável {#secret-immutable-create} -Para melhor desempenho em uma requisição `get` repetitiva, clientes podem criar -objetos que referenciam o Secret e então utilizar a requisição `watch` neste -novo objeto, requisitando o Secret novamente quando a referência mudar. -Além disso, uma [API de "observação em lotes"](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/bulk_watch.md) -para permitir a clientes observar recursos individuais também foi proposta e -provavelmente estará disponível em versões futuras do Kubernetes. +Você pode criar um Secret imutável adicionando o campo `immutable` com o valor +`true` ao manifesto do Secret. Por exemplo: +```yaml +apiVersion: v1 +kind: Secret +metadata: + ... +data: + ... +immutable: true +``` -## Propriedades de segurança +Você pode também atualizar qualquer Secret mutável existente para torná-lo +imutável. -### Proteções +{{< note >}} +Uma vez que um Secret ou ConfigMap seja marcado como imutável, _não_ é mais +possível reverter esta mudança, nem alterar os conteúdos do campo `data`. Você +pode somente apagar e recriar o Secret. Pods existentes mantém um ponto de +montagem referenciando o Secret removido - é recomendado recriar tais Pods. +{{< /note >}} -Como Secrets podem ser criados de forma independente de Pods que os utilizam, -há menos risco de um Secret ser exposto durante o fluxo de trabalho de criação, -visualização, e edição de Pods. O sistema pode também tomar precauções adicionais -com Secrets, como por exemplo evitar que sejam escritos em disco quando possível. +## Informações de segurança sobre Secrets {#information-security-for-secrets} -Um Secret só é enviado para um nó se um Pod naquele nó requerê-lo. O kubelet -armazena o Secret num sistema de arquivos `tmpfs`, de forma a evitar que o Secret -seja escrito em armazenamento persistente. Uma vez que o Pod que depende do -Secret é removido, o kubelet apaga sua cópia local do Secret também. +Embora ConfigMaps e Secrets funcionem de formas similares, o Kubernetes aplica +proteções extras aos objetos Secret. -Secrets de vários Pods diferentes podem existir no mesmo nó. No entanto, somente -os Secrets que um Pod requerer estão potencialmente visíveis em seus contêineres. -Portanto, um Pod não tem acesso aos Secrets de outro Pod. +Secrets frequentemente contém valores dentro de um espectro de importância, +muitos dos quais podem provocar escalações de privilégios dentro do Kubernetes +(por exemplo, um token de service account) e em sistemas externos. 
Mesmo que uma +aplicação individual possa avaliar o poder dos Secrets com os quais espera +interagir, outras aplicações dentro do mesmo namespace podem tornar tais +suposições inválidas. -Um Pod pode conter vários contêineres. Porém, cada contêiner em um Pod precisa -requerer o volume de Secret nos seus `volumeMounts` para que este fique visível -dentro do contêiner. Esta característica pode ser utilizada para construir -[partições de segurança ao nível do Pod](#use-case-secret-visible-to-one-container-in-a-pod). +Um Secret só é enviado a um nó se um Pod naquele nó precisa do Secret em questão. +Para montar Secrets em Pods, o kubelet armazena uma cópia dos dados dentro de um +sistema de arquivos `tmpfs`, de modo que os dados confidenciais não sejam +escritos em armazenamento durável. Uma vez que o Pod que dependia do Secret seja +removido, o kubelet apaga sua cópia local dos dados confidenciais do Secret. -Na maioria das distribuições do Kubernetes, a comunicação entre usuários e o -servidor da API e entre servidor da API e os kubelets é protegida por SSL/TLS. -Secrets são protegidos quando transmitidos através destes canais. +Um Pod pode possuir vários contêineres. Por padrão, contêineres que você define +têm acesso somente à ServiceAccount padrão e seu Secret relacionado. Você deve +explicitamente definir variáveis de ambiente ou mapear um volume dentro de um +contêiner para ter acesso a qualquer outro Secret. -{{< feature-state for_k8s_version="v1.13" state="beta" >}} +Podem haver Secrets para vários Pods no mesmo nó. No entanto, somente os Secrets +que um Pod requisitou estão potencialmente visíveis dentro de seus contêineres. +Portanto, um Pod não tem acesso aos Secrets de outro Pod. -Você pode habilitar [encriptação em disco](/docs/tasks/administer-cluster/encrypt-data/) -em dados de Secret para evitar que estes sejam armazenados em texto plano no -{{< glossary_tooltip term_id="etcd" >}}. +{{< warning >}} +Quaisquer contêineres privilegiados em um nó são passíveis de acesso a todos os +Secrets naquele nó. +{{< /warning >}} + +### Recomendações de segurança para desenvolvedores + +- Aplicações ainda devem proteger o valor da informação confidencial após lê-la + de uma variável de ambiente ou volume. Por exemplo, sua aplicação deve evitar + imprimir os dados do Secret sem encriptação ou transmitir esta informação para + aplicações terceiras de confiabilidade não-estabelecida. +- Se você estiver definindo múltiplos contêineres em um Pod, e somente um destes + contêineres necessita acesso a um Secret, defina o volume ou variável de + ambiente de maneira que os demais contêineres não tenham acesso àquele Secret. +- Se você configurar um Secret através de um {{< glossary_tooltip text="manifesto" term_id="manifest" >}}, + com os dados codificados em formato base64, compartilhar este arquivo ou + salvá-lo em um sistema de controle de versão de código-fonte significa que o + Secret está disponível para qualquer pessoa que pode ler o manifesto. O formato + base64 _não é_ um método de encriptação e não fornece nenhuma confidencialidade + adicional em comparação com texto puro. +- Ao instalar aplicações que interagem com a API de Secrets, você deve limitar + o acesso utilizando + [políticas de autorização](/docs/reference/access-authn-authz/authorization/), + como por exemplo [RBAC](/docs/reference/access-authn-authz/rbac/). +- Na API do Kubernetes, requisições `watch` e `list` em Secrets dentro de um + namespace são extremamente poderosas. 
Evite fornecer este acesso quando + possível, já que listar Secrets permite aos clientes inspecionar os valores de + todos os Secrets naquele namespace. + +### Recomendações de segurança para administradores de cluster -### Riscos +{{< caution >}} +Um usuário que pode criar um Pod que utiliza um Secret pode também ver o valor +daquele Secret. Mesmo que as permissões do cluster não permitam ao usuário ler +o Secret diretamente, o mesmo usuário poderia ter acesso a criar um Pod que +então expõe o Secret. +{{< /caution >}} -- No servidor da API, os dados de Secret são armazenados no +- Restrinja a habilidade de usar as requisições `watch` e `list` para listar todos + os Secrets em um cluster (utilizando a API do Kubernetes) de modo que somente + os componentes mais privilegiados e de nível de sistema possam realizar esta + ação. +- Ao instalar aplicações que interajam com a API de Secrets, você deve limitar o + acesso utilizando + [políticas de autorização](/docs/reference/access-authn-authz/authorization/), + como por exemplo [RBAC](/docs/reference/access-authn-authz/rbac/). +- No servidor da API, objetos (incluindo Secrets) são persistidos no {{< glossary_tooltip term_id="etcd" >}}; portanto: - - Administradores devem habilitar encriptação em disco para dados do cluster - (requer Kubernetes v1.13 ou posterior). - - Administradores devem limitar o acesso ao etcd somente para usuários - administradores. - - Administradores podem desejar apagar definitivamente ou destruir discos - previamente utilizados pelo etcd que não estiverem mais em uso. - - Ao executar o etcd em um cluster, administradores devem garantir o uso de - SSL/TLS para conexões ponto-a-ponto do etcd. -- Se você configurar um Secret utilizando um arquivo de manifesto (JSON ou - YAML) que contém os dados do Secret codificados como base64, compartilhar - este arquivo ou salvá-lo num sistema de controle de versão de código-fonte - compromete este Secret. Codificação base64 _não_ é um método de encriptação - e deve ser considerada idêntica a texto plano. -- Aplicações ainda precisam proteger o valor do Secret após lê-lo de um volume, - como por exemplo não escrever seu valor em logs ou enviá-lo para um sistema - não-confiável. -- Um usuário que consegue criar um Pod que utiliza um Secret também consegue - ler o valor daquele Secret. Mesmo que o servidor da API possua políticas para - impedir que aquele usuário leia o valor do Secret, o usuário poderia criar um - Pod que expõe o Secret. + - somente permita a administradores do sistema o acesso ao etcd (incluindo + acesso somente-leitura); + - habilite [encriptação em disco](/docs/tasks/administer-cluster/encrypt-data/) + para objetos Secret, de modo que os dados de tais Secrets não sejam + armazenados em texto plano no {{< glossary_tooltip term_id="etcd" >}}; + - considere a destruição do armazenamento durável previamente utilizado pelo + etcd quando não estiver mais em uso; + - se houverem múltiplas instâncias do etcd em uso, garanta que o etcd esteja + configurado para utilizar SSL/TLS para comunicação entre instâncias. 
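+
+Como complemento às recomendações acima, o esboço a seguir (assumindo, apenas
+para fins de ilustração, o Secret `prod-db-secret` citado anteriormente nesta
+página, no namespace `default`) mostra uma Role RBAC que concede somente a
+requisição `get` sobre um único Secret nomeado, em vez de permitir `list` ou
+`watch` sobre todos os Secrets do namespace:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  namespace: default
+  name: leitor-prod-db-secret
+rules:
+- apiGroups: [""]
+  resources: ["secrets"]
+  # restringe o acesso a uma única instância de Secret
+  resourceNames: ["prod-db-secret"]
+  verbs: ["get"]
+```
+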
## {{% heading "whatsnext" %}} - Aprenda a [gerenciar Secrets utilizando `kubectl`](/pt-br/docs/tasks/configmap-secret/managing-secret-using-kubectl/) - Aprenda a [gerenciar Secrets utilizando arquivos de configuração](/pt-br/docs/tasks/configmap-secret/managing-secret-using-config-file/) - Aprenda a [gerenciar Secrets utilizando kustomize](/pt-br/docs/tasks/configmap-secret/managing-secret-using-kustomize/) -- Leia a [documentação de referência da API](/docs/reference/kubernetes-api/config-and-storage-resources/secret-v1/) de `Secrets` +- Leia a [documentação de referência da API](/docs/reference/kubernetes-api/config-and-storage-resources/secret-v1/) de Secrets From 60ee2c2d14f84ef12f48f68d864f85dc0b9f03ef Mon Sep 17 00:00:00 2001 From: mtardy Date: Tue, 28 Jun 2022 21:11:59 +0200 Subject: [PATCH 028/292] Add the documentation on the kubernetes.io/psp annotation --- .../en/docs/reference/labels-annotations-taints/_index.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/content/en/docs/reference/labels-annotations-taints/_index.md b/content/en/docs/reference/labels-annotations-taints/_index.md index 4b74774ea7888..6af05872833eb 100644 --- a/content/en/docs/reference/labels-annotations-taints/_index.md +++ b/content/en/docs/reference/labels-annotations-taints/_index.md @@ -618,6 +618,12 @@ or updating objects that contain Pod templates, such as Deployments, Jobs, State See [Enforcing Pod Security at the Namespace Level](/docs/concepts/security/pod-security-admission) for more information. +### kubernetes.io/psp (deprecated) {#kubernetes-io-psp} + +Example: `kubernetes.io/psp: restricted` + +Value is the name of the [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/) that was validated against the ressource. + ### seccomp.security.alpha.kubernetes.io/pod (deprecated) {#seccomp-security-alpha-kubernetes-io-pod} This annotation has been deprecated since Kubernetes v1.19 and will become non-functional in v1.25. From 453f4e61f6255c43ad8de6c65cce7e161922d919 Mon Sep 17 00:00:00 2001 From: mtardy Date: Tue, 28 Jun 2022 21:12:30 +0200 Subject: [PATCH 029/292] Reference the kubernetes.io/psp annotation on the PodSecurityPolicy concept page --- .../docs/concepts/security/pod-security-policy.md | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-) diff --git a/content/en/docs/concepts/security/pod-security-policy.md b/content/en/docs/concepts/security/pod-security-policy.md index cc0acc410dc6c..6b061b830e01e 100644 --- a/content/en/docs/concepts/security/pod-security-policy.md +++ b/content/en/docs/concepts/security/pod-security-policy.md @@ -214,6 +214,9 @@ controller selects policies according to the following criteria: 2. If the pod must be defaulted or mutated, the first PodSecurityPolicy (ordered by name) to allow the pod is selected. +When a Pod is validated against a PodSecurityPolicy, [a `kubernetes.io/psp` annotation](/docs/reference/labels-annotations-taints/#kubernetes-io-psp) +is added with its name as its value. + {{< note >}} During update operations (during which mutations to pod specs are disallowed) only non-mutating PodSecurityPolicies are used to validate the pod. @@ -332,7 +335,15 @@ The output is similar to this pod "pause" created ``` -It works as expected! But any attempts to create a privileged pod should still +It works as expected! 
You can verify that the pod was validated against the +newly created PodSecurityPolicy: + +```shell +kubectl-user get pod pause -o yaml | grep kubernetes.io/psp +kubernetes.io/psp: example +``` + +But any attempts to create a privileged pod should still be denied: ```shell From 9ffd24b78d841fd7e0b2ba3e492618ed369a8f63 Mon Sep 17 00:00:00 2001 From: mtardy Date: Tue, 28 Jun 2022 21:20:08 +0200 Subject: [PATCH 030/292] Use absolute URL in the tuto for the example PSP --- content/en/docs/concepts/security/pod-security-policy.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/concepts/security/pod-security-policy.md b/content/en/docs/concepts/security/pod-security-policy.md index 6b061b830e01e..329382482eaef 100644 --- a/content/en/docs/concepts/security/pod-security-policy.md +++ b/content/en/docs/concepts/security/pod-security-policy.md @@ -258,7 +258,7 @@ The name of a PodSecurityPolicy object must be a valid And create it with kubectl: ```shell -kubectl-admin create -f example-psp.yaml +kubectl-admin create -f https://k8s.io/examples/policy/example-psp.yaml ``` Now, as the unprivileged user, try to create a simple pod: From f5405bf453f9afb54dfa982bfb9b5ad141ff299c Mon Sep 17 00:00:00 2001 From: Michael Date: Wed, 29 Jun 2022 07:36:29 +0800 Subject: [PATCH 031/292] [pt-br] fix link about Katacoda --- content/pt-br/includes/task-tutorial-prereqs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/pt-br/includes/task-tutorial-prereqs.md b/content/pt-br/includes/task-tutorial-prereqs.md index eb4177b4fd371..66b20b849f2f1 100644 --- a/content/pt-br/includes/task-tutorial-prereqs.md +++ b/content/pt-br/includes/task-tutorial-prereqs.md @@ -2,5 +2,5 @@ Você precisa de um cluster Kubernetes e a ferramenta de linha de comando kubect precisa estar configurada para acessar o seu cluster. Se você ainda não tem um cluster, pode criar um usando o [minikube](/docs/tasks/tools/#minikube) ou você pode usar um dos seguintes ambientes: -* [Katacoda](https://www.katacoda.com/courses/kubernetes/playground) +* [Killercoda](https://killercoda.com/playgrounds/scenario/kubernetes) * [Play with Kubernetes](http://labs.play-with-k8s.com/) From 3b8a2a01fab70424aa4b3ed05069e65c15feeff6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Mah=C3=A9?= Date: Wed, 29 Jun 2022 09:26:06 +0200 Subject: [PATCH 032/292] Clarify the reference to the psp annotation in the concept page Co-authored-by: Tim Bannister --- content/en/docs/concepts/security/pod-security-policy.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/concepts/security/pod-security-policy.md b/content/en/docs/concepts/security/pod-security-policy.md index 329382482eaef..59215c338068c 100644 --- a/content/en/docs/concepts/security/pod-security-policy.md +++ b/content/en/docs/concepts/security/pod-security-policy.md @@ -215,7 +215,7 @@ controller selects policies according to the following criteria: (ordered by name) to allow the pod is selected. When a Pod is validated against a PodSecurityPolicy, [a `kubernetes.io/psp` annotation](/docs/reference/labels-annotations-taints/#kubernetes-io-psp) -is added with its name as its value. +is added to the Pod, with the name of the PodSecurityPolicy as the annotation value. 
{{< note >}} During update operations (during which mutations to pod specs are disallowed) From 23eea7e12262779e4558692ce34c68c09502efa0 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Mah=C3=A9?= Date: Wed, 29 Jun 2022 09:27:42 +0200 Subject: [PATCH 033/292] Add more context in the annotation page Co-authored-by: Tim Bannister --- .../en/docs/reference/labels-annotations-taints/_index.md | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/content/en/docs/reference/labels-annotations-taints/_index.md b/content/en/docs/reference/labels-annotations-taints/_index.md index 6af05872833eb..d132642e2c51e 100644 --- a/content/en/docs/reference/labels-annotations-taints/_index.md +++ b/content/en/docs/reference/labels-annotations-taints/_index.md @@ -622,7 +622,11 @@ for more information. Example: `kubernetes.io/psp: restricted` -Value is the name of the [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/) that was validated against the ressource. +This annotation is only relevant if you are using [PodSecurityPolicies](/docs/concepts/security/pod-security-policy/). + +When the PodSecurityPolicy admission controller admits a Pod, the admission controller +modifies the Pod to have this annotation. +The value of the annotation is the name of the PodSecurityPolicy that was used for validation. ### seccomp.security.alpha.kubernetes.io/pod (deprecated) {#seccomp-security-alpha-kubernetes-io-pod} From 8a4e62fb766e8efc296ca91823b210941fb24bcd Mon Sep 17 00:00:00 2001 From: mtardy Date: Wed, 29 Jun 2022 09:36:11 +0200 Subject: [PATCH 034/292] Separate commands from their outputs --- .../concepts/security/pod-security-policy.md | 23 +++++++++++++++++++ 1 file changed, 23 insertions(+) diff --git a/content/en/docs/concepts/security/pod-security-policy.md b/content/en/docs/concepts/security/pod-security-policy.md index 59215c338068c..b43bfd4c5e939 100644 --- a/content/en/docs/concepts/security/pod-security-policy.md +++ b/content/en/docs/concepts/security/pod-security-policy.md @@ -287,6 +287,11 @@ pod's service account nor `fake-user` have permission to use the new policy: ```shell kubectl-user auth can-i use podsecuritypolicy/example +``` + +The output is similar to this: + +``` no ``` @@ -303,14 +308,27 @@ kubectl-admin create role psp:unprivileged \ --verb=use \ --resource=podsecuritypolicy \ --resource-name=example +``` + +``` role "psp:unprivileged" created +``` +```shell kubectl-admin create rolebinding fake-user:psp:unprivileged \ --role=psp:unprivileged \ --serviceaccount=psp-example:fake-user +``` + +``` rolebinding "fake-user:psp:unprivileged" created +``` +```shell kubectl-user auth can-i use podsecuritypolicy/example +``` + +``` yes ``` @@ -340,6 +358,11 @@ newly created PodSecurityPolicy: ```shell kubectl-user get pod pause -o yaml | grep kubernetes.io/psp +``` + +The output is similar to this + +``` kubernetes.io/psp: example ``` From 1d55061a5a748bd9cd953470c847ac012cc2cf8f Mon Sep 17 00:00:00 2001 From: mtardy Date: Wed, 29 Jun 2022 09:37:23 +0200 Subject: [PATCH 035/292] Remove the part about defining a PSP in a file --- content/en/docs/concepts/security/pod-security-policy.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/content/en/docs/concepts/security/pod-security-policy.md b/content/en/docs/concepts/security/pod-security-policy.md index b43bfd4c5e939..c296f31cc2c74 100644 --- a/content/en/docs/concepts/security/pod-security-policy.md +++ b/content/en/docs/concepts/security/pod-security-policy.md @@ -248,8 +248,7 @@ alias 
kubectl-user='kubectl --as=system:serviceaccount:psp-example:fake-user -n ### Create a policy and a pod -Define the example PodSecurityPolicy object in a file. This is a policy that -prevents the creation of privileged pods. +This is a policy that prevents the creation of privileged pods. The name of a PodSecurityPolicy object must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). From b2034013f8db61d16958f08667008c493fb8afc7 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Sun, 3 Jul 2022 18:22:08 +0200 Subject: [PATCH 036/292] [es] Localize NetworkPolicies Signed-off-by: Nicolas Quiceno B --- .../services-networking/network-policies.md | 288 ++++++++++++++++++ .../network-policy-allow-all-egress.yaml | 11 + .../network-policy-allow-all-ingress.yaml | 11 + .../network-policy-default-deny-all.yaml | 10 + .../network-policy-default-deny-egress.yaml | 9 + .../network-policy-default-deny-ingress.yaml | 9 + .../service/networking/networkpolicy.yaml | 35 +++ 7 files changed, 373 insertions(+) create mode 100644 content/es/docs/concepts/services-networking/network-policies.md create mode 100644 content/es/examples/service/networking/network-policy-allow-all-egress.yaml create mode 100644 content/es/examples/service/networking/network-policy-allow-all-ingress.yaml create mode 100644 content/es/examples/service/networking/network-policy-default-deny-all.yaml create mode 100644 content/es/examples/service/networking/network-policy-default-deny-egress.yaml create mode 100644 content/es/examples/service/networking/network-policy-default-deny-ingress.yaml create mode 100644 content/es/examples/service/networking/networkpolicy.yaml diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md new file mode 100644 index 0000000000000..11b08ddb2c245 --- /dev/null +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -0,0 +1,288 @@ +--- +reviewers: +- raelga +- electrocucaracha +title: Políticas de red (Network Policies) +content_type: concept +weight: 50 +--- + + + +Si quieres controlar el tráfico de red a nivel de dirección IP o de puerto (capa OSI 3 o 4), puedes considerar el uso de Kubernetes NetworkPolicies para las aplicaciones que corren en tu clúster. Las NetworkPolicies son una estructura enfocada en las aplicaciones que permite establecer cómo un {{< glossary_tooltip text="pod" term_id="pod">}} puede comunicarse con otras "entidades" (utilizamos la palabra "entidad" para evitar sobrecargar términos más comunes como "Endpoint" o "Service", que tienen connotaciones específicas de Kubernetes) a través de la red. Las NetworkPolicies se aplican a uno o ambos extremos de la conexión a un Pod, sin afectar a otras conexiones. + +Las entidades con las que un Pod puede comunicarse son de una combinación de estos 3 tipos: + +1. Otros pods permitidos (excepción: un pod no puede bloquear el acceso a sí mismo) +2. Namespaces permitidos +3. Bloqueos de IP (excepción: el tráfico hacia y desde el nodo donde se ejecuta un Pod siempre está permitido, independientemente de la dirección IP del Pod o del nodo) + +Cuando se define una NetworkPolicy basada en pods o espacios de nombres, se utiliza un {{< glossary_tooltip text="selector" term_id="selector">}} para especificar qué tráfico se permite desde y hacia los Pod(s) que coinciden con el selector. 
+ +Por otro lado, cuando se crean NetworkPolicies basadas en IP, se definen políticas basadas en bloques de IP (rangos CIDR). + + + +## Prerrequisitos + +Las políticas de red son implementadas por el [plugin de red](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/). Para usar políticas de red, debes estar utilizando una solución de red que soporte NetworkPolicy. Crear un recurso NetworkPolicy sin un controlador que lo habilite no tendrá ningún efecto. + + +## Dos Tipos de Aislamiento de Pod + +Hay dos tipos de aislamiento para un pod: el aislamiento para la salida y el aislamiento para la entrada. Estos se refieren a las conexiones que pueden establecerse. El término "Aislamiento" en el contexto de este documento no es absoluto, sino que significa "se aplican algunas restricciones". La alternativa, "no aislado para $dirección", significa que no se aplican restricciones en la dirección descrita. Los dos tipos de aislamiento (o no) se declaran independientemente, y ambos son relevantes para una conexión de un pod a otro. + +Por defecto, un pod no está aislado para la salida; todas las conexiones salientes están permitidas. Un pod está aislado para la salida si hay alguna NetworkPolicy con "Egress" en su `policyTypes` que seleccione el pod; decimos que tal política se aplica al pod para la salida. Cuando un pod está aislado para la salida, las únicas conexiones permitidas desde el pod son las permitidas por la lista `egress` de las NetworkPolicy que se aplique al pod para la salida. Los valores de esas listas `egress` se combinan de forma aditiva. + +Por defecto, un pod no está aislado para la entrada; todas las conexiones entrantes están permitidas. Un pod está aislado para la entrada si hay alguna NetworkPolicy con "Ingress" en su `policyTypes` que seleccione el pod; decimos que tal política se aplica al pod para la entrada. Cuando un pod está aislado para la entrada, las únicas conexiones permitidas en el pod son las del nodo del pod y las permitidas por la lista `ingress` de alguna NetworkPolicy que se aplique al pod para la entrada. Los valores de esas listas de direcciones se combinan de forma aditiva. + +Las políticas de red no entran en conflicto; son aditivas. Si alguna política o políticas se aplican a un pod para una dirección determinada, las conexiones permitidas en esa dirección desde ese pod es la unión de lo que permiten las políticas aplicables. Por tanto, el orden de evaluación no afecta al resultado de la política. + +Para que se permita una conexión desde un pod de origen a un pod de destino, tanto la política de salida del pod de origen como la de entrada del pod de destino deben permitir la conexión. Si cualquiera de los dos lados no permite la conexión, ésta no se producirá. + + +## El Recurso NetworkPolicy {#networkpolicy-resource} + +Ver la referencia [NetworkPolicy](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#networkpolicy-v1-networking-k8s-io) para una definición completa del recurso. + +Un ejemplo de NetworkPolicy pudiera ser este: + +{{< codenew file="service/networking/networkpolicy.yaml" >}} + +{{< note >}} +Enviar esto al API Server de su clúster no tendrá ningún efecto a menos que su solución de red tenga soporte de políticas de red. +{{< /note >}} + +__Campos Obligatorios__: Como con todos los otras configuraciones de Kubernetes, una NetworkPolicy +necesita los campos `apiVersion`, `kind`, y `metadata`. 
Para obtener información general +sobre cómo funcionan esos ficheros de configuración, mirar +[Configurar un Pod para usar un ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/), +y [Gestión de Objetos](/docs/concepts/overview/working-with-objects/object-management). + +__spec__: NetworkPolicy [spec](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) contiene toda la información necesaria para definir una política de red dado un Namespace. + +__podSelector__: Cada NetworkPolicy incluye un `podSelector` el cual selecciona el grupo de Pods en los cuales aplica la política. La política de ejemplo selecciona pods con el label "role=db". Un `podSelector` vacío selecciona todos los Pods en un Namespace. + +__policyTypes__: Cada NetworkPolicy incluye una lista de `policyTypes` la cual puede incluir `Ingress`, `Egress`, o ambas. Los campos `policyTypes` indican si la política aplica o no aplica al tráfico de entrada hacia el Pod seleccionado, el tráfico de salida desde el Pods seleccionado, o ambos. Si no se especifican `policyTypes` en una NetworkPolicy el valor `Ingress` será siempre aplicado por defecto y `Egress` será aplicado si la NetworkPolicy contiene alguna regla de salida. + +__ingress__: Cada NetworkPolicy puede incluir una lista de reglas `ingress` permitidas. Cada regla permite el tráfico con que se corresponda a ambos valores de las secciones de `from` y `ports`. La política de ejemplo contiene una única regla, la cual se corresponde con el tráfico sobre un solo puerto, desde uno de los tres orígenes definidos, el primero especificado por el valor `ipBlock`, el segundo especificado por el valor `namespaceSelector` y el tercero especificado por el `podSelector`. + +__egress__: Cada NetworkPolicy puede incluir una lista de reglas de `egress` permitidas. Cada regla permite el tráfico con que se corresponda a ambos valores de las secciones de `to` and `ports`. La política de ejemplo contiene una única regla, la cual se corresponde con el tráfico en un único puerto para cualquier destino en el rango de IPs `10.0.0.0/24`. + +Por lo tanto, la NetworkPolicy de ejemplo: + +1. Aísla los pods "role=db" en el "default" namespace para ambos tipos de tráfico ingress y egress (si ellos no están aún aislados) +2. (Reglas Ingress) permite la coneccion hacia todos los pods en el "default" namespace con el label "role=db" en el puerto TCP 6379 desde los siguientes orígenes: + + * cualquier pod en el "default" namespace con el label "role=frontend" + * cualquier pod en un namespace con el label "project=myproject" + * La dirección IP en los rangos 172.17.0.0–172.17.0.255 y 172.17.2.0–172.17.255.255 (por ejemplo, todo el rango de IPs de 172.17.0.0/16 con excepción del 172.17.1.0/24) +3. (Egress rules) permite coneccion desde cualquier pods en el "default" namespace con el label "role=db" hacia CIDR 10.0.0.0/24 en el puerto TCP 5978 + +Ver el recorrido de [Declarar Network Policy](/docs/tasks/administer-clúster/declare-network-policy/) para más ejemplos. + + +## Comportamiento de los selectores `to` y `from` + +Existen cuatro tipos de selectores que pueden ser especificados en una sección de `ingress` `from` or en una sección de `egress` `to`: + +__podSelector__: Este selector selecciona Pods específicos en el mismo espacio de nombres que la NetworkPolicy para permitir el tráfico como fuente de entrada o destino de salida. 
+ +__namespaceSelector__: Este selector selecciona espacios de nombres específicos para permitir el tráfico como fuente de entrada o destino de salida. + +__namespaceSelector__ *y* __podSelector__: Una única entrada `to`/`from` que especifique tanto `namespaceSelector` como `podSelector` selecciona Pods específicos dentro de espacios de nombres específicos. Tenga cuidado de utilizar la sintaxis YAML correcta. A continuación se muestra un ejemplo de esta política: + +```yaml + ... + ingress: + - from: + - namespaceSelector: + matchLabels: + user: alice + podSelector: + matchLabels: + role: client + ... +``` + +contiene un único elemento `from` permitiendo conexiones desde los Pods con el label `role=client` en nombres de espacio con el label `user=alice`. Por el contrario, *esta* política: + +```yaml + ... + ingress: + - from: + - namespaceSelector: + matchLabels: + user: alice + - podSelector: + matchLabels: + role: client + ... +``` + + +contiene dos elementos en el array `from`, y permite conecciones desde Pods en el local Namespace con el label `role=client`, *o* desde cualquier Pod en cualquier nombre de espacio con el label `user=alice`. + +En caso de duda, utilice `kubectl describe` para ver cómo Kubernetes ha interpretado la política. + + + +__ipBlock__: Este selector selecciona rangos CIDR de IP específicos para permitirlas como fuentes de entrada o destinos de salida. Estas IPs deben ser externas al clúster, ya que las IPs de Pod son efímeras e impredecibles. + +Los mecanismos de entrada y salida del clúster a menudo requieren reescribir la IP de origen o destino +de los paquetes. En los casos en los que esto ocurre, no está definido si esto ocurre antes o +después del procesamiento de NetworkPolicy, y el comportamiento puede ser diferente para diferentes +combinaciones de plugin de red, proveedor de nube, implementación de `Service`, etc. + +En el caso de la entrada, esto significa que en algunos casos se pueden filtrar paquetes +entrantes basándose en la IP de origen real, mientras que en otros casos, la "IP de origen" sobre la que actúa la +la NetworkPolicy actúa puede ser la IP de un `LoadBalancer` o la IP de Nodo donde este el Pod involucrado, etc. + +Para la salida, esto significa que las conexiones de los pods a las IPs de `Service` que se reescriben a +IPs externas al clúster pueden o no estar sujetas a políticas basadas en `ipBlock`. + + +## Políticas por defecto + +Por defecto, si no existen políticas en un espacio de nombres, se permite todo el tráfico de entrada y salida hacia y desde los pods de ese espacio de nombres. Los siguientes ejemplos muestran cómo cambiar el comportamiento por defecto en ese espacio de nombres. + + +### Denegar todo el tráfico de entrada por defecto + +Puedes crear una política que "por defecto" aisle a un espacio de nombres del tráfico de entrada con la creación de una política que seleccione todos los Pods del espacio de nombres pero no permite ningún tráfico de entrada en esos Pods. + +{{< codenew file="service/networking/network-policy-default-deny-ingress.yaml" >}} + +Esto asegura que incluso los Pods que no están seleccionados por ninguna otra NetworkPolicy también serán aislados del tráfico de entrada. Esta política no afecta el aislamiento en el tráfico de salida desde cualquier Pods. + + +### Permitir todo el tráfico de entrada + +Si tu quieres permitir todo el tráfico de entrada a todos los Pods en un nombre de espacio, puedes crear una política que explícitamente permita eso. 
+ +{{< codenew file="service/networking/network-policy-allow-all-ingress.yaml" >}} + +Con esta política en vigor, ninguna política o políticas adicionales pueden hacer que se deniegue cualquier conexión entrante a esos Pods. Esta política no tiene efecto sobre el aislamiento del tráfico de salida de ningún Pod. + + +### Denegar por defecto todo el tráfico de salida + +Puedes crear una política que "por defecto" aísle el tráfico de salida para un espacio de nombres, creando una NetworkPolicy que seleccione todos los Pods pero que no permita ningún tráfico de salida desde esos Pods. + +{{< codenew file="service/networking/network-policy-default-deny-egress.yaml" >}} + +Esto asegura que incluso los Pods que no son seleccionados por ninguna otra NetworkPolicy no tendrán permitido el tráfico de salida. Esta política no cambia el comportamiento de aislamiento para el tráfico de entrada de ningún Pod. + + +### Permitir todo el tráfico de salida + +Si quieres permitir todas las conexiones desde todos los Pods de un espacio de nombres, puedes crear una política que permita explícitamente todas las conexiones salientes de los Pods de ese espacio de nombres. + +{{< codenew file="service/networking/network-policy-allow-all-egress.yaml" >}} + +Con esta política en vigor, ninguna política o políticas adicionales pueden hacer que se deniegue cualquier conexión de salida desde esos Pods. Esta política no tiene efecto sobre el aislamiento para el tráfico de entrada a ningún Pod. + + +### Denegar por defecto todo el tráfico de entrada y de salida + +Puedes crear una política que, "por defecto", impida en un espacio de nombres todo el tráfico de entrada Y de salida, creando la siguiente NetworkPolicy en ese espacio de nombres. + +{{< codenew file="service/networking/network-policy-default-deny-all.yaml" >}} + +Esto asegura que incluso los Pods que no son seleccionados por ninguna otra NetworkPolicy no tendrán permitido el tráfico de entrada o salida. + + +## Soporte a SCTP + +{{< feature-state for_k8s_version="v1.20" state="stable" >}} + +Como característica estable, está activada por defecto. Para deshabilitar SCTP a nivel de clúster, usted (o el administrador de su clúster) tiene que deshabilitar la [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `SCTPSupport` para el API Server con el flag `--feature-gates=SCTPSupport=false,...`. +Cuando esta feature gate está habilitada, puede establecer el campo `protocol` de una NetworkPolicy como `SCTP`. + +{{< note >}} +Debes utilizar un plugin de {{< glossary_tooltip text="CNI" term_id="cni" >}} que soporte NetworkPolicies con el protocolo SCTP. +{{< /note >}} + + +## Apuntar a un rango de puertos + +{{< feature-state for_k8s_version="v1.22" state="beta" >}} + +Cuando se escribe una NetworkPolicy, se puede apuntar a un rango de puertos en lugar de un solo puerto. + +Esto se puede lograr con el uso del campo `endPort`, como en el siguiente ejemplo: + +```yaml +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: multi-port-egress + namespace: default +spec: + podSelector: + matchLabels: + role: db + policyTypes: + - Egress + egress: + - to: + - ipBlock: + cidr: 10.0.0.0/24 + ports: + - protocol: TCP + port: 32000 + endPort: 32768 +``` + +La regla anterior permite que cualquier Pod con la etiqueta `role=db` en el espacio de nombres `default` se comunique +con cualquier IP dentro del rango `10.0.0.0/24` sobre el protocolo TCP, siempre que el puerto +esté dentro del rango de 32000 a 32768.
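
De forma análoga, el campo `endPort` también puede utilizarse en una regla `ingress`. El siguiente es solo un esquema ilustrativo (el nombre `multi-port-ingress` es hipotético) que permite el tráfico entrante desde el mismo CIDR hacia el mismo rango de puertos de los Pods seleccionados:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: multi-port-ingress   # nombre hipotético, solo a modo de ejemplo
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 32000
          endPort: 32768
```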
+ +Se aplican las siguientes restricciones al utilizar este campo: +* Como característica en estado beta, está activada por defecto. Para desactivar el campo `endPort` a nivel de clúster, usted (o su administrador de clúster) debe desactivar la [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `NetworkPolicyEndPort` +en el API Server con el flag `--feature-gates=NetworkPolicyEndPort=false,...`. +* El campo `endPort` debe ser igual o mayor que el campo `port`. +* Sólo se puede definir `endPort` si también se define `port`. +* Ambos puertos deben ser numéricos. + + +{{< note >}} +Su clúster debe utilizar un plugin de {{< glossary_tooltip text="CNI" term_id="cni" >}} que +soporte el campo `endPort` en las especificaciones de NetworkPolicy. +Si su [plugin de red](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) +no soporta el campo `endPort` y usted especifica una NetworkPolicy que use este campo, +la política se aplicará sólo para el campo `port`. +{{< /note >}} + + +## Cómo apuntar a un Namespace usando su nombre + +{{< feature-state for_k8s_version="1.22" state="stable" >}} + +El plano de control de Kubernetes establece una etiqueta inmutable `kubernetes.io/metadata.name` en todos los +espacios de nombres, siempre que se haya habilitado la [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `NamespaceDefaultLabelName`. +El valor de la etiqueta es el nombre del espacio de nombres. + +Aunque NetworkPolicy no puede apuntar a un espacio de nombres por su nombre mediante ningún campo del objeto, puedes utilizar la etiqueta estandarizada para apuntar a un espacio de nombres específico. + + +## Qué no puedes hacer con políticas de red (al menos, aún no) + +A día de hoy, en Kubernetes {{< skew currentVersion >}}, la siguiente funcionalidad no existe en la API de NetworkPolicy, pero es posible que se puedan implementar soluciones mediante componentes del sistema operativo (como SELinux, OpenVSwitch, IPTables, etc.) o tecnologías de capa 7 (Ingress controllers, implementaciones de Service Mesh) o controladores de admisión. En caso de que seas nuevo en la seguridad de la red en Kubernetes, vale la pena señalar que las siguientes historias de usuario no pueden (todavía) ser implementadas usando la API de NetworkPolicy. + +- Forzar que el tráfico interno del clúster pase por una puerta de enlace común (esto se puede implementar con una malla de servicios u otro proxy). +- Cualquier cosa relacionada con TLS (se puede implementar con una malla de servicios o un Ingress controller para esto). +- Políticas específicas de los nodos (se puede utilizar la notación CIDR para esto, pero no se puede apuntar a los nodos por sus identidades Kubernetes específicamente). +- Apuntar a los Services por su nombre (sin embargo, puedes seleccionar los Pods o los espacios de nombres por sus {{< glossary_tooltip text="labels" term_id="label" >}}, lo que suele ser una solución viable). +- Creación o gestión de "solicitudes de políticas" que son atendidas por un tercero. +- Políticas que por defecto son aplicadas a todos los espacios de nombres o Pods (hay algunas distribuciones y proyectos de Kubernetes de terceros que pueden hacer esto). +- Consulta avanzada de políticas y herramientas de análisis de alcanzabilidad (reachability). +- La capacidad de registrar los eventos de seguridad de la red (por ejemplo, las conexiones bloqueadas o aceptadas).
+- La capacidad de negar explícitamente las políticas (actualmente el modelo para NetworkPolicies es negar por defecto, con sólo la capacidad de añadir reglas de permitir). +- La capacidad de impedir el tráfico entrante de Loopback o de Host (actualmente los Pods no pueden bloquear el acceso al host local, ni tienen la capacidad de bloquear el acceso desde su nodo residente). + + +## {{% heading "whatsnext" %}} + +- Leer el recorrido de como [Declarar de Políticas de Red](/docs/tasks/administer-clúster/declare-network-policy/) para ver más ejemplos. +- Ver más [recetas](https://github.com/ahmetb/kubernetes-network-policy-recipes) de escenarios comunes habilitados por los recursos de las NetworkPolicy. diff --git a/content/es/examples/service/networking/network-policy-allow-all-egress.yaml b/content/es/examples/service/networking/network-policy-allow-all-egress.yaml new file mode 100644 index 0000000000000..42b2a2a296655 --- /dev/null +++ b/content/es/examples/service/networking/network-policy-allow-all-egress.yaml @@ -0,0 +1,11 @@ +--- +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: allow-all-egress +spec: + podSelector: {} + egress: + - {} + policyTypes: + - Egress diff --git a/content/es/examples/service/networking/network-policy-allow-all-ingress.yaml b/content/es/examples/service/networking/network-policy-allow-all-ingress.yaml new file mode 100644 index 0000000000000..462912dae4eb3 --- /dev/null +++ b/content/es/examples/service/networking/network-policy-allow-all-ingress.yaml @@ -0,0 +1,11 @@ +--- +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: allow-all-ingress +spec: + podSelector: {} + ingress: + - {} + policyTypes: + - Ingress diff --git a/content/es/examples/service/networking/network-policy-default-deny-all.yaml b/content/es/examples/service/networking/network-policy-default-deny-all.yaml new file mode 100644 index 0000000000000..5c0086bd71e8b --- /dev/null +++ b/content/es/examples/service/networking/network-policy-default-deny-all.yaml @@ -0,0 +1,10 @@ +--- +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: default-deny-all +spec: + podSelector: {} + policyTypes: + - Ingress + - Egress diff --git a/content/es/examples/service/networking/network-policy-default-deny-egress.yaml b/content/es/examples/service/networking/network-policy-default-deny-egress.yaml new file mode 100644 index 0000000000000..a4659e14174db --- /dev/null +++ b/content/es/examples/service/networking/network-policy-default-deny-egress.yaml @@ -0,0 +1,9 @@ +--- +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: default-deny-egress +spec: + podSelector: {} + policyTypes: + - Egress diff --git a/content/es/examples/service/networking/network-policy-default-deny-ingress.yaml b/content/es/examples/service/networking/network-policy-default-deny-ingress.yaml new file mode 100644 index 0000000000000..e8238024878f4 --- /dev/null +++ b/content/es/examples/service/networking/network-policy-default-deny-ingress.yaml @@ -0,0 +1,9 @@ +--- +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: default-deny-ingress +spec: + podSelector: {} + policyTypes: + - Ingress diff --git a/content/es/examples/service/networking/networkpolicy.yaml b/content/es/examples/service/networking/networkpolicy.yaml new file mode 100644 index 0000000000000..e91eed2f67e4b --- /dev/null +++ b/content/es/examples/service/networking/networkpolicy.yaml @@ -0,0 +1,35 @@ +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + 
name: test-network-policy + namespace: default +spec: + podSelector: + matchLabels: + role: db + policyTypes: + - Ingress + - Egress + ingress: + - from: + - ipBlock: + cidr: 172.17.0.0/16 + except: + - 172.17.1.0/24 + - namespaceSelector: + matchLabels: + project: myproject + - podSelector: + matchLabels: + role: frontend + ports: + - protocol: TCP + port: 6379 + egress: + - to: + - ipBlock: + cidr: 10.0.0.0/24 + ports: + - protocol: TCP + port: 5978 + From f9ebc90ff71ed966852b1133f1a14d50c6b69a7f Mon Sep 17 00:00:00 2001 From: Daniel Wright Date: Mon, 11 Jul 2022 08:41:04 -0700 Subject: [PATCH 037/292] [en] update en docs to use recommended labels --- .../docs/concepts/configuration/overview.md | 2 +- .../services-networking/dual-stack.md | 18 ++++---- .../service-traffic-policy.md | 2 +- .../concepts/services-networking/service.md | 45 +++++++++---------- .../workloads/pods/init-containers.md | 6 +-- .../debug-application/debug-statefulset.md | 4 +- .../docs/tasks/network/validate-dual-stack.md | 18 ++++---- .../run-application/delete-stateful-set.md | 10 ++--- .../networking/dual-stack-default-svc.yaml | 4 +- .../dual-stack-ipfamilies-ipv6.yaml | 4 +- .../networking/dual-stack-ipv6-svc.yaml | 4 +- .../dual-stack-prefer-ipv6-lb-svc.yaml | 4 +- .../dual-stack-preferred-ipfamilies-svc.yaml | 4 +- .../networking/dual-stack-preferred-svc.yaml | 4 +- 14 files changed, 64 insertions(+), 65 deletions(-) diff --git a/content/en/docs/concepts/configuration/overview.md b/content/en/docs/concepts/configuration/overview.md index a4dd5c59014a7..5dc5a64826bc8 100644 --- a/content/en/docs/concepts/configuration/overview.md +++ b/content/en/docs/concepts/configuration/overview.md @@ -63,7 +63,7 @@ DNS server watches the Kubernetes API for new `Services` and creates a set of DN ## Using Labels -- Define and use [labels](/docs/concepts/overview/working-with-objects/labels/) that identify __semantic attributes__ of your application or Deployment, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. You can use these labels to select the appropriate Pods for other resources; for example, a Service that selects all `tier: frontend` Pods, or all `phase: test` components of `app: myapp`. See the [guestbook](https://github.com/kubernetes/examples/tree/master/guestbook/) app for examples of this approach. +- Define and use [labels](/docs/concepts/overview/working-with-objects/labels/) that identify __semantic attributes__ of your application or Deployment, such as `{ app.kubernetes.io/name: MyApp, tier: frontend, phase: test, deployment: v3 }`. You can use these labels to select the appropriate Pods for other resources; for example, a Service that selects all `tier: frontend` Pods, or all `phase: test` components of `app.kubernetes.io/name: MyApp`. See the [guestbook](https://github.com/kubernetes/examples/tree/master/guestbook/) app for examples of this approach. A Service can be made to span multiple Deployments by omitting release-specific labels from its selector. When you need to update a running service without downtime, use a [Deployment](/docs/concepts/workloads/controllers/deployment/). 
diff --git a/content/en/docs/concepts/services-networking/dual-stack.md b/content/en/docs/concepts/services-networking/dual-stack.md index 921d69e8fbf94..1716d7483b2e7 100644 --- a/content/en/docs/concepts/services-networking/dual-stack.md +++ b/content/en/docs/concepts/services-networking/dual-stack.md @@ -37,7 +37,7 @@ IPv4/IPv6 dual-stack on your Kubernetes cluster provides the following features: The following prerequisites are needed in order to utilize IPv4/IPv6 dual-stack Kubernetes clusters: -* Kubernetes 1.20 or later +* Kubernetes 1.20 or later For information about using dual-stack services with earlier Kubernetes versions, refer to the documentation for that version @@ -95,7 +95,7 @@ set the `.spec.ipFamilyPolicy` field to one of the following values: If you would like to define which IP family to use for single stack or define the order of IP families for dual-stack, you can choose the address families by setting an optional field, -`.spec.ipFamilies`, on the Service. +`.spec.ipFamilies`, on the Service. {{< note >}} The `.spec.ipFamilies` field is immutable because the `.spec.ClusterIP` cannot be reallocated on a @@ -133,11 +133,11 @@ These examples demonstrate the behavior of various dual-stack Service configurat address assignments. The field `.spec.ClusterIPs` is the primary field, and contains both assigned IP addresses; `.spec.ClusterIP` is a secondary field with its value calculated from `.spec.ClusterIPs`. - + * For the `.spec.ClusterIP` field, the control plane records the IP address that is from the - same address family as the first service cluster IP range. + same address family as the first service cluster IP range. * On a single-stack cluster, the `.spec.ClusterIPs` and `.spec.ClusterIP` fields both only list - one address. + one address. * On a cluster with dual-stack enabled, specifying `RequireDualStack` in `.spec.ipFamilyPolicy` behaves the same as `PreferDualStack`. @@ -174,7 +174,7 @@ dual-stack.) kind: Service metadata: labels: - app: MyApp + app.kubernetes.io/name: MyApp name: my-service spec: clusterIP: 10.0.197.123 @@ -188,7 +188,7 @@ dual-stack.) protocol: TCP targetPort: 80 selector: - app: MyApp + app.kubernetes.io/name: MyApp type: ClusterIP status: loadBalancer: {} @@ -214,7 +214,7 @@ dual-stack.) kind: Service metadata: labels: - app: MyApp + app.kubernetes.io/name: MyApp name: my-service spec: clusterIP: None @@ -228,7 +228,7 @@ dual-stack.) protocol: TCP targetPort: 80 selector: - app: MyApp + app.kubernetes.io/name: MyApp ``` #### Switching Services between single-stack and dual-stack diff --git a/content/en/docs/concepts/services-networking/service-traffic-policy.md b/content/en/docs/concepts/services-networking/service-traffic-policy.md index b9abe34b3fc73..8755b5298b59a 100644 --- a/content/en/docs/concepts/services-networking/service-traffic-policy.md +++ b/content/en/docs/concepts/services-networking/service-traffic-policy.md @@ -43,7 +43,7 @@ metadata: name: my-service spec: selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - protocol: TCP port: 80 diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md index c88375164b302..eda3b6b9b33d8 100644 --- a/content/en/docs/concepts/services-networking/service.md +++ b/content/en/docs/concepts/services-networking/service.md @@ -75,7 +75,7 @@ The name of a Service object must be a valid [RFC 1035 label name](/docs/concepts/overview/working-with-objects/names#rfc-1035-label-names). 
For example, suppose you have a set of Pods where each listens on TCP port 9376 -and contains a label `app=MyApp`: +and contains a label `app.kubernetes.io/name=MyApp`: ```yaml apiVersion: v1 @@ -84,7 +84,7 @@ metadata: name: my-service spec: selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - protocol: TCP port: 80 @@ -92,7 +92,7 @@ spec: ``` This specification creates a new Service object named "my-service", which -targets TCP port 9376 on any Pod with the `app=MyApp` label. +targets TCP port 9376 on any Pod with the `app.kubernetes.io/name=MyApp` label. Kubernetes assigns this Service an IP address (sometimes called the "cluster IP"), which is used by the Service proxies @@ -126,7 +126,7 @@ spec: ports: - containerPort: 80 name: http-web-svc - + --- apiVersion: v1 kind: Service @@ -144,9 +144,9 @@ spec: This works even if there is a mixture of Pods in the Service using a single -configured name, with the same network protocol available via different -port numbers. This offers a lot of flexibility for deploying and evolving -your Services. For example, you can change the port numbers that Pods expose +configured name, with the same network protocol available via different +port numbers. This offers a lot of flexibility for deploying and evolving +your Services. For example, you can change the port numbers that Pods expose in the next version of your backend software, without breaking clients. The default protocol for Services is TCP; you can also use any other @@ -159,7 +159,7 @@ Each port definition can have the same `protocol`, or a different one. ### Services without selectors Services most commonly abstract access to Kubernetes Pods thanks to the selector, -but when used with a corresponding Endpoints object and without a selector, the Service can abstract other kinds of backends, +but when used with a corresponding Endpoints object and without a selector, the Service can abstract other kinds of backends, including ones that run outside the cluster. For example: * You want to have an external database cluster in production, but in your @@ -222,10 +222,10 @@ In the example above, traffic is routed to the single endpoint defined in the YAML: `192.0.2.42:9376` (TCP). {{< note >}} -The Kubernetes API server does not allow proxying to endpoints that are not mapped to -pods. Actions such as `kubectl proxy ` where the service has no -selector will fail due to this constraint. This prevents the Kubernetes API server -from being used as a proxy to endpoints the caller may not be authorized to access. +The Kubernetes API server does not allow proxying to endpoints that are not mapped to +pods. Actions such as `kubectl proxy ` where the service has no +selector will fail due to this constraint. This prevents the Kubernetes API server +from being used as a proxy to endpoints the caller may not be authorized to access. {{< /note >}} An ExternalName Service is a special case of Service that does not have @@ -289,7 +289,7 @@ There are a few reasons for using proxying for Services: Later in this page you can read about various kube-proxy implementations work. Overall, you should note that, when running `kube-proxy`, kernel level rules may be -modified (for example, iptables rules might get created), which won't get cleaned up, +modified (for example, iptables rules might get created), which won't get cleaned up, in some cases until you reboot. 
Thus, running kube-proxy is something that should only be done by an administrator which understands the consequences of having a low level, privileged network proxying service on a computer. Although the `kube-proxy` @@ -423,7 +423,7 @@ metadata: name: my-service spec: selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - name: http protocol: TCP @@ -636,7 +636,7 @@ to specify IP address ranges that kube-proxy should consider as local to this no For example, if you start kube-proxy with the `--nodeport-addresses=127.0.0.0/8` flag, kube-proxy only selects the loopback interface for NodePort Services. -The default for `--nodeport-addresses` is an empty list. +The default for `--nodeport-addresses` is an empty list. his means that kube-proxy should consider all available network interfaces for NodePort. (That's also compatible with earlier Kubernetes releases). @@ -666,7 +666,7 @@ metadata: spec: type: NodePort selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: # By default and for convenience, the `targetPort` is set to the same value as the `port` field. - port: 80 @@ -692,7 +692,7 @@ metadata: name: my-service spec: selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - protocol: TCP port: 80 @@ -765,13 +765,13 @@ You must explicitly remove the `nodePorts` entry in every Service port to de-all `spec.loadBalancerClass` enables you to use a load balancer implementation other than the cloud provider default. By default, `spec.loadBalancerClass` is `nil` and a `LoadBalancer` type of Service uses the cloud provider's default load balancer implementation if the cluster is configured with -a cloud provider using the `--cloud-provider` component flag. +a cloud provider using the `--cloud-provider` component flag. If `spec.loadBalancerClass` is specified, it is assumed that a load balancer implementation that matches the specified class is watching for Services. Any default load balancer implementation (for example, the one provided by the cloud provider) will ignore Services that have this field set. `spec.loadBalancerClass` can be set on a Service of type `LoadBalancer` only. -Once set, it cannot be changed. +Once set, it cannot be changed. The value of `spec.loadBalancerClass` must be a label-style identifier, with an optional prefix such as "`internal-vip`" or "`example.com/internal-vip`". Unprefixed names are reserved for end-users. @@ -1073,7 +1073,7 @@ There are other annotations to manage Classic Elastic Load Balancers that are de # A list of existing security groups to be configured on the ELB created. Unlike the annotation # service.beta.kubernetes.io/aws-load-balancer-extra-security-groups, this replaces all other - # security groups previously assigned to the ELB and also overrides the creation + # security groups previously assigned to the ELB and also overrides the creation # of a uniquely generated security group for this ELB. # The first security group ID on this list is used as a source to permit incoming traffic to # target worker nodes (service traffic and health checks). @@ -1087,7 +1087,7 @@ There are other annotations to manage Classic Elastic Load Balancers that are de # generated security group in place, this ensures that every ELB # has a unique security group ID and a matching permit line to allow traffic to the target worker nodes # (service traffic and health checks). - # Security groups defined here can be shared between services. + # Security groups defined here can be shared between services. 
service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-53fae93f,sg-42efd82e" # A comma separated list of key-value pairs which are used @@ -1263,7 +1263,7 @@ metadata: name: my-service spec: selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - name: http protocol: TCP @@ -1481,4 +1481,3 @@ followed by the data from the client. * Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/) * Read about [Ingress](/docs/concepts/services-networking/ingress/) * Read about [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/) - diff --git a/content/en/docs/concepts/workloads/pods/init-containers.md b/content/en/docs/concepts/workloads/pods/init-containers.md index 61b90d17d0b05..85c5ec413b0cf 100644 --- a/content/en/docs/concepts/workloads/pods/init-containers.md +++ b/content/en/docs/concepts/workloads/pods/init-containers.md @@ -28,7 +28,7 @@ Init containers are exactly like regular containers, except: * Init containers always run to completion. * Each init container must complete successfully before the next one starts. -If a Pod's init container fails, the kubelet repeatedly restarts that init container until it succeeds. +If a Pod's init container fails, the kubelet repeatedly restarts that init container until it succeeds. However, if the Pod has a `restartPolicy` of Never, and an init container fails during startup of that Pod, Kubernetes treats the overall Pod as failed. To specify an init container for a Pod, add the `initContainers` field into @@ -115,7 +115,7 @@ kind: Pod metadata: name: myapp-pod labels: - app: myapp + app.kubernetes.io/name: MyApp spec: containers: - name: myapp-container @@ -159,7 +159,7 @@ The output is similar to this: Name: myapp-pod Namespace: default [...] -Labels: app=myapp +Labels: app.kubernetes.io/name=MyApp Status: Pending [...] Init Containers: diff --git a/content/en/docs/tasks/debug/debug-application/debug-statefulset.md b/content/en/docs/tasks/debug/debug-application/debug-statefulset.md index 73c0d0c78adf5..428b8d0ee563e 100644 --- a/content/en/docs/tasks/debug/debug-application/debug-statefulset.md +++ b/content/en/docs/tasks/debug/debug-application/debug-statefulset.md @@ -24,11 +24,11 @@ This task shows you how to debug a StatefulSet. 
## Debugging a StatefulSet -In order to list all the pods which belong to a StatefulSet, which have a label `app=myapp` set on them, +In order to list all the pods which belong to a StatefulSet, which have a label `app.kubernetes.io/name=MyApp` set on them, you can use the following: ```shell -kubectl get pods -l app=myapp +kubectl get pods -l app.kubernetes.io/name=MyApp ``` If you find that any Pods listed are in `Unknown` or `Terminating` state for an extended period of time, diff --git a/content/en/docs/tasks/network/validate-dual-stack.md b/content/en/docs/tasks/network/validate-dual-stack.md index 33a8d39091437..a27fc8050de88 100644 --- a/content/en/docs/tasks/network/validate-dual-stack.md +++ b/content/en/docs/tasks/network/validate-dual-stack.md @@ -134,7 +134,7 @@ spec: protocol: TCP targetPort: 9376 selector: - app: MyApp + app.kubernetes.io/name: MyApp sessionAffinity: None type: ClusterIP status: @@ -158,7 +158,7 @@ apiVersion: v1 kind: Service metadata: labels: - app: MyApp + app.kubernetes.io/name: MyApp name: my-service spec: clusterIP: fd00::5118 @@ -172,7 +172,7 @@ spec: protocol: TCP targetPort: 80 selector: - app: MyApp + app.kubernetes.io/name: MyApp sessionAffinity: None type: ClusterIP status: @@ -187,7 +187,7 @@ Create the following Service that explicitly defines `PreferDualStack` in `.spec The `kubectl get svc` command will only show the primary IP in the `CLUSTER-IP` field. ```shell -kubectl get svc -l app=MyApp +kubectl get svc -l app.kubernetes.io/name=MyApp NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-service ClusterIP 10.0.216.242 80/TCP 5s @@ -197,15 +197,15 @@ my-service ClusterIP 10.0.216.242 80/TCP 5s Validate that the Service gets cluster IPs from the IPv4 and IPv6 address blocks using `kubectl describe`. You may then validate access to the service via the IPs and ports. ```shell -kubectl describe svc -l app=MyApp +kubectl describe svc -l app.kubernetes.io/name=MyApp ``` ``` Name: my-service Namespace: default -Labels: app=MyApp +Labels: app.kubernetes.io/name=MyApp Annotations: -Selector: app=MyApp +Selector: app.kubernetes.io/name=MyApp Type: ClusterIP IP Family Policy: PreferDualStack IP Families: IPv4,IPv6 @@ -220,14 +220,14 @@ Events: ### Create a dual-stack load balanced Service -If the cloud provider supports the provisioning of IPv6 enabled external load balancers, create the following Service with `PreferDualStack` in `.spec.ipFamilyPolicy`, `IPv6` as the first element of the `.spec.ipFamilies` array and the `type` field set to `LoadBalancer`. +If the cloud provider supports the provisioning of IPv6 enabled external load balancers, create the following Service with `PreferDualStack` in `.spec.ipFamilyPolicy`, `IPv6` as the first element of the `.spec.ipFamilies` array and the `type` field set to `LoadBalancer`. {{< codenew file="service/networking/dual-stack-prefer-ipv6-lb-svc.yaml" >}} Check the Service: ```shell -kubectl get svc -l app=MyApp +kubectl get svc -l app.kubernetes.io/name=MyApp ``` Validate that the Service receives a `CLUSTER-IP` address from the IPv6 address block along with an `EXTERNAL-IP`. You may then validate access to the service via the IP and port. 
diff --git a/content/en/docs/tasks/run-application/delete-stateful-set.md b/content/en/docs/tasks/run-application/delete-stateful-set.md index eff3aaee176f8..a867b73a61703 100644 --- a/content/en/docs/tasks/run-application/delete-stateful-set.md +++ b/content/en/docs/tasks/run-application/delete-stateful-set.md @@ -50,10 +50,10 @@ For example: kubectl delete -f --cascade=orphan ``` -By passing `--cascade=orphan` to `kubectl delete`, the Pods managed by the StatefulSet are left behind even after the StatefulSet object itself is deleted. If the pods have a label `app=myapp`, you can then delete them as follows: +By passing `--cascade=orphan` to `kubectl delete`, the Pods managed by the StatefulSet are left behind even after the StatefulSet object itself is deleted. If the pods have a label `app.kubernetes.io/name=MyApp`, you can then delete them as follows: ```shell -kubectl delete pods -l app=myapp +kubectl delete pods -l app.kubernetes.io/name=MyApp ``` ### Persistent Volumes @@ -70,13 +70,13 @@ To delete everything in a StatefulSet, including the associated pods, you can ru ```shell grace=$(kubectl get pods --template '{{.spec.terminationGracePeriodSeconds}}') -kubectl delete statefulset -l app=myapp +kubectl delete statefulset -l app.kubernetes.io/name=MyApp sleep $grace -kubectl delete pvc -l app=myapp +kubectl delete pvc -l app.kubernetes.io/name=MyApp ``` -In the example above, the Pods have the label `app=myapp`; substitute your own label as appropriate. +In the example above, the Pods have the label `app.kubernetes.io/name=MyApp`; substitute your own label as appropriate. ### Force deletion of StatefulSet pods diff --git a/content/en/examples/service/networking/dual-stack-default-svc.yaml b/content/en/examples/service/networking/dual-stack-default-svc.yaml index 86eadd5478aa9..a42c7d8a2517d 100644 --- a/content/en/examples/service/networking/dual-stack-default-svc.yaml +++ b/content/en/examples/service/networking/dual-stack-default-svc.yaml @@ -3,10 +3,10 @@ kind: Service metadata: name: my-service labels: - app: MyApp + app.kubernetes.io/name: MyApp spec: selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - protocol: TCP port: 80 diff --git a/content/en/examples/service/networking/dual-stack-ipfamilies-ipv6.yaml b/content/en/examples/service/networking/dual-stack-ipfamilies-ipv6.yaml index 7c7239cae6c72..77949c883f095 100644 --- a/content/en/examples/service/networking/dual-stack-ipfamilies-ipv6.yaml +++ b/content/en/examples/service/networking/dual-stack-ipfamilies-ipv6.yaml @@ -3,12 +3,12 @@ kind: Service metadata: name: my-service labels: - app: MyApp + app.kubernetes.io/name: MyApp spec: ipFamilies: - IPv6 selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - protocol: TCP port: 80 diff --git a/content/en/examples/service/networking/dual-stack-ipv6-svc.yaml b/content/en/examples/service/networking/dual-stack-ipv6-svc.yaml index 2aa0725059bbc..feb12f61a91d8 100644 --- a/content/en/examples/service/networking/dual-stack-ipv6-svc.yaml +++ b/content/en/examples/service/networking/dual-stack-ipv6-svc.yaml @@ -5,8 +5,8 @@ metadata: spec: ipFamily: IPv6 selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - protocol: TCP port: 80 - targetPort: 9376 \ No newline at end of file + targetPort: 9376 diff --git a/content/en/examples/service/networking/dual-stack-prefer-ipv6-lb-svc.yaml b/content/en/examples/service/networking/dual-stack-prefer-ipv6-lb-svc.yaml index 0949a7542818b..5a4a99a45cae1 100644 --- 
a/content/en/examples/service/networking/dual-stack-prefer-ipv6-lb-svc.yaml +++ b/content/en/examples/service/networking/dual-stack-prefer-ipv6-lb-svc.yaml @@ -3,14 +3,14 @@ kind: Service metadata: name: my-service labels: - app: MyApp + app.kubernetes.io/name: MyApp spec: ipFamilyPolicy: PreferDualStack ipFamilies: - IPv6 type: LoadBalancer selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - protocol: TCP port: 80 diff --git a/content/en/examples/service/networking/dual-stack-preferred-ipfamilies-svc.yaml b/content/en/examples/service/networking/dual-stack-preferred-ipfamilies-svc.yaml index c31acfec581ed..79a4f34a7f749 100644 --- a/content/en/examples/service/networking/dual-stack-preferred-ipfamilies-svc.yaml +++ b/content/en/examples/service/networking/dual-stack-preferred-ipfamilies-svc.yaml @@ -3,14 +3,14 @@ kind: Service metadata: name: my-service labels: - app: MyApp + app.kubernetes.io/name: MyApp spec: ipFamilyPolicy: PreferDualStack ipFamilies: - IPv6 - IPv4 selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - protocol: TCP port: 80 diff --git a/content/en/examples/service/networking/dual-stack-preferred-svc.yaml b/content/en/examples/service/networking/dual-stack-preferred-svc.yaml index 8fb5bfa3d349f..66d42b961291d 100644 --- a/content/en/examples/service/networking/dual-stack-preferred-svc.yaml +++ b/content/en/examples/service/networking/dual-stack-preferred-svc.yaml @@ -3,11 +3,11 @@ kind: Service metadata: name: my-service labels: - app: MyApp + app.kubernetes.io/name: MyApp spec: ipFamilyPolicy: PreferDualStack selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - protocol: TCP port: 80 From 857d3cd9f4be98e5dbd78e6cc7da1a03de8f7349 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:13:57 +0200 Subject: [PATCH 038/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 11b08ddb2c245..47a0efcf897a9 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -9,7 +9,7 @@ weight: 50 -Si quieres controlar el tráfico de red a nivel de dirección IP o de puerto (capa OSI 3 o 4), puedes considerar el uso de Kubernetes NetworkPolicies para las aplicaciones que corren en tu clúster. Las NetworkPolicies son una estructura enfocada en las aplicaciones que permite establecer cómo un {{< glossary_tooltip text="pod" term_id="pod">}} puede comunicarse con otras "entidades" (utilizamos la palabra "entidad" para evitar sobrecargar términos más comunes como "Endpoint" o "Service", que tienen connotaciones específicas de Kubernetes) a través de la red. Las NetworkPolicies se aplican a uno o ambos extremos de la conexión a un Pod, sin afectar a otras conexiones. +Si quieres controlar el tráfico de red a nivel de dirección IP o puerto (capa OSI 3 o 4), puedes considerar el uso de Kubernetes NetworkPolicies para las aplicaciones que corren en tu clúster. 
Las NetworkPolicies son una estructura enfocada en las aplicaciones que permite establecer cómo un {{< glossary_tooltip text="Pod" term_id="pod">}} puede comunicarse con otras "entidades" (utilizamos la palabra "entidad" para evitar sobrecargar términos más comunes como "Endpoint" o "Service", que tienen connotaciones específicas de Kubernetes) a través de la red. Las NetworkPolicies se aplican a uno o ambos extremos de la conexión a un Pod, sin afectar a otras conexiones. Las entidades con las que un Pod puede comunicarse son de una combinación de estos 3 tipos: From ee2f62ce72cb0c73d5ee99154e68dbf472780920 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:14:12 +0200 Subject: [PATCH 039/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 47a0efcf897a9..0dd4ab2f4313f 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -13,7 +13,7 @@ Si quieres controlar el tráfico de red a nivel de dirección IP o puerto (capa Las entidades con las que un Pod puede comunicarse son de una combinación de estos 3 tipos: -1. Otros pods permitidos (excepción: un pod no puede bloquear el acceso a sí mismo) +1. Otros Pods permitidos (excepción: un Pod no puede bloquear el acceso a sí mismo) 2. Namespaces permitidos 3. Bloqueos de IP (excepción: el tráfico hacia y desde el nodo donde se ejecuta un Pod siempre está permitido, independientemente de la dirección IP del Pod o del nodo) From b8f3bfee17550bb1ad0e326a5d6dc29a83795820 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:15:26 +0200 Subject: [PATCH 040/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 0dd4ab2f4313f..c0fb4bd2b95d7 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -17,7 +17,7 @@ Las entidades con las que un Pod puede comunicarse son de una combinación de es 2. Namespaces permitidos 3. Bloqueos de IP (excepción: el tráfico hacia y desde el nodo donde se ejecuta un Pod siempre está permitido, independientemente de la dirección IP del Pod o del nodo) -Cuando se define una NetworkPolicy basada en pods o espacios de nombres, se utiliza un {{< glossary_tooltip text="selector" term_id="selector">}} para especificar qué tráfico se permite desde y hacia los Pod(s) que coinciden con el selector. +Cuando se define una NetworkPolicy basada en Pods o Namespaces, se utiliza un {{< glossary_tooltip text="Selector" term_id="selector">}} para especificar qué tráfico se permite desde y hacia los Pod(s) que coinciden con el selector. Por otro lado, cuando se crean NetworkPolicies basadas en IP, se definen políticas basadas en bloques de IP (rangos CIDR). 
From daa869f777010c2ae5f32bfc9b643ac999cf924e Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:16:45 +0200 Subject: [PATCH 041/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index c0fb4bd2b95d7..8c25d57a60a99 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -30,7 +30,7 @@ Las políticas de red son implementadas por el [plugin de red](/docs/concepts/ex ## Dos Tipos de Aislamiento de Pod -Hay dos tipos de aislamiento para un pod: el aislamiento para la salida y el aislamiento para la entrada. Estos se refieren a las conexiones que pueden establecerse. El término "Aislamiento" en el contexto de este documento no es absoluto, sino que significa "se aplican algunas restricciones". La alternativa, "no aislado para $dirección", significa que no se aplican restricciones en la dirección descrita. Los dos tipos de aislamiento (o no) se declaran independientemente, y ambos son relevantes para una conexión de un pod a otro. +Hay dos tipos de aislamiento para un Pod: el aislamiento para la salida y el aislamiento para la entrada. Estos se refieren a las conexiones que pueden establecerse. El término "Aislamiento" en el contexto de este documento no es absoluto, sino que significa "se aplican algunas restricciones". La alternativa, "no aislado para $dirección", significa que no se aplican restricciones en la dirección descrita. Los dos tipos de aislamiento (o no) se declaran independientemente, y ambos son relevantes para una conexión de un Pod a otro. Por defecto, un pod no está aislado para la salida; todas las conexiones salientes están permitidas. Un pod está aislado para la salida si hay alguna NetworkPolicy con "Egress" en su `policyTypes` que seleccione el pod; decimos que tal política se aplica al pod para la salida. Cuando un pod está aislado para la salida, las únicas conexiones permitidas desde el pod son las permitidas por la lista `egress` de las NetworkPolicy que se aplique al pod para la salida. Los valores de esas listas `egress` se combinan de forma aditiva. From 1ac66aed0e1dbbe5b412ba0deeb3bcb6ca25309c Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:17:15 +0200 Subject: [PATCH 042/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 8c25d57a60a99..95788f8ae5714 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -32,7 +32,7 @@ Las políticas de red son implementadas por el [plugin de red](/docs/concepts/ex Hay dos tipos de aislamiento para un Pod: el aislamiento para la salida y el aislamiento para la entrada. Estos se refieren a las conexiones que pueden establecerse. El término "Aislamiento" en el contexto de este documento no es absoluto, sino que significa "se aplican algunas restricciones". 
La alternativa, "no aislado para $dirección", significa que no se aplican restricciones en la dirección descrita. Los dos tipos de aislamiento (o no) se declaran independientemente, y ambos son relevantes para una conexión de un Pod a otro. -Por defecto, un pod no está aislado para la salida; todas las conexiones salientes están permitidas. Un pod está aislado para la salida si hay alguna NetworkPolicy con "Egress" en su `policyTypes` que seleccione el pod; decimos que tal política se aplica al pod para la salida. Cuando un pod está aislado para la salida, las únicas conexiones permitidas desde el pod son las permitidas por la lista `egress` de las NetworkPolicy que se aplique al pod para la salida. Los valores de esas listas `egress` se combinan de forma aditiva. +Por defecto, un Pod no está aislado para la salida; todas las conexiones salientes están permitidas. Un Pod está aislado para la salida si hay alguna NetworkPolicy con "Egress" en su `policyTypes` que seleccione el Pod; decimos que tal política se aplica al Pod para la salida. Cuando un Pod está aislado para la salida, las únicas conexiones permitidas desde el Pod son las permitidas por la lista `egress` de las NetworkPolicy que se aplique al Pod para la salida. Los valores de esas listas `egress` se combinan de forma aditiva. Por defecto, un pod no está aislado para la entrada; todas las conexiones entrantes están permitidas. Un pod está aislado para la entrada si hay alguna NetworkPolicy con "Ingress" en su `policyTypes` que seleccione el pod; decimos que tal política se aplica al pod para la entrada. Cuando un pod está aislado para la entrada, las únicas conexiones permitidas en el pod son las del nodo del pod y las permitidas por la lista `ingress` de alguna NetworkPolicy que se aplique al pod para la entrada. Los valores de esas listas de direcciones se combinan de forma aditiva. From 0e24924790720b1c2bc467b72ae00c5660ebe59d Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:18:24 +0200 Subject: [PATCH 043/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 95788f8ae5714..e0f08dc5ea2a1 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -273,7 +273,7 @@ A día de hoy, en Kubernetes {{< skew currentVersion >}}, la siguiente funcional - Forzar que el tráfico interno del clúster pase por una puerta de enlace común (esto se puede implementar con una malla de servicios u otro proxy). - Cualquier cosa relacionada con TLS (se puede implementar con una malla de servicios o un Ingress controllers para esto). - Políticas específicas de los nodos (se puede utilizar la notación CIDR para esto, pero no se puede apuntar a los nodos por sus identidades Kubernetes específicamente). -- Apuntar a los servicios por su nombre (sin embargo, puede orientar los pods o los espacios de nombres por su {{< glossary_tooltip text="labels" term_id="label" >}}, lo que suele ser una solución viable). +- Apuntar Services por nombre (sin embargo, puede orientar los Pods o los Namespaces por su {{< glossary_tooltip text="labels" term_id="label" >}}, lo que suele ser una solución viable). 
- Creación o gestión de "solicitudes de políticas" que son atendidas por un tercero. - Políticas que por defecto son aplicadas a todos los espacios de nombres o pods (hay algunas distribuciones y proyectos de Kubernetes de terceros que pueden hacer esto). - Consulta avanzada de políticas y herramientas de accesibilidad. From c6a9eefd9bf8621bf7e30d59219ad199cb24b6dd Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:18:44 +0200 Subject: [PATCH 044/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index e0f08dc5ea2a1..7215ceca64409 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -34,7 +34,7 @@ Hay dos tipos de aislamiento para un Pod: el aislamiento para la salida y el ais Por defecto, un Pod no está aislado para la salida; todas las conexiones salientes están permitidas. Un Pod está aislado para la salida si hay alguna NetworkPolicy con "Egress" en su `policyTypes` que seleccione el Pod; decimos que tal política se aplica al Pod para la salida. Cuando un Pod está aislado para la salida, las únicas conexiones permitidas desde el Pod son las permitidas por la lista `egress` de las NetworkPolicy que se aplique al Pod para la salida. Los valores de esas listas `egress` se combinan de forma aditiva. -Por defecto, un pod no está aislado para la entrada; todas las conexiones entrantes están permitidas. Un pod está aislado para la entrada si hay alguna NetworkPolicy con "Ingress" en su `policyTypes` que seleccione el pod; decimos que tal política se aplica al pod para la entrada. Cuando un pod está aislado para la entrada, las únicas conexiones permitidas en el pod son las del nodo del pod y las permitidas por la lista `ingress` de alguna NetworkPolicy que se aplique al pod para la entrada. Los valores de esas listas de direcciones se combinan de forma aditiva. +Por defecto, un Pod no está aislado para la entrada; todas las conexiones entrantes están permitidas. Un Pod está aislado para la entrada si hay alguna NetworkPolicy con "Ingress" en su `policyTypes` que seleccione el Pod; decimos que tal política se aplica al Pod para la entrada. Cuando un Pod está aislado para la entrada, las únicas conexiones permitidas en el Pod son las del nodo del Pod y las permitidas por la lista `ingress` de alguna NetworkPolicy que se aplique al pod para la entrada. Los valores de esas listas de direcciones se combinan de forma aditiva. Las políticas de red no entran en conflicto; son aditivas. Si alguna política o políticas se aplican a un pod para una dirección determinada, las conexiones permitidas en esa dirección desde ese pod es la unión de lo que permiten las políticas aplicables. Por tanto, el orden de evaluación no afecta al resultado de la política. 
From 2e523e2bed40f5243a19b231a5583350809a79fc Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:19:00 +0200 Subject: [PATCH 045/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 7215ceca64409..3c39a24be15b7 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -275,7 +275,7 @@ A día de hoy, en Kubernetes {{< skew currentVersion >}}, la siguiente funcional - Políticas específicas de los nodos (se puede utilizar la notación CIDR para esto, pero no se puede apuntar a los nodos por sus identidades Kubernetes específicamente). - Apuntar Services por nombre (sin embargo, puede orientar los Pods o los Namespaces por su {{< glossary_tooltip text="labels" term_id="label" >}}, lo que suele ser una solución viable). - Creación o gestión de "solicitudes de políticas" que son atendidas por un tercero. -- Políticas que por defecto son aplicadas a todos los espacios de nombres o pods (hay algunas distribuciones y proyectos de Kubernetes de terceros que pueden hacer esto). +- Políticas que por defecto son aplicadas a todos los Namespaces o Pods (hay algunas distribuciones y proyectos de Kubernetes de terceros que pueden hacer esto). - Consulta avanzada de políticas y herramientas de accesibilidad. - La capacidad de registrar los eventos de seguridad de la red (por ejemplo, las conexiones bloqueadas o aceptadas). - La capacidad de negar explícitamente las políticas (actualmente el modelo para NetworkPolicies es negar por defecto, con sólo la capacidad de añadir reglas de permitir). From 85bdd1824c5a4c8dae3db90782c5672fdce38975 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:19:16 +0200 Subject: [PATCH 046/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 3c39a24be15b7..336bab6022f62 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -284,5 +284,5 @@ A día de hoy, en Kubernetes {{< skew currentVersion >}}, la siguiente funcional ## {{% heading "whatsnext" %}} -- Leer el recorrido de como [Declarar de Políticas de Red](/docs/tasks/administer-clúster/declare-network-policy/) para ver más ejemplos. +- Leer el artículo de como [Declarar de Políticas de Red](/docs/tasks/administer-clúster/declare-network-policy/) para ver más ejemplos. - Ver más [recetas](https://github.com/ahmetb/kubernetes-network-policy-recipes) de escenarios comunes habilitados por los recursos de las NetworkPolicy. 
From 01204d6462df5ecae7b2ec438073b4f1982dac8c Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:20:45 +0200 Subject: [PATCH 047/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 336bab6022f62..e95800fa615f5 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -63,7 +63,7 @@ __spec__: NetworkPolicy [spec](https://github.com/kubernetes/community/blob/mast __podSelector__: Cada NetworkPolicy incluye un `podSelector` el cual selecciona el grupo de Pods en los cuales aplica la política. La política de ejemplo selecciona pods con el label "role=db". Un `podSelector` vacío selecciona todos los Pods en un Namespace. -__policyTypes__: Cada NetworkPolicy incluye una lista de `policyTypes` la cual puede incluir `Ingress`, `Egress`, o ambas. Los campos `policyTypes` indican si la política aplica o no aplica al tráfico de entrada hacia el Pod seleccionado, el tráfico de salida desde el Pods seleccionado, o ambos. Si no se especifican `policyTypes` en una NetworkPolicy el valor `Ingress` será siempre aplicado por defecto y `Egress` será aplicado si la NetworkPolicy contiene alguna regla de salida. +__policyTypes__: Cada NetworkPolicy incluye una lista de `policyTypes` la cual puede incluir `Ingress`, `Egress`, o ambas. Los campos `policyTypes` indican si la política aplica o no al tráfico de entrada hacia el Pod seleccionado, el tráfico de salida desde el Pod seleccionado, o ambos. Si no se especifican `policyTypes` en una NetworkPolicy el valor `Ingress` será siempre aplicado por defecto y `Egress` será aplicado si la NetworkPolicy contiene alguna regla de salida. __ingress__: Cada NetworkPolicy puede incluir una lista de reglas `ingress` permitidas. Cada regla permite el tráfico con que se corresponda a ambos valores de las secciones de `from` y `ports`. La política de ejemplo contiene una única regla, la cual se corresponde con el tráfico sobre un solo puerto, desde uno de los tres orígenes definidos, el primero especificado por el valor `ipBlock`, el segundo especificado por el valor `namespaceSelector` y el tercero especificado por el `podSelector`. From 35d04e8e7bc5c9000c5a08a862c69a6cf5f7e6bb Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:21:30 +0200 Subject: [PATCH 048/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index e95800fa615f5..0f270426e3edf 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -65,7 +65,7 @@ __podSelector__: Cada NetworkPolicy incluye un `podSelector` el cual selecciona __policyTypes__: Cada NetworkPolicy incluye una lista de `policyTypes` la cual puede incluir `Ingress`, `Egress`, o ambas. 
Los campos `policyTypes` indican si la política aplica o no al tráfico de entrada hacia el Pod seleccionado, el tráfico de salida desde el Pod seleccionado, o ambos. Si no se especifican `policyTypes` en una NetworkPolicy el valor `Ingress` será siempre aplicado por defecto y `Egress` será aplicado si la NetworkPolicy contiene alguna regla de salida. -__ingress__: Cada NetworkPolicy puede incluir una lista de reglas `ingress` permitidas. Cada regla permite el tráfico con que se corresponda a ambos valores de las secciones de `from` y `ports`. La política de ejemplo contiene una única regla, la cual se corresponde con el tráfico sobre un solo puerto, desde uno de los tres orígenes definidos, el primero especificado por el valor `ipBlock`, el segundo especificado por el valor `namespaceSelector` y el tercero especificado por el `podSelector`. +__ingress__: Cada NetworkPolicy puede incluir una lista de reglas `ingress` permitidas. Cada regla permite el tráfico con que se relaciona a ambos valores de las secciones de `from` y `ports`. La política de ejemplo contiene una única regla, la cual se relaciona con el tráfico sobre un solo puerto, desde uno de los tres orígenes definidos, el primero especificado por el valor `ipBlock`, el segundo especificado por el valor `namespaceSelector` y el tercero especificado por el `podSelector`. __egress__: Cada NetworkPolicy puede incluir una lista de reglas de `egress` permitidas. Cada regla permite el tráfico con que se corresponda a ambos valores de las secciones de `to` and `ports`. La política de ejemplo contiene una única regla, la cual se corresponde con el tráfico en un único puerto para cualquier destino en el rango de IPs `10.0.0.0/24`. From 0134ba9aef49ca8b5b83a6e33012e3db58e04d27 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:22:22 +0200 Subject: [PATCH 049/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 0f270426e3edf..e61299f17febf 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -72,7 +72,7 @@ __egress__: Cada NetworkPolicy puede incluir una lista de reglas de `egress` per Por lo tanto, la NetworkPolicy de ejemplo: 1. Aísla los pods "role=db" en el "default" namespace para ambos tipos de tráfico ingress y egress (si ellos no están aún aislados) -2. (Reglas Ingress) permite la coneccion hacia todos los pods en el "default" namespace con el label "role=db" en el puerto TCP 6379 desde los siguientes orígenes: +2. 
(Reglas Ingress) permite la conexión hacia todos los Pods en el Namespace "default" con el label "role=db" en el puerto TCP 6379 desde los siguientes orígenes: * cualquier pod en el "default" namespace con el label "role=frontend" * cualquier pod en un namespace con el label "project=myproject" From 5d5dae0e67e280090e463fcae82762db3c8fbaf9 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:24:21 +0200 Subject: [PATCH 050/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index e61299f17febf..6464aa874ed4c 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -77,7 +77,7 @@ Por lo tanto, la NetworkPolicy de ejemplo: * cualquier pod en el "default" namespace con el label "role=frontend" * cualquier pod en un namespace con el label "project=myproject" * La dirección IP en los rangos 172.17.0.0–172.17.0.255 y 172.17.2.0–172.17.255.255 (por ejemplo, todo el rango de IPs de 172.17.0.0/16 con excepción del 172.17.1.0/24) -3. (Egress rules) permite coneccion desde cualquier pods en el "default" namespace con el label "role=db" hacia CIDR 10.0.0.0/24 en el puerto TCP 5978 +3. (Egress rules) permite conexión desde cualquier Pod en el Namespace "default" con el label "role=db" hacia CIDR 10.0.0.0/24 en el puerto TCP 5978 Ver el recorrido de [Declarar Network Policy](/docs/tasks/administer-clúster/declare-network-policy/) para más ejemplos. From 64630d9e6dcbc4ea0811141e0d22718f022de2d9 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:24:48 +0200 Subject: [PATCH 051/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 6464aa874ed4c..e38fe92cbe511 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -36,7 +36,7 @@ Por defecto, un Pod no está aislado para la salida; todas las conexiones salien Por defecto, un Pod no está aislado para la entrada; todas las conexiones entrantes están permitidas. Un Pod está aislado para la entrada si hay alguna NetworkPolicy con "Ingress" en su `policyTypes` que seleccione el Pod; decimos que tal política se aplica al Pod para la entrada. Cuando un Pod está aislado para la entrada, las únicas conexiones permitidas en el Pod son las del nodo del Pod y las permitidas por la lista `ingress` de alguna NetworkPolicy que se aplique al pod para la entrada. Los valores de esas listas de direcciones se combinan de forma aditiva. -Las políticas de red no entran en conflicto; son aditivas. Si alguna política o políticas se aplican a un pod para una dirección determinada, las conexiones permitidas en esa dirección desde ese pod es la unión de lo que permiten las políticas aplicables. Por tanto, el orden de evaluación no afecta al resultado de la política. 
+Las políticas de red no entran en conflicto; son aditivas. Si alguna política(s) se aplica a un Pod para una dirección determinada, las conexiones permitidas en esa dirección desde ese Pod es la unión de lo que permiten las políticas aplicables. Por tanto, el orden de evaluación no afecta al resultado de la política. Para que se permita una conexión desde un pod de origen a un pod de destino, tanto la política de salida del pod de origen como la de entrada del pod de destino deben permitir la conexión. Si cualquiera de los dos lados no permite la conexión, ésta no se producirá. From fb0f5538ecdab2fc11087b498b356d3e87313281 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:25:59 +0200 Subject: [PATCH 052/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index e38fe92cbe511..b00891a031514 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -84,7 +84,7 @@ Ver el recorrido de [Declarar Network Policy](/docs/tasks/administer-clúster/de ## Comportamiento de los selectores `to` y `from` -Existen cuatro tipos de selectores que pueden ser especificados en una sección de `ingress` `from` or en una sección de `egress` `to`: +Existen cuatro tipos de selectores que pueden ser especificados en una sección de `ingress` `from` o en una sección de `egress` `to`: __podSelector__: Este selector selecciona Pods específicos en el mismo espacio de nombres que la NetworkPolicy para permitir el tráfico como fuente de entrada o destino de salida. From 9b64965df5b07e88bc7d2c15210078f2e8691266 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:26:30 +0200 Subject: [PATCH 053/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index b00891a031514..da4f5bd34f512 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -86,7 +86,7 @@ Ver el recorrido de [Declarar Network Policy](/docs/tasks/administer-clúster/de Existen cuatro tipos de selectores que pueden ser especificados en una sección de `ingress` `from` o en una sección de `egress` `to`: -__podSelector__: Este selector selecciona Pods específicos en el mismo espacio de nombres que la NetworkPolicy para permitir el tráfico como fuente de entrada o destino de salida. +__podSelector__: Este selector selecciona Pods específicos en el mismo Namespace que la NetworkPolicy para permitir el tráfico como origen de entrada o destino de salida. __namespaceSelector__: Este selector selecciona espacios de nombres específicos para permitir el tráfico como fuente de entrada o destino de salida. 
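For reference while reading the selector wording above, a minimal NetworkPolicy that combines the two kinds of `from` entries discussed (a `namespaceSelector` and a sibling `podSelector`, which the API treats as an OR) might look like the following sketch. The policy name is an assumption; the labels and port mirror the example policy described in this file.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-ingress-sketch        # hypothetical name, for illustration only
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db                   # the policy applies to Pods labelled role=db
  policyTypes:
    - Ingress
  ingress:
    - from:
        # Two sibling entries: traffic matching EITHER selector is allowed.
        - namespaceSelector:
            matchLabels:
              project: myproject # Pods in any namespace labelled project=myproject
        - podSelector:
            matchLabels:
              role: frontend     # Pods labelled role=frontend in this policy's own namespace
      ports:
        - protocol: TCP
          port: 6379
```

A single `from` element that nested both selectors together would instead require both conditions to match at once, which is the distinction the surrounding text is drawing.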
From f40c943666e82cc9c3c2a0207ee102e47320ef2b Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:27:46 +0200 Subject: [PATCH 054/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index da4f5bd34f512..e8f8ff3547a35 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -121,7 +121,7 @@ contiene un único elemento `from` permitiendo conexiones desde los Pods con el ``` -contiene dos elementos en el array `from`, y permite conecciones desde Pods en el local Namespace con el label `role=client`, *o* desde cualquier Pod en cualquier nombre de espacio con el label `user=alice`. +contiene dos elementos en el array `from`, y permite conexiones desde Pods en el Namespace local con el label `role=client`, *o* desde cualquier Pod en cualquier Namespace con el label `user=alice`. En caso de duda, utilice `kubectl describe` para ver cómo Kubernetes ha interpretado la política. From 03e830908f96d01c07ebb794517628f85c49b614 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:28:16 +0200 Subject: [PATCH 055/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index e8f8ff3547a35..1440e40b1fb83 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -88,7 +88,7 @@ Existen cuatro tipos de selectores que pueden ser especificados en una sección __podSelector__: Este selector selecciona Pods específicos en el mismo Namespace que la NetworkPolicy para permitir el tráfico como origen de entrada o destino de salida. -__namespaceSelector__: Este selector selecciona espacios de nombres específicos para permitir el tráfico como fuente de entrada o destino de salida. +__namespaceSelector__: Este selector selecciona Namespaces específicos para permitir el tráfico como origen de entrada o destino de salida. __namespaceSelector__ *y* __podSelector__: Una única entrada `to`/`from` que especifique tanto `namespaceSelector` como `podSelector` selecciona Pods específicos dentro de espacios de nombres específicos. Tenga cuidado de utilizar la sintaxis YAML correcta. 
A continuación se muestra un ejemplo de esta política: From 4efa38eae2e4a0c411bf465c8f2c2455e4ad9ac4 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:29:07 +0200 Subject: [PATCH 056/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 1440e40b1fb83..e4384ab13942e 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -136,7 +136,7 @@ combinaciones de plugin de red, proveedor de nube, implementación de `Service`, En el caso de la entrada, esto significa que en algunos casos se pueden filtrar paquetes entrantes basándose en la IP de origen real, mientras que en otros casos, la "IP de origen" sobre la que actúa la -la NetworkPolicy actúa puede ser la IP de un `LoadBalancer` o la IP de Nodo donde este el Pod involucrado, etc. +la NetworkPolicy actúa puede ser la IP de un `LoadBalancer` o la IP del Nodo donde este el Pod involucrado, etc. Para la salida, esto significa que las conexiones de los pods a las IPs de `Service` que se reescriben a IPs externas al clúster pueden o no estar sujetas a políticas basadas en `ipBlock`. From e67145b6f88a37aac2f2f28c85e10f4acabdf14c Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:30:51 +0200 Subject: [PATCH 057/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index e4384ab13942e..554c699ddbe11 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -171,7 +171,7 @@ Puedes crear una política que "por defecto" aisle el tráfico de salida para un {{< codenew file="service/networking/network-policy-default-deny-egress.yaml" >}} -Esto asegura que incluso los pods que no son seleccionados por ninguna otra NetworkPolicy no tendrán permitido el tráfico de salida. Esta política no cambia el comportamiento de aislamiento para el tráfico de entrada de ningún pod. +Esto asegura que incluso los Pods que no son seleccionados por ninguna otra NetworkPolicy no tengan permitido el tráfico de salida. Esta política no cambia el comportamiento de aislamiento para el tráfico de entrada de ningún Pod. 
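The `network-policy-default-deny-egress.yaml` example referenced by the shortcode above is not reproduced here; a manifest with the behaviour described (select every Pod, declare `Egress`, list no egress rules) would look roughly like this sketch:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
spec:
  podSelector: {}      # empty selector: applies to every Pod in the namespace
  policyTypes:
    - Egress           # Egress is declared but no rules follow, so all outgoing traffic is blocked
```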
### Permitir todo el tráfico de salida From 51b34624a69fa352d0c45051c6e23c86aac0dca3 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:31:40 +0200 Subject: [PATCH 058/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 554c699ddbe11..97c9787aa3dc7 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -127,7 +127,7 @@ En caso de duda, utilice `kubectl describe` para ver cómo Kubernetes ha interpr -__ipBlock__: Este selector selecciona rangos CIDR de IP específicos para permitirlas como fuentes de entrada o destinos de salida. Estas IPs deben ser externas al clúster, ya que las IPs de Pod son efímeras e impredecibles. +__ipBlock__: Este selector selecciona rangos CIDR de IP específicos para permitirlas como origen de entrada o destino de salida. Estas IPs deben ser externas al clúster, ya que las IPs de Pod son efímeras e impredecibles. Los mecanismos de entrada y salida del clúster a menudo requieren reescribir la IP de origen o destino de los paquetes. En los casos en los que esto ocurre, no está definido si esto ocurre antes o From b566c26791c0eabdb8e001351cb5fa7653e8090e Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:33:00 +0200 Subject: [PATCH 059/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 97c9787aa3dc7..5fc9481d00378 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -176,7 +176,7 @@ Esto asegura que incluso los Pods que no son seleccionados por ninguna otra Netw ### Permitir todo el tráfico de salida -Si quieres permitir todas las conexiones desde todos los pods de un espacio de nombres, puede crear una política que permita explícitamente todas las conexiones salientes de los pods de ese espacio de nombres. +Si quieres permitir todas las conexiones desde todos los Pods de un Namespace, puedes crear una política que permita explícitamente todas las conexiones salientes de los Pods de ese Namespace. 
{{< codenew file="service/networking/network-policy-allow-all-egress.yaml" >}} From 7d520bb8cc5d233473e5f3917136752fda5aa086 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:34:58 +0200 Subject: [PATCH 060/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 5fc9481d00378..b4d46ca0d6939 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -162,7 +162,7 @@ Si tu quieres permitir todo el tráfico de entrada a todos los Pods en un nombre {{< codenew file="service/networking/network-policy-allow-all-ingress.yaml" >}} -Con esta política en curso, ninguna política o políticas adicionales pueden hacer que se deniegue cualquier conexión entrante a esos pods. Esta política no tiene efecto sobre el aislamiento del tráfico de salida de cualquier pod. +Con esta política en curso, ninguna política(s) adicional puede hacer que se niegue cualquier conexión entrante a esos Pods. Esta política no tiene efecto sobre el aislamiento del tráfico de salida de cualquier Pod. ### Denegar por defecto todo el tráfico de salida From 95ba0520f5e1b064ef5a0961216324ec4c74a67f Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:35:34 +0200 Subject: [PATCH 061/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index b4d46ca0d6939..c4c6dde58d958 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -79,7 +79,7 @@ Por lo tanto, la NetworkPolicy de ejemplo: * La dirección IP en los rangos 172.17.0.0–172.17.0.255 y 172.17.2.0–172.17.255.255 (por ejemplo, todo el rango de IPs de 172.17.0.0/16 con excepción del 172.17.1.0/24) 3. (Egress rules) permite conexión desde cualquier Pod en el Namespace "default" con el label "role=db" hacia CIDR 10.0.0.0/24 en el puerto TCP 5978 -Ver el recorrido de [Declarar Network Policy](/docs/tasks/administer-clúster/declare-network-policy/) para más ejemplos. +Ver el artículo de [Declarar Network Policy](/docs/tasks/administer-clúster/declare-network-policy/) para más ejemplos. 
## Comportamiento de los selectores `to` y `from` From 680992ad6f4accd26d911431ca38755973a39cca Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:36:51 +0200 Subject: [PATCH 062/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index c4c6dde58d958..9aafe2869af9f 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -180,7 +180,7 @@ Si quieres permitir todas las conexiones desde todos los Pods de un Namespace, p {{< codenew file="service/networking/network-policy-allow-all-egress.yaml" >}} -Con esta política en vigor, ninguna política o políticas adicionales pueden hacer que se deniegue cualquier conexión de salida desde esos pods. Esta política no tiene efecto sobre el aislamiento para el tráfico de entrada a cualquier pod. +Con esta política en vigor, ninguna política(s) adicional puede hacer que se niegue cualquier conexión de salida desde esos Pods. Esta política no tiene efecto sobre el aislamiento para el tráfico de entrada a cualquier Pod. ### Denegar por defecto todo el tráfico de entrada y de salida From 78392f41dfa653ed435f389b2c8d495accbfc13a Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:42:11 +0200 Subject: [PATCH 063/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 9aafe2869af9f..579103571ced2 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -71,7 +71,7 @@ __egress__: Cada NetworkPolicy puede incluir una lista de reglas de `egress` per Por lo tanto, la NetworkPolicy de ejemplo: -1. Aísla los pods "role=db" en el "default" namespace para ambos tipos de tráfico ingress y egress (si ellos no están aún aislados) +1. Aísla los Pods "role=db" en el Namespace "default" para ambos tipos de tráfico ingress y egress (si ellos no están aún aislados) 2. 
(Reglas Ingress) permite la conexión hacia todos los Pods en el Namespace "default" con el label "role=db" en el puerto TCP 6379 desde los siguientes orígenes: * cualquier pod en el "default" namespace con el label "role=frontend" From 38b1d07f7d929fb48212cd4c57bb2fab3a99d07f Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:42:28 +0200 Subject: [PATCH 064/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 579103571ced2..37a66e74c756b 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -38,7 +38,7 @@ Por defecto, un Pod no está aislado para la entrada; todas las conexiones entra Las políticas de red no entran en conflicto; son aditivas. Si alguna política(s) se aplica a un Pod para una dirección determinada, las conexiones permitidas en esa dirección desde ese Pod es la unión de lo que permiten las políticas aplicables. Por tanto, el orden de evaluación no afecta al resultado de la política. -Para que se permita una conexión desde un pod de origen a un pod de destino, tanto la política de salida del pod de origen como la de entrada del pod de destino deben permitir la conexión. Si cualquiera de los dos lados no permite la conexión, ésta no se producirá. +Para que se permita una conexión desde un Pod de origen a un Pod de destino, tanto la política de salida del Pod de origen como la de entrada del Pod de destino deben permitir la conexión. Si cualquiera de los dos lados no permite la conexión, ésta no se producirá. ## El Recurso NetworkPolicy {#networkpolicy-resource} From 6d0197bcf9296087b53d2f650d7b282355421734 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:43:09 +0200 Subject: [PATCH 065/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 37a66e74c756b..81e827bf4c83e 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -61,7 +61,7 @@ y [Gestión de Objetos](/docs/concepts/overview/working-with-objects/object-mana __spec__: NetworkPolicy [spec](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) contiene toda la información necesaria para definir una política de red dado un Namespace. -__podSelector__: Cada NetworkPolicy incluye un `podSelector` el cual selecciona el grupo de Pods en los cuales aplica la política. La política de ejemplo selecciona pods con el label "role=db". Un `podSelector` vacío selecciona todos los Pods en un Namespace. +__podSelector__: Cada NetworkPolicy incluye un `podSelector` el cual selecciona el grupo de Pods en los cuales aplica la política. La política de ejemplo selecciona Pods con el label "role=db". Un `podSelector` vacío selecciona todos los Pods en un Namespace. 
__policyTypes__: Cada NetworkPolicy incluye una lista de `policyTypes` la cual puede incluir `Ingress`, `Egress`, o ambas. Los campos `policyTypes` indican si la política aplica o no al tráfico de entrada hacia el Pod seleccionado, el tráfico de salida desde el Pod seleccionado, o ambos. Si no se especifican `policyTypes` en una NetworkPolicy el valor `Ingress` será siempre aplicado por defecto y `Egress` será aplicado si la NetworkPolicy contiene alguna regla de salida. From 31159a341b47f5d12b7ed4b5318a445dfae101c7 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:44:11 +0200 Subject: [PATCH 066/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 81e827bf4c83e..315738a3e48aa 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -74,7 +74,7 @@ Por lo tanto, la NetworkPolicy de ejemplo: 1. Aísla los Pods "role=db" en el Namespace "default" para ambos tipos de tráfico ingress y egress (si ellos no están aún aislados) 2. (Reglas Ingress) permite la conexión hacia todos los Pods en el Namespace "default" con el label "role=db" en el puerto TCP 6379 desde los siguientes orígenes: - * cualquier pod en el "default" namespace con el label "role=frontend" + * cualquier Pod en el Namespace "default" con el label "role=frontend" * cualquier pod en un namespace con el label "project=myproject" * La dirección IP en los rangos 172.17.0.0–172.17.0.255 y 172.17.2.0–172.17.255.255 (por ejemplo, todo el rango de IPs de 172.17.0.0/16 con excepción del 172.17.1.0/24) 3. (Egress rules) permite conexión desde cualquier Pod en el Namespace "default" con el label "role=db" hacia CIDR 10.0.0.0/24 en el puerto TCP 5978 From 7ce0ae7152f606f592d210f87eb42cb9e9f2324a Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:44:31 +0200 Subject: [PATCH 067/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 315738a3e48aa..f95d0a8a4b827 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -75,7 +75,7 @@ Por lo tanto, la NetworkPolicy de ejemplo: 2. (Reglas Ingress) permite la conexión hacia todos los Pods en el Namespace "default" con el label "role=db" en el puerto TCP 6379 desde los siguientes orígenes: * cualquier Pod en el Namespace "default" con el label "role=frontend" - * cualquier pod en un namespace con el label "project=myproject" + * cualquier Pod en un Namespace con el label "project=myproject" * La dirección IP en los rangos 172.17.0.0–172.17.0.255 y 172.17.2.0–172.17.255.255 (por ejemplo, todo el rango de IPs de 172.17.0.0/16 con excepción del 172.17.1.0/24) 3. 
(Egress rules) permite conexión desde cualquier Pod en el Namespace "default" con el label "role=db" hacia CIDR 10.0.0.0/24 en el puerto TCP 5978 From 27ad98afd101a38b7ce00292f2579e99a23189e8 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:45:17 +0200 Subject: [PATCH 068/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index f95d0a8a4b827..f53c891e257c6 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -138,7 +138,7 @@ En el caso de la entrada, esto significa que en algunos casos se pueden filtrar entrantes basándose en la IP de origen real, mientras que en otros casos, la "IP de origen" sobre la que actúa la la NetworkPolicy actúa puede ser la IP de un `LoadBalancer` o la IP del Nodo donde este el Pod involucrado, etc. -Para la salida, esto significa que las conexiones de los pods a las IPs de `Service` que se reescriben a +Para la salida, esto significa que las conexiones de los Pods a las IPs de `Service` que se reescriben a IPs externas al clúster pueden o no estar sujetas a políticas basadas en `ipBlock`. From 938c217d9b261be3e3ff0f3834bddf8247f10901 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:45:49 +0200 Subject: [PATCH 069/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index f53c891e257c6..9dd34a40bb4eb 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -144,7 +144,7 @@ IPs externas al clúster pueden o no estar sujetas a políticas basadas en `ipBl ## Políticas por defecto -Por defecto, si no existen políticas en un espacio de nombres, se permite todo el tráfico de entrada y salida hacia y desde los pods de ese espacio de nombres. Los siguientes ejemplos muestran cómo cambiar el comportamiento por defecto en ese espacio de nombres. +Por defecto, si no existen políticas en un Namespace, se permite todo el tráfico de entrada y salida hacia y desde los Pods de ese Namespace. Los siguientes ejemplos muestran cómo cambiar el comportamiento por defecto en ese Namespace. 
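An `ipBlock`-based egress rule of the kind discussed in these hunks, allowing traffic from the selected Pods only to an external CIDR on a single TCP port, could be sketched as follows. The CIDR and port mirror the example policy described in this file; the policy name is an assumption.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-egress-sketch        # hypothetical name, for illustration only
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db                  # the rule applies to Pods labelled role=db
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24   # should be addresses outside the cluster; Pod IPs are ephemeral
      ports:
        - protocol: TCP
          port: 5978
```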
### Denegar todo el tráfico de entrada por defecto From 4bd3daba8b9d6b476218c4adfead67067cbb5e2b Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:46:22 +0200 Subject: [PATCH 070/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 9dd34a40bb4eb..8de6a5ddb01aa 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -153,7 +153,7 @@ Puedes crear una política que "por defecto" aisle a un espacio de nombres del t {{< codenew file="service/networking/network-policy-default-deny-ingress.yaml" >}} -Esto asegura que incluso los Pods que no están seleccionados por ninguna otra NetworkPolicy también serán aislados del tráfico de entrada. Esta política no afecta el aislamiento en el tráfico de salida desde cualquier Pods. +Esto asegura que incluso los Pods que no están seleccionados por ninguna otra NetworkPolicy también serán aislados del tráfico de entrada. Esta política no afecta el aislamiento en el tráfico de salida desde cualquier Pod. ### Permitir todo el tráfico de entrada From 3b2482c2f65fbbbc26e23cb1dca29909cfc09a7d Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:47:08 +0200 Subject: [PATCH 071/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 8de6a5ddb01aa..0fd91e6c86a18 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -149,7 +149,7 @@ Por defecto, si no existen políticas en un Namespace, se permite todo el tráfi ### Denegar todo el tráfico de entrada por defecto -Puedes crear una política que "por defecto" aisle a un espacio de nombres del tráfico de entrada con la creación de una política que seleccione todos los Pods del espacio de nombres pero no permite ningún tráfico de entrada en esos Pods. +Puedes crear una política que "por defecto" aisle a un Namespace del tráfico de entrada con la creación de una política que seleccione todos los Pods del Namespace pero no permite ningún tráfico de entrada en esos Pods. 
{{< codenew file="service/networking/network-policy-default-deny-ingress.yaml" >}} From 5d96e59d5304760da83cb109895fb5d8b7b002ce Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:47:31 +0200 Subject: [PATCH 072/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 0fd91e6c86a18..aacd00890d91f 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -158,7 +158,7 @@ Esto asegura que incluso los Pods que no están seleccionados por ninguna otra N ### Permitir todo el tráfico de entrada -Si tu quieres permitir todo el tráfico de entrada a todos los Pods en un nombre de espacio, puedes crear una política que explícitamente permita eso. +Si tu quieres permitir todo el tráfico de entrada a todos los Pods en un Namespace, puedes crear una política que explícitamente permita eso. {{< codenew file="service/networking/network-policy-allow-all-ingress.yaml" >}} From 46b47e3ebc2fcd56db9792407656a39bfe02b351 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:47:56 +0200 Subject: [PATCH 073/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index aacd00890d91f..4cb1d4aca0599 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -167,7 +167,7 @@ Con esta política en curso, ninguna política(s) adicional puede hacer que se n ### Denegar por defecto todo el tráfico de salida -Puedes crear una política que "por defecto" aisle el tráfico de salida para un espacio de nombres, creando una NetworkPolicy que seleccione todos los pods pero que no permita ningún tráfico de salida desde esos pods. +Puedes crear una política que "por defecto" aisle el tráfico de salida para un Namespace, creando una NetworkPolicy que seleccione todos los Pods pero que no permita ningún tráfico de salida desde esos Pods. 
{{< codenew file="service/networking/network-policy-default-deny-egress.yaml" >}} From f6cf8b4f77b877122be8563cdb27de7700bccc91 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:48:18 +0200 Subject: [PATCH 074/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 4cb1d4aca0599..af0118d6ca012 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -189,7 +189,7 @@ Puede crear una política que "por defecto" en un espacio de nombres impida todo {{< codenew file="service/networking/network-policy-default-deny-all.yaml" >}} -Esto asegura que incluso los pods que no son seleccionados por ninguna otra NetworkPolicy no tendrán permitido el tráfico de entrada o salida. +Esto asegura que incluso los Pods que no son seleccionados por ninguna otra NetworkPolicy no tendrán permitido el tráfico de entrada o salida. ## Soporte a SCTP From 1e46bdc9d5dbf4f4f9d24a3a5862f930139f2116 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:48:46 +0200 Subject: [PATCH 075/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index af0118d6ca012..355da76afea75 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -185,7 +185,7 @@ Con esta política en vigor, ninguna política(s) adicional puede hacer que se n ### Denegar por defecto todo el tráfico de entrada y de salida -Puede crear una política que "por defecto" en un espacio de nombres impida todo el tráfico de entrada Y de salida creando la siguiente NetworkPolicy en ese espacio de nombres. +Puede crear una política que "por defecto" en un Namespace impida todo el tráfico de entrada y de salida creando la siguiente NetworkPolicy en ese Namespace. 
{{< codenew file="service/networking/network-policy-default-deny-all.yaml" >}} From d219ca29c712e40c5777707ebdc974aa54ca5e29 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:49:27 +0200 Subject: [PATCH 076/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 355da76afea75..64775db7379ec 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -234,7 +234,7 @@ spec: endPort: 32768 ``` -La regla anterior permite que cualquier Pod con la etiqueta `role=db` en el espacio de nombres `default` se comunique +La regla anterior permite que cualquier Pod con la etiqueta `role=db` en el Namespace `default` se comunique con cualquier IP dentro del rango `10.0.0.0/24` sobre el protocolo TCP, siempre que el puerto esté entre el rango 32000 y 32768. From 804b3caaf221868d961dcf9c637264d3025371dc Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:49:55 +0200 Subject: [PATCH 077/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 64775db7379ec..9dee3b9f18203 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -260,7 +260,7 @@ la política se aplicará sólo para el campo `port`. {{< feature-state for_k8s_version="1.22" state="stable" >}} El plano de control de Kubernetes establece una etiqueta inmutable `kubernetes.io/metadata.name` en todos los -espacios de nombre, siempre que se haya habilitado la [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `NamespaceDefaultLabelName`. +Namespaces, siempre que se haya habilitado la [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `NamespaceDefaultLabelName`. El valor de la etiqueta es el nombre del espacio de nombres. Aunque NetworkPolicy no puede apuntar a un espacio de nombres por su nombre con algún campo de objeto, puede utilizar la etiqueta estandarizada para apuntar a un espacio de nombres específico. From ed8b5429c87b97005ef64fc5f507436c30e17d26 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:50:31 +0200 Subject: [PATCH 078/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 9dee3b9f18203..947902d2042ef 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -261,7 +261,7 @@ la política se aplicará sólo para el campo `port`. 
El plano de control de Kubernetes establece una etiqueta inmutable `kubernetes.io/metadata.name` en todos los Namespaces, siempre que se haya habilitado la [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `NamespaceDefaultLabelName`. -El valor de la etiqueta es el nombre del espacio de nombres. +El valor de la etiqueta es el nombre del Namespace. Aunque NetworkPolicy no puede apuntar a un espacio de nombres por su nombre con algún campo de objeto, puede utilizar la etiqueta estandarizada para apuntar a un espacio de nombres específico. From be874782223b5b40a92ff79208fbf21418361792 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:51:09 +0200 Subject: [PATCH 079/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 947902d2042ef..095884108eafa 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -263,7 +263,7 @@ El plano de control de Kubernetes establece una etiqueta inmutable `kubernetes.i Namespaces, siempre que se haya habilitado la [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `NamespaceDefaultLabelName`. El valor de la etiqueta es el nombre del Namespace. -Aunque NetworkPolicy no puede apuntar a un espacio de nombres por su nombre con algún campo de objeto, puede utilizar la etiqueta estandarizada para apuntar a un espacio de nombres específico. +Aunque NetworkPolicy no puede apuntar a un Namespace por su nombre con algún campo de objeto, puede utilizar la etiqueta estandarizada para apuntar a un Namespace específico. ## Que no puedes hacer con políticas de red (al menos, no aún) From 266854267ac22117962934f79166c5a02f1040a4 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:53:19 +0200 Subject: [PATCH 080/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 095884108eafa..17484bbb22354 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -266,7 +266,7 @@ El valor de la etiqueta es el nombre del Namespace. Aunque NetworkPolicy no puede apuntar a un Namespace por su nombre con algún campo de objeto, puede utilizar la etiqueta estandarizada para apuntar a un Namespace específico. - ## Que no puedes hacer con políticas de red (al menos, no aún) + ## Que no puedes hacer con políticas de red (al menos, aún no) A día de hoy, en Kubernetes {{< skew currentVersion >}}, la siguiente funcionalidad no existe en la API de NetworkPolicy, pero es posible que se puedan implementar soluciones mediante componentes del sistema operativo (como SELinux, OpenVSwitch, IPTables, etc.) o tecnologías de capa 7 (Ingress controllers, implementaciones de Service Mesh) o controladores de admisión. 
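Returning to the `kubernetes.io/metadata.name` label covered in the preceding hunks: a rule that effectively targets a namespace by its name can be written with a `namespaceSelector`, roughly as in this sketch. The policy name and the namespace name `my-namespace` are assumptions; the label key is the standardized one set by the control plane.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-my-namespace   # hypothetical name, for illustration only
spec:
  podSelector: {}                 # applies to every Pod in the policy's own namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              # set automatically by the control plane; its value is the namespace's name
              kubernetes.io/metadata.name: my-namespace
```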
En caso de que seas nuevo en la seguridad de la red en Kubernetes, vale la pena señalar que las siguientes historias de usuario no pueden (todavía) ser implementadas usando la API NetworkPolicy. From a7afa631cd620bd01f8d2fe1c58a37e65a91c91f Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 18:55:28 +0200 Subject: [PATCH 081/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 17484bbb22354..c1dddc6df4612 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -90,7 +90,7 @@ __podSelector__: Este selector selecciona Pods específicos en el mismo Namespac __namespaceSelector__: Este selector selecciona Namespaces específicos para permitir el tráfico como origen de entrada o destino de salida. -__namespaceSelector__ *y* __podSelector__: Una única entrada `to`/`from` que especifique tanto `namespaceSelector` como `podSelector` selecciona Pods específicos dentro de espacios de nombres específicos. Tenga cuidado de utilizar la sintaxis YAML correcta. A continuación se muestra un ejemplo de esta política: +__namespaceSelector__ *y* __podSelector__: Una única entrada `to`/`from` que especifique tanto `namespaceSelector` como `podSelector` selecciona Pods específicos dentro de Namespaces específicos. Tenga cuidado de utilizar la sintaxis de YAML correcta. A continuación se muestra un ejemplo de esta política: ```yaml ... From 40248cff25f6818dfb0698d655f01ab804113c4c Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 12 Jul 2022 19:02:02 +0200 Subject: [PATCH 082/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index c1dddc6df4612..e62e39217ea65 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -67,7 +67,7 @@ __policyTypes__: Cada NetworkPolicy incluye una lista de `policyTypes` la cual p __ingress__: Cada NetworkPolicy puede incluir una lista de reglas `ingress` permitidas. Cada regla permite el tráfico con que se relaciona a ambos valores de las secciones de `from` y `ports`. La política de ejemplo contiene una única regla, la cual se relaciona con el tráfico sobre un solo puerto, desde uno de los tres orígenes definidos, el primero especificado por el valor `ipBlock`, el segundo especificado por el valor `namespaceSelector` y el tercero especificado por el `podSelector`. -__egress__: Cada NetworkPolicy puede incluir una lista de reglas de `egress` permitidas. Cada regla permite el tráfico con que se corresponda a ambos valores de las secciones de `to` and `ports`. La política de ejemplo contiene una única regla, la cual se corresponde con el tráfico en un único puerto para cualquier destino en el rango de IPs `10.0.0.0/24`. +__egress__: Cada NetworkPolicy puede incluir una lista de reglas de `egress` permitidas. 
Cada regla permite el tráfico con que se relaciona a ambos valores de las secciones de `to` and `ports`. La política de ejemplo contiene una única regla, la cual se relaciona con el tráfico en un único puerto para cualquier destino en el rango de IPs `10.0.0.0/24`. Por lo tanto, la NetworkPolicy de ejemplo: From bc2b7ecaeac58ae11b85b63201475ee17a9a48f4 Mon Sep 17 00:00:00 2001 From: Arhell Date: Wed, 13 Jul 2022 00:25:26 +0300 Subject: [PATCH 083/292] [pt-br] updated persistent-storage.md link --- content/pt-br/docs/concepts/storage/persistent-volumes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/pt-br/docs/concepts/storage/persistent-volumes.md b/content/pt-br/docs/concepts/storage/persistent-volumes.md index 12ff2700fe412..01d092e673c4d 100644 --- a/content/pt-br/docs/concepts/storage/persistent-volumes.md +++ b/content/pt-br/docs/concepts/storage/persistent-volumes.md @@ -723,7 +723,7 @@ Se você está criando templates ou exemplos que rodam numa grande quantidade de * Saiba mais sobre [Criando um PersistentVolume](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume). * Saiba mais sobre [Criando um PersistentVolumeClaim](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolumeclaim). -* Leia a [documentação sobre planejamento de Armazenamento Persistente](https://git.k8s.io/community/contributors/design-proposals/storage/persistent-storage.md). +* Leia a [documentação sobre planejamento de Armazenamento Persistente](https://git.k8s.io/design-proposals-archive/storage/persistent-storage.md). ### Referência From e8f9fd5334b7c9a751ab3aee3e9a8d562d8ad237 Mon Sep 17 00:00:00 2001 From: Juan Ezquerro LLanes Date: Wed, 13 Jul 2022 15:11:41 +0200 Subject: [PATCH 084/292] Fix links to PodOverhead Feature Design (language/pt) --- content/pt-br/docs/concepts/containers/runtime-class.md | 2 +- content/pt-br/docs/concepts/scheduling-eviction/pod-overhead.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/content/pt-br/docs/concepts/containers/runtime-class.md b/content/pt-br/docs/concepts/containers/runtime-class.md index ee090beedcc3b..37ee330c26d85 100644 --- a/content/pt-br/docs/concepts/containers/runtime-class.md +++ b/content/pt-br/docs/concepts/containers/runtime-class.md @@ -176,4 +176,4 @@ Pods utilizando-se desse Runtimeclass e assim contabilizar esses recursos para o - [RuntimeClass Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md) - [RuntimeClass Scheduling Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md#runtimeclass-scheduling) - Leia mais sobre [Sobrecarga de Pods](/docs/concepts/scheduling-eviction/pod-overhead/) -- [PodOverhead Feature Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md) +- [PodOverhead Feature Design](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead) diff --git a/content/pt-br/docs/concepts/scheduling-eviction/pod-overhead.md b/content/pt-br/docs/concepts/scheduling-eviction/pod-overhead.md index c3788b22fa500..603c82d4f2edb 100644 --- a/content/pt-br/docs/concepts/scheduling-eviction/pod-overhead.md +++ b/content/pt-br/docs/concepts/scheduling-eviction/pod-overhead.md @@ -188,6 +188,6 @@ mas é esperado em uma próxima versão. 
Os usuários necessitarão entretanto c * [RuntimeClass](/docs/concepts/containers/runtime-class/) -* [PodOverhead Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md) +* [PodOverhead Design](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead) From ad92908054b901d94b34bf0754b4c8a073377375 Mon Sep 17 00:00:00 2001 From: ayatk <7327867+ayatk@users.noreply.github.com> Date: Wed, 13 Jul 2022 15:17:05 +0000 Subject: [PATCH 085/292] [ja] Fix links in release versioning --- content/ja/docs/reference/using-api/_index.md | 2 +- content/ja/docs/setup/release/version-skew-policy.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/content/ja/docs/reference/using-api/_index.md b/content/ja/docs/reference/using-api/_index.md index 2dff67b71fcdc..4cf5b25a95364 100644 --- a/content/ja/docs/reference/using-api/_index.md +++ b/content/ja/docs/reference/using-api/_index.md @@ -30,7 +30,7 @@ JSONとProtobufなどのシリアル化スキーマの変更については同 以下の説明は、両方のフォーマットをカバーしています。 APIのバージョニングとソフトウェアのバージョニングは間接的に関係しています。 -[API and release versioning proposal](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md)は、APIバージョニングとソフトウェアバージョニングの関係を説明しています。 +[API and release versioning proposal](https://git.k8s.io/sig-release/release-engineering/versioning.md)は、APIバージョニングとソフトウェアバージョニングの関係を説明しています。 APIのバージョンが異なると、安定性やサポートのレベルも異なります。 各レベルの基準については、[API Changes documentation](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions)で詳しく説明しています。 diff --git a/content/ja/docs/setup/release/version-skew-policy.md b/content/ja/docs/setup/release/version-skew-policy.md index 3e58503462ea9..bd1875aa41942 100644 --- a/content/ja/docs/setup/release/version-skew-policy.md +++ b/content/ja/docs/setup/release/version-skew-policy.md @@ -12,7 +12,7 @@ weight: 30 ## サポートされるバージョン {#supported-versions} -Kubernetesのバージョンは**x.y.z**の形式で表現され、**x**はメジャーバージョン、**y**はマイナーバージョン、**z**はパッチバージョンを指します。これは[セマンティック バージョニング](https://semver.org/)に従っています。詳細は、[Kubernetesのリリースバージョニング](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md#kubernetes-release-versioning)を参照してください。 +Kubernetesのバージョンは**x.y.z**の形式で表現され、**x**はメジャーバージョン、**y**はマイナーバージョン、**z**はパッチバージョンを指します。これは[セマンティック バージョニング](https://semver.org/)に従っています。詳細は、[Kubernetesのリリースバージョニング](https://git.k8s.io/sig-release/release-engineering/versioning.md#kubernetes-release-versioning)を参照してください。 Kubernetesプロジェクトでは、最新の3つのマイナーリリースについてリリースブランチを管理しています ({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}})。 From 27ef353ee4d19dd055f5abd81641d40eb693dc65 Mon Sep 17 00:00:00 2001 From: Michael Date: Thu, 14 Jul 2022 20:52:27 +0800 Subject: [PATCH 086/292] [zh-cn] resync /docs/reference/kubectl/cheatsheet.md --- .../docs/reference/kubectl/cheatsheet.md | 37 ++++++++++++++----- 1 file changed, 28 insertions(+), 9 deletions(-) diff --git a/content/zh-cn/docs/reference/kubectl/cheatsheet.md b/content/zh-cn/docs/reference/kubectl/cheatsheet.md index f85846aa3285a..6ca3acfba8e28 100644 --- a/content/zh-cn/docs/reference/kubectl/cheatsheet.md +++ b/content/zh-cn/docs/reference/kubectl/cheatsheet.md @@ -26,7 +26,7 @@ card: This page contains a list of commonly used `kubectl` commands and flags. 
--> -本页列举了常用的 “kubectl” 命令和标志 +本页列举了常用的 `kubectl` 命令和标志。 @@ -50,10 +50,10 @@ You can also use a shorthand alias for `kubectl` that also works with completion --> ```bash source <(kubectl completion bash) # 在 bash 中设置当前 shell 的自动补全,要先安装 bash-completion 包。 -echo "source <(kubectl completion bash)" >> ~/.bashrc # 在您的 bash shell 中永久的添加自动补全 +echo "source <(kubectl completion bash)" >> ~/.bashrc # 在你的 bash shell 中永久地添加自动补全 ``` -您还可以为 `kubectl` 使用一个速记别名,该别名也可以与 completion 一起使用: +你还可以在补全时为 `kubectl` 使用一个速记别名: ```bash alias k=kubectl @@ -70,7 +70,7 @@ echo '[[ $commands[kubectl] ]] && source <(kubectl completion zsh)' >> ~/.zshrc --> ```bash source <(kubectl completion zsh) # 在 zsh 中设置当前 shell 的自动补全 -echo '[[ $commands[kubectl] ]] && source <(kubectl completion zsh)' >> ~/.zshrc # 在您的 zsh shell 中永久的添加自动补全 +echo '[[ $commands[kubectl] ]] && source <(kubectl completion zsh)' >> ~/.zshrc # 在你的 zsh shell 中永久地添加自动补全 ``` -## Kubectl 上下文和配置 +## Kubectl 上下文和配置 设置 `kubectl` 与哪个 Kubernetes 集群进行通信并修改配置信息。 查看[使用 kubeconfig 跨集群授权访问](/zh-cn/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) @@ -116,6 +116,11 @@ kubectl config get-contexts # display list of contexts kubectl config current-context # display the current-context kubectl config use-context my-cluster-name # set the default context to my-cluster-name +kubectl config set-cluster my-cluster-name # set a cluster entry in the kubeconfig + +# configure the URL to a proxy server to use for requests made by this client in the kubeconfig +kubectl config set-cluster my-cluster-name --proxy-url=my-proxy-url + # add a new user to your kubeconf that supports basic auth kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword @@ -137,7 +142,9 @@ alias kn='f() { [ "$1" ] && kubectl config set-context --current --namespace $1 kubectl config view # 显示合并的 kubeconfig 配置。 # 同时使用多个 kubeconfig 文件并查看合并的配置 -KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 kubectl config view +KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 + +kubectl config view # 获取 e2e 用户的密码 kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}' @@ -148,6 +155,11 @@ kubectl config get-contexts # 显示上下文列表 kubectl config current-context # 展示当前所处的上下文 kubectl config use-context my-cluster-name # 设置默认的上下文为 my-cluster-name +kubectl config set-cluster my-cluster-name # 在 kubeconfig 中设置集群条目 + +# 在 kubeconfig 中配置代理服务器的 URL,以用于该客户端的请求 +kubectl config set-cluster my-cluster-name --proxy-url=my-proxy-url + # 添加新的用户配置到 kubeconf 中,使用 basic auth 进行身份认证 kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword @@ -171,10 +183,11 @@ alias kn='f() { [ "$1" ] && kubectl config set-context --current --namespace $1 `apply` manages applications through files defining Kubernetes resources. It creates and updates resources in a cluster through running `kubectl apply`. This is the recommended way of managing Kubernetes applications on production. See [Kubectl Book](https://kubectl.docs.kubernetes.io). --> ## Kubectl apply + `apply` 通过定义 Kubernetes 资源的文件来管理应用。 它通过运行 `kubectl apply` 在集群中创建和更新资源。 这是在生产中管理 Kubernetes 应用的推荐方法。 -参见 [Kubectl 文档](https://kubectl.docs.kubernetes.io)。 +参见 [Kubectl 文档](https://kubectl.docs.kubernetes.io/zh/)。 -## 与运行中的 Pods 进行交互 +## 与运行中的 Pod 进行交互 -You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it can only run on particular set of -{{< glossary_tooltip text="node(s)" term_id="node" >}}. 
+You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it is +_restricted_ to run on particular {{< glossary_tooltip text="node(s)" term_id="node" >}}, +or to _prefer_ to run on particular nodes. There are several ways to do this and the recommended approaches all use [label selectors](/docs/concepts/overview/working-with-objects/labels/) to facilitate the selection. -Generally such constraints are unnecessary, as the scheduler will automatically do a reasonable placement +Often, you do not need to set any such constraints; the +{{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}} will automatically do a reasonable placement (for example, spreading your Pods across nodes so as not place Pods on a node with insufficient free resources). However, there are some circumstances where you may want to control which node -the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it, or to co-locate Pods from two different -services that communicate a lot into the same availability zone. +the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it, +or to co-locate Pods from two different services that communicate a lot into the same availability zone. You can use any of the following methods to choose where Kubernetes schedules -specific Pods: +specific Pods: * [nodeSelector](#nodeselector) field matching against [node labels](#built-in-node-labels) * [Affinity and anti-affinity](#affinity-and-anti-affinity) @@ -338,13 +340,15 @@ null `namespaceSelector` matches the namespace of the Pod where the rule is defi Inter-pod affinity and anti-affinity can be even more useful when they are used with higher level collections such as ReplicaSets, StatefulSets, Deployments, etc. These rules allow you to configure that a set of workloads should -be co-located in the same defined topology, eg., the same node. +be co-located in the same defined topology; for example, preferring to place two related +Pods onto the same node. -Take, for example, a three-node cluster running a web application with an -in-memory cache like redis. You could use inter-pod affinity and anti-affinity -to co-locate the web servers with the cache as much as possible. +For example: imagine a three-node cluster. You use the cluster to run a web application +and also an in-memory cache (such as Redis). For this example, also assume that latency between +the web application and the memory cache should be as low as is practical. You could use inter-pod +affinity and anti-affinity to co-locate the web servers with the cache as much as possible. -In the following example Deployment for the redis cache, the replicas get the label `app=store`. The +In the following example Deployment for the Redis cache, the replicas get the label `app=store`. The `podAntiAffinity` rule tells the scheduler to avoid placing multiple replicas with the `app=store` label on a single node. This creates each cache in a separate node. @@ -379,10 +383,10 @@ spec: image: redis:3.2-alpine ``` -The following Deployment for the web servers creates replicas with the label `app=web-store`. The -Pod affinity rule tells the scheduler to place each replica on a node that has a -Pod with the label `app=store`. The Pod anti-affinity rule tells the scheduler -to avoid placing multiple `app=web-store` servers on a single node. +The following example Deployment for the web servers creates replicas with the label `app=web-store`. 
+The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod +with the label `app=store`. The Pod anti-affinity rule tells the scheduler never to place +multiple `app=web-store` servers on a single node. ```yaml apiVersion: apps/v1 @@ -431,6 +435,10 @@ where each web server is co-located with a cache, on three separate nodes. | *webserver-1* | *webserver-2* | *webserver-3* | | *cache-1* | *cache-2* | *cache-3* | +The overall effect is that each cache instance is likely to be accessed by a single client, that +is running on the same node. This approach aims to minimize both skew (imbalanced load) and latency. + +You might have other reasons to use Pod anti-affinity. See the [ZooKeeper tutorial](/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure) for an example of a StatefulSet configured with anti-affinity for high availability, using the same technique as this example. From 829dee0940fc31bc9e44f45aafd640534657db76 Mon Sep 17 00:00:00 2001 From: Tim Bannister Date: Tue, 10 May 2022 11:14:16 +0100 Subject: [PATCH 090/292] Wrap text for Pod Topology Spread Constraints Wrapping helps localization teams pick up and work with changes. --- .../topology-spread-constraints.md | 114 +++++++++++++----- 1 file changed, 87 insertions(+), 27 deletions(-) diff --git a/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md b/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md index 7c10da0acb8c4..965f0d23f98fd 100644 --- a/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md +++ b/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md @@ -7,7 +7,11 @@ weight: 40 -You can use _topology spread constraints_ to control how {{< glossary_tooltip text="Pods" term_id="Pod" >}} are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains. This can help to achieve high availability as well as efficient resource utilization. +You can use _topology spread constraints_ to control how +{{< glossary_tooltip text="Pods" term_id="Pod" >}} are spread across your cluster +among failure-domains such as regions, zones, nodes, and other user-defined topology +domains. This can help to achieve high availability as well as efficient resource +utilization. @@ -16,7 +20,9 @@ You can use _topology spread constraints_ to control how {{< glossary_tooltip te ### Node Labels -Topology spread constraints rely on node labels to identify the topology domain(s) that each Node is in. For example, a Node might have labels: `node=node1,zone=us-east-1a,region=us-east-1` +Topology spread constraints rely on node labels to identify the topology +domain(s) that each Node is in. For example, a Node might have labels: +`node=node1,zone=us-east-1a,region=us-east-1` Suppose you have a 4-node cluster with the following labels: @@ -48,7 +54,9 @@ graph TB class zoneA,zoneB cluster; {{< /mermaid >}} -Instead of manually applying labels, you can also reuse the [well-known labels](/docs/reference/labels-annotations-taints/) that are created and populated automatically on most clusters. +Instead of manually applying labels, you can also reuse the +[well-known labels](/docs/reference/labels-annotations-taints/) that are created and populated +automatically on most clusters. 
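For instance, you can inspect the labels that are already present, or add a topology label yourself; the node name `node1` and the zone value below are placeholders, not values taken from any particular cluster:

```shell
# List every node together with the labels currently set on it
kubectl get nodes --show-labels

# Manually add a zone label to one node (illustrative name and value)
kubectl label nodes node1 topology.kubernetes.io/zone=zoneA
```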
## Spread Constraints for Pods @@ -70,7 +78,9 @@ spec: labelSelector: ``` -You can define one or multiple `topologySpreadConstraint` to instruct the kube-scheduler how to place each incoming Pod in relation to the existing Pods across your cluster. The fields are: +You can define one or multiple `topologySpreadConstraint` to instruct the +kube-scheduler how to place each incoming Pod in relation to the existing Pods across +your cluster. The fields are: - **maxSkew** describes the degree to which Pods may be unevenly distributed. It must be greater than zero. Its semantics differs according to the value of `whenUnsatisfiable`: @@ -104,15 +114,24 @@ You can define one or multiple `topologySpreadConstraint` to instruct the kube-s in order to use it. {{< /note >}} -- **topologyKey** is the key of node labels. If two Nodes are labelled with this key and have identical values for that label, the scheduler treats both Nodes as being in the same topology. The scheduler tries to place a balanced number of Pods into each topology domain. +- **topologyKey** is the key of node labels. If two Nodes are labelled with this key + and have identical values for that label, the scheduler treats both Nodes as being + in the same topology. The scheduler tries to place a balanced number of Pods into + each topology domain. - **whenUnsatisfiable** indicates how to deal with a Pod if it doesn't satisfy the spread constraint: - `DoNotSchedule` (default) tells the scheduler not to schedule it. - `ScheduleAnyway` tells the scheduler to still schedule it while prioritizing nodes that minimize the skew. -- **labelSelector** is used to find matching Pods. Pods that match this label selector are counted to determine the number of Pods in their corresponding topology domain. See [Label Selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) for more details. +- **labelSelector** is used to find matching Pods. Pods + that match this label selector are counted to determine the + number of Pods in their corresponding topology domain. + See [Label Selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) + for more details. -When a Pod defines more than one `topologySpreadConstraint`, those constraints are ANDed: The kube-scheduler looks for a node for the incoming Pod that satisfies all the constraints. +When a Pod defines more than one `topologySpreadConstraint`, those constraints are +ANDed: The kube-scheduler looks for a node for the incoming Pod that satisfies all +the constraints. You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`. @@ -142,9 +161,14 @@ If we want an incoming Pod to be evenly spread with existing Pods across zones, {{< codenew file="pods/topology-spread-constraints/one-constraint.yaml" >}} -`topologyKey: zone` implies the even distribution will only be applied to the nodes which have label pair "zone:<any value>" present. `whenUnsatisfiable: DoNotSchedule` tells the scheduler to let it stay pending if the incoming Pod can't satisfy the constraint. +`topologyKey: zone` implies the even distribution will only be applied to the +nodes which have label pair "zone:<any value>" present. `whenUnsatisfiable: +DoNotSchedule` tells the scheduler to let it stay pending if the incoming Pod can't +satisfy the constraint. -If the scheduler placed this incoming Pod into "zoneA", the Pods distribution would become [3, 1], hence the actual skew is 2 (3 - 1) - which violates `maxSkew: 1`. 
In this example, the incoming Pod can only be placed into "zoneB": +If the scheduler placed this incoming Pod into "zoneA", the Pods distribution would +become [3, 1], hence the actual skew is 2 (3 - 1) - which violates `maxSkew: 1`. In +this example, the incoming Pod can only be placed into "zoneB": {{}} graph BT @@ -189,13 +213,21 @@ graph BT You can tweak the Pod spec to meet various kinds of requirements: -- Change `maxSkew` to a bigger value like "2" so that the incoming Pod can be placed into "zoneA" as well. -- Change `topologyKey` to "node" so as to distribute the Pods evenly across nodes instead of zones. In the above example, if `maxSkew` remains "1", the incoming Pod can only be placed onto "node4". -- Change `whenUnsatisfiable: DoNotSchedule` to `whenUnsatisfiable: ScheduleAnyway` to ensure the incoming Pod to be always schedulable (suppose other scheduling APIs are satisfied). However, it's preferred to be placed onto the topology domain which has fewer matching Pods. (Be aware that this preferability is jointly normalized with other internal scheduling priorities like resource usage ratio, etc.) +- Change `maxSkew` to a bigger value like "2" so that the incoming Pod can be placed + into "zoneA" as well. +- Change `topologyKey` to "node" so as to distribute the Pods evenly across nodes + instead of zones. In the above example, if `maxSkew` remains "1", the incoming + Pod can only be placed onto "node4". +- Change `whenUnsatisfiable: DoNotSchedule` to `whenUnsatisfiable: ScheduleAnyway` + to ensure the incoming Pod to be always schedulable (suppose other scheduling APIs + are satisfied). However, it's preferred to be placed into the topology domain which + has fewer matching Pods. (Be aware that this preferability is jointly normalized + with other internal scheduling priorities like resource usage ratio, etc.) ### Example: Multiple TopologySpreadConstraints -This builds upon the previous example. Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively: +This builds upon the previous example. Suppose you have a 4-node cluster where 3 +Pods labeled `foo:bar` are located in node1, node2 and node3 respectively: {{}} graph BT @@ -220,7 +252,10 @@ You can use 2 TopologySpreadConstraints to control the Pods spreading on both zo {{< codenew file="pods/topology-spread-constraints/two-constraints.yaml" >}} -In this case, to match the first constraint, the incoming Pod can only be placed into "zoneB"; while in terms of the second constraint, the incoming Pod can only be placed onto "node4". Then the results of 2 constraints are ANDed, so the only viable option is to place on "node4". +In this case, to match the first constraint, the incoming Pod can only be placed into +"zoneB"; while in terms of the second constraint, the incoming Pod can only be placed +onto "node4". Then the results of 2 constraints are ANDed, so the only viable option +is to place on "node4". Multiple constraints can lead to conflicts. Suppose you have a 3-node cluster across 2 zones: @@ -243,13 +278,18 @@ graph BT class zoneA,zoneB cluster; {{< /mermaid >}} -If you apply "two-constraints.yaml" to this cluster, you will notice "mypod" stays in `Pending` state. This is because: to satisfy the first constraint, "mypod" can only placed into "zoneB"; while in terms of the second constraint, "mypod" can only be placed onto "node2". Then a joint result of "zoneB" and "node2" returns nothing. 
+If you apply "two-constraints.yaml" to this cluster, you will notice "mypod" stays in +`Pending` state. This is because: to satisfy the first constraint, "mypod" can only placed +into "zoneB"; while in terms of the second constraint, "mypod" can only be placed onto +"node2". Then a joint result of "zoneB" and "node2" returns nothing. -To overcome this situation, you can either increase the `maxSkew` or modify one of the constraints to use `whenUnsatisfiable: ScheduleAnyway`. +To overcome this situation, you can either increase the `maxSkew` or modify one of +the constraints to use `whenUnsatisfiable: ScheduleAnyway`. ### Interaction With Node Affinity and Node Selectors -The scheduler will skip the non-matching nodes from the skew calculations if the incoming Pod has `spec.nodeSelector` or `spec.affinity.nodeAffinity` defined. +The scheduler will skip the non-matching nodes from the skew calculations if the +incoming Pod has `spec.nodeSelector` or `spec.affinity.nodeAffinity` defined. ### Example: TopologySpreadConstraints with NodeAffinity @@ -287,11 +327,17 @@ class n5 k8s; class zoneC cluster; {{< /mermaid >}} -and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed into "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected. +and you know that "zoneC" must be excluded. In this case, you can compose the yaml +as below, so that "mypod" will be placed into "zoneB" instead of "zoneC". +Similarly `spec.nodeSelector` is also respected. {{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}} -The scheduler doesn't have prior knowledge of all the zones or other topology domains that a cluster has. They are determined from the existing nodes in the cluster. This could lead to a problem in autoscaled clusters, when a node pool (or node group) is scaled to zero nodes and the user is expecting them to scale up, because, in this case, those topology domains won't be considered until there is at least one node in them. +The scheduler doesn't have prior knowledge of all the zones or other topology domains +that a cluster has. They are determined from the existing nodes in the cluster. This +could lead to a problem in autoscaled clusters, when a node pool (or node group) is +scaled to zero nodes and the user is expecting them to scale up, because, in this case, +those topology domains won't be considered until there is at least one node in them. ### Other Noticeable Semantics @@ -301,10 +347,21 @@ There are some implicit conventions worth noting here: - The scheduler will bypass the nodes without `topologySpreadConstraints[*].topologyKey` present. This implies that: - 1. the Pods located on those nodes do not impact `maxSkew` calculation - in the above example, suppose "node1" does not have label "zone", then the 2 Pods will be disregarded, hence the incoming Pod will be scheduled into "zoneA". - 2. the incoming Pod has no chances to be scheduled onto such nodes - in the above example, suppose a "node5" carrying label `{zone-typo: zoneC}` joins the cluster, it will be bypassed due to the absence of label key "zone". - -- Be aware of what will happen if the incoming Pod's `topologySpreadConstraints[*].labelSelector` doesn't match its own labels. In the above example, if we remove the incoming Pod's labels, it can still be placed into "zoneB" since the constraints are still satisfied. 
However, after the placement, the degree of imbalance of the cluster remains unchanged - it's still zoneA having 2 Pods which hold label {foo:bar}, and zoneB having 1 Pod which holds label {foo:bar}. So if this is not what you expect, we recommend the workload's `topologySpreadConstraints[*].labelSelector` to match its own labels. + 1. the Pods located on those nodes do not impact `maxSkew` calculation - in the + above example, suppose "node1" does not have label "zone", then the 2 Pods will + be disregarded, hence the incoming Pod will be scheduled into "zoneA". + 2. the incoming Pod has no chances to be scheduled onto such nodes - + in the above example, suppose a "node5" carrying label `{zone-typo: zoneC}` + joins the cluster, it will be bypassed due to the absence of label key "zone". + +- Be aware of what will happen if the incomingPod's + `topologySpreadConstraints[*].labelSelector` doesn't match its own labels. In the + above example, if we remove the incoming Pod's labels, it can still be placed into + "zoneB" since the constraints are still satisfied. However, after the placement, + the degree of imbalance of the cluster remains unchanged - it's still zoneA + having 2 Pods which hold label {foo:bar}, and zoneB having 1 Pod which holds + label {foo:bar}. So if this is not what you expect, we recommend the workload's + `topologySpreadConstraints[*].labelSelector` to match its own labels. ### Cluster-level default constraints @@ -405,15 +462,18 @@ scheduled - more packed or more scattered. For finer control, you can specify topology spread constraints to distribute Pods across different topology domains - to achieve either high availability or cost-saving. This can also help on rolling update workloads and scaling out -replicas smoothly. See +replicas smoothly. +See [Motivation](https://github.com/kubernetes/enhancements/tree/master/keps/sig-scheduling/895-pod-topology-spread#motivation) for more details. ## Known Limitations -- There's no guarantee that the constraints remain satisfied when Pods are removed. For example, scaling down a Deployment may result in imbalanced Pods distribution. -You can use [Descheduler](https://github.com/kubernetes-sigs/descheduler) to rebalance the Pods distribution. -- Pods matched on tainted nodes are respected. See [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921) +- There's no guarantee that the constraints remain satisfied when Pods are removed. For + example, scaling down a Deployment may result in imbalanced Pods distribution. + You can use [Descheduler](https://github.com/kubernetes-sigs/descheduler) to rebalance the Pods distribution. +- Pods matched on tainted nodes are respected. + See [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921). 
## {{% heading "whatsnext" %}} From 72a070e619d31db66ad092814e84256c85494ac3 Mon Sep 17 00:00:00 2001 From: Tim Bannister Date: Tue, 10 May 2022 12:54:59 +0100 Subject: [PATCH 091/292] Improve Pod Topology Spread Constraints concept MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit - Adjust heading levels - Link to API reference for Pod - Clarify examples - Add introductory text - Split two combined examples - Explain that Pods in a group should set the same topology spread constraints - Write headings in sentence case - Avoid using “we” --- .../topology-spread-constraints.md | 375 +++++++++++------- 1 file changed, 230 insertions(+), 145 deletions(-) diff --git a/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md b/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md index 965f0d23f98fd..1c1a33c5ed529 100644 --- a/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md +++ b/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md @@ -13,111 +13,105 @@ among failure-domains such as regions, zones, nodes, and other user-defined topo domains. This can help to achieve high availability as well as efficient resource utilization. +You can set [cluster-level constraints](#cluster-level-default-constraints) as a default, +or configure topology spread constraints for individual workloads. -## Prerequisites +## Motivation -### Node Labels +Imagine that you have a cluster of up to twenty nodes, and you want to run a +{{< glossary_tooltip text="workload" term_id="workload" >}} +that automatically scales how many replicas it uses. There could be as few as +two Pods or as many as fifteen. +When there are only two Pods, you'd prefer not to have both of those Pods run on the +same node: you would run the risk that a single node failure takes your workload +offline. -Topology spread constraints rely on node labels to identify the topology -domain(s) that each Node is in. For example, a Node might have labels: -`node=node1,zone=us-east-1a,region=us-east-1` +In addition to this basic usage, there are some advanced usage examples that +enable your workloads to benefit on high availability and cluster utilization. -Suppose you have a 4-node cluster with the following labels: +As you scale up and run more Pods, a different concern becomes important. Imagine +that you have three nodes running five Pods each. The nodes have enough capacity +to run that many replicas; however, the clients that interact with this workload +are split across three different datacenters (or infrastructure zones). Now you +have less concern about a single node failure, but you notice that latency is +higher than you'd like, and you are paying for network costs associated with +sending network traffic between the different zones. -``` -NAME STATUS ROLES AGE VERSION LABELS -node1 Ready 4m26s v1.16.0 node=node1,zone=zoneA -node2 Ready 3m58s v1.16.0 node=node2,zone=zoneA -node3 Ready 3m17s v1.16.0 node=node3,zone=zoneB -node4 Ready 2m43s v1.16.0 node=node4,zone=zoneB -``` +You decide that under normal operation you'd prefer to have a similar number of replicas +[scheduled](/docs/concepts/scheduling-eviction/) into each infrastructure zone, +and you'd like the cluster to self-heal in the case that there is a problem. -Then the cluster is logically viewed as below: +Pod topology spread constraints offer you a declarative way to configure that. 
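As a rough sketch of what that looks like in practice (the `app: web` label and the zone topology key here are illustrative assumptions, not values defined on this page), such a constraint is written directly into the Pod spec:

```yaml
# Illustrative fragment of a Pod spec: keep Pods of a hypothetical
# "app: web" workload spread across zones, tolerating a skew of one.
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: ScheduleAnyway
  labelSelector:
    matchLabels:
      app: web
```

The fields used here are explained one by one in the sections that follow.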
-{{}} -graph TB - subgraph "zoneB" - n3(Node3) - n4(Node4) - end - subgraph "zoneA" - n1(Node1) - n2(Node2) - end - classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; - classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; - classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; - class n1,n2,n3,n4 k8s; - class zoneA,zoneB cluster; -{{< /mermaid >}} - -Instead of manually applying labels, you can also reuse the -[well-known labels](/docs/reference/labels-annotations-taints/) that are created and populated -automatically on most clusters. - -## Spread Constraints for Pods - -### API +## `topologySpreadConstraints` field -The API field `pod.spec.topologySpreadConstraints` is defined as below: +The Pod API includes a field, `spec.topologySpreadConstraints`. Here is an example: ```yaml +--- apiVersion: v1 kind: Pod metadata: - name: mypod + name: example-pod spec: + # Configure a topology spread constraint topologySpreadConstraints: - maxSkew: - minDomains: + minDomains: # optional; alpha since v1.24 topologyKey: whenUnsatisfiable: labelSelector: + ### other Pod fields go here ``` -You can define one or multiple `topologySpreadConstraint` to instruct the +You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`. + +### Spread constraint definition + +You can define one or multiple `topologySpreadConstraints` entries to instruct the kube-scheduler how to place each incoming Pod in relation to the existing Pods across -your cluster. The fields are: - -- **maxSkew** describes the degree to which Pods may be unevenly distributed. - It must be greater than zero. Its semantics differs according to the value of `whenUnsatisfiable`: - - - when `whenUnsatisfiable` equals to "DoNotSchedule", `maxSkew` is the maximum - permitted difference between the number of matching pods in the target - topology and the global minimum - (the minimum number of pods that match the label selector in a topology domain. - For example, if you have 3 zones with 0, 2 and 3 matching pods respectively, - The global minimum is 0). - - when `whenUnsatisfiable` equals to "ScheduleAnyway", scheduler gives higher +your cluster. Those fields are: + +- **maxSkew** describes the degree to which Pods may be unevenly distributed. You must + specify this field and the number must be greater than zero. Its semantics differ + according to the value of `whenUnsatisfiable`: + + - if you select `whenUnsatisfiable: DoNotSchedule`, then `maxSkew` defines the + maximum permitted difference between the number of matching pods in the target + topology and the _global minimum_ + (the minimum number of pods that match the label selector in a topology domain). + For example, if you have 3 zones with 2, 4 and 5 matching pods respectively, + then the global minimum is 2 and `maxSkew` is compared relative to that number. + - if you select `whenUnsatisfiable: ScheduleAnyway`, the scheduler gives higher precedence to topologies that would help reduce the skew. -- **minDomains** indicates a minimum number of eligible domains. +- **minDomains** indicates a minimum number of eligible domains. This field is optional. A domain is a particular instance of a topology. An eligible domain is a domain whose nodes match the node selector. - - The value of `minDomains` must be greater than 0, when specified. 
- - When the number of eligible domains with match topology keys is less than `minDomains`, - Pod topology spread treats "global minimum" as 0, and then the calculation of `skew` is performed. - The "global minimum" is the minimum number of matching Pods in an eligible domain, - or zero if the number of eligible domains is less than `minDomains`. - - When the number of eligible domains with matching topology keys equals or is greater than - `minDomains`, this value has no effect on scheduling. - - When `minDomains` is nil, the constraint behaves as if `minDomains` is 1. - - When `minDomains` is not nil, the value of `whenUnsatisfiable` must be "`DoNotSchedule`". - {{< note >}} The `minDomains` field is an alpha field added in 1.24. You have to enable the `MinDomainsInPodToplogySpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) in order to use it. {{< /note >}} -- **topologyKey** is the key of node labels. If two Nodes are labelled with this key - and have identical values for that label, the scheduler treats both Nodes as being - in the same topology. The scheduler tries to place a balanced number of Pods into - each topology domain. + - The value of `minDomains` must be greater than 0, when specified. + You can only specify `minDomains` in conjunction with `whenUnsatisfiable: DoNotSchedule`. + - When the number of eligible domains with match topology keys is less than `minDomains`, + Pod topology spread treats global minimum as 0, and then the calculation of `skew` is performed. + The global minimum is the minimum number of matching Pods in an eligible domain, + or zero if the number of eligible domains is less than `minDomains`. + - When the number of eligible domains with matching topology keys equals or is greater than + `minDomains`, this value has no effect on scheduling. + - If you do not specify `minDomains`, the constraint behaves as if `minDomains` is 1. + +- **topologyKey** is the key of [node labels](#node-labels). If two Nodes are labelled + with this key and have identical values for that label, the scheduler treats both + Nodes as being in the same topology. The scheduler tries to place a balanced number + of Pods into each topology domain. - **whenUnsatisfiable** indicates how to deal with a Pod if it doesn't satisfy the spread constraint: - `DoNotSchedule` (default) tells the scheduler not to schedule it. @@ -130,14 +124,82 @@ your cluster. The fields are: for more details. When a Pod defines more than one `topologySpreadConstraint`, those constraints are -ANDed: The kube-scheduler looks for a node for the incoming Pod that satisfies all -the constraints. +combined using a logical AND operation: the kube-scheduler looks for a node for the incoming Pod +that satisfies all the configured constraints. -You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`. +### Node labels + +Topology spread constraints rely on node labels to identify the topology +domain(s) that each {{< glossary_tooltip text="node" term_id="node" >}} is in. +For example, a node might have labels: +```yaml + region: us-east-1 + zone: us-east-1a +``` + +{{< note >}} +For brevity, this example doesn't use the +[well-known](/docs/reference/labels-annotations-taints/) label keys +`topology.kubernetes.io/zone` and `topology.kubernetes.io/region`. However, +those registered label keys are nonetheless recommended rather than the private +(unqualified) label keys `region` and `zone` that are used here. 
+ +You can't make a reliable assumption about the meaning of a private label key +between different contexts. +{{< /note >}} -### Example: One TopologySpreadConstraint -Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively: +Suppose you have a 4-node cluster with the following labels: + +``` +NAME STATUS ROLES AGE VERSION LABELS +node1 Ready 4m26s v1.16.0 node=node1,zone=zoneA +node2 Ready 3m58s v1.16.0 node=node2,zone=zoneA +node3 Ready 3m17s v1.16.0 node=node3,zone=zoneB +node4 Ready 2m43s v1.16.0 node=node4,zone=zoneB +``` + +Then the cluster is logically viewed as below: + +{{}} +graph TB + subgraph "zoneB" + n3(Node3) + n4(Node4) + end + subgraph "zoneA" + n1(Node1) + n2(Node2) + end + + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class n1,n2,n3,n4 k8s; + class zoneA,zoneB cluster; +{{< /mermaid >}} + +## Consistency + +You should set the same Pod topology spread constraints on all pods in a group. + +Usually, if you are using a workload controller such as a Deployment, the pod template +takes care of this for you. If you mix different spread constraints then Kubernetes +follows the API definition of the field; however, the behavior is more likely to become +confusing and troubleshooting is less straightforward. + +You need a mechanism to ensure that all the nodes in a topology domain (such as a +cloud provider region) are labelled consistently. +To avoid you needing to manually label nodes, most clusters automatically +populate well-known labels such as `topology.kubernetes.io/hostname`. Check whether +your cluster supports this. + +## Topology spread constraint examples + +### Example: one topology spread constraint {#example-one-topologyspreadconstraint} + +Suppose you have a 4-node cluster where 3 Pods labelled `foo: bar` are located in +node1, node2 and node3 respectively: {{}} graph BT @@ -157,18 +219,20 @@ graph BT class zoneA,zoneB cluster; {{< /mermaid >}} -If we want an incoming Pod to be evenly spread with existing Pods across zones, the spec can be given as: +If you want an incoming Pod to be evenly spread with existing Pods across zones, you +can use a manifest similar to: {{< codenew file="pods/topology-spread-constraints/one-constraint.yaml" >}} -`topologyKey: zone` implies the even distribution will only be applied to the -nodes which have label pair "zone:<any value>" present. `whenUnsatisfiable: -DoNotSchedule` tells the scheduler to let it stay pending if the incoming Pod can't -satisfy the constraint. +From that manifest, `topologyKey: zone` implies the even distribution will only be applied +to nodes that are labelled `zone: ` (nodes that don't have a `zone` label +are skipped). The field `whenUnsatisfiable: DoNotSchedule` tells the scheduler to let the +incoming Pod stay pending if the scheduler can't find a way to satisfy the constraint. -If the scheduler placed this incoming Pod into "zoneA", the Pods distribution would -become [3, 1], hence the actual skew is 2 (3 - 1) - which violates `maxSkew: 1`. In -this example, the incoming Pod can only be placed into "zoneB": +If the scheduler placed this incoming Pod into zone `A`, the distribution of Pods would +become `[3, 1]`. That means the actual skew is then 2 (calculated as `3 - 1`), which +violates `maxSkew: 1`. 
To satisfy the constraints and context for this example, the +incoming Pod can only be placed onto a node in zone `B`: {{}} graph BT @@ -213,21 +277,21 @@ graph BT You can tweak the Pod spec to meet various kinds of requirements: -- Change `maxSkew` to a bigger value like "2" so that the incoming Pod can be placed - into "zoneA" as well. -- Change `topologyKey` to "node" so as to distribute the Pods evenly across nodes - instead of zones. In the above example, if `maxSkew` remains "1", the incoming - Pod can only be placed onto "node4". +- Change `maxSkew` to a bigger value - such as `2` - so that the incoming Pod can + be placed into zone `A` as well. +- Change `topologyKey` to `node` so as to distribute the Pods evenly across nodes + instead of zones. In the above example, if `maxSkew` remains `1`, the incoming + Pod can only be placed onto the node `node4`. - Change `whenUnsatisfiable: DoNotSchedule` to `whenUnsatisfiable: ScheduleAnyway` to ensure the incoming Pod to be always schedulable (suppose other scheduling APIs are satisfied). However, it's preferred to be placed into the topology domain which - has fewer matching Pods. (Be aware that this preferability is jointly normalized - with other internal scheduling priorities like resource usage ratio, etc.) + has fewer matching Pods. (Be aware that this preference is jointly normalized + with other internal scheduling priorities such as resource usage ratio). -### Example: Multiple TopologySpreadConstraints +### Example: multiple topology spread constraints {#example-multiple-topologyspreadconstraints} This builds upon the previous example. Suppose you have a 4-node cluster where 3 -Pods labeled `foo:bar` are located in node1, node2 and node3 respectively: +existing Pods labeled `foo: bar` are located on node1, node2 and node3 respectively: {{}} graph BT @@ -248,14 +312,17 @@ graph BT class zoneA,zoneB cluster; {{< /mermaid >}} -You can use 2 TopologySpreadConstraints to control the Pods spreading on both zone and node: +You can combine two topology spread constraints to control the spread of Pods both +by node and by zone: {{< codenew file="pods/topology-spread-constraints/two-constraints.yaml" >}} -In this case, to match the first constraint, the incoming Pod can only be placed into -"zoneB"; while in terms of the second constraint, the incoming Pod can only be placed -onto "node4". Then the results of 2 constraints are ANDed, so the only viable option -is to place on "node4". +In this case, to match the first constraint, the incoming Pod can only be placed onto +nodes in zone `B`; while in terms of the second constraint, the incoming Pod can only be +scheduled to the node `node4`. The scheduler only considers options that satisfy all +defined constraints, so the only valid placement is onto node `node4`. + +### Example: conflicting topology spread constraints {#example-conflicting-topologyspreadconstraints} Multiple constraints can lead to conflicts. Suppose you have a 3-node cluster across 2 zones: @@ -278,22 +345,28 @@ graph BT class zoneA,zoneB cluster; {{< /mermaid >}} -If you apply "two-constraints.yaml" to this cluster, you will notice "mypod" stays in -`Pending` state. This is because: to satisfy the first constraint, "mypod" can only placed -into "zoneB"; while in terms of the second constraint, "mypod" can only be placed onto -"node2". Then a joint result of "zoneB" and "node2" returns nothing. 
+If you were to apply +[`two-constraints.yaml`](https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/topology-spread-constraints/two-constraints.yaml) +(the manifest from the previous example) +to **this** cluster, you would see that the Pod `mypod` stays in the `Pending` state. +This happens because: to satisfy the first constraint, the Pod `mypod` can only +be placed into zone `B`; while in terms of the second constraint, the Pod `mypod` +can only schedule to node `node2`. The intersection of the two constraints returns +an empty set, and the scheduler cannot place the Pod. -To overcome this situation, you can either increase the `maxSkew` or modify one of -the constraints to use `whenUnsatisfiable: ScheduleAnyway`. +To overcome this situation, you can either increase the value of `maxSkew` or modify +one of the constraints to use `whenUnsatisfiable: ScheduleAnyway`. Depending on +circumstances, you might also decide to delete an existing Pod manually - for example, +if you are troubleshooting why a bug-fix rollout is not making progress. -### Interaction With Node Affinity and Node Selectors +#### Interaction with node affinity and node selectors The scheduler will skip the non-matching nodes from the skew calculations if the incoming Pod has `spec.nodeSelector` or `spec.affinity.nodeAffinity` defined. -### Example: TopologySpreadConstraints with NodeAffinity +### Example: topology spread constraints with node affinity {#example-topologyspreadconstraints-with-nodeaffinity} -Suppose you have a 5-node cluster ranging from zoneA to zoneC: +Suppose you have a 5-node cluster ranging across zones A to C: {{}} graph BT @@ -327,9 +400,9 @@ class n5 k8s; class zoneC cluster; {{< /mermaid >}} -and you know that "zoneC" must be excluded. In this case, you can compose the yaml -as below, so that "mypod" will be placed into "zoneB" instead of "zoneC". -Similarly `spec.nodeSelector` is also respected. +and you know that zone `C` must be excluded. In this case, you can compose a manifest +as below, so that Pod `mypod` will be placed into zone `B` instead of zone `C`. +Similarly, Kubernetes also respects `spec.nodeSelector`. {{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}} @@ -339,43 +412,45 @@ could lead to a problem in autoscaled clusters, when a node pool (or node group) scaled to zero nodes and the user is expecting them to scale up, because, in this case, those topology domains won't be considered until there is at least one node in them. -### Other Noticeable Semantics +## Implicit conventions There are some implicit conventions worth noting here: - Only the Pods holding the same namespace as the incoming Pod can be matching candidates. -- The scheduler will bypass the nodes without `topologySpreadConstraints[*].topologyKey` present. This implies that: +- The scheduler bypasses any nodes that don't have any `topologySpreadConstraints[*].topologyKey` + present. This implies that: - 1. the Pods located on those nodes do not impact `maxSkew` calculation - in the - above example, suppose "node1" does not have label "zone", then the 2 Pods will - be disregarded, hence the incoming Pod will be scheduled into "zoneA". - 2. the incoming Pod has no chances to be scheduled onto such nodes - - in the above example, suppose a "node5" carrying label `{zone-typo: zoneC}` - joins the cluster, it will be bypassed due to the absence of label key "zone". + 1. 
any Pods located on those bypassed nodes do not impact `maxSkew` calculation - in the + above example, suppose the node `node1` does not have a label "zone", then the 2 Pods will + be disregarded, hence the incoming Pod will be scheduled into zone `A`. + 2. the incoming Pod has no chances to be scheduled onto this kind of nodes - + in the above example, suppose a node `node5` has the **mistyped** label `zone-typo: zoneC` + (and no `zone` label set). After node `node5` joins the cluster, it will be bypassed and + Pods for this workload aren't scheduled there. -- Be aware of what will happen if the incomingPod's +- Be aware of what will happen if the incoming Pod's `topologySpreadConstraints[*].labelSelector` doesn't match its own labels. In the - above example, if we remove the incoming Pod's labels, it can still be placed into - "zoneB" since the constraints are still satisfied. However, after the placement, - the degree of imbalance of the cluster remains unchanged - it's still zoneA - having 2 Pods which hold label {foo:bar}, and zoneB having 1 Pod which holds - label {foo:bar}. So if this is not what you expect, we recommend the workload's - `topologySpreadConstraints[*].labelSelector` to match its own labels. + above example, if you remove the incoming Pod's labels, it can still be placed onto + nodes in zone `B`, since the constraints are still satisfied. However, after that + placement, the degree of imbalance of the cluster remains unchanged - it's still zone `A` + having 2 Pods labelled as `foo: bar`, and zone `B` having 1 Pod labelled as + `foo: bar`. If this is not what you expect, update the workload's + `topologySpreadConstraints[*].labelSelector` to match the labels in the pod template. -### Cluster-level default constraints +## Cluster-level default constraints It is possible to set default topology spread constraints for a cluster. Default topology spread constraints are applied to a Pod if, and only if: - It doesn't define any constraints in its `.spec.topologySpreadConstraints`. -- It belongs to a service, replication controller, replica set or stateful set. +- It belongs to a Service, ReplicaSet, StatefulSet or ReplicationController. -Default constraints can be set as part of the `PodTopologySpread` plugin args -in a [scheduling profile](/docs/reference/scheduling/config/#profiles). +Default constraints can be set as part of the `PodTopologySpread` plugin +arguments in a [scheduling profile](/docs/reference/scheduling/config/#profiles). The constraints are specified with the same [API above](#api), except that -`labelSelector` must be empty. The selectors are calculated from the services, -replication controllers, replica sets or stateful sets that the Pod belongs to. +`labelSelector` must be empty. The selectors are calculated from the Services, +ReplicaSets, StatefulSets or ReplicationControllers that the Pod belongs to. An example configuration might look like follows: @@ -396,12 +471,12 @@ profiles: ``` {{< note >}} -[`SelectorSpread` plugin](/docs/reference/scheduling/config/#scheduling-plugins) -is disabled by default. It's recommended to use `PodTopologySpread` to achieve similar -behavior. +The [`SelectorSpread` plugin](/docs/reference/scheduling/config/#scheduling-plugins) +is disabled by default. The Kubernetes project recommends using `PodTopologySpread` +to achieve similar behavior. 
{{< /note >}} -#### Built-in default constraints {#internal-default-constraints} +### Built-in default constraints {#internal-default-constraints} {{< feature-state for_k8s_version="v1.24" state="stable" >}} @@ -449,33 +524,43 @@ profiles: defaultingType: List ``` -## Comparison with PodAffinity/PodAntiAffinity +## Comparison with podAffinity and podAntiAffinity {#comparison-with-podaffinity-podantiaffinity} -In Kubernetes, directives related to "Affinity" control how Pods are -scheduled - more packed or more scattered. +In Kubernetes, [inter-Pod affinity and anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity) +control how Pods are scheduled in relation to one another - either more packed +or more scattered. -- For `PodAffinity`, you can try to pack any number of Pods into qualifying +`podAffinity` +: attracts Pods; you can try to pack any number of Pods into qualifying topology domain(s) -- For `PodAntiAffinity`, only one Pod can be scheduled into a - single topology domain. +`podAntiAffinity` +: repels Pods. If you set this to `requiredDuringSchedulingIgnoredDuringExecution` mode then + only a single Pod can be scheduled into a single topology domain; if you choose + `preferredDuringSchedulingIgnoredDuringExecution` then you lose the ability to enforce the + constraint. For finer control, you can specify topology spread constraints to distribute Pods across different topology domains - to achieve either high availability or cost-saving. This can also help on rolling update workloads and scaling out replicas smoothly. -See + +For more context, see the [Motivation](https://github.com/kubernetes/enhancements/tree/master/keps/sig-scheduling/895-pod-topology-spread#motivation) -for more details. +section of the enhancement proposal about Pod topology spread constraints. -## Known Limitations +## Known limitations - There's no guarantee that the constraints remain satisfied when Pods are removed. For example, scaling down a Deployment may result in imbalanced Pods distribution. - You can use [Descheduler](https://github.com/kubernetes-sigs/descheduler) to rebalance the Pods distribution. + + You can use a tool such as the [Descheduler](https://github.com/kubernetes-sigs/descheduler) + to rebalance the Pods distribution. - Pods matched on tainted nodes are respected. See [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921). ## {{% heading "whatsnext" %}} -- [Blog: Introducing PodTopologySpread](/blog/2020/05/introducing-podtopologyspread/) - explains `maxSkew` in details, as well as bringing up some advanced usage examples. +- The blog article [Introducing PodTopologySpread](/blog/2020/05/introducing-podtopologyspread/) + explains `maxSkew` in some detail, as well as covering some advanced usage examples. +- Read the [scheduling](/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling) section of + the API reference for Pod. From bfff661ac0024fee950c9c9d9be991257a36ffae Mon Sep 17 00:00:00 2001 From: Tim Bannister Date: Tue, 10 May 2022 13:00:19 +0100 Subject: [PATCH 092/292] Clarify known limitation of Pod topology spread constraints The limitation is more around cluster autoscaling; nonetheless it seems to belong under Known limitations. 
---
 .../topology-spread-constraints.md | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md b/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md
index 1c1a33c5ed529..77f4d1ea55362 100644
--- a/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md
+++ b/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md
@@ -406,12 +406,6 @@ Similarly, Kubernetes also respects `spec.nodeSelector`.
 
 {{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}}
 
-The scheduler doesn't have prior knowledge of all the zones or other topology domains
-that a cluster has. They are determined from the existing nodes in the cluster. This
-could lead to a problem in autoscaled clusters, when a node pool (or node group) is
-scaled to zero nodes and the user is expecting them to scale up, because, in this case,
-those topology domains won't be considered until there is at least one node in them.
-
 ## Implicit conventions
 
 There are some implicit conventions worth noting here:
@@ -557,6 +551,16 @@ section of the enhancement proposal about Pod topology spread constraints.
   to rebalance the Pods distribution.
 - Pods matched on tainted nodes are respected.
   See [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921).
+- The scheduler doesn't have prior knowledge of all the zones or other topology
+  domains that a cluster has. They are determined from the existing nodes in the
+  cluster. This could lead to a problem in autoscaled clusters, when a node pool (or
+  node group) is scaled to zero nodes, and you're expecting the cluster to scale up,
+  because, in this case, those topology domains won't be considered until there is
+  at least one node in them.
+  You can work around this by using a cluster autoscaling tool that is aware of
+  Pod topology spread constraints and is also aware of the overall set of topology
+  domains.
+
 ## {{% heading "whatsnext" %}}


From ed58f048b9b6fa3ac83f7ecc1bf3f8cb8275f624 Mon Sep 17 00:00:00 2001
From: Tim Bannister
Date: Wed, 13 Jul 2022 00:40:22 +0100
Subject: [PATCH 093/292] Fix typo

---
 content/en/docs/contribute/style/diagram-guide.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/en/docs/contribute/style/diagram-guide.md b/content/en/docs/contribute/style/diagram-guide.md
index 68a750e2b29f2..ac3a4fd529b2c 100644
--- a/content/en/docs/contribute/style/diagram-guide.md
+++ b/content/en/docs/contribute/style/diagram-guide.md
@@ -438,7 +438,7 @@ Note that the live editor doesn't recognize Hugo shortcodes.
 ### Example 1 - Pod topology spread constraints
 
 Figure 6 shows the diagram appearing in the
-[Pod topology pread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/#node-labels)
+[Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/#node-labels)
 page.
{{< mermaid >}} From 558bd9c0142bf15cac600689b059fa8856fa908a Mon Sep 17 00:00:00 2001 From: Akanksha kumari Date: Sat, 16 Jul 2022 16:54:39 +0530 Subject: [PATCH 094/292] Update configure-liveness-readiness-startup-probes.md --- .../configure-liveness-readiness-startup-probes.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md index 3f4c1c8dcdd3d..b6c2efe5850cc 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md +++ b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md @@ -97,7 +97,7 @@ kubectl describe pod liveness-exec ``` At the bottom of the output, there are messages indicating that the liveness -probes have failed, and the containers have been killed and recreated. +probes have failed, and the failed containers have been killed and recreated. ``` Type Reason Age From Message @@ -117,7 +117,7 @@ Wait another 30 seconds, and verify that the container has been restarted: kubectl get pod liveness-exec ``` -The output shows that `RESTARTS` has been incremented: +The output shows that `RESTARTS` has been incremented. Note that the `RESTARTS` counter increments as soon as a failed container in a restarts : ``` NAME READY STATUS RESTARTS AGE From 31cde47bf019ae2a7a01bfcdf4b08e4f08d1329d Mon Sep 17 00:00:00 2001 From: Akanksha kumari Date: Sat, 16 Jul 2022 19:24:06 +0530 Subject: [PATCH 095/292] Omit `apt-transport-https` from install Remove dummy package `apt-transport-https` from linux kubectl install instructions --- .../en/docs/tasks/tools/install-kubectl-linux.md | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-) diff --git a/content/en/docs/tasks/tools/install-kubectl-linux.md b/content/en/docs/tasks/tools/install-kubectl-linux.md index d027ab647d8d1..77af85d887c49 100644 --- a/content/en/docs/tasks/tools/install-kubectl-linux.md +++ b/content/en/docs/tasks/tools/install-kubectl-linux.md @@ -110,9 +110,19 @@ For example, to download version {{< param "fullversion" >}} on Linux, type: ```shell sudo apt-get update - sudo apt-get install -y apt-transport-https ca-certificates curl + sudo apt-get install -y ca-certificates curl ``` - + + {{< note >}} + + If you use Debian 9 (stretch) or earlier you would also need to install `apt-transport-https`: + + ```shell + sudo apt-get install -y apt-transport-https + ``` + + {{< /note >}} + 2. Download the Google Cloud public signing key: ```shell From 4e15e8f2aec3913620642de4c0068ca4837fa2d3 Mon Sep 17 00:00:00 2001 From: bhangra Date: Sun, 17 Jul 2022 16:53:21 +0900 Subject: [PATCH 096/292] Update kubelet-config-file.md the commas caused kubelet service to fail to start. should be omitted. 
--- .../en/docs/tasks/administer-cluster/kubelet-config-file.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/en/docs/tasks/administer-cluster/kubelet-config-file.md b/content/en/docs/tasks/administer-cluster/kubelet-config-file.md index 668f4532a51fb..da1e167cccb54 100644 --- a/content/en/docs/tasks/administer-cluster/kubelet-config-file.md +++ b/content/en/docs/tasks/administer-cluster/kubelet-config-file.md @@ -30,9 +30,9 @@ Here is an example of what this file might look like: ``` apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration -address: "192.168.0.8", -port: 20250, -serializeImagePulls: false, +address: "192.168.0.8" +port: 20250 +serializeImagePulls: false evictionHard: memory.available: "200Mi" ``` From 1e550e960415c3e64a741350769f014238584b00 Mon Sep 17 00:00:00 2001 From: windsonsea Date: Sun, 17 Jul 2022 10:36:25 +0800 Subject: [PATCH 097/292] [en] updated /node-pressure-eviction.md --- .../scheduling-eviction/node-pressure-eviction.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md b/content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md index a7724c83d9c4a..244298d150422 100644 --- a/content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md +++ b/content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md @@ -91,9 +91,9 @@ Some kubelet garbage collection features are deprecated in favor of eviction: | ------------- | -------- | --------- | | `--image-gc-high-threshold` | `--eviction-hard` or `--eviction-soft` | existing eviction signals can trigger image garbage collection | | `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | eviction reclaims achieve the same behavior | -| `--maximum-dead-containers` | | deprecated once old logs are stored outside of container's context | -| `--maximum-dead-containers-per-container` | | deprecated once old logs are stored outside of container's context | -| `--minimum-container-ttl-duration` | | deprecated once old logs are stored outside of container's context | +| `--maximum-dead-containers` | - | deprecated once old logs are stored outside of container's context | +| `--maximum-dead-containers-per-container` | - | deprecated once old logs are stored outside of container's context | +| `--minimum-container-ttl-duration` | - | deprecated once old logs are stored outside of container's context | ### Eviction thresholds @@ -216,7 +216,7 @@ the kubelet frees up disk space in the following order: If the kubelet's attempts to reclaim node-level resources don't bring the eviction signal below the threshold, the kubelet begins to evict end-user pods. -The kubelet uses the following parameters to determine pod eviction order: +The kubelet uses the following parameters to determine the pod eviction order: 1. Whether the pod's resource usage exceeds requests 1. [Pod Priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/) @@ -319,7 +319,7 @@ The kubelet sets an `oom_score_adj` value for each container based on the QoS fo {{}} The kubelet also sets an `oom_score_adj` value of `-997` for containers in Pods that have -`system-node-critical` {{}} +`system-node-critical` {{}}. {{}} If the kubelet can't reclaim memory before a node experiences OOM, the @@ -401,7 +401,7 @@ counted as `active_file`. 
If enough of these kernel block buffers are on the active LRU list, the kubelet is liable to observe this as high resource use and taint the node as experiencing memory pressure - triggering pod eviction. -For more more details, see [https://github.com/kubernetes/kubernetes/issues/43916](https://github.com/kubernetes/kubernetes/issues/43916) +For more details, see [https://github.com/kubernetes/kubernetes/issues/43916](https://github.com/kubernetes/kubernetes/issues/43916) You can work around that behavior by setting the memory limit and memory request the same for containers likely to perform intensive I/O activity. You will need From e8cb7ec9bae2d6162862b56c01482959009ecff0 Mon Sep 17 00:00:00 2001 From: Balaram Vedulla Date: Sun, 17 Jul 2022 19:02:52 +0200 Subject: [PATCH 098/292] Update assign-pod-node.md --- content/en/docs/concepts/scheduling-eviction/assign-pod-node.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md index db9f1d900d682..fe95d23b68c19 100644 --- a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md +++ b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md @@ -170,7 +170,7 @@ For example, consider the following Pod spec: {{< codenew file="pods/pod-with-affinity-anti-affinity.yaml" >}} If there are two possible nodes that match the -`requiredDuringSchedulingIgnoredDuringExecution` rule, one with the +`preferredDuringSchedulingIgnoredDuringExecution` rule, one with the `label-1:key-1` label and another with the `label-2:key-2` label, the scheduler considers the `weight` of each node and adds the weight to the other scores for that node, and schedules the Pod onto the node with the highest final score. From f8d84cedce105edaaeb9f8dbe95d9e33bc351e29 Mon Sep 17 00:00:00 2001 From: ydFu Date: Mon, 18 Jul 2022 16:34:48 +0800 Subject: [PATCH 099/292] Updated the 'Installing Kubernetes with Kubespray' 1. Add the OS supported by the current version of Kubespray. [ref 1: kubespray](https://github.com/kubernetes-sigs/kubespray/blob/master/README.md#deploy-a-production-ready-kubernetes-cluster) 2. Supplement the features provided by the Kubespray overview. [ref 2: kubespray](https://github.com/kubernetes-sigs/kubespray/blob/master/README.md#deploy-a-production-ready-kubernetes-cluster) 3. Update Supported Linux Distributions. [ref 3: Supported Linux Distributions](https://github.com/kubernetes-sigs/kubespray#supported-linux-distributions) 4. Improve the description in '(1/5) Meet the underlay requirements' [ref 4: Requirements](https://github.com/kubernetes-sigs/kubespray/blob/master/README.md#requirements) 5. Add whatsnext. Signed-off-by: ydFu --- .../production-environment/tools/kubespray.md | 54 ++++++++++--------- 1 file changed, 29 insertions(+), 25 deletions(-) diff --git a/content/en/docs/setup/production-environment/tools/kubespray.md b/content/en/docs/setup/production-environment/tools/kubespray.md index fe139261f6d3f..9877088a5b1fa 100644 --- a/content/en/docs/setup/production-environment/tools/kubespray.md +++ b/content/en/docs/setup/production-environment/tools/kubespray.md @@ -8,19 +8,24 @@ weight: 30 This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Equinix Metal (formerly Packet), Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-sigs/kubespray). 
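For orientation, once the five steps below are complete the actual deployment amounts to a single Ansible playbook run; the inventory path in this sketch is a placeholder for your own setup:

```shell
# Illustrative invocation only; point -i at your own inventory file
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
```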
-Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. Kubespray provides: - -* a highly available cluster -* composable attributes -* support for most popular Linux distributions - * Ubuntu 16.04, 18.04, 20.04, 22.04 - * CentOS/RHEL/Oracle Linux 7, 8 - * Debian Buster, Jessie, Stretch, Wheezy - * Fedora 34, 35 - * Fedora CoreOS - * openSUSE Leap 15 - * Flatcar Container Linux by Kinvolk -* continuous integration tests +Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md#inventory), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. + +Kubespray provides: +* Highly available cluster. +* Composable (Choice of the network plugin for instance). +* Supports most popular Linux distributions: + - Flatcar Container Linux by Kinvolk + - Debian Bullseye, Buster, Jessie, Stretch + - Ubuntu 16.04, 18.04, 20.04, 22.04 + - CentOS/RHEL 7, 8 + - Fedora 34, 35 + - Fedora CoreOS + - openSUSE Leap 15.x/Tumbleweed + - Oracle Linux 7, 8 + - Alma Linux 8 + - Rocky Linux 8 + - Amazon Linux 2 +* Continuous integration tests. To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to [kubeadm](/docs/reference/setup-tools/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/). @@ -33,13 +38,13 @@ To choose a tool which best fits your use case, read [this comparison](https://g Provision servers with the following [requirements](https://github.com/kubernetes-sigs/kubespray#requirements): -* **Ansible v2.11 and python-netaddr are installed on the machine that will run Ansible commands** -* **Jinja 2.11 (or newer) is required to run the Ansible Playbooks** -* The target servers must have access to the Internet in order to pull docker images. Otherwise, additional configuration is required ([See Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md)) -* The target servers are configured to allow **IPv4 forwarding** -* **Your ssh key must be copied** to all the servers in your inventory -* **Firewalls are not managed by kubespray**. You'll need to implement appropriate rules as needed. You should disable your firewall in order to avoid any issues during deployment -* If kubespray is run from a non-root user account, correct privilege escalation method should be configured in the target servers and the `ansible_become` flag or command parameters `--become` or `-b` should be specified +* **Minimum required version of Kubernetes is v1.22** +* **Ansible v2.11+, Jinja 2.11+ and python-netaddr is installed on the machine that will run Ansible commands** +* The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required See ([Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md)) +* The target servers are configured to allow **IPv4 forwarding**. +* If using IPv6 for pods and services, the target servers are configured to allow **IPv6 forwarding**. +* The **firewalls are not managed**, you'll need to implement your own rules the way you used to. 
in order to avoid any issue during deployment you should disable your firewall. +* If kubespray is run from non-root user account, correct privilege escalation method should be configured in the target servers. Then the `ansible_become` flag or command parameters `--become` or `-b` should be specified. Kubespray provides the following utilities to help provision your environment: @@ -110,11 +115,10 @@ When running the reset playbook, be sure not to accidentally target your product ## Feedback -* Slack Channel: [#kubespray](https://kubernetes.slack.com/messages/kubespray/) (You can get your invite [here](https://slack.k8s.io/)) -* [GitHub Issues](https://github.com/kubernetes-sigs/kubespray/issues) +* Slack Channel: [#kubespray](https://kubernetes.slack.com/messages/kubespray/) (You can get your invite [here](https://slack.k8s.io/)). +* [GitHub Issues](https://github.com/kubernetes-sigs/kubespray/issues). ## {{% heading "whatsnext" %}} - -Check out planned work on Kubespray's [roadmap](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/roadmap.md). - +* Check out planned work on Kubespray's [roadmap](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/roadmap.md). +* Learn more about [Kubespray](https://github.com/kubernetes-sigs/kubespray). From f54db271ea3bf7ce0b69ef3f262fbe4060734dff Mon Sep 17 00:00:00 2001 From: sarazqy Date: Fri, 1 Jul 2022 14:40:07 +0800 Subject: [PATCH 100/292] translate content/zh-cn/docs/reference/kubernetes-api/workload-resources/cron-job-v1.md into Chinese --- .../workload-resources/cron-job-v1.md | 1489 +++++++++++++++++ 1 file changed, 1489 insertions(+) create mode 100644 content/zh-cn/docs/reference/kubernetes-api/workload-resources/cron-job-v1.md diff --git a/content/zh-cn/docs/reference/kubernetes-api/workload-resources/cron-job-v1.md b/content/zh-cn/docs/reference/kubernetes-api/workload-resources/cron-job-v1.md new file mode 100644 index 0000000000000..71e2cfd0e574a --- /dev/null +++ b/content/zh-cn/docs/reference/kubernetes-api/workload-resources/cron-job-v1.md @@ -0,0 +1,1489 @@ +--- +api_metadata: + apiVersion: "batch/v1" + import: "k8s.io/api/batch/v1" + kind: "CronJob" +content_type: "api_reference" +description: "CronJob 代表单个定时作业 (Cron Job) 的配置。" +title: "CronJob" +weight: 10 +--- + + + +`apiVersion: batch/v1` + +`import "k8s.io/api/batch/v1"` + +## CronJob {#CronJob} + + +CronJob 代表单个定时作业(Cron Job) 的配置。 + +
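
在逐个字段说明之前,下面给出一个最小的 CronJob 清单,仅用于示意各字段的组织方式;其中的名称、排期表与容器镜像均为假设值,并非本参考页的规范内容:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cronjob            # 假设的名称,仅作示意
spec:
  schedule: "*/5 * * * *"          # Cron 格式:每 5 分钟运行一次
  concurrencyPolicy: Forbid        # 上一次运行未结束时跳过本次运行
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: busybox:1.28    # 假设的镜像
            command: ["sh", "-c", "date; echo hello"]
```
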
+ +- **apiVersion**: batch/v1 + + +- **kind**: CronJob + + +- **metadata** (}}">ObjectMeta) + + + 标准的对象元数据。更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata + +- **spec** (}}">CronJobSpec) + + + 定时作业的预期行为的规约,包括排期表(Schedule)。更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + +- **status** (}}">CronJobStatus) + + + 定时作业的当前状态。更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + +## CronJobSpec {#CronJobSpec} + + + +CronJobSpec 描述了作业的执行方式和实际将运行的时间。 + +
+ + + +- **jobTemplate** (JobTemplateSpec), 必需 + + 指定执行 CronJob 时将创建的作业。 + + + + + **JobTemplateSpec 描述了从模板创建作业时应具有的数据** + + + + - **jobTemplate.metadata** (}}">ObjectMeta) + + 从此模板创建的作业的标准对象元数据。更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata + + + + - **jobTemplate.spec** (}}">JobSpec) + + 对作业的预期行为的规约。更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + + + +- **schedule** (string), 必需 + + Cron 格式的排期表,请参阅 https://en.wikipedia.org/wiki/Cron. + + + +- **timeZone** (string) + + 给定时间表的时区,请参阅 https://en.wikipedia.org/wiki/List_of_tz_database_time_zones。 + 如果未指定,这将取决于 kube-controller-manager 进程的时区。 ALPHA:此字段处于 alpha 状态,必须通过 "CronJobTimeZone" 功能门启用。 + + + +- **concurrencyPolicy** (string) + + 指定如何处理作业的并发执行。 有效值为: + + - "Allow" (默认):允许 CronJobs 并发运行; + - "Forbid":禁止并发运行,如果上一次运行尚未完成则跳过下一次运行; + - "Replace":取消当前正在运行的作业并将其替换为新作业 + + + +- **startingDeadlineSeconds** (int64) + + 可选字段。当作业因为某种原因错过预定时间时,设定作业的启动截止时间(秒)。错过排期的作业将被视为失败的作业。 + + + +- **suspend** (boolean) + + 这个标志告诉控制器暂停后续的执行,它不适用于已经开始的执行。默认为 false。 + + + +- **successfulJobsHistoryLimit** (int32) + + 要保留的成功完成作业数。值必须是非负整数。默认值为 3。 + + + +- **failedJobsHistoryLimit** (int32) + + 要保留的以失败状态结束的作业个数。值必须是非负整数。默认值为 1。 + + +## CronJobStatus {#CronJobStatus} + + +CronJobStatus 表示某个定时作业的当前状态。 + +
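
下面是一个假设的 status 片段,仅用于示意下列字段在实际对象中的形态;其中的时间与 Job 名称均为虚构值:

```yaml
# 仅为 CronJob 对象中 status 部分的示意片段
status:
  active:
  - apiVersion: batch/v1
    kind: Job
    name: example-cronjob-27628380   # 假设的 Job 名称
    namespace: default
  lastScheduleTime: "2022-07-17T02:00:00Z"
  lastSuccessfulTime: "2022-07-17T01:55:10Z"
```
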
+ + +- **active** ([]}}">ObjectReference) + + **Atomic: 将在合并过程中被替换** + + 指向当前正在运行的作业的指针列表。 + + +- **lastScheduleTime** (Time) + + 上次成功调度作业的时间信息。 + + + **Time 是对 time.Time 的封装,它支持对 YAML 和 JSON 的正确编排。为 time 包提供的许多工厂方法模式提供了包装器。** + + +- **lastSuccessfulTime** (Time) + + 上次成功完成作业的时间信息。 + + + **Time 是对 time.Time 的封装,它支持对 YAML 和 JSON 的正确编排。为 time 包提供的许多工厂方法模式提供了包装器。** + + +## CronJobList {#CronJobList} + + +CronJobList 是定时作业的集合。 + +

- **apiVersion**: batch/v1


- **kind**: CronJobList


- **metadata** (}}">ListMeta)

  标准列表元数据。更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata


- **items** ([]}}">CronJob), 必需

  items 是 CronJobs 的列表。



## 操作 {#操作}

+ + + +### `get` 查看指定的 CronJob + +#### HTTP 请求 + +GET /apis/batch/v1/namespaces/{namespace}/cronjobs/{name} + +#### 参数 + + + +- **name** (**路径参数**): string, 必需 + + CronJob 的名称 + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + + +- **pretty** (**查询参数**): string + + }}">pretty + + + +#### 响应 + + + +200 (}}">CronJob): OK + +401: 未经授权 + + + +### `get` 查看指定 CronJob 的状态 + +#### HTTP 请求 + +GET /apis/batch/v1/namespaces/{namespace}/cronjobs/{name}/status + +#### 参数 + + + +- **name** (**路径参数**): string, 必需 + + CronJob 的名称 + + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + + +- **pretty** (**查询参数**): string + + }}">pretty + + + +#### 响应 + + +200 (}}">CronJob): OK + +401: 未经授权 + + + +### `list` 查看或监视 CronJob 类别的对象 + +#### HTTP 请求 + +GET /apis/batch/v1/namespaces/{namespace}/cronjobs + +#### 参数 + + + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + + + +- **allowWatchBookmarks** (**查询参数**): boolean + + }}">allowWatchBookmarks + + + +- **continue** (**查询参数**): string + + }}">continue + + + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + + + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + + + +- **limit** (**查询参数**): integer + + }}">limit + + + +- **pretty** (**查询参数**): string + + }}">pretty + + + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + + + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + + + +- **timeoutSeconds** (**查询参数**): integer + + }}">timeoutSeconds + + + +- **watch** (**查询参数**): boolean + + }}">watch + + + +#### 响应 + + +200 (}}">CronJobList): OK + +401: 未授权 + + + +### `list` 查看或监视 CronJob 类型的对象 + +#### HTTP 请求 + +GET /apis/batch/v1/cronjobs + +#### 参数 + + + +- **allowWatchBookmarks** (*in query*): boolean + + }}">allowWatchBookmarks + + + +- **continue** (*in query*): string + + }}">continue + + + +- **fieldSelector** (*in query*): string + + }}">fieldSelector + + + +- **labelSelector** (*in query*): string + + }}">labelSelector + + + +- **limit** (*in query*): integer + + }}">limit + + + +- **pretty** (*in query*): string + + }}">pretty + + + +- **resourceVersion** (*in query*): string + + }}">resourceVersion + + + +- **resourceVersionMatch** (*in query*): string + + }}">resourceVersionMatch + + + +- **timeoutSeconds** (*in query*): integer + + }}">timeoutSeconds + + + +- **watch** (*in query*): boolean + + }}">watch + + + +#### 响应 + + +200 (}}">CronJobList): OK + +401: 未授权 + + + +### `create` 创建一个 CronJob + +#### HTTP 请求 + +POST /apis/batch/v1/namespaces/{namespace}/cronjobs + +#### 参数 + + + + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + + + +- **body**: }}">CronJob, 必需 + + + +- **dryRun** (**查询参数**): string + + }}">dryRun + + + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + + + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + + + +- **pretty** (**查询参数**): string + + }}">pretty + + + +#### 响应 + + +200 (}}">CronJob): OK + +201 (}}">CronJob): 创建完成 + +202 (}}">CronJob): 已接受 + +401: 未授权 + + + +### `update` 替换指定的 CronJob + +#### HTTP 请求 + +PUT /apis/batch/v1/namespaces/{namespace}/cronjobs/{name} + +#### 参数 + + + +- **name** (**路径参数**): string, 必需 + + CronJob 的名称 + + + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + + + +- **body**: }}">CronJob, 必需 + + +- **dryRun** (**查询参数**): string + + }}">dryRun + + + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + + + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + + + +- **pretty** (**查询参数**): string + + }}">pretty + + + +#### 响应 + + +200 
(}}">CronJob): OK + +201 (}}">CronJob): 创建完成 + +401: 未授权 + + + +### `update` 替换指定 CronJob 的状态 + +#### HTTP 请求 + +PUT /apis/batch/v1/namespaces/{namespace}/cronjobs/{name}/status + +#### 参数 + + + +- **name** (**路径参数**): string, 必需 + + CronJob 的名称 + + + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + + + +- **body**: }}">CronJob, 必需 + + + +- **dryRun** (**查询参数**): string + + }}">dryRun + + + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + + + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + + + +- **pretty** (**查询参数**): string + + }}">pretty + + + +#### 响应 + + +200 (}}">CronJob): OK + +201 (}}">CronJob): 创建完成 + +401: 未授权 + + + +### `patch` 部分更新指定的 CronJob + +#### HTTP 请求 + +PATCH /apis/batch/v1/namespaces/{namespace}/cronjobs/{name} + +#### 参数 + + + +- **name** (**路径参数**): string, 必需 + + CronJob 的名称 + + + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + + + +- **body**: }}">Patch, 必需 + + + +- **dryRun** (**查询参数**): string + + }}">dryRun + + + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + + + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + + + +- **force** (**查询参数**): boolean + + }}">force + + + +- **pretty** (**查询参数**): string + + }}">pretty + + + +#### 响应 + + +200 (}}">CronJob): OK + +201 (}}">CronJob): 创建完成 + +401: 未授权 + + + +### `patch` 部分更新指定 CronJob 的状态 + +#### HTTP 请求 + +PATCH /apis/batch/v1/namespaces/{namespace}/cronjobs/{name}/status + +#### 参数 + + + +- **name** (**路径参数**): string, 必需 + + CronJob 的名称 + + + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + + + +- **body**: }}">Patch, 必需 + + + +- **dryRun** (**参数参数**): string + + }}">dryRun + + + +- **fieldManager** (**参数参数**): string + + }}">fieldManager + + + +- **fieldValidation** (**参数参数**): string + + }}">fieldValidation + + + +- **force** (**参数参数**): boolean + + }}">force + + + +- **pretty** (**参数参数**): string + + }}">pretty + + + +#### 响应 + + +200 (}}">CronJob): OK + +201 (}}">CronJob): 创建完成 + +401: 未授权 + + + +### `delete` 删除一个 CronJob + +#### HTTP 请求 + +DELETE /apis/batch/v1/namespaces/{namespace}/cronjobs/{name} + +#### 参数 + + + +- **name** (**路径参数**): string, 必需 + + CronJob 的名称 + + + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + + + +- **body**: }}">DeleteOptions + + + +- **dryRun** (**查询参数**): string + + }}">dryRun + + + +- **gracePeriodSeconds** (**查询参数**): integer + + }}">gracePeriodSeconds + + + +- **pretty** (**查询参数**): string + + }}">pretty + + + +- **propagationPolicy** (**查询参数**): string + + }}">propagationPolicy + + + +#### 响应 + + +200 (}}">Status): OK + +202 (}}">Status): 创建完成 + +401: 未授权 + + + +### `deletecollection` 删除一组 CronJob + +#### HTTP 请求 + +DELETE /apis/batch/v1/namespaces/{namespace}/cronjobs + +#### 参数 + + + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + + + +- **body**: }}">DeleteOptions + + + +- **continue** (**查询参数**): string + + }}">continue + + + +- **dryRun** (**查询参数**): string + + }}">dryRun + + + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + + + +- **gracePeriodSeconds** (**查询参数**): integer + + }}">gracePeriodSeconds + + + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + + + +- **limit** (**查询参数**): integer + + }}">limit + + + +- **pretty** (**查询参数**): string + + }}">pretty + + + +- **propagationPolicy** (**查询参数**): string + + }}">propagationPolicy + + + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + + + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + + + +- **timeoutSeconds** 
(**查询参数**): integer + + }}">timeoutSeconds + + + +####响应 + + +200 (}}">Status): OK + +401: 未授权 + From a2f9ceee93f8c801826459510069b73558dc1f03 Mon Sep 17 00:00:00 2001 From: sarazqy Date: Wed, 13 Jul 2022 20:26:32 +0800 Subject: [PATCH 101/292] translate content/zh-cn/docs/reference/kubernetes-api/workload-resources/daemon-set-v1.md into Chinese --- .../workload-resources/daemon-set-v1.md | 1598 +++++++++++++++++ 1 file changed, 1598 insertions(+) create mode 100644 content/zh-cn/docs/reference/kubernetes-api/workload-resources/daemon-set-v1.md diff --git a/content/zh-cn/docs/reference/kubernetes-api/workload-resources/daemon-set-v1.md b/content/zh-cn/docs/reference/kubernetes-api/workload-resources/daemon-set-v1.md new file mode 100644 index 0000000000000..1ab884c064b85 --- /dev/null +++ b/content/zh-cn/docs/reference/kubernetes-api/workload-resources/daemon-set-v1.md @@ -0,0 +1,1598 @@ +--- +api_metadata: + apiVersion: "apps/v1" + import: "k8s.io/api/apps/v1" + kind: "DaemonSet" +content_type: "api_reference" +description: "DaemonSet 表示守护程序集的配置。" +title: "DaemonSet" +weight: 8 +--- + + + + +`apiVersion: apps/v1` + +`import "k8s.io/api/apps/v1"` + + +## DaemonSet {#DaemonSet} + +DaemonSet 表示守护程序集的配置。 + +
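
在逐个字段说明之前,下面给出一个最小的 DaemonSet 清单,仅用于示意各字段的组织方式;其中的标签、镜像与更新策略取值均为假设值,并非本参考页的规范内容:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-daemonset          # 假设的名称,仅作示意
  labels:
    app: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent              # 必须与 Pod 模板的标签匹配
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: registry.example.com/node-agent:1.0   # 假设的镜像
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
```
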
+ +- **apiVersion**: apps/v1 + + +- **kind**: DaemonSet + + +- **metadata** (}}">ObjectMeta) + + 标准的对象元数据。更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata + + +- **spec** (}}">DaemonSetSpec) + + 此守护程序集的预期行为。更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + + +- **status** (}}">DaemonSetStatus) + + 此守护程序集的当前状态。此数据可能已经过时一段时间。由系统填充。 + 只读。更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + + + +## DaemonSetSpec {#DaemonSetSpec} + +DaemonSetSpec 是守护程序集的规约。 + +
+ + +- **selector** (}}">LabelSelector), 必需 + + 对由守护程序集管理的 Pod 的标签查询。Pod 必须匹配此查询才能被此 DaemonSet 控制.查询条件必须与 Pod 模板的标签匹配。 + 更多信息: https://kubernetes.io/zh-cn/concepts/overview/working-with-objects/labels/#label-selectors + + +- **template** (}}">PodTemplateSpec), 必需 + + 描述将要创建的 Pod 的对象。DaemonSet 将在与模板的节点选择器匹配的每个节点上(如果未指定节点选择器,则在每个节点上)准确创建此 Pod 的副本。 + 更多信息: https://kubernetes.io/zh-cn/concepts/workloads/controllers/replicationcontroller#pod-template + + +- **minReadySeconds** (int32) + + 新建的 DaemonSet Pod 应该在没有任何容器崩溃的情况下处于就绪状态的最小秒数,这样它才会被认为是可用的。默认为 0(Pod 准备就绪后将被视为可用)。 + + +- **updateStrategy** (DaemonSetUpdateStrategy) + + 用新 Pod 替换现有 DaemonSet Pod 的更新策略。 + + + + **DaemonSetUpdateStrategy 是一个结构体,用于控制 DaemonSet 的更新策略。** + + + + - **updateStrategy.type** (string) + + 守护程序集更新的类型。可以是 "RollingUpdate" 或 "OnDelete"。默认为 RollingUpdate。 + + + + - **updateStrategy.rollingUpdate** (RollingUpdateDaemonSet) + + 滚动更新配置参数。仅在 type 值为 "RollingUpdate" 时出现。 + + + + **用于控制守护程序集滚动更新的预期行为的规约。** + + + + - **updateStrategy.rollingUpdate.maxSurge** (IntOrString) + + 对于拥有可用 DaemonSet Pod 的节点而言,在更新期间可以拥有更新后的 DaemonSet Pod 的最大节点数。 + 属性值可以是绝对数量(例如:5)或所需 Pod 的百分比(例如:10%)。 + 如果 maxUnavailable 为 0,则该值不能为 0。绝对数是通过四舍五入从百分比计算得出的,最小值为 1。 + 默认值为 0。示例:当设置为 30% 时,最多为节点总数的 30% 节点上应该运行守护程序 Pod(即 status.desiredNumberScheduled) + 可以在旧 Pod 标记为已删除之前创建一个新 Pod。更新首先在 30% 的节点上启动新的 Pod。 + 一旦更新的 Pod 可用(就绪时长至少 minReadySeconds 秒),该节点上的旧 DaemonSet pod 就会被标记为已删除。 + 如果旧 Pod 因任何原因变得不可用(Ready 转换为 false、被驱逐或节点被腾空), + 则会立即在该节点上创建更新的 Pod,而不考虑激增限制。 + 允许激增意味着如果就绪检查失败,任何给定节点上的守护程序集消耗的资源可能会翻倍, + 因此资源密集型守护程序集应该考虑到它们可能会在中断期间导致驱逐。 + 此字段是 Beta 字段,由 DaemonSetUpdateSurge 特性门启用/禁用。 + + + + **IntOrString 是一种可以容纳 int32 或字符串的类型。在 JSON 或 YAML 编组和解组中使用时,它会生成或使用内部类型。 + 例如,这允许你拥有一个可以接受名称或数字的 JSON 字段。** + + + + - **updateStrategy.rollingUpdate.maxUnavailable** (IntOrString) + + 更新期间不可用的 DaemonSet Pod 的最大数量。值可以是绝对数(例如:5)或更新开始时 DaemonSet Pod 总数的百分比(例如:10%)。 + 绝对数是通过四舍五入的百分比计算得出的。如果 maxSurge 为 0,则此值不能为 0 默认值为 1。 + 例如:当设置为 30% 时,最多节点总数 30% 的、应该运行守护程序的节点总数(即 status.desiredNumberScheduled) + 可以在任何给定时间停止更新。更新首先停止最多 30% 的 DaemonSet Pod, + 然后在它们的位置启动新的 DaemonSet Pod。 + 一旦新的 Pod 可用,它就会继续处理其他 DaemonSet Pod,从而确保在更新期间至少 70% 的原始 DaemonSet Pod 数量始终可用。 + + + + **IntOrString 是一种可以保存 int32 或字符串的类型。在 JSON 或 YAML 编组和解组中使用时,它会生成或使用内部类型。例如,这允许你拥有一个可以接受名称或数字的 JSON 字段。** + + +- **revisionHistoryLimit** (int32) + + 用来允许回滚而保留的旧历史记录的数量。此字段是个指针,用来区分明确的零值和未指定的指针。默认值是 10。 + + + +## DaemonSetStatus {#DaemonSetStatus} + +DaemonSetStatus 表示守护程进程的当前状态。 + +
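
下面是一个假设的 status 片段,仅用于示意下列计数字段在实际对象中的形态;其中的数值均为虚构值:

```yaml
# 仅为 DaemonSet 对象中 status 部分的示意片段
status:
  desiredNumberScheduled: 3
  currentNumberScheduled: 3
  updatedNumberScheduled: 3
  numberReady: 3
  numberAvailable: 3
  numberMisscheduled: 0
  numberUnavailable: 0
  observedGeneration: 2
```
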
+ + +- **numberReady** (int32),必需 + + numberReady 是应该运行守护进程 Pod 并且有一个或多个 DaemonSet Pod 以就绪条件运行的节点数。 + + +- **numberAvailable** (int32) + + 应该运行守护进程 Pod 并有一个或多个守护进程 Pod 正在运行和可用(就绪时长超过 spec.minReadySeconds)的节点数量。 + + +- **numberUnavailable** (int32) + + 应该运行守护进程 Pod 并且没有任何守护进程 Pod 正在运行且可用(至少已就绪 spec.minReadySeconds 秒)的节点数。 + + +- **numberMisscheduled** (int32),必需 + + 正在运行守护进程 Pod,但不应该运行守护进程 Pod 的节点数量。更多信息: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ + + +- **desiredNumberScheduled** (int32),必需 + + 应该运行守护进程 Pod 的节点总数(包括正确运行守护进程 Pod 的节点)。更多信息: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ + + +- **currentNumberScheduled** (int32),必需 + + 运行至少 1 个守护进程 Pod 并且应该运行守护进程 Pod 的节点数。多信息: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ + + +- **updatedNumberScheduled** (int32) + + 正在运行更新后的守护进程 Pod 的节点总数。 + + +- **collisionCount** (int32) + + DaemonSet 的哈希冲突计数。DaemonSet 控制器在需要为最新的 ControllerRevision 创建名称时使用此字段作为避免冲突的机制。 + + +- **conditions** ([]DaemonSetCondition) + + **补丁策略:根据 `type` 键合并** + + + 表示 DaemonSet 当前状态的最新可用观测信息。 + + + **DaemonSet Condition 描述了 DaemonSet 在某一时刻的状态。** + + + + - **conditions.status** (string),必需 + + 状况的状态,True、False、Unknown 之一。 + + - **conditions.type** (string),必需 + + DaemonSet 状况的类型。 + + + + - **conditions.lastTransitionTime** (Time) + + 状况上次从一种状态转换到另一种状态的时间。 + + + **Time 是对 time.Time 的封装,支持正确编码为 YAML 和 JSON。time 包为许多工厂方法提供了封装器。** + + + + - **conditions.message** (string) + + 一条人类可读的消息,指示有关转换的详细信息。 + + + + - **conditions.reason** (string) + + 状况最后一次转换的原因。 + + + +- **observedGeneration** (int64) + + 守护进程集控制器观察到的最新一代。 + + + + +## DaemonSetList {#DaemonSetList} + +DaemonSetList 是守护进程集的集合。 + +
+ +- **apiVersion**: apps/v1 + + +- **kind**: DaemonSetList + + + + +- **metadata** (}}">ListMeta) + + 标准列表元数据。更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata + +- **items** ([]}}">DaemonSet), 必需 + + DaemonSet 的列表。 + + +## Operations {#Operations} + +
+ + +### `get` 读取指定的 DaemonSet + +#### HTTP 请求 + +GET /apis/apps/v1/namespaces/{namespace}/daemonsets/{name} + +#### 参数 + + +- **name** (**路径参数**): string, 必需 + + DaemonSet 的名称 + + + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + + +200 (}}">DaemonSet): OK + +401: 未授权 + + +### `get` 读取指定的 DaemonSet 的状态 + +#### HTTP 请求 + +GET /apis/apps/v1/namespaces/{namespace}/daemonsets/{name}/status + +#### 参数 + + +- **name** (**路径参数**): string, 必需 + + DaemonSet 的名称 + + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + + +200 (}}">DaemonSet): OK + +401: 未授权 + + +### `list` 列表或查看 DaemonSet 类型的对象 + +#### HTTP 请求 + +GET /apis/apps/v1/namespaces/{namespace}/daemonsets + +#### 参数 + + + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + + + +- **allowWatchBookmarks** (**路径参数**): boolean + + }}">allowWatchBookmarks + + +- **continue** (**查询参数**): string + + }}">continue + + + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + + + +- **limit** (**查询参数**): integer + + }}">limit + + +- **pretty** (**查询参数**): string + + }}">pretty + + + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + + + +- **timeoutSeconds** (**查询参数**): integer + + }}">timeoutSeconds + + +- **watch** (**查询参数**): boolean + + }}">watch + + +#### 响应 + + +200 (}}">DaemonSetList): OK + +401: 未授权 + + +### `list` 列表或查看 DaemonSet 类型的对象 + +#### HTTP 请求 + +GET /apis/apps/v1/daemonsets + +#### 参数 + + + +- **allowWatchBookmarks** (**查询参数**): boolean + + }}">allowWatchBookmarks + + +- **continue** (**查询参数**): string + + }}">continue + + + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + + + +- **limit** (**查询参数**): integer + + }}">limit + + +- **pretty** (**查询参数**): string + + }}">pretty + + + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + + + +- **timeoutSeconds** (**查询参数**): integer + + }}">timeoutSeconds + + +- **watch** (**查询参数**): boolean + + }}">watch + + + +#### 响应 + + +200 (}}">DaemonSetList): OK + +401: 未授权 + + +### `create` 创建一个 DaemonSet + +#### HTTP 请求 + +POST /apis/apps/v1/namespaces/{namespace}/daemonsets + +#### 参数 + + +- **namespace** (**路径参数**): string, 必需 + +}}">namespace + + +- **body**: }}">DaemonSet, 必需 + + + +- **dryRun** (**查询参数**): string + + }}">dryRun + + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + + + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + + +200 (}}">DaemonSet): OK + +201 (}}">DaemonSet): 创建完成 + +202 (}}">DaemonSet): 已接受 + +401: 未授权 + + +### `update` 替换指定的 DaemonSet + +#### HTTP 请求 + +PUT /apis/apps/v1/namespaces/{namespace}/daemonsets/{name} + +#### 参数 + + +- **name** (**路径参数**): string,必需 + + DaemonSet 的名称 + + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + + +- **body**: }}">DaemonSet,必需 + + + +- **dryRun** (**查询参数**): string + + }}">dryRun + + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + + + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + + +200 (}}">DaemonSet): OK + +201 (}}">DaemonSet): 已创建 + 
+401: 未授权 + + +### `update` 替换指定 DaemonSet 的状态 + +#### HTTP 请求 + +PUT /apis/apps/v1/namespaces/{namespace}/daemonsets/{name}/status + +#### 参数 + + +- **name** (**路径参数**): string, 必需 + + DaemonSet 的名称 + + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + + +- **body**: }}">DaemonSet, 必需 + + + +- **dryRun** (**查询参数**): string + + }}">dryRun + + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + + + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + + +200 (}}">DaemonSet): OK + +201 (}}">DaemonSet): 已创建 + +401: 未授权 + + +### `patch` 部分更新指定的 DaemonSet + +#### HTTP 请求 + +PATCH /apis/apps/v1/namespaces/{namespace}/daemonsets/{name} + +#### 参数 + + + +- **name** (**路径参数**): string, 必需 + + DaemonSet 的名称 + + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + + +- **body**: }}">Patch, 必需 + + + +- **dryRun** (**查询参数**): string + + }}">dryRun + + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + + + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + + +- **force** **查询参数**): boolean + + }}">force + + + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + + +200 (}}">DaemonSet): OK + +201 (}}">DaemonSet): 已创建 + +401: 未授权 + + +### `patch` 部分更新指定 DaemonSet 的状态 + +#### HTTP 请求 + +PATCH /apis/apps/v1/namespaces/{namespace}/daemonsets/{name}/status + +#### 参数 + + +- **name** (**路径参数**): string, 必需 + + DaemonSet 的名称 + + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + + +- **body**: }}">Patch, 必需 + + + +- **dryRun** (**查询参数**): string + + }}">dryRun + + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + + + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + + +- **force** (**查询参数**): boolean + + }}">force + + + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + + +200 (}}">DaemonSet): OK + +201 (}}">DaemonSet): 已创建 + +401: 未授权 + + +### `delete` 删除一个 DaemonSet + +#### HTTP 请求 + +DELETE /apis/apps/v1/namespaces/{namespace}/daemonsets/{name} + +#### 参数 + + +- **name** (**路径参数**): string,必需 + + DaemonSet 的名称 + + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + + +- **body**: }}">DeleteOptions + + + +- **dryRun** (**查询参数**): string + + }}">dryRun + + +- **gracePeriodSeconds** (**查询参数**): integer + + }}">gracePeriodSeconds + + + +- **pretty** (**查询参数**): string + + }}">pretty + + +- **propagationPolicy** (**查询参数**): string + + }}">propagationPolicy + + + +#### 响应 + + +200 (}}">Status): OK + +202 (}}">Status): 已接受 + +401: 未授权 + + +### `deletecollection` 删除 DaemonSet 的集合 + +#### HTTP 请求 + +DELETE /apis/apps/v1/namespaces/{namespace}/daemonsets + +#### 参数 + + + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + + +- **body**: }}">DeleteOptions + + + +- **continue** (**查询参数**): string + + }}">continue + + +- **dryRun** (**查询参数**): string + + }}">dryRun + + + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + + +- **gracePeriodSeconds** (**查询参数**): integer + + }}">gracePeriodSeconds + + + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + + +- **limit** (**查询参数**): integer + + }}">limit + + + +- **pretty** (**查询参数**): string + + }}">pretty + + +- **propagationPolicy** (**查询参数**): string + + }}">propagationPolicy + + + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + + + +- **timeoutSeconds** (**查询参数**): integer + + }}">timeoutSeconds + + +#### 响应 + + +200 (}}">Status): OK + +401: 
未授权 \ No newline at end of file From 21f794857411015d581ff968cbe96fa069d287fe Mon Sep 17 00:00:00 2001 From: Akanksha kumari Date: Tue, 19 Jul 2022 12:46:54 +0530 Subject: [PATCH 102/292] Update wordings for RESTART counter Co-authored-by: Rishit Dagli --- .../configure-liveness-readiness-startup-probes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md index b6c2efe5850cc..66f33b1a6fab8 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md +++ b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md @@ -117,7 +117,7 @@ Wait another 30 seconds, and verify that the container has been restarted: kubectl get pod liveness-exec ``` -The output shows that `RESTARTS` has been incremented. Note that the `RESTARTS` counter increments as soon as a failed container in a restarts : +The output shows that `RESTARTS` has been incremented. Note that the `RESTARTS` counter increments as soon as a failed container comes back to the running state: ``` NAME READY STATUS RESTARTS AGE From 673d1d72120430c2b8b8c2032f075169d2bed67a Mon Sep 17 00:00:00 2001 From: Tom Kivlin Date: Tue, 19 Jul 2022 09:40:58 +0100 Subject: [PATCH 103/292] small edit and add back content from PR#2334 --- .../configure-pod-configmap.md | 98 +++++++++++++++---- 1 file changed, 78 insertions(+), 20 deletions(-) diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md index a4882ff88293d..c13fc2ce6a573 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -10,7 +10,7 @@ card: Many applications rely on configuration which is used during either application initialization or runtime. Most of the times there is a requirement to adjust values assigned to configuration parameters. -ConfigMaps is the kubernetes way to inject application pods with configuration data. +ConfigMaps is the Kubernetes way to inject application pods with configuration data. ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable. This page provides a series of usage examples demonstrating how to create ConfigMaps and configure Pods using data stored in ConfigMaps. @@ -623,24 +623,6 @@ Like before, all previous files in the `/etc/config/` directory will be deleted. You can project keys to specific paths and specific permissions on a per-file basis. The [Secrets](/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod) user guide explains the syntax. -### Optional References - -A ConfigMap reference may be marked "optional". If the ConfigMap is non-existent, the mounted volume will be empty. If the ConfigMap exists, but the referenced -key is non-existent the path will be absent beneath the mount point. - -### Mounted ConfigMaps are updated automatically - -When a mounted ConfigMap is updated, the projected content is eventually updated too. This applies in the case where an optionally referenced ConfigMap comes into -existence after a pod has started. - -Kubelet checks whether the mounted ConfigMap is fresh on every periodic sync. 
However, it uses its local TTL-based cache for getting the current value of the -ConfigMap. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the pod can be as long as -kubelet sync period (1 minute by default) + TTL of ConfigMaps cache (1 minute by default) in kubelet. - -{{< note >}} -A container using a ConfigMap as a [subPath](/docs/concepts/storage/volumes/#using-subpath) volume will not receive ConfigMap updates. -{{< /note >}} - @@ -675,7 +657,7 @@ data: ### Restrictions -- You must create a ConfigMap before referencing it in a Pod specification (unless you mark the ConfigMap as "optional"). If you reference a ConfigMap that doesn't exist, the Pod won't start. Likewise, references to keys that don't exist in the ConfigMap will prevent the pod from starting. +- You must create a ConfigMap before referencing it in a Pod specification, or mark the ConfigMap as "optional" (see [Optional ConfigMaps](#optional-configmaps)). If you reference a ConfigMap that doesn't exist, or hasn't been marked as "optional" the Pod won't start. Likewise, references to keys that don't exist in the ConfigMap will prevent the pod from starting. - If you use `envFrom` to define environment variables from ConfigMaps, keys that are considered invalid will be skipped. The pod will be allowed to start, but the invalid names will be recorded in the event log (`InvalidVariableNames`). The log message lists each skipped key. For example: @@ -693,7 +675,83 @@ data: - You can't use ConfigMaps for {{< glossary_tooltip text="static pods" term_id="static-pod" >}}, because the Kubelet does not support this. +### Optional ConfigMaps + +A ConfigMap reference may be marked "optional". +If the ConfigMap is non-existent, the mounted volume will be empty. +If the ConfigMap exists, but the referenced key is non-existent the path will be absent beneath the mount point. +#### Optional ConfigMap in environment variables + +There might be situations where environment variables are not always required. +These environment variables can be marked as optional in a pod like so: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: dapi-test-pod +spec: + containers: + - name: test-container + image: gcr.io/google_containers/busybox + command: [ "/bin/sh", "-c", "env" ] + env: + - name: SPECIAL_LEVEL_KEY + valueFrom: + configMapKeyRef: + name: a-config + key: akey + optional: true + restartPolicy: Never +``` + +When this Pod is run, the output will be empty. + +#### Optional ConfigMap via volume plugin + +Volumes and files provided by a ConfigMap can be also be marked as optional. +The ConfigMap or the key specified does not have to exist. +The mount path for such items will always be created. + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: dapi-test-pod +spec: + containers: + - name: test-container + image: gcr.io/google_containers/busybox + command: [ "/bin/sh", "-c", "ls /etc/config" ] + volumeMounts: + - name: config-volume + mountPath: /etc/config + volumes: + - name: config-volume + configMap: + name: no-config + optional: true + restartPolicy: Never +``` + +When this pod is run, the output will be: + +```shell +``` + +### Mounted ConfigMaps are updated automatically + +When a mounted ConfigMap is updated, the projected content is eventually updated too. This applies in the case where an optionally referenced ConfigMap comes into +existence after a pod has started. + +Kubelet checks whether the mounted ConfigMap is fresh on every periodic sync. 
However, it uses its local TTL-based cache for getting the current value of the +ConfigMap. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the pod can be as long as +kubelet sync period (1 minute by default) + TTL of ConfigMaps cache (1 minute by default) in kubelet. + +{{< note >}} +A container using a ConfigMap as a [subPath](/docs/concepts/storage/volumes/#using-subpath) volume will not receive ConfigMap updates. +{{< /note >}} ## {{% heading "whatsnext" %}} From a2e43f2a28d618bf22411fa8cc58720366558d6e Mon Sep 17 00:00:00 2001 From: "donghui.jiang" Date: Wed, 6 Jul 2022 10:59:41 +0800 Subject: [PATCH 104/292] [zh-cn] update mutating-webhook-configuration-v1.md Chinese version [zh-cn] update content/zh-cn/docs/reference/kubernetes-api/extend-resources/mutating-webhook-configuration-v1.md --- .../mutating-webhook-configuration-v1.md | 1133 +++++++++++++++++ 1 file changed, 1133 insertions(+) create mode 100644 content/zh-cn/docs/reference/kubernetes-api/extend-resources/mutating-webhook-configuration-v1.md diff --git a/content/zh-cn/docs/reference/kubernetes-api/extend-resources/mutating-webhook-configuration-v1.md b/content/zh-cn/docs/reference/kubernetes-api/extend-resources/mutating-webhook-configuration-v1.md new file mode 100644 index 0000000000000..80c2cf83c0a95 --- /dev/null +++ b/content/zh-cn/docs/reference/kubernetes-api/extend-resources/mutating-webhook-configuration-v1.md @@ -0,0 +1,1133 @@ +--- +api_metadata: +apiVersion: "admissionregistration.k8s.io/v1" +import: "k8s.io/api/admissionregistration/v1" +kind: "MutatingWebhookConfiguration" +content_type: "api_reference" +description: "MutatingWebhookConfiguration 描述准入 Webhook 的配置,该 Webhook 可在更改对象的情况下接受或拒绝对象请求" +title: "MutatingWebhookConfiguration" +weight: 2 +--- + + + +`apiVersion: admissionregistration.k8s.io/v1` + +`import "k8s.io/api/admissionregistration/v1"` + +## MutatingWebhookConfiguration {#MutatingWebhookConfiguration} + + + +MutatingWebhookConfiguration 描述准入 Webhook 的配置,该 Webhook 可接受或拒绝对象请求,并且可能变更对象。 + +
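
在逐个字段说明之前,下面给出一个最小的 MutatingWebhookConfiguration 清单,仅用于示意各字段的组织方式;其中的名称、Service 引用与规则均为假设值,并非本参考页的规范内容:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: example-mutating-webhook     # 假设的名称,仅作示意
webhooks:
- name: pod-defaulter.example.com    # 假设的 Webhook 名称(需为完全限定名)
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  reinvocationPolicy: Never
  clientConfig:
    service:
      namespace: example-system      # 假设的命名空间
      name: example-webhook-svc      # 假设的 Service 名称
      path: /mutate
      port: 443
    # caBundle: <PEM 编码的 CA 证书包,此处省略>
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
    scope: "Namespaced"
  timeoutSeconds: 10
```
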
+ +- **apiVersion**: admissionregistration.k8s.io/v1 + +- **kind**: MutatingWebhookConfiguration + + + +- **metadata** (}}">ObjectMeta) + + 标准的对象元数据,更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata。 + + + +- **webhooks** ([]MutatingWebhook) + + **补丁策略:根据 `name` 键执行合并操作** + + webhooks 是 Webhook 及其所影响的资源和操作的列表。 + + + **MutatingWebhook 描述了一个准入 Webhook 及其适用的资源和操作。** + + + + - **webhooks.admissionReviewVersions** ([]string), 必需 + + admissionReviewVersions 是 Webhook 期望的 `AdmissionReview` 版本的优选顺序列表。 + API 服务器将尝试使用它所支持的版本列表中的第一个版本。如果 API 服务器不支持此列表中设置的任何版本,则此对象将验证失败。 + 如果持久化的 Webhook 配置指定了所允许的版本,但其中不包括 API 服务器所知道的任何版本, + 则对 Webhook 的调用将失败并根据失败策略进行处理。 + + + + - **webhooks.clientConfig** (WebhookClientConfig), 必需 + + clientConfig 定义了如何与 Webhook 通信。必需。 + + + **WebhookClientConfig 包含与 Webhook 建立 TLS 连接的信息** + + + + - **webhooks.clientConfig.caBundle** ([]byte) + + `caBundle` 是一个 PEM 编码的 CA 包,将用于验证 Webhook 的服务证书。如果未指定,则使用 apiserver 上的系统信任根。 + + + + - **webhooks.clientConfig.service** (ServiceReference) + + `service` 是对此 Webhook 的服务的引用。必须指定 `service` 或 `url` 之一。 + + 如果 Webhook 在集群中运行,那么你应该使用 `service`。 + + + **ServiceReference 包含对 Service.legacy.k8s.io 的引用** + + + + - **webhooks.clientConfig.service.name** (string), 必需 + + `name` 是服务的名称。必需。 + + + + - **webhooks.clientConfig.service.namespace** (string), 必需 + + `namespace` 是服务的命名空间。必需。 + + + + - **webhooks.clientConfig.service.path** (string) + + `path` 是一个可选的 URL 路径,在针对此服务的所有请求中都会发送此路径。 + + + + - **webhooks.clientConfig.service.port** (int32) + + 如果指定了,则为托管 Webhook 的服务上的端口。默认为 443 以实现向后兼容。 + `port` 应该是一个有效的端口号(包括 1-65535)。 + + + + - **webhooks.clientConfig.url** (string) + + `url` 以标准 URL 形式(`scheme://host:port/path`)给出了 Webhook 的位置。必须指定 `url` 或 `service` 中的一个。 + + `host` 不能用来引用集群中运行的服务;这种情况应改用 `service` 字段。在某些 API 服务器上,可能会通过外部 DNS 解析 `host` 值。 + (例如,`kube-apiserver` 无法解析集群内 DNS,因为这会违反分层原理)。`host` 也可以是 IP 地址。 + + 请注意,使用 `localhost` 或 `127.0.0.1` 作为 `host` 是有风险的,除非你非常小心地在运行 apiserver 的所有主机上运行此 Webhook, + 而这些 API 服务器可能需要调用此 Webhook。此类部署可能是不可移植的,即不容易在新集群中重复安装。 + + 该方案必须是 “https”;URL 必须以 “https://” 开头。 + + 路径是可选的,如果存在,可以是 URL 中允许的任何字符串。你可以使用路径将任意字符串传递给 Webhook,例如集群标识符。 + + 不允许使用用户或基本身份验证,例如不允许使用 “user:password@”。 + 不允许使用片段(“#...”)和查询参数(“?...”)。 + + + + - **webhooks.name** (string), 必需 + + 准入 Webhook 的名称。应该是完全限定的名称,例如 imagepolicy.kubernetes.io,其中 “imagepolicy” 是 Webhook 的名称, + kubernetes.io 是组织的名称。必需。 + + + + - **webhooks.sideEffects** (string), 必需 + + sideEffects 说明此 Webhook 是否有副作用。可接受的值为:None、NoneOnDryRun(通过 v1beta1 创建的 Webhook 也可以指定 Some 或 Unknown)。 + 具有副作用的 Webhook 必须实现协调系统,因为请求可能会被准入链中的未来步骤拒绝,因此需要能够撤消副作用。 + 如果请求与带有 sideEffects == Unknown 或 Some 的 Webhook 匹配,则带有 dryRun 属性的请求将被自动拒绝。 + + + + - **webhooks.failurePolicy** (string) + + failurePolicy 定义如何处理来自准入端点的无法识别的错误 - 允许的值是 Ignore 或 Fail。默认为 Fail。 + + + + - **webhooks.matchPolicy** (string) + + matchPolicy 定义了如何使用 “rules” 列表来匹配传入的请求。允许的值为 “Exact” 或 “Equivalent”。 + + - Exact: 仅当请求与指定规则完全匹配时才匹配请求。 + 例如,如果可以通过 apps/v1、apps/v1beta1 和 extensions/v1beta1 修改 deployments 资源, + 但 “rules” 仅包含 `apiGroups:["apps"]、apiVersions:["v1"]、resources:["deployments"]`, + 对 apps/v1beta1 或 extensions/v1beta1 的请求不会被发送到 Webhook。 + + - Equivalent: 如果针对的资源包含在 “rules” 中,即使请求是通过另一个 API 组或版本提交,也会匹配。 + 例如,如果可以通过 apps/v1、apps/v1beta1 和 extensions/v1beta1 修改 deployments 资源, + 并且 “rules” 仅包含 `apiGroups:["apps"]、apiVersions:["v1"]、resources:["deployments "]`, + 对 apps/v1beta1 或 extensions/v1beta1 的请求将被转换为 apps/v1 并发送到 Webhook。 + + 默认为 “Equivalent”。 + + - **webhooks.namespaceSelector** 
(}}">LabelSelector) + + + + namespaceSelector 根据对象的命名空间是否与 selector 匹配来决定是否在该对象上运行 Webhook。 + 如果对象本身是 Namespace,则针对 object.metadata.labels 执行匹配。 + 如果对象是其他集群作用域资源,则永远不会跳过 Webhook 的匹配动作。 + + 例如,为了针对 “runlevel” 不为 “0” 或 “1” 的名字空间中的所有对象运行 Webhook; + 你可以按如下方式设置 selector : + ``` + "namespaceSelector": { + "matchExpressions": [ + { + "key": "runlevel", + "operator": "NotIn", + "values": [ + "0", + "1" + ] + } + ] + } + ``` + + + 相反,如果你只想针对 “environment” 为 “prod” 或 “staging” 的名字空间中的对象运行 Webhook; + 你可以按如下方式设置 selector: + ``` + "namespaceSelector": { + "matchExpressions": [ + { + "key": "environment", + "operator": "In", + "values": [ + "prod", + "staging" + ] + } + ] + } + ``` + + 有关标签选择算符的更多示例,请参阅 + https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/labels。 + + 默认为空的 LabelSelector,匹配所有对象。 + + + + - **webhooks.objectSelector** (}}">LabelSelector) + + objectSelector 根据对象是否具有匹配的标签来决定是否运行 Webhook。 + objectSelector 针对将被发送到 Webhook 的 oldObject 和 newObject 进行评估,如果任一对象与选择器匹配,则视为匹配。 + 空对象(create 时为 oldObject,delete 时为 newObject)或不能有标签的对象(如 DeploymentRollback 或 PodProxyOptions 对象) + 认为是不匹配的。 + 仅当 Webhook 支持时才能使用对象选择器,因为最终用户可以通过设置标签来跳过准入 webhook。 + 默认为空的 LabelSelector,匹配所有内容。 + + + - **webhooks.reinvocationPolicy** (string) + + + reinvocationPolicy 表示这个 webhook 是否可以被多次调用,作为一次准入评估的一部分。可取值有 “Never” 和 “IfNeeded”。 + + - Never: 在一次录取评估中,webhook 被调用的次数不会超过一次。 + - IfNeeded:如果被录取的对象在被最初的 Webhook 调用后又被其他录取插件修改, + 那么该 webhook 将至少被额外调用一次作为录取评估的一部分。 + 指定此选项的 webhook **必须**是幂等的,能够处理它们之前承认的对象。 + 注意:**不保证额外调用的次数正好为1。** + 如果额外的调用导致对对象的进一步修改,Webhook 不保证会再次被调用。 + **使用该选项的 webhook 可能会被重新排序,以最小化额外调用的数量。** + 在保证所有的变更都完成后验证一个对象,使用验证性质的准入 Webhook 代替。 + + 默认值为 “Never” 。 + + + + - **webhooks.rules** ([]RuleWithOperations) + + rules 描述了 Webhook 关心的资源/子资源上有哪些操作。Webhook 关心操作是否匹配**任何**rules。 + 但是,为了防止 ValidatingAdmissionWebhooks 和 ValidatingAdmissionWebhooks 将集群置于只能完全禁用插件才能恢复的状态, + ValidatingAdmissionWebhooks 和 ValidatingAdmissionWebhooks 永远不会在处理 ValidatingWebhookConfiguration + 和 ValidatingWebhookConfiguration 对象的准入请求被调用。 + + + **RuleWithOperations 是操作和资源的元组。建议确保所有元组组合都是有效的。** + + + + - **webhooks.rules.apiGroups** ([]string) + + apiGroups 是资源所属的 API 组列表。'*' 是所有组。 + 如果存在 '*',则列表的长度必须为 1。必需。 + + + + - **webhooks.rules.apiVersions** ([]string) + + apiVersions 是资源所属的 API 版本列表。'*' 是所有版本。 + 如果存在 '*',则列表的长度必须为 1。必需。 + + + + - **webhooks.rules.operations** ([]string) + + operations 是准入 Webhook 所关心的操作 —— CREATE、UPDATE、DELETE、CONNECT + 或用来指代所有已知操作以及将来可能添加的准入操作的 `*`。 + 如果存在 '*',则列表的长度必须为 1。必需。 + + + + - **webhooks.rules.resources** ([]string) + + resources 是此规则适用的资源列表。 + + - `pods` 表示 pods,'pods/log' 表示 pods 的日志子资源。`*` 表示所有资源,但不是子资源。 + - `pods/*` 表示 pods 的所有子资源, + - `*/scale` 表示所有 scale 子资源, + - `*/*` 表示所有资源及其子资源。 + + 如果存在通配符,则验证规则将确保资源不会相互重叠。 + + 根据所指定的对象,可能不允许使用子资源。必需。 + + + + - **webhooks.rules.scope** (string) + + scope 指定此规则的范围。有效值为 “Cluster”, “Namespaced” 和 “*”。 + “Cluster” 表示只有集群范围的资源才会匹配此规则。 + Namespace API 对象是集群范围的。 + “Namespaced” 意味着只有命名空间作用域的资源会匹配此规则。 + “*” 表示没有范围限制。 + 子资源与其父资源的作用域相同。默认为 “*”。 + + + + - **webhooks.timeoutSeconds** (int32) + + timeoutSeconds 指定此 Webhook 的超时时间。 + 超时后,Webhook 的调用将被忽略或 API 调用将根据失败策略失败。 + 超时值必须在 1 到 30 秒之间。默认为 10 秒。 + +## MutatingWebhookConfigurationList {#MutatingWebhookConfigurationList} + + +MutatingWebhookConfigurationList 是 MutatingWebhookConfiguration 的列表。 + +

- **apiVersion**: admissionregistration.k8s.io/v1

- **kind**: MutatingWebhookConfigurationList


- **metadata** (}}">ListMeta)

  标准的列表元数据,更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds。


- **items** ([]}}">MutatingWebhookConfiguration), 必需

  MutatingWebhookConfiguration 列表。


## 操作 {#operations}

+ + +### `get` 读取指定的 MutatingWebhookConfiguration + +#### HTTP 请求 + +GET /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations/{name} + + +#### 参数 + +- **name** (**路径参数**): string, 必需 + + MutatingWebhookConfiguration 的名称。 + + + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">MutatingWebhookConfiguration): OK + +401: Unauthorized + + +### `list` 列出或观察 MutatingWebhookConfiguration 类型的对象 + +#### HTTP 请求 + +GET /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations + + +#### 参数 + +- **allowWatchBookmarks** (**查询参数**): boolean + + }}">allowWatchBookmarks + + + +- **continue** (**查询参数**): string + + }}">continue + + + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + + + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + + + +- **limit** (**查询参数**): integer + + }}">limit + + + +- **pretty** (**查询参数**): string + + }}">pretty + + + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + + + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + + + +- **timeoutSeconds** (**查询参数**): integer + + }}">timeoutSeconds + + + +- **watch** (**查询参数**): boolean + + }}">watch + + +#### 响应 + + +200 (}}">MutatingWebhookConfigurationList): OK + +401: Unauthorized + + +### `create` 创建一个 MutatingWebhookConfiguration + +#### HTTP 请求 + +POST /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations + + +#### 参数 + +- **body**: }}">MutatingWebhookConfiguration, 必需 + + + +- **dryRun** (**查询参数**): string + + }}">dryRun + + + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + + + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + + + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">MutatingWebhookConfiguration): OK + +201 (}}">MutatingWebhookConfiguration): Created + +202 (}}">MutatingWebhookConfiguration): Accepted + +401: Unauthorized + + +### `update` 替换指定的 MutatingWebhookConfiguration + +#### HTTP 请求 + +PUT /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations/{name} + + +#### 参数 + +- **name** (**路径参数**): string, 必需 + + MutatingWebhookConfiguration 的名称。 + + + +- **body**: }}">MutatingWebhookConfiguration, 必需 + + + +- **dryRun** (**查询参数**): string + + }}">dryRun + + + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + + + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + + + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">MutatingWebhookConfiguration): OK + +201 (}}">MutatingWebhookConfiguration): Created + +401: Unauthorized + + +### `patch` 部分更新指定的 MutatingWebhookConfiguration + +#### HTTP 请求 + +PATCH /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations/{name} + + +#### 参数 + +- **name** (**路径参数**): string, 必需 + + MutatingWebhookConfiguration 的名称。 + + + +- **body**: }}">Patch, 必需 + + + +- **dryRun** (**查询参数**): string + + }}">dryRun + + + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + + + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + + + +- **force** (**查询参数**): boolean + + }}">force + + + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">MutatingWebhookConfiguration): OK + +201 (}}">MutatingWebhookConfiguration): Created + +401: Unauthorized + + +### `delete` 删除 MutatingWebhookConfiguration + +#### HTTP 请求 + +DELETE /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations/{name} + + +#### 参数 + +- **name** (**路径参数**): string, 必需 + + MutatingWebhookConfiguration 的名称。 + +- **body**: 
}}">DeleteOptions + + + +- **dryRun** (**查询参数**): string + + }}">dryRun + + + +- **gracePeriodSeconds** (**查询参数**): integer + + }}">gracePeriodSeconds + + + +- **pretty** (**查询参数**): string + + }}">pretty + + + +- **propagationPolicy** (**查询参数**): string + + }}">propagationPolicy + + +#### 响应 + +200 (}}">Status): OK + +202 (}}">Status): Accepted + +401: Unauthorized + + +### `deletecollection` 删除 MutatingWebhookConfiguration 的集合 + +#### HTTP 请求 + +DELETE /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations + + +#### 参数 + +- **body**: }}">DeleteOptions + + + +- **continue** (**查询参数**): string + + }}">continue + + + +- **dryRun** (**查询参数**): string + + }}">dryRun + + + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + + + +- **gracePeriodSeconds** (**查询参数**): integer + + }}">gracePeriodSeconds + + + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + + + +- **limit** (**查询参数**): integer + + }}">limit + + + +- **pretty** (**查询参数**): string + + }}">pretty + + + +- **propagationPolicy** (**查询参数**): string + + }}">propagationPolicy + + + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + + + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + + + +- **timeoutSeconds** (**查询参数**): integer + + }}">timeoutSeconds + + +#### 响应 + +200 (}}">Status): OK + +401: Unauthorized \ No newline at end of file From 4802e0f14aa8d1a8e3bf2e5fef27f34b711b2147 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 19 Jul 2022 19:06:16 +0200 Subject: [PATCH 105/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index e62e39217ea65..6c17c3eace920 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -25,7 +25,7 @@ Por otro lado, cuando se crean NetworkPolicies basadas en IP, se definen políti ## Prerrequisitos -Las políticas de red son implementadas por el [plugin de red](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/). Para usar políticas de red, debes estar utilizando una solución de red que soporte NetworkPolicy. Crear un recurso NetworkPolicy sin un controlador que lo habilite no tendrá ningún efecto. +Las políticas de red son implementadas por el [plugin de red](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/). Para usar políticas de red, debes estar utilizando una solución de red que soporte NetworkPolicy. Crear un recurso NetworkPolicy sin un controlador que lo habilite no tendrá efecto alguno. 
## Dos Tipos de Aislamiento de Pod From 08267eaae67003f93d41f9c208d6fa73fa63c106 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 19 Jul 2022 19:06:28 +0200 Subject: [PATCH 106/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 6c17c3eace920..25c9a47b822f4 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -268,7 +268,7 @@ Aunque NetworkPolicy no puede apuntar a un Namespace por su nombre con algún ca ## Que no puedes hacer con políticas de red (al menos, aún no) -A día de hoy, en Kubernetes {{< skew currentVersion >}}, la siguiente funcionalidad no existe en la API de NetworkPolicy, pero es posible que se puedan implementar soluciones mediante componentes del sistema operativo (como SELinux, OpenVSwitch, IPTables, etc.) o tecnologías de capa 7 (Ingress controllers, implementaciones de Service Mesh) o controladores de admisión. En caso de que seas nuevo en la seguridad de la red en Kubernetes, vale la pena señalar que las siguientes historias de usuario no pueden (todavía) ser implementadas usando la API NetworkPolicy. +Actualmente, en Kubernetes {{< skew currentVersion >}}, la siguiente funcionalidad no existe en la API de NetworkPolicy, pero es posible que se puedan implementar soluciones mediante componentes del sistema operativo (como SELinux, OpenVSwitch, IPTables, etc.) o tecnologías de capa 7 (Ingress controllers, implementaciones de Service Mesh) o controladores de admisión. En caso de que seas nuevo en la seguridad de la red en Kubernetes, vale la pena señalar que las siguientes historias de usuario no pueden (todavía) ser implementadas usando la API NetworkPolicy. - Forzar que el tráfico interno del clúster pase por una puerta de enlace común (esto se puede implementar con una malla de servicios u otro proxy). - Cualquier cosa relacionada con TLS (se puede implementar con una malla de servicios o un Ingress controllers para esto). From c96718f2ee7b5d419a9126127ec0605aeb69b3ee Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 19 Jul 2022 19:07:37 +0200 Subject: [PATCH 107/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 25c9a47b822f4..5071fc9f209ca 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -55,7 +55,7 @@ Enviar esto al API Server de su clúster no tendrá ningún efecto a menos que s __Campos Obligatorios__: Como con todos los otras configuraciones de Kubernetes, una NetworkPolicy necesita los campos `apiVersion`, `kind`, y `metadata`. 
Para obtener información general -sobre cómo funcionan esos ficheros de configuración, mirar +sobre cómo funcionan esos ficheros de configuración, puedes consultar [Configurar un Pod para usar un ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/), y [Gestión de Objetos](/docs/concepts/overview/working-with-objects/object-management). From cef90828d9260a6ccc3c129fade28858abd63377 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 19 Jul 2022 19:11:03 +0200 Subject: [PATCH 108/292] Fix type Signed-off-by: Nicolas Quiceno B --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index 5071fc9f209ca..f7c20dceed727 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -34,7 +34,7 @@ Hay dos tipos de aislamiento para un Pod: el aislamiento para la salida y el ais Por defecto, un Pod no está aislado para la salida; todas las conexiones salientes están permitidas. Un Pod está aislado para la salida si hay alguna NetworkPolicy con "Egress" en su `policyTypes` que seleccione el Pod; decimos que tal política se aplica al Pod para la salida. Cuando un Pod está aislado para la salida, las únicas conexiones permitidas desde el Pod son las permitidas por la lista `egress` de las NetworkPolicy que se aplique al Pod para la salida. Los valores de esas listas `egress` se combinan de forma aditiva. -Por defecto, un Pod no está aislado para la entrada; todas las conexiones entrantes están permitidas. Un Pod está aislado para la entrada si hay alguna NetworkPolicy con "Ingress" en su `policyTypes` que seleccione el Pod; decimos que tal política se aplica al Pod para la entrada. Cuando un Pod está aislado para la entrada, las únicas conexiones permitidas en el Pod son las del nodo del Pod y las permitidas por la lista `ingress` de alguna NetworkPolicy que se aplique al pod para la entrada. Los valores de esas listas de direcciones se combinan de forma aditiva. +Por defecto, un Pod no está aislado para la entrada; todas las conexiones entrantes están permitidas. Un Pod está aislado para la entrada si hay alguna NetworkPolicy con "Ingress" en su `policyTypes` que seleccione el Pod; decimos que tal política se aplica al Pod para la entrada. Cuando un Pod está aislado para la entrada, las únicas conexiones permitidas en el Pod son las del nodo del Pod y las permitidas por la lista `ingress` de alguna NetworkPolicy que se aplique al Pod para la entrada. Los valores de esas listas de direcciones se combinan de forma aditiva. Las políticas de red no entran en conflicto; son aditivas. Si alguna política(s) se aplica a un Pod para una dirección determinada, las conexiones permitidas en esa dirección desde ese Pod es la unión de lo que permiten las políticas aplicables. Por tanto, el orden de evaluación no afecta al resultado de la política. 
From edeadf50a31287627da398f6b29516a191f798c2 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 19 Jul 2022 20:32:56 +0200 Subject: [PATCH 109/292] Update content/es/docs/concepts/services-networking/network-policies.md Co-authored-by: Victor Morales --- .../es/docs/concepts/services-networking/network-policies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index f7c20dceed727..ab0a752e0b067 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -105,7 +105,7 @@ __namespaceSelector__ *y* __podSelector__: Una única entrada `to`/`from` que es ... ``` -contiene un único elemento `from` permitiendo conexiones desde los Pods con el label `role=client` en nombres de espacio con el label `user=alice`. Por el contrario, *esta* política: +contiene un elemento `from` permitiendo conexiones desde los Pods con el label `role=client` en Namespaces con el label `user=alice`. Por el contrario, *esta* política: ```yaml ... From 4d565e7e054175a8a9e9871085a4ba15d8f89743 Mon Sep 17 00:00:00 2001 From: Nicolas Quiceno B Date: Tue, 19 Jul 2022 20:37:35 +0200 Subject: [PATCH 110/292] Improve text localization Signed-off-by: Nicolas Quiceno B --- .../docs/concepts/services-networking/network-policies.md | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md index ab0a752e0b067..09f65765a4c5e 100644 --- a/content/es/docs/concepts/services-networking/network-policies.md +++ b/content/es/docs/concepts/services-networking/network-policies.md @@ -50,7 +50,7 @@ Un ejemplo de NetworkPolicy pudiera ser este: {{< codenew file="service/networking/networkpolicy.yaml" >}} {{< note >}} -Enviar esto al API Server de su clúster no tendrá ningún efecto a menos que su solución de red tenga soporte de políticas de red. +Enviar esto al API Server de su clúster no tendrá ningún efecto a menos que su solución de red soporte de políticas de red. {{< /note >}} __Campos Obligatorios__: Como con todos los otras configuraciones de Kubernetes, una NetworkPolicy @@ -90,7 +90,7 @@ __podSelector__: Este selector selecciona Pods específicos en el mismo Namespac __namespaceSelector__: Este selector selecciona Namespaces específicos para permitir el tráfico como origen de entrada o destino de salida. -__namespaceSelector__ *y* __podSelector__: Una única entrada `to`/`from` que especifique tanto `namespaceSelector` como `podSelector` selecciona Pods específicos dentro de Namespaces específicos. Tenga cuidado de utilizar la sintaxis de YAML correcta. A continuación se muestra un ejemplo de esta política: +__namespaceSelector__ *y* __podSelector__: Una única entrada `to`/`from` que especifica tanto `namespaceSelector` como `podSelector` selecciona Pods específicos dentro de Namespaces específicos. Es importante revisar que se utiliza la sintaxis de YAML correcta. A continuación se muestra un ejemplo de esta política: ```yaml ... @@ -120,8 +120,7 @@ contiene un elemento `from` permitiendo conexiones desde los Pods con el label ` ... ``` - -contiene dos elementos en el array `from`, y permite conexiones desde Pods en el Namespace local con el label `role=client`, *o* desde cualquier Pod en cualquier Namespace con el label `user=alice`. 
+contiene dos elementos en el array `from`, y permite conexiones desde Pods en los Namespaces con el label `role=client`, *o* desde cualquier Pod en cualquier Namespace con el label `user=alice`. En caso de duda, utilice `kubectl describe` para ver cómo Kubernetes ha interpretado la política. From a0a41981e0771070cdf4a266db0bcd49226e595f Mon Sep 17 00:00:00 2001 From: windsonsea Date: Wed, 20 Jul 2022 10:11:47 +0800 Subject: [PATCH 111/292] [zh-cn] fix 404 errors in release.md --- content/zh-cn/releases/release.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/content/zh-cn/releases/release.md b/content/zh-cn/releases/release.md index cbb8dc48b2bc7..9d1c01cf943b1 100644 --- a/content/zh-cn/releases/release.md +++ b/content/zh-cn/releases/release.md @@ -219,7 +219,7 @@ The general labeling process should be consistent across artifact types. referring to a release MAJOR.MINOR `vX.Y` version. See also - [release versioning](/contributors/design-proposals/release/versioning.md). + [release versioning](https://git.k8s.io/sig-release/release-engineering/versioning.md). - *release branch*: Git branch `release-X.Y` created for the `vX.Y` milestone. @@ -233,7 +233,7 @@ The general labeling process should be consistent across artifact types. [GitHub 里程碑](https://help.github.com/en/github/managing-your-work-on-github/associating-milestones-with-issues-and-pull-requests) 指的是发布 主.次 `vX.Y` 版本。 - 另请参阅[发布版本控制](/contributors/design-proposals/release/versioning.md)。 + 另请参阅[发布版本控制](https://git.k8s.io/sig-release/release-engineering/versioning.md)。 - **发布分支**:为 `vX.Y` 里程碑创建的 Git 分支 `release-X.Y`。 @@ -504,14 +504,14 @@ Issues are marked as targeting a milestone via the Prow "/milestone" command. The Release Team's [Bug Triage Lead](https://git.k8s.io/sig-release/release-team/role-handbooks/bug-triage/README.md) and overall community watch incoming issues and triage them, as described in the contributor guide section on -[issue triage](/contributors/guide/issue-triage.md). +[issue triage](https://k8s.dev/docs/guide/issue-triage/). 
--> ### 问题补充 {#issue-additions} 通过 Prow “/milestone” 命令标记问题并指向里程碑。 发布团队的[错误分类负责人](https://git.k8s.io/sig-release/release-team/role-handbooks/bug-triage/README.md)和整个社区观察新出现的问题并对其进行分类, -在贡献者指南部分中描述[问题分类](/contributors/guide/issue-triage.md)。 +在贡献者指南部分中描述[问题分类](https://k8s.dev/docs/guide/issue-triage/)。 ## 其他必需的标签 {#other-required-labels} -[这里是标签列表及其用途和目的](https://git.k8s.io/test-infra/label*sync/labels.md#labels-that-apply-to-all-repos-for-both-issues-and-prs)。 +[这里是标签列表及其用途和目的](https://git.k8s.io/test-infra/label_sync/labels.md#labels-that-apply-to-all-repos-for-both-issues-and-prs)。 -- **items** ([]}}">HorizontalPodAutoscaler), required +- **items** ([]}}">HorizontalPodAutoscaler),必需 items 是水平 Pod 自动扩缩器对象的列表。 diff --git a/content/zh-cn/docs/reference/kubernetes-api/workload-resources/horizontal-pod-autoscaler-v2beta2.md b/content/zh-cn/docs/reference/kubernetes-api/workload-resources/horizontal-pod-autoscaler-v2beta2.md new file mode 100644 index 0000000000000..b8ac14288a3a8 --- /dev/null +++ b/content/zh-cn/docs/reference/kubernetes-api/workload-resources/horizontal-pod-autoscaler-v2beta2.md @@ -0,0 +1,2327 @@ +--- +api_metadata: + apiVersion: "autoscaling/v2beta2" + import: "k8s.io/api/autoscaling/v2beta2" + kind: "HorizontalPodAutoscaler" +content_type: "api_reference" +description: "HorizontalPodAutoscaler 是水平 Pod 自动扩缩器的配置,它根据指定的指标自动管理实现 scale 子资源的任何资源的副本数。" +title: "HorizontalPodAutoscaler v2beta2" +weight: 13 +--- + + +`apiVersion: autoscaling/v2beta2` + +`import "k8s.io/api/autoscaling/v2beta2"` + + +## HorizontalPodAutoscaler {#HorizontalPodAutoscaler} + + +HorizontalPodAutoscaler 是水平 Pod 自动扩缩器的配置, +它根据指定的指标自动管理实现 scale 子资源的任何资源的副本数。 + +
+ +- **apiVersion**: autoscaling/v2beta2 + +- **kind**: HorizontalPodAutoscaler + +- **metadata** (}}">ObjectMeta) + + + + metadata 是标准的对象元数据。更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata + +- **spec** (}}">HorizontalPodAutoscalerSpec) + + + + spec 是自动扩缩器行为的规约。更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status. + +- **status** (}}">HorizontalPodAutoscalerStatus) + + + + status 是自动扩缩器的当前信息。 + +## HorizontalPodAutoscalerSpec {#HorizontalPodAutoscalerSpec} + + +HorizontalPodAutoscalerSpec 描述了 HorizontalPodAutoscaler 预期的功能。 + +
+ + + +- **maxReplicas** (int32),必需 + + maxReplicas 是自动扩缩器可以扩容的副本数的上限。不能小于 minReplicas。 + + + +- **scaleTargetRef** (CrossVersionObjectReference),必需 + + scaleTargetRef 指向要扩缩的目标资源,用于收集 Pod 的相关指标信息以及实际更改的副本数。 + + + + + + **CrossVersionObjectReference 包含足够的信息来让你识别出所引用的资源。** + + - **scaleTargetRef.kind** (string),必需 + + 被引用对象的类别;更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds" + + + + - **scaleTargetRef.name** (string),必需 + + 被引用对象的名称;更多信息: https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/names/#names + + + + - **scaleTargetRef.apiVersion** (string) + + 被引用对象的 API 版本。 + + + +- **minReplicas** (int32) + + minReplicas 是自动扩缩器可以缩减的副本数的下限。它默认为 1 个 Pod。 + 如果启用了 Alpha 特性门控 HPAScaleToZero 并且配置了至少一个 Object 或 External 度量指标, + 则 minReplicas 允许为 0。只要至少有一个度量值可用,扩缩就处于活动状态。 + + + +- **behavior** (HorizontalPodAutoscalerBehavior) + + behavior 配置目标在扩容(Up)和缩容(Down)两个方向的扩缩行为(分别用 scaleUp 和 scaleDown 字段)。 + 如果未设置,则会使用默认的 HPAScalingRules 进行扩缩容。 + + + + + + **HorizontalPodAutoscalerBehavior 配置目标在扩容(Up)和缩容(Down)两个方向的扩缩行为 + (分别用 scaleUp 和 scaleDown 字段)。** + + - **behavior.scaleDown** (HPAScalingRules) + + scaleDown 是缩容策略。如果未设置,则默认值允许缩减到 minReplicas 数量的 Pod, + 具有 300 秒的稳定窗口(使用最近 300 秒的最高推荐值)。 + + + + + + HPAScalingRules 为一个方向配置扩缩行为。在根据 HPA 的指标计算 desiredReplicas 后应用这些规则。 + 可以通过指定扩缩策略来限制扩缩速度。可以通过指定稳定窗口来防止抖动, + 因此不会立即设置副本数,而是选择稳定窗口中最安全的值。 + + - **behavior.scaleDown.policies** ([]HPAScalingPolicy) + + policies 是可在扩缩容过程中使用的潜在扩缩策略的列表。必须至少指定一个策略,否则 HPAScalingRules 将被视为无效而丢弃。 + + + + + + **HPAScalingPolicy 是一个单一的策略,它必须在指定的过去时间间隔内保持为 true。** + + - **behavior.scaleDown.policies.type** (string),必需 + + type 用于指定扩缩策略。 + + + + - **behavior.scaleDown.policies.value** (int32),必需 + + value 包含策略允许的更改量。它必须大于零。 + + + + - **behavior.scaleDown.policies.periodSeconds** (int32),必需 + + periodSeconds 表示策略应该保持为 true 的时间窗口长度。 + periodSeconds 必须大于零且小于或等于 1800(30 分钟)。 + + + + - **behavior.scaleDown.selectPolicy** (string) + + selectPolicy 用于指定应该使用哪个策略。如果未设置,则使用默认值 MaxPolicySelect。 + + + + - **behavior.scaleDown.stabilizationWindowSeconds** (int32) + + stabilizationWindowSeconds 是在扩缩容时应考虑的之前建议的秒数。stabilizationWindowSeconds + 必须大于或等于零且小于或等于 3600(一小时)。如果未设置,则使用默认值: + + - 扩容:0(不设置稳定窗口)。 + - 缩容:300(即稳定窗口为 300 秒)。 + + + + - **behavior.scaleUp** (HPAScalingRules) + + scaleUp 是用于扩容的扩缩策略。如果未设置,则默认值为以下值中的较高者: + + * 每 60 秒增加不超过 4 个 Pod + * 每 60 秒 Pod 数量翻倍 + + 不使用稳定窗口。 + + + + + + HPAScalingRules 为一个方向配置扩缩行为。在根据 HPA 的指标计算 desiredReplicas 后应用这些规则。 + 可以通过指定扩缩策略来限制扩缩速度。可以通过指定稳定窗口来防止抖动, + 因此不会立即设置副本数,而是选择稳定窗口中最安全的值。 + + - **behavior.scaleUp.policies** ([]HPAScalingPolicy) + + policies 是可在扩缩容过程中使用的潜在扩缩策略的列表。必须至少指定一个策略,否则 HPAScalingRules 将被视为无效而丢弃。 + + + + + + **HPAScalingPolicy 是一个单一的策略,它必须在指定的过去时间间隔内保持为 true。** + + - **behavior.scaleUp.policies.type** (string),必需 + + type 用于指定扩缩策略。 + + + + - **behavior.scaleUp.policies.value** (int32),必需 + + value 包含策略允许的更改量。它必须大于零。 + + + + - **behavior.scaleUp.policies.periodSeconds** (int32),必需 + + periodSeconds 表示策略应该保持为 true 的时间窗口长度。 + periodSeconds 必须大于零且小于或等于 1800(30 分钟)。 + + + + - **behavior.scaleUp.selectPolicy** (string) + + selectPolicy 用于指定应该使用哪个策略。如果未设置,则使用默认值 MaxPolicySelect。 + + + + - **behavior.scaleUp.stabilizationWindowSeconds** (int32) + + stabilizationWindowSeconds 是在扩缩容时应考虑的之前建议的秒数。stabilizationWindowSeconds + 必须大于或等于零且小于或等于 3600(一小时)。如果未设置,则使用默认值: + + - 扩容:0(不设置稳定窗口)。 + - 缩容:300(即稳定窗口为 300 秒)。 + + + +- **metrics** ([]MetricSpec) + + metrics 包含用于计算预期副本数的规约(将使用所有指标的最大副本数)。 + 预期副本数是通过将目标值与当前值之间的比率乘以当前 Pod 数来计算的。 + 因此,使用的指标必须随着 Pod 
数量的增加而减少,反之亦然。 + 有关每种类别的指标必须如何响应的更多信息,请参阅各个指标源类别。 + 如果未设置,默认指标将设置为 80% 的平均 CPU 利用率。 + + + + + + **MetricSpec 指定如何基于单个指标进行扩缩容(一次只能设置 `type` 和一个其他匹配字段)** + + - **metrics.type** (string),必需 + + type 是指标源的类别。它取值是 “ContainerResource”、“External”、“Object”、“Pods” 或 “Resource” 之一, + 每个类别映射到对象中的一个对应的字段。注意:“ContainerResource” 类别在特性门控 HPAContainerMetrics 启用时可用。 + + + + - **metrics.containerResource** (ContainerResourceMetricSource) + + containerResource 是指 Kubernetes 已知的资源指标(例如在请求和限制中指定的那些), + 描述当前扩缩目标中每个 Pod 中的单个容器(例如 CPU 或内存)。 + 此类指标内置于 Kubernetes 中,在使用 “pods” 源的、按 Pod 计算的普通指标之外,还具有一些特殊的扩缩选项。 + 这是一个 Alpha 特性,可以通过 HPAContainerMetrics 特性标志启用。 + + + + + + ContainerResourceMetricSource 指示如何根据请求和限制中指定的 Kubernetes 已知的资源指标进行扩缩容, + 此结构描述当前扩缩目标中的每个 Pod(例如 CPU 或内存)。在与目标值比较之前,这些值先计算平均值。 + 此类指标内置于 Kubernetes 中,并且在使用 “Pods” 源的、按 Pod 统计的普通指标之外支持一些特殊的扩缩选项。 + 只应设置一种 “target” 类别。 + + - **metrics.containerResource.container** (string),必需 + + container 是扩缩目标的 Pod 中容器的名称。 + + + + - **metrics.containerResource.name** (string),必需 + + name 是相关资源的名称。 + + + + - **metrics.containerResource.target** (MetricTarget),必需 + + target 指定给定指标的目标值。 + + + + + + **MetricTarget 定义特定指标的目标值、平均值或平均利用率** + + - **metrics.containerResource.target.type** (string),必需 + + type 表示指标类别是 `Utilization`、`Value` 或 `AverageValue`。 + + + + - **metrics.containerResource.target.averageUtilization** (int32) + + averageUtilization 是跨所有相关 Pod 的资源指标均值的目标值, + 表示为 Pod 资源请求值的百分比。目前仅对 “Resource” 指标源类别有效。 + + + + - **metrics.containerResource.target.averageValue** (}}">Quantity) + + 是跨所有相关 Pod 的指标均值的目标值(以数量形式给出)。 + + + + - **metrics.containerResource.target.value** (}}">Quantity) + + value 是指标的目标值(以数量形式给出)。 + + + + - **metrics.external** (ExternalMetricSource) + + external 指的是不与任何 Kubernetes 对象关联的全局指标。 + 这一字段允许基于来自集群外部运行的组件(例如云消息服务中的队列长度,或来自运行在集群外部的负载均衡器的 QPS)的信息进行自动扩缩容。 + + + + + + ExternalMetricSource 指示如何基于 Kubernetes 对象无关的指标 + (例如云消息传递服务中的队列长度,或来自集群外部运行的负载均衡器的 QPS)执行扩缩操作。 + + - **metrics.external.metric** (MetricIdentifier),必需 + + metric 通过名称和选择算符识别目标指标。 + + + + + + **MetricIdentifier 定义指标的名称和可选的选择算符** + + - **metrics.external.metric.name** (string),必需 + + name 是给定指标的名称。 + + + + - **metrics.external.metric.selector** (}}">LabelSelector) + + selector 是给定指标的标准 Kubernetes 标签选择算符的字符串编码形式。 + 设置后,它作为附加参数传递给指标服务器,以获取更具体的指标范围。 + 未设置时,仅 metricName 参数将用于收集指标。 + + + + - **metrics.external.target** (MetricTarget),必需 + + target 指定给定指标的目标值。 + + + + + + **MetricTarget 定义特定指标的目标值、平均值或平均利用率** + + - **metrics.external.target.type** (string),必需 + + type 表示指标类别是 `Utilization`、`Value` 或 `AverageValue`。 + + + + - **metrics.external.target.averageUtilization** (int32) + + averageUtilization 是跨所有相关 Pod 得到的资源指标均值的目标值, + 表示为 Pod 资源请求值的百分比。目前仅对 “Resource” 指标源类别有效。 + + + + - **metrics.external.target.averageValue** (}}">Quantity) + + averageValue 是跨所有相关 Pod 得到的指标均值的目标值(以数量形式给出)。 + + + + - **metrics.external.target.value** (}}">Quantity) + + value 是指标的目标值(以数量形式给出)。 + + + + - **metrics.object** (ObjectMetricSource) + + object 是指描述单个 Kubernetes 对象的指标(例如,Ingress 对象上的 `hits-per-second`)。 + + + + + + **ObjectMetricSource 表示如何根据描述 Kubernetes 对象的指标进行扩缩容(例如,Ingress 对象的 `hits-per-second`)** + + - **metrics.object.describedObject** (CrossVersionObjectReference),必需 + + + + + + **CrossVersionObjectReference 包含足够的信息来让你识别所引用的资源。** + + - **metrics.object.describedObject.kind** (string),必需 + + 被引用对象的类别;更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds"。 + + + + - **metrics.object.describedObject.name** (string),必需 + + 被引用对象的名称;更多信息: 
https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/names/#names + + + + - **metrics.object.describedObject.apiVersion** (string) + + 被引用对象的 API 版本。 + + + + - **metrics.object.metric** (MetricIdentifier),必需 + + metric 通过名称和选择算符识别目标指标。 + + + + + + **MetricIdentifier 定义指标的名称和可选的选择算符** + + - **metrics.object.metric.name** (string),必需 + + name 是给定指标的名称。 + + + + - **metrics.object.metric.selector** (}}">LabelSelector) + + selector 是给定指标的标准 Kubernetes 标签选择算符的字符串编码形式。 + 设置后,它作为附加参数传递给指标服务器,以获取更具体的指标范围。 + 未设置时,仅 metricName 参数将用于收集指标。 + + + + - **metrics.object.target** (MetricTarget),必需 + + target 表示给定指标的目标值。 + + + + + + **MetricTarget 定义特定指标的目标值、平均值或平均利用率** + + - **metrics.object.target.type** (string),必需 + + type 表示指标类别是 `Utilization`、`Value` 或 `AverageValue`。 + + + + - **metrics.object.target.averageUtilization** (int32) + + averageUtilization 是跨所有相关 Pod 得出的资源指标均值的目标值, + 表示为 Pod 资源请求值的百分比。目前仅对 “Resource” 指标源类别有效。 + + + + - **metrics.object.target.averageValue** (}}">Quantity) + + averageValue 是跨所有 Pod 得出的指标均值的目标值(以数量形式给出)。 + + + + - **metrics.object.target.value** (}}">Quantity) + + value 是指标的目标值(以数量形式给出)。 + + + + - **metrics.pods** (PodsMetricSource) + + pods 是指描述当前扩缩目标中每个 Pod 的指标(例如,`transactions-processed-per-second`)。 + 在与目标值进行比较之前,这些指标值将被平均。 + + + + + + PodsMetricSource 表示如何根据描述当前扩缩目标中每个 Pod 的指标进行扩缩容(例如,`transactions-processed-per-second`)。 + 在与目标值进行比较之前,这些指标值将被平均。 + + - **metrics.pods.metric** (MetricIdentifier),必需 + + metric 通过名称和选择算符识别目标指标。 + + + + + + **MetricIdentifier 定义指标的名称和可选的选择算符** + + - **metrics.pods.metric.name** (string),必需 + + name 是给定指标的名称。 + + + + - **metrics.pods.metric.selector** (}}">LabelSelector) + + selector 是给定指标的标准 Kubernetes 标签选择算符的字符串编码形式。 + 设置后,它作为附加参数传递给指标服务器,以获取更具体的指标范围。 + 未设置时,仅 metricName 参数将用于收集指标。 + + + + - **metrics.pods.target** (MetricTarget),必需 + + target 表示给定指标的目标值。 + + + + + + **MetricTarget 定义特定指标的目标值、平均值或平均利用率** + + - **metrics.pods.target.type** (string),必需 + + type 表示指标类别是 `Utilization`、`Value` 或 `AverageValue`。 + + + + - **metrics.pods.target.averageUtilization** (int32) + + averageUtilization 是跨所有相关 Pod 得出的资源指标均值的目标值, + 表示为 Pod 资源请求值的百分比。目前仅对 “Resource” 指标源类别有效。 + + + + - **metrics.pods.target.averageValue** (}}">Quantity) + + averageValue 是跨所有 Pod 得出的指标均值的目标值(以数量形式给出)。 + + + + - **metrics.pods.target.value** (}}">Quantity) + + value 是指标的目标值(以数量形式给出)。 + + + + - **metrics.resource** (ResourceMetricSource) + + resource 是指 Kubernetes 已知的资源指标(例如在请求和限制中指定的那些), + 此结构描述当前扩缩目标中的每个 Pod(例如 CPU 或内存)。此类指标内置于 Kubernetes 中, + 并且在使用 “Pods” 源的、按 Pod 统计的普通指标之外支持一些特殊的扩缩选项。 + + + + + + ResourceMetricSource 指示如何根据请求和限制中指定的 Kubernetes 已知的资源指标进行扩缩容, + 此结构描述当前扩缩目标中的每个 Pod(例如 CPU 或内存)。在与目标值比较之前,这些指标值将被平均。 + 此类指标内置于 Kubernetes 中,并且在使用 “Pods” 源的、按 Pod 统计的普通指标之外支持一些特殊的扩缩选项。 + 只应设置一种 “target” 类别。 + + - **metrics.resource.name** (string),必需 + + name 是相关资源的名称。 + + + + - **metrics.resource.target** (MetricTarget),必需 + + target 指定给定指标的目标值。 + + + + + + **MetricTarget 定义特定指标的目标值、平均值或平均利用率** + + - **metrics.resource.target.type** (string),必需 + + type 表示指标类别是 `Utilization`、`Value` 或 `AverageValue`。 + + + + - **metrics.resource.target.averageUtilization** (int32) + + averageUtilization 是跨所有相关 Pod 得出的资源指标均值的目标值, + 表示为 Pod 资源请求值的百分比。目前仅对 “Resource” 指标源类别有效。 + + + + - **metrics.resource.target.averageValue** (}}">Quantity) + + averageValue 是跨所有 Pod 得出的指标均值的目标值(以数量形式给出)。 + + + + - **metrics.resource.target.value** (}}">Quantity) + + value 是指标的目标值(以数量形式给出)。 + +## HorizontalPodAutoscalerStatus {#HorizontalPodAutoscalerStatus} + + +HorizontalPodAutoscalerStatus 描述了水平 
Pod 自动扩缩器的当前状态。 + +
+ + + +- **currentReplicas** (int32),必需 + + currentReplicas 是此自动扩缩器管理的 Pod 的当前副本数,如自动扩缩器最后一次看到的那样。 + + + +- **desiredReplicas** (int32),必需 + + desiredReplicas 是此自动扩缩器管理的 Pod 的所期望的副本数,由自动扩缩器最后计算。 + + + +- **conditions** ([]HorizontalPodAutoscalerCondition) + + conditions 是此自动扩缩器扩缩其目标所需的一组条件,并指示是否满足这些条件。 + + + + + + **HorizontalPodAutoscalerCondition 描述 HorizontalPodAutoscaler 在某一时间点的状态。** + + - **conditions.status** (string),必需 + + status 是状况的状态(True、False、Unknown)。 + + + + - **conditions.type** (string),必需 + + type 描述当前状况。 + + + + - **conditions.lastTransitionTime** (Time) + + lastTransitionTime 是状况最近一次从一种状态转换到另一种状态的时间。 + + + + + **Time 是对 time.Time 的封装。Time 支持对 YAML 和 JSON 进行正确封包。为 time 包的许多函数方法提供了封装器。** + + + + - **conditions.message** (string) + + message 是一个包含有关转换的可读的详细信息。 + + + + - **conditions.reason** (string) + + reason 是状况最后一次转换的原因。 + + + +- **currentMetrics** ([]MetricStatus) + + currentMetrics 是此自动扩缩器使用的指标的最后读取状态。 + + + + + + **MetricStatus 描述了单个指标的最后读取状态。** + + - **currentMetrics.type** (string),必需 + + type 是指标源的类别。它取值是 “ContainerResource”、“External”、“Object”、“Pods” 或 “Resource” 之一, + 每个类别映射到对象中的一个对应的字段。注意:“ContainerResource” 类别在特性门控 HPAContainerMetrics 启用时可用。 + + + + - **currentMetrics.containerResource** (ContainerResourceMetricStatus) + + containerResource 是指 Kubernetes 已知的一种资源指标(例如在请求和限制中指定的那些), + 描述当前扩缩目标中每个 Pod 中的单个容器(例如 CPU 或内存)。 + 此类指标内置于 Kubernetes 中,并且在使用 "Pods" 源的、按 Pod 统计的普通指标之外支持一些特殊的扩缩选项。 + + + + + + ContainerResourceMetricStatus 指示如何根据请求和限制中指定的 Kubernetes 已知的资源指标进行扩缩容, + 此结构描述当前扩缩目标中的每个 Pod(例如 CPU 或内存)。此类指标内置于 Kubernetes 中, + 并且在使用 “Pods” 源的、按 Pod 统计的普通指标之外支持一些特殊的扩缩选项。 + + - **currentMetrics.containerResource.container** (string),必需 + + container 是扩缩目标的 Pod 中的容器名称。 + + + + - **currentMetrics.containerResource.current** (MetricValueStatus),必需 + + current 包含给定指标的当前值。 + + + + + + **MetricValueStatus 保存指标的当前值** + + - **currentMetrics.containerResource.current.averageUtilization** (int32) + + averageUtilization 是跨所有相关 Pod 得出的资源指标均值的当前值,表示为 Pod 资源请求值的百分比。 + + + + - **currentMetrics.containerResource.current.averageValue** (}}">Quantity) + + averageValue 是跨所有相关 Pod 的指标均值的当前值(以数量形式给出)。 + + + + - **currentMetrics.containerResource.current.value** (}}">Quantity) + + value 是指标的当前值(以数量形式给出)。 + + + + - **currentMetrics.containerResource.name** (string),必需 + + name 是相关资源的名称。 + + + + - **currentMetrics.external** (ExternalMetricStatus) + + external 指的是不与任何 Kubernetes 对象关联的全局指标。这一字段允许基于来自集群外部运行的组件 + (例如云消息服务中的队列长度,或来自集群外部运行的负载均衡器的 QPS)的信息进行自动扩缩。 + + + + + + **ExternalMetricStatus 表示与任何 Kubernetes 对象无关的全局指标的当前值。** + + - **currentMetrics.external.current** (MetricValueStatus),必需 + + current 包含给定指标的当前值。 + + + + + + **MetricValueStatus 保存指标的当前值** + + - **currentMetrics.external.current.averageUtilization** (int32) + + averageUtilization 是跨所有相关 Pod 得出的资源指标均值的当前值,表示为 Pod 资源请求值的百分比。 + + + + - **currentMetrics.external.current.averageValue** (}}">Quantity) + + averageValue 是跨所有相关 Pod 的指标均值的当前值(以数量形式给出)。 + + + + - **currentMetrics.external.current.value** (}}">Quantity) + + value 是指标的当前值(以数量形式给出)。 + + + + - **currentMetrics.external.metric** (MetricIdentifier),必需 + + metric 通过名称和选择算符识别目标指标。 + + + + + + **MetricIdentifier 定义指标的名称和可选的选择算符** + + - **currentMetrics.external.metric.name** (string),必需 + + name 是给定指标的名称。 + + + + - **currentMetrics.external.metric.selector** (}}">LabelSelector) + + selector 是给定指标的标准 Kubernetes 标签选择算符的字符串编码形式。 + 设置后,它作为附加参数传递给指标服务器,以获取更具体的指标范围。 + 未设置时,仅 metricName 参数将用于收集指标。 + + + + - **currentMetrics.object** (ObjectMetricStatus) + + object 是指描述单个 Kubernetes 
对象的指标(例如,Ingress 对象的 `hits-per-second`)。 + + + + + + **ObjectMetricStatus 表示描述 Kubernetes 对象的指标的当前值(例如,Ingress 对象的 `hits-per-second`)。** + + - **currentMetrics.object.current** (MetricValueStatus),必需 + + current 包含给定指标的当前值。 + + + + + + **MetricValueStatus 保存指标的当前值** + + - **currentMetrics.object.current.averageUtilization** (int32) + + averageUtilization 是跨所有相关 Pod 得出的资源指标均值的当前值,表示为 Pod 资源请求值的百分比。 + + + + - **currentMetrics.object.current.averageValue** (}}">Quantity) + + averageValue 是跨所有相关 Pod 的指标均值的当前值(以数量形式给出)。 + + + + - **currentMetrics.object.current.value** (}}">Quantity) + + value 是指标的当前值(以数量形式给出)。 + + + + - **currentMetrics.object.describedObject** (CrossVersionObjectReference),必需 + + + + + + **CrossVersionObjectReference 包含足够的信息来让你识别所引用的资源。** + + - **currentMetrics.object.describedObject.kind** (string),必需 + + 被引用对象的类别;更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds" + + + + - **currentMetrics.object.describedObject.name** (string),必需 + + 被引用对象的名称;更多信息: https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/names/#names + + + + - **currentMetrics.object.describedObject.apiVersion** (string) + + 被引用对象的 API 版本。 + + + + - **currentMetrics.object.metric** (MetricIdentifier),必需 + + metric 通过名称和选择算符识别目标指标。 + + + + + + **MetricIdentifier 定义指标的名称和可选的选择算符** + + - **currentMetrics.object.metric.name** (string),必需 + + name 是给定指标的名称。 + + + + - **currentMetrics.object.metric.selector** (}}">LabelSelector) + + selector 是给定指标的标准 Kubernetes 标签选择算符的字符串编码形式。 + 设置后,它作为附加参数传递给指标服务器,以获取更具体的指标范围。 + 未设置时,仅 metricName 参数将用于收集指标。 + + + + - **currentMetrics.pods** (PodsMetricStatus) + + pods 是指描述当前扩缩目标中每个 Pod 的指标(例如,`transactions-processed-per-second`)。 + 在与目标值进行比较之前,这些指标值将被平均。 + + + + + + **PodsMetricStatus 表示描述当前扩缩目标中每个 Pod 的指标的当前值(例如,`transactions-processed-per-second`)。** + + - **currentMetrics.pods.current** (MetricValueStatus),必需 + + current 包含给定指标的当前值。 + + + + + + **MetricValueStatus 保存指标的当前值** + + - **currentMetrics.pods.current.averageUtilization** (int32) + + averageUtilization 是跨所有相关 Pod 得出的资源指标均值的当前值,表示为 Pod 资源请求值的百分比。 + + + + - **currentMetrics.pods.current.averageValue** (}}">Quantity) + + averageValue 是跨所有相关 Pod 的指标均值的当前值(以数量形式给出)。 + + + + - **currentMetrics.pods.current.value** (}}">Quantity) + + value 是指标的当前值(以数量形式给出)。 + + + + - **currentMetrics.pods.metric** (MetricIdentifier),必需 + + metric 通过名称和选择算符识别目标指标。 + + + + + + **MetricIdentifier 定义指标的名称和可选的选择算符** + + - **currentMetrics.pods.metric.name** (string),必需 + + name 是给定指标的名称。 + + + + - **currentMetrics.pods.metric.selector** (}}">LabelSelector) + + selector 是给定指标的标准 Kubernetes 标签选择算符的字符串编码形式。 + 设置后,它作为附加参数传递给指标服务器,以获取更具体的指标范围。 + 未设置时,仅 metricName 参数将用于收集指标。 + + + + - **currentMetrics.resource** (ResourceMetricStatus) + + resource 是指 Kubernetes 已知的资源指标(例如在请求和限制中指定的那些), + 此结构描述当前扩缩目标中的每个 Pod(例如 CPU 或内存)。此类指标内置于 Kubernetes 中, + 并且在使用 “Pods” 源的、按 Pod 统计的普通指标之外支持一些特殊的扩缩选项。 + + + + + + ResourceMetricSource 指示如何根据请求和限制中指定的 Kubernetes 已知的资源指标进行扩缩容, + 此结构描述当前扩缩目标中的每个 Pod(例如 CPU 或内存)。在与目标值比较之前,这些指标值将被平均。 + 此类指标内置于 Kubernetes 中,并且在使用 “Pods” 源的、按 Pod 统计的普通指标之外支持一些特殊的扩缩选项。 + + - **currentMetrics.resource.current** (MetricValueStatus),必需 + + current 包含给定指标的当前值。 + + + + + + **MetricValueStatus 保存指标的当前值** + + - **currentMetrics.resource.current.averageUtilization** (int32) + + averageUtilization 是跨所有相关 Pod 得出的资源指标均值的当前值, + 表示为 Pod 资源请求值的百分比。 + + + + - **currentMetrics.resource.current.averageValue** (}}">Quantity) + + averageValue 是跨所有相关 Pod 的指标均值的当前值(以数量形式给出)。 + + + + - 
**currentMetrics.resource.current.value** (}}">Quantity) + + value 是指标的当前值(以数量形式给出)。 + + + + - **currentMetrics.resource.name** (string),必需 + + name 是相关资源的名称。 + + + +- **lastScaleTime** (Time) + + lastScaleTime 是 HorizontalPodAutoscaler 上次扩缩 Pod 数量的时间,自动扩缩器使用它来控制更改 Pod 数量的频率。 + + + + + + **Time 是对 time.Time 的封装。Time 支持对 YAML 和 JSON 进行正确封包。为 time 包的许多函数方法提供了封装器。** + + + +- **observedGeneration** (int64) + + observedGeneration 是此自动扩缩器观察到的最新一代。 + +## HorizontalPodAutoscalerList {#HorizontalPodAutoscalerList} + + +HorizontalPodAutoscalerList 是水平 Pod 自动扩缩器对象列表。 + +
+ +- **apiVersion**: autoscaling/v2beta2 + +- **kind**: HorizontalPodAutoscalerList + + + +- **metadata** (}}">ListMeta) + + metadata 是标准的列表元数据。 + + + +- **items** ([]}}">HorizontalPodAutoscaler),必需 + + items 是水平 Pod 自动扩缩器对象的列表。 + +## Operations {#Operations} + +
+ + +### `get` 读取指定的 HorizontalPodAutoscaler + +#### HTTP 请求 + +GET /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers/{name} + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + HorizontalPodAutoscaler 的名称。 + + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">HorizontalPodAutoscaler): OK + +401: Unauthorized + + +### `get` 读取指定 HorizontalPodAutoscaler 的状态 + +#### HTTP 请求 + +GET /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers/{name}/status + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + HorizontalPodAutoscaler 的名称。 + + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">HorizontalPodAutoscaler): OK + +401: Unauthorized + + +### `list` 列出或观察 HorizontalPodAutoscaler 类别的对象 + +#### HTTP 请求 + +GET /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers + + +#### 参数 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **allowWatchBookmarks** (**查询参数**): boolean + + }}">allowWatchBookmarks + +- **continue** (**查询参数**): string + + }}">continue + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + +- **limit** (**查询参数**): integer + + }}">limit + +- **pretty** (**查询参数**): string + + }}">pretty + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + +- **timeoutSeconds** (**查询参数**): integer + + }}">timeoutSeconds + +- **watch** (**查询参数**): boolean + + }}">watch + + +#### 响应 + +200 (}}">HorizontalPodAutoscalerList): OK + +401: Unauthorized + + +### `list` 列出或观察 HorizontalPodAutoscaler 类别的对象 + +#### HTTP 请求 + +GET /apis/autoscaling/v2beta2/horizontalpodautoscalers + + +#### 参数 + +- **allowWatchBookmarks** (**查询参数**): boolean + + }}">allowWatchBookmarks + +- **continue** (**查询参数**): string + + }}">continue + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + +- **limit** (**查询参数**): integer + + }}">limit + +- **pretty** (**查询参数**): string + + }}">pretty + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + +- **timeoutSeconds** (**查询参数**): integer + + }}">timeoutSeconds + +- **watch** (**查询参数**): boolean + + }}">watch + + +#### 响应 + +200 (}}">HorizontalPodAutoscalerList): OK + +401: Unauthorized + + +### `create` 创建一个 HorizontalPodAutoscaler + +#### HTTP 请求 + +POST /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers + + +#### 参数 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">HorizontalPodAutoscaler必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">HorizontalPodAutoscaler): OK + +201 (}}">HorizontalPodAutoscaler): Created + +202 (}}">HorizontalPodAutoscaler): Accepted + +401: Unauthorized + + +### `update` 替换指定的 HorizontalPodAutoscaler + +#### HTTP 请求 + +PUT /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers/{name} + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + HorizontalPodAutoscaler 的名称。 + + +- **namespace** (**路径参数**): string,必需 + + 
}}">namespace + +- **body**: }}">HorizontalPodAutoscaler必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">HorizontalPodAutoscaler): OK + +201 (}}">HorizontalPodAutoscaler): Created + +401: Unauthorized + + +### `update` 替换指定 HorizontalPodAutoscaler 的状态 + +#### HTTP 请求 + +PUT /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers/{name}/status + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + HorizontalPodAutoscaler 的名称。 + + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">HorizontalPodAutoscaler必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">HorizontalPodAutoscaler): OK + +201 (}}">HorizontalPodAutoscaler): Created + +401: Unauthorized + + +### `patch` 部分更新指定的 HorizontalPodAutoscaler + +#### HTTP 请求 + +PATCH /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers/{name} + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + HorizontalPodAutoscaler 的名称。 + + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">Patch必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **force** (**查询参数**): boolean + + }}">force + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">HorizontalPodAutoscaler): OK + +201 (}}">HorizontalPodAutoscaler): Created + +401: Unauthorized + + +### `patch` 部分更新指定 HorizontalPodAutoscaler 的状态 + +#### HTTP 请求 + +PATCH /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers/{name}/status + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + HorizontalPodAutoscaler 的名称。 + + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">Patch必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **force** (**查询参数**): boolean + + }}">force + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">HorizontalPodAutoscaler): OK + +201 (}}">HorizontalPodAutoscaler): Created + +401: Unauthorized + + +### `delete` 删除一个 HorizontalPodAutoscaler + +#### HTTP 请求 + +DELETE /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers/{name} + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + HorizontalPodAutoscaler 的名称。 + + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">DeleteOptions + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **gracePeriodSeconds** (**查询参数**): integer + + }}">gracePeriodSeconds + +- **pretty** (**查询参数**): string + + }}">pretty + +- **propagationPolicy** (**查询参数**): string + + }}">propagationPolicy + + +#### 响应 + +200 (}}">Status): OK + +202 (}}">Status): Accepted + +401: Unauthorized + + +### `deletecollection` 删除 HorizontalPodAutoscaler 的集合 + +#### HTTP 请求 + +DELETE /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers + + +#### 参数 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">DeleteOptions + +- **continue** (**查询参数**): string + + }}">continue 
+ +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + +- **gracePeriodSeconds** (**查询参数**): integer + + }}">gracePeriodSeconds + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + +- **limit** (**查询参数**): integer + + }}">limit + +- **pretty** (**查询参数**): string + + }}">pretty + +- **propagationPolicy** (**查询参数**): string + + }}">propagationPolicy + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + +- **timeoutSeconds** (**查询参数**): integer + + }}">timeoutSeconds + + +#### 响应 + +200 (}}">Status): OK + +401: Unauthorized + From 028b56a33a219fe62847a6990bfd1e71f590a968 Mon Sep 17 00:00:00 2001 From: sarazqy <100755318+sarazqy@users.noreply.github.com> Date: Fri, 15 Jul 2022 09:57:38 +0800 Subject: [PATCH 113/292] update website/content/zh-cn/docs/reference/kubernetes-api/workload-resources/priority-class-v1.md update website/content/zh-cn/docs/reference/kubernetes-api/workload-resources/priority-class-v1.md --- .../workload-resources/priority-class-v1.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/content/zh-cn/docs/reference/kubernetes-api/workload-resources/priority-class-v1.md b/content/zh-cn/docs/reference/kubernetes-api/workload-resources/priority-class-v1.md index 52fc887ab97e0..dd9fd1137320e 100644 --- a/content/zh-cn/docs/reference/kubernetes-api/workload-resources/priority-class-v1.md +++ b/content/zh-cn/docs/reference/kubernetes-api/workload-resources/priority-class-v1.md @@ -87,7 +87,7 @@ PriorityClass 定义了从优先级类名到优先级数值的映射。 --> - **globalDefault** (boolean) - globalDefault 指定是否应将此 PriorityClass 视为没有任何优先级类的 pod 的默认优先级。 + globalDefault 指定是否应将此 PriorityClass 视为没有任何优先级类的 Pod 的默认优先级。 只有一个 PriorityClass 可以标记为 `globalDefault`。 但是,如果存在多个 PriorityClasses 且其 `globalDefault` 字段设置为 true, 则将使用此类全局默认 PriorityClasses 的最小值作为默认优先级。 @@ -430,7 +430,7 @@ PUT /apis/scheduling.k8s.io/v1/priorityclasses/{name} name of the PriorityClass --> -- **name** (*路径参数*): string,必需 +- **name** (**路径参数**): string,必需 PriorityClass 名称 @@ -518,7 +518,7 @@ PATCH /apis/scheduling.k8s.io/v1/priorityclasses/{name} name of the PriorityClass --> -- **name** (*路径参数*): string,必须 +- **name** (**路径参数**): string,必需 PriorityClass 名称 @@ -613,7 +613,7 @@ Parameters name of the PriorityClass --> -- **name** (*路径参数*): string,必需 +- **name** (**路径参数**): string,必需 PriorityClass 名称。 From f25dfcdf6113050fbfee6f7e72464af62137b135 Mon Sep 17 00:00:00 2001 From: Shubham Kuchhal Date: Wed, 20 Jul 2022 17:39:48 +0530 Subject: [PATCH 114/292] Fixed Hyperlinks for custom CA and dedicated CA. --- content/en/docs/tasks/tls/managing-tls-in-a-cluster.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md b/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md index 2ca497842d0f7..bca428d7501a8 100644 --- a/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md +++ b/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md @@ -18,7 +18,7 @@ draft](https://github.com/ietf-wg-acme/acme/). {{< note >}} Certificates created using the `certificates.k8s.io` API are signed by a -[dedicated CA](#a-note-to-cluster-administrators). It is possible to configure your cluster to use the cluster root +[dedicated CA](#configuring-your-cluster-to-provide-signing). It is possible to configure your cluster to use the cluster root CA for this purpose, but you should never rely on this. 
Do not assume that these certificates will validate against the cluster root CA. {{< /note >}} @@ -42,7 +42,7 @@ install it via your operating system's software sources, or fetch it from ## Trusting TLS in a cluster -Trusting the [custom CA](#a-note-to-cluster-administrators) from an application running as a pod usually requires +Trusting the [custom CA](#configuring-your-cluster-to-provide-signing) from an application running as a pod usually requires some extra application configuration. You will need to add the CA certificate bundle to the list of CA certificates that the TLS client or server trusts. For example, you would do this with a golang TLS config by parsing the certificate From fa9b18e701f64b2bc33be7e61c0b736ff0ea84cf Mon Sep 17 00:00:00 2001 From: windsonsea Date: Wed, 20 Jul 2022 20:25:03 +0800 Subject: [PATCH 115/292] [zh-cn] updated /releases/version-skew-policy.md --- content/zh-cn/releases/version-skew-policy.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/content/zh-cn/releases/version-skew-policy.md b/content/zh-cn/releases/version-skew-policy.md index d5fb2ef85b5bb..f16e8b456989b 100644 --- a/content/zh-cn/releases/version-skew-policy.md +++ b/content/zh-cn/releases/version-skew-policy.md @@ -32,7 +32,7 @@ Specific cluster deployment tools may place additional restrictions on version s ## Supported versions Kubernetes versions are expressed as **x.y.z**, where **x** is the major version, **y** is the minor version, and **z** is the patch version, following [Semantic Versioning](https://semver.org/) terminology. -For more information, see [Kubernetes Release Versioning](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md#kubernetes-release-versioning). +For more information, see [Kubernetes Release Versioning](https://git.k8s.io/sig-release/release-engineering/versioning.md#kubernetes-release-versioning). The Kubernetes project maintains release branches for the most recent three minor releases ({{< skew currentVersion >}}, {{< skew currentVersionAddMinor -1 >}}, {{< skew currentVersionAddMinor -2 >}}). Kubernetes 1.19 and newer receive approximately 1 year of patch support. Kubernetes 1.18 and older received approximately 9 months of patch support. 
--> @@ -41,7 +41,7 @@ The Kubernetes project maintains release branches for the most recent three mino Kubernetes 版本以 **x.y.z** 表示,其中 **x** 是主要版本, **y** 是次要版本,**z** 是补丁版本,遵循[语义版本控制](https://semver.org/)术语。 更多信息请参见 -[Kubernetes 版本发布控制](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md#kubernetes-release-versioning)。 +[Kubernetes 版本发布控制](https://git.k8s.io/sig-release/release-engineering/versioning.md#kubernetes-release-versioning)。 Kubernetes 项目维护最近的三个次要版本({{< skew currentVersion >}}、{{< skew currentVersionAddMinor -1 >}}、{{< skew currentVersionAddMinor -2 >}})的发布分支。 Kubernetes 1.19 和更新的版本获得大约 1 年的补丁支持。 @@ -49,14 +49,14 @@ Kubernetes 1.18 及更早的版本获得了大约 9 个月的补丁支持。 适当的修复,包括安全问题修复,可能会被后沿三个发布分支,具体取决于问题的严重性和可行性。 -补丁版本按[常规节奏](https://kubernetes.io/releases/patch-releases/#cadence)从这些分支中删除,并在需要时增加额外的紧急版本。 +补丁版本按[常规节奏](/zh-cn/releases/patch-releases/#cadence)从这些分支中删除,并在需要时增加额外的紧急版本。 [发布管理员](/zh-cn/releases/release-managers/)小组拥有这件事的决定权。 From f0ec2f9ffea7a1cfbe1a656a8d295f87320438fc Mon Sep 17 00:00:00 2001 From: jacky Date: Wed, 20 Jul 2022 21:37:13 +0800 Subject: [PATCH 116/292] sync nordstrom en language Signed-off-by: jacky --- content/en/case-studies/nordstrom/index.html | 2 +- .../zh-cn/case-studies/nordstrom/index.html | 279 +++++++++--------- 2 files changed, 143 insertions(+), 138 deletions(-) diff --git a/content/en/case-studies/nordstrom/index.html b/content/en/case-studies/nordstrom/index.html index 73bc4e147e055..149811e6c9587 100644 --- a/content/en/case-studies/nordstrom/index.html +++ b/content/en/case-studies/nordstrom/index.html @@ -41,7 +41,7 @@

Impact

But new environments still took too long to turn up, so the next step was working in the cloud. Today, Nordstrom Technology has built an enterprise platform that allows the company's 1,500 developers to deploy applications running as Docker containers in the cloud, orchestrated with Kubernetes.

{{< case-studies/quote image="/images/case-studies/nordstrom/banner3.jpg" >}}
-"We made a bet that Kubernetes was going to take off, informed by early indicators of community support and project velocity, so we rebuilt our system with Kubernetes at the core,"
+"We made a bet that Kubernetes was going to take off, informed by early indicators of community support and project velocity, so we rebuilt our system with Kubernetes at the core."
{{< /case-studies/quote >}}

"The cloud provided faster access to resources, because it took weeks for us to get a virtual machine (VM) on premises," says Patel. "But now we can do the same thing in only five minutes."

diff --git a/content/zh-cn/case-studies/nordstrom/index.html b/content/zh-cn/case-studies/nordstrom/index.html index 11edf1c652c74..f44f35997d4e1 100644 --- a/content/zh-cn/case-studies/nordstrom/index.html +++ b/content/zh-cn/case-studies/nordstrom/index.html @@ -2,151 +2,156 @@ title: 案例研究:Nordstrom case_study_styles: true cid: caseStudies -css: /css/style_case_studies.css ---- - - - -
-

案例研究:
在艰难的零售环境下寻找数百万的潜在成本节约 - - -

- -
- - - -
- 公司  Nordstrom     地点  西雅图, 华盛顿     行业  零售 -
- -
-
-
-
- +new_case_study_styles: true +heading_background: /images/case-studies/nordstrom/banner1.jpg +heading_title_logo: /images/nordstrom_logo.png +subheading: > + 在艰难的零售环境下寻找数百万的潜在成本节约 +case_study_details: + - 公司: Nordstrom + - 地点: Seattle, Washington + - 行业: 零售 +--- +

挑战

-Nordstrom 希望提高其技术运营的效率和速度,其中包括 Nordstrom.com 电子商务网站。与此同时,Nordstrom 技术公司正在寻找压缩技术运营成本的方法。 -
- + +

Nordstrom 希望提高其技术运营的效率和速度,其中包括 Nordstrom.com 电子商务网站。与此同时,Nordstrom 技术公司正在寻找压缩技术运营成本的方法。

+

解决方案

-在四年前采用 DevOps 转型并启动持续集成/部署 (CI/CD)项目后,该公司将部署时间从 3 个月缩短到 30 分钟。但是他们想在部署环境上走得更快,所以他们开始他们的云原生之旅,采用与 Kubernetes 协调的Docker容器。 - -
+ +

在四年前采用 DevOps 转型并启动持续集成/部署 (CI/CD)项目后,该公司将部署时间从 3 个月缩短到 30 分钟。但是他们想在部署环境上走得更快,所以他们开始他们的云原生之旅,采用与 Kubernetes 协调的 Docker 容器。

-
- -
- - - +

影响

-为 Nordstrom 构建 Kubernetes 企业平台的团队高级工程师 Dhawal Patel 说,“使用 Kubernetes 的 Nordstrom 技术开发人员现在项目部署得更快,并且能够只专注于编写应用程序。”此外,该团队还提高了运营效率,根据工作负载将 CPU 利用率从 5 倍提高到 12 倍。Patel 说:“我们运行了数千台虚拟机 (VM),但无法有效地使用所有这些资源。借助 Kubernetes ,我们甚至不需要尝试去提高群集的效率,就能使运营效率增长 10 倍。” - -
-
-
-
-
- + + +

为 Nordstrom 构建 Kubernetes 企业平台的团队高级工程师 Dhawal Patel 说,“使用 Kubernetes 的 Nordstrom 技术开发人员现在项目部署得更快,并且能够只专注于编写应用程序。”此外,该团队还提高了运营效率,根据工作负载将 CPU 利用率从 5 倍提高到 12 倍。Patel 说:“我们运行了数千台虚拟机 (VM),但无法有效地使用所有这些资源。借助 Kubernetes,我们甚至不需要尝试去提高集群的效率,就能使运营效率增长 10 倍。”

+ + +{{< case-studies/quote author="Dhawal Patel, Nordstrom 高级工程师" >}} “我们一直在寻找通过技术进行优化和提供更多价值的方法。通过 Kubernetes ,我们在开发效率和运营效率这两方面取得了示范性的提升。这是一个双赢。” - -

-— Dhawal Patel, Nordstrom 高级工程师
-
-
-
-
- -当 Dhawal Patel 五年前加入 Nordstrom ,担任该零售商网站的应用程序开发人员时,他意识到有机会帮助加快开发周期。 -

- -在早期的 DevOps 时代,,Nordstrom 技术仍然遵循传统的孤岛团队和功能模型。Patel 说:“作为开发人员,我花在维护环境上的时间比编写代码和为业务增加价值的时间要多。我对此充满热情,因此我有机会参与帮助修复它。” -

- -公司也渴望加快步伐,并在 2013 年启动了首个持续集成/部署 (CI/CD)项目。该项目是 Nordstrom 云原生之旅的第一步。 -

- -开发人员和运营团队成员构建了一个 CI/CD 管道,在内部使用公司的服务器。团队选择了 Chef ,并编写了自动虚拟 IP 创建、服务器和负载均衡的指导手册。Patel 说:“项目完成后,部署时间从 3 个月减少到 30 分钟。我们仍有开发、测试、暂存、然后生产等多个环境需要重新部署。之后,每个运行 Chef 说明书的环境部署都只花 30 分钟。在那个时候,这是一个巨大的成就。” -

- -但是,新环境仍然需要很长时间才能出现,因此下一步是在云中工作。如今,Nordstrom Technology 已经构建了一个企业平台,允许公司的1500 名开发人员在云中部署以 Docker 容器身份运行的应用程序,这些应用程序由 Kubernetes 进行编排。 -
-
-
-
- +{{< /case-studies/quote >}} + + +

当 Dhawal Patel 五年前加入 Nordstrom ,担任该零售商网站的应用程序开发人员时,他意识到有机会帮助加快开发周期。

+ + +

在早期的 DevOps 时代,,Nordstrom 技术仍然遵循传统的孤岛团队和功能模型。Patel 说:“作为开发人员,我花在维护环境上的时间比编写代码和为业务增加价值的时间要多。我对此充满热情,因此我有机会参与帮助修复它。”

+ + +

公司也渴望加快步伐,并在 2013 年启动了首个持续集成/部署 (CI/CD)项目。该项目是 Nordstrom 云原生之旅的第一步。

+ + +

开发人员和运营团队成员构建了一个 CI/CD 管道,在内部使用公司的服务器。团队选择了 Chef ,并编写了自动虚拟 IP 创建、服务器和负载均衡的指导手册。Patel 说:“项目完成后,部署时间从 3 个月减少到 30 分钟。我们仍有开发、测试、暂存、然后生产等多个环境需要重新部署。之后,每个运行 Chef 说明书的环境部署都只花 30 分钟。在那个时候,这是一个巨大的成就。”

+ + +

但是,新环境仍然需要很长时间才能出现,因此下一步是在云中工作。如今,Nordstrom Technology 已经构建了一个企业平台,允许公司的 1500 名开发人员在云中部署以 Docker 容器身份运行的应用程序,这些应用程序由 Kubernetes 进行编排。

+ + +{{< case-studies/quote image="/images/case-studies/nordstrom/banner3.jpg" >}} “了解到早期的社区支持和项目迭代指标,我们肯定 Kubernetes 一定会成功的,因此我们以 Kubernetes 为核心重建了我们的系统。” -
-
-
-
- - -Patel 说:“云提供了对资源的更快访问,因为我们在内部需要花数周时间才能部署一个虚拟机 (VM)来提供服务。但现在我们可以做同样的事情,只需五分钟。” -

- -Nordstrom 首次尝试在集群上调度容器,是基于 CoreOS fleet 的原生系统。他们开始使用该系统做一些概念验证项目,直到 Kubernetes 1.0发布时才将正式项目迁移到里面。Nordstrom 的 Kubernetes 团队经理 Marius Grigoriu 表示:“了解到早期的社区支持和项目迭代指标,我们肯定 Kubernetes 一定会成功的,因此我们以 Kubernetes 为核心重建了我们的系统。” - -虽然 Kubernetes 通常被视为微服务的平台,但在 Nordstrom 担任关键生产角色的 Kubernetes 上推出的第一个应用程序是 Jira。Patel 承认:“这不是我们希望作为第一个应用程序获得的理想微服务,但致力于此应用程序的团队对 Docker 和 Kubernetes 非常热情,他们希望尝试一下。他们的应用程序部署在内部运行,并希望将其移动到 Kubernetes。 -

- -对于加入的团队来说,这些好处是立竿见影的。Grigoriu 说:“在我们的 Kubernetes 集群中运行的团队喜欢这样一个事实,即他们担心的问题更少,他们不需要管理基础设施或操作系统。早期使用者喜欢 Kubernetes 的声明特性,让他们不得不处理的面积减少。 -
-
-
-
- +{{< /case-studies/quote >}} + + +

Patel 说:“云提供了对资源的更快访问,因为我们在内部需要花数周时间才能部署一个虚拟机 (VM)来提供服务。但现在我们可以做同样的事情,只需五分钟。”

+ + +

Nordstrom 首次尝试在集群上调度容器,是基于 CoreOS fleet 的原生系统。他们开始使用该系统做一些概念验证项目,直到 Kubernetes 1.0 发布时才将正式项目迁移到里面。Nordstrom 的 Kubernetes 团队经理 Marius Grigoriu 表示:“了解到早期的社区支持和项目迭代指标,我们肯定 Kubernetes 一定会成功的,因此我们以 Kubernetes 为核心重建了我们的系统。”

+ + +

虽然 Kubernetes 通常被视为微服务的平台,但在 Nordstrom 担任关键生产角色的 Kubernetes 上推出的第一个应用程序是 Jira。Patel 承认:“这不是我们希望作为第一个应用程序获得的理想微服务,但致力于此应用程序的团队对 Docker 和 Kubernetes 非常热情,他们希望尝试一下。他们的应用程序部署在内部运行,并希望将其移动到 Kubernetes。

+ + +

对于加入的团队来说,这些好处是立竿见影的。Grigoriu 说:“在我们的 Kubernetes 集群中运行的团队喜欢这样一个事实,即他们担心的问题更少,他们不需要管理基础设施或操作系统。早期使用者喜欢 Kubernetes 的声明特性,让他们不得不处理的面积减少。

+ + +{{< case-studies/quote image="/images/case-studies/nordstrom/banner4.jpg">}} Grigoriu 说:“在我们的 Kubernetes 集群中运行的团队喜欢这样一个事实,即他们担心的问题更少,他们不需要管理基础设施或操作系统。早期使用者喜欢 Kubernetes 的声明特性,让他们不得不处理的面积减少。” -
-
- -
-
- -为了支持这些早期使用者,Patel 的团队开始发展集群并构建生产级服务。“我们与 Prometheus 集成了监控功能,并配有 Grafana 前端;我们使用 Fluentd 将日志推送到 Elasticsearch ,从而提供日志聚合”Patel 说。该团队还增加了数十个开源组件,包括 CNCF 项目,而且把这些成果都贡献给了 Kubernetes 、Terraform 和 kube2iam 。 -

- -现在有60多个开发团队在 Nordstrom 上运行 Kubernetes ,随着成功案例的涌现,更多的团队加入 -进来。Patel 说:“我们最初的客户群,那些愿意尝试这些的客户群,现在已经开始向后续用户宣传。一个早期使用者拥有 Docker 容器,他不知道如何在生产中运行它。我们和他坐在一起,在15分钟内,我们将其部署到生产中。他认为这是惊人的,他所在的组织更多的人开始加入进来。” -

- -对于 Nordstrom 而言,云原生极大地提高了开发和运营 -效率。现在,使用 Kubernetes 的开发人员部署速度更快,可以专注于在其应用程序中构建价值。一个团队从 25 分钟的合并开始,通过在云中启动虚拟机来进行部署。切换到 Kubernetes 的过程速度是原来 5 倍,将合并时间缩短为 5 分钟。 -
- -
-
- -“借助 Kubernetes ,我们甚至不需要尝试去提高群集的效率,目前 CPU 利用率为 40%,较之前增长了 10 倍。我们正在运行 2600 多个客户 pod ,如果它们直接进入云,这些 Pod 将是 2600 多个 VM。我们现在在 40 台 VM 上运行它们,因此这大大降低了运营开销。 -
-
- -
- -速度是伟大的,并且很容易证明,但也许更大的影响在于运营效率。Patel 说:“我们在 AWS 上运行了数千个 VM ,它们的总体平均 CPU 利用率约为 4%。借助 Kubernetes ,我们甚至不需要尝试去提高群集的效率,目前 CPU 利用率为 40%,较之前增长了 10 倍。我们正在运行 2600 多个客户 pod ,如果它们直接进入云,这些 Pod 将是 2600 多个 VM。我们现在在 40 台 VM 上运行它们,因此这大大降低了运营开销。 -

- -Patel 说:“如果我们能构建一个本地 Kubernetes 集群,我们就能将云的力量带到本地快速调配资源。然后,对于开发人员,他们的接口是Kubernetes;他们甚至可能没有意识到或不关心他们的服务现在部署在内部,因为他们只与 Kubernetes 合作。 - -因此,Patel 热切关注 Kubernetes 多集群能力的发展。他说:“有了集群联合,我们可以将内部部署作为主群集,将云作为辅助可突发集群。因此,当有周年销售或黑色星期五销售,我们需要更多的容器时,我们可以去云。” -

- -这种可能性以及 Grigoriu 和 Patel 的团队已经使用Kubernetes所提供的影响,是 Nordstrom 最初在云原生之旅中所起 -的作用。Grigoriu 说:“在当下的零售模式下,我们正在努力在力所能及的地方建立响应能力和灵活性。Kubernetes 使得为开发端和运维端同时带来效率的提升,这是一个双赢。” -
-
+{{< /case-studies/quote >}} + + +

为了支持这些早期使用者,Patel 的团队开始发展集群并构建生产级服务。“我们与 Prometheus 集成了监控功能,并配有 Grafana 前端;我们使用 Fluentd 将日志推送到 Elasticsearch ,从而提供日志聚合”Patel 说。该团队还增加了数十个开源组件,包括 CNCF 项目,而且把这些成果都贡献给了 Kubernetes 、Terraform 和 kube2iam 。

+ + +

现在有 60 多个开发团队在 Nordstrom 上运行 Kubernetes ,随着成功案例的涌现,更多的团队加入进来。Patel 说:“我们最初的客户群,那些愿意尝试这些的客户群,现在已经开始向后续用户宣传。一个早期使用者拥有 Docker 容器,他不知道如何在生产中运行它。我们和他坐在一起,在 15 分钟内,我们将其部署到生产中。他认为这是惊人的,他所在的组织更多的人开始加入进来。”

+ + +

对于 Nordstrom 而言,云原生极大地提高了开发和运营效率。现在,使用 Kubernetes 的开发人员部署速度更快,可以专注于在其应用程序中构建价值。一个团队从 25 分钟的合并开始,通过在云中启动虚拟机来进行部署。切换到 Kubernetes 的过程速度是原来 5 倍,将合并时间缩短为 5 分钟。

+ + +{{< case-studies/quote >}} +“借助 Kubernetes ,我们甚至不需要尝试去提高集群的效率,目前 CPU 利用率为 40%,较之前增长了 10 倍。我们正在运行 2600 多个客户 Pod,如果它们直接进入云,这些 Pod 将是 2600 多个 VM。我们现在在 40 台 VM 上运行它们,因此这大大降低了运营开销。 +{{< /case-studies/quote >}} + + +

速度很重要,并且很容易证明,但也许更大的影响在于运营效率。Patel 说:“我们在 AWS 上运行了数千个 VM ,它们的总体平均 CPU 利用率约为 4%。借助 Kubernetes ,我们甚至不需要尝试去提高集群的效率,目前 CPU 利用率为 40%,较之前增长了 10 倍。我们正在运行 2600 多个客户 pod ,如果它们直接上云,这些 Pod 将是 2600 多个 VM。我们现在在 40 台 VM 上运行它们,因此这大大降低了运营开销。

+ + +

Patel 说:“如果我们能构建一个本地 Kubernetes 集群,我们就能将云的力量带到本地快速调配资源。之后对于开发人员来说,他们面向的接口是 Kubernetes;他们甚至可能没有意识到或不关心他们的服务现在部署在内部,因为他们只与 Kubernetes 一起工作。

+ + +

因此,Patel 热切关注 Kubernetes 多集群能力的发展。他说:“有了集群联合,我们可以将内部部署作为主集群,将云作为辅助可突发集群。因此,当有周年销售或黑色星期五销售并且我们需要更多的容器时,我们可以上云。”

+ + +

这种可能性以及 Grigoriu 和 Patel 的团队已经使用Kubernetes所提供的影响,是 Nordstrom 最初在云原生之旅中所起的作用。Grigoriu 说:“在当下的零售模式下,我们正在努力在力所能及的地方建立响应能力和灵活性。Kubernetes 使得为开发端和运维端同时带来效率的提升,这是一个双赢。”

From 33f5ed626e06f99d6b6683f5e82acd572314c3d8 Mon Sep 17 00:00:00 2001 From: jacky Date: Wed, 20 Jul 2022 22:11:46 +0800 Subject: [PATCH 117/292] zh-cn:sync netease en language Signed-off-by: jacky --- content/zh-cn/case-studies/netease/index.html | 231 +++++++++++------- .../netease/netease_featured_logo.svg | 1 + 2 files changed, 141 insertions(+), 91 deletions(-) create mode 100644 content/zh-cn/case-studies/netease/netease_featured_logo.svg diff --git a/content/zh-cn/case-studies/netease/index.html b/content/zh-cn/case-studies/netease/index.html index f5c70dedffc70..c97d9183c9eb2 100644 --- a/content/zh-cn/case-studies/netease/index.html +++ b/content/zh-cn/case-studies/netease/index.html @@ -1,105 +1,154 @@ --- title: 案例研究:NetEase +linkTitle: NetEase case_study_styles: true cid: caseStudies -css: /css/style_case_studies.css +logo: netease_featured_logo.png +featured: false + +new_case_study_styles: true +heading_background: /images/case-studies/netease/banner1.jpg +heading_title_logo: /images/netease_logo.png +subheading: > + NetEase 如何利用 Kubernetes 支持在全球的互联网业务 +case_study_details: + - 公司: NetEase + - 位置: Hangzhou, China + - 行业: 互联网科技 --- + +

挑战

+ + +

其游戏业务是世界上最大的游戏业务之一,但这不是 NetEase 为中国消费者提供的所有。公司还经营电子商务、广告、音乐流媒体、在线教育和电子邮件平台;其中最后一个服务有近10亿用户通过网站使用免费的电子邮件服务,如 163.com。在2015 年,为所有这些系统提供基础设施的 NetEase Cloud 团队意识到,他们的研发流程正在减缓开发人员的速度。NetEase Cloud 和容器服务架构师冯长健表示:“我们的用户需要自己准备所有基础设施。”“我们希望通过无服务器容器服务自动为用户提供基础设施和工具。”

+ +

解决方案

- +

在考虑构建自己的业务流程解决方案后,NetEase 决定将其私有云平台建立在 Kubernetes 的基础上。这项技术来自 Google,这一事实让团队有信心,它能够跟上 NetEase 的规模。“经过2到3个月的评估,我们相信它能满足我们的需求,”冯长健说。该团队于 2015 年开始与 Kubernetes 合作,那会它甚至还不是 1.0 版本。如今,NetEase 内部云平台还使用了 CNCF 项目 PrometheusEnvoyHarborgRPC Helm, 在生产集群中运行 10000 个节点,并可支持集群多达 30000 个节点。基于对内部平台的学习,公司向外部客户推出了基于 Kubernetes 的云和微服务型 PaaS 产品,NetEase 轻舟微服务。

- --> -
-

案例研究:
网易如何利用 Kubernetes 支持在全球的互联网业务

+ +

影响

-
+ +

NetEase 团队报告说,Kubernetes 已经提高了研发效率一倍多,部署效率提高了 2.8 倍。“过去,如果我们想要进行升级,我们需要与其他团队合作,甚至加入其他部门,”冯长健说。“我们需要专人来准备一切,需要花费约半个小时。现在我们只需 5 分钟即可完成。”新平台还允许使用 GPU 和 CPU 资源进行混合部署。“以前,如果我们将所有资源都用于 GPU,则 CPU 的备用资源将没有。但是现在,由于混合部署,我们有了很大的改进,”他说。这些改进也提高了资源的利用率。

+ +{{< case-studies/quote author="曾宇兴,NetEase 架构师" >}} +“系统可以在单个集群中支持 30000 个节点。在生产中,我们在单个集群中获取到了 10000 个节点的数据。整个内部系统都在使用该系统进行开发、测试和生产。” +{{< /case-studies/quote >}} -
- 公司  网易     位置  杭州,中国     行业  互联网科技 -
+ +{{< case-studies/lead >}} +其游戏业务是世界第五大游戏业务,但这不是 NetEase 为消费者提供的所有业务。 +{{< /case-studies/lead >}} -
-
-
-
-

挑战

- -其游戏业务是世界上最大的游戏业务之一,但这不是网易为中国消费者提供的所有。公司还经营电子商务、广告、音乐流媒体、在线教育和电子邮件平台;其中最后一个服务有近10亿用户通过网站使用免费的电子邮件服务,如163.com。2015 年,为所有这些系统提供基础设施的网易云团队意识到,他们的研发流程正在减缓开发人员的速度。网易云和容器服务架构师冯长健表示:“我们的用户需要自己准备所有基础设施。”“我们希望通过无服务器容器服务自动为用户提供基础设施和工具。” -

-

解决方案

- -在考虑构建自己的业务流程解决方案后,网易决定将其私有云平台建立在 Kubernetes 的基础上。这项技术来自 Google,这一事实让团队有信心,它能够跟上网易的规模。“经过2到3个月的评估,我们相信它能满足我们的需求,”冯长健说。该团队于 2015 年开始与 Kubernetes 合作,那会它甚至还不是1.0版本。如今,网易内部云平台还使用了 CNCF 项目 PrometheusEnvoyHarborgRPCHelm, 在生产集群中运行 10000 个节点,并可支持集群多达 30000 个节点。基于对内部平台的学习,公司向外部客户推出了基于 Kubernetes 的云和微服务型 PaaS 产品,网易轻舟微服务。 + +

公司还在中国经营电子商务、广告、音乐流媒体、在线教育和电子邮件平台;其中最后一个服务是有近 10 亿用户使用的网站,如 163.com 126.com 免费电子邮件服务。有了这样的规模,为所有这些系统提供基础设施的 NetEase Cloud 团队在 2015 年就意识到,他们的研发流程使得开发人员难以跟上需求。NetEase Cloud 和容器服务架构师冯长健表示:“我们的用户需要自己准备所有基础设施。”“我们渴望通过无服务器容器服务自动为用户提供基础设施和工具。”

+ +

在考虑构建自己的业务流程解决方案后,NetEase 决定将其私有云平台建立在 Kubernetes 的基础上。这项技术来自谷歌,这一事实让团队有信心,它能够跟上 NetEase 的规模。“经过 2 到 3 个月的评估,我们相信它能满足我们的需求,”冯长健说。

-

-

影响

- -网易团队报告说,Kubernetes 已经提高了研发效率一倍多,部署效率提高了 2.8倍。“过去,如果我们想要进行升级,我们需要与其他团队合作,甚至加入其他部门,”冯长健说。“我们需要专人来准备一切,需要花费约半个小时。现在我们只需 5 分钟即可完成。”新平台还允许使用 GPU 和 CPU 资源进行混合部署。“以前,如果我们将所有资源都用于 GPU,则 CPU 的备用资源将没有。但是现在,由于混合部署,我们有了很大的改进,”他说。这些改进也提高了资源的利用率。 -
-
- -
-
-
- - “系统可以在单个群集中支持 30000 个节点。在生产中,我们在单个群集中获取到了 10000 个节点的数据。整个内部系统都在使用该系统进行开发、测试和生产。”

— 曾宇兴,网易架构师
-
-
-
-
- -

其游戏业务是世界第五大游戏业务,但这不是网易为消费者提供的所有业务。

公司还在中国经营电子商务、广告、音乐流媒体、在线教育和电子邮件平台;其中最后一个服务是有近10亿用户使用的网站,如163.com126.com免费电子邮件服务。有了这样的规模,为所有这些系统提供基础设施的网易云团队在 2015 年就意识到,他们的研发流程使得开发人员难以跟上需求。网易云和容器服务架构师冯长健表示:“我们的用户需要自己准备所有基础设施。”“我们渴望通过无服务器容器服务自动为用户提供基础设施和工具。”

- -在考虑构建自己的业务流程解决方案后,网易决定将其私有云平台建立在 Kubernetes 的基础上。这项技术来自谷歌,这一事实让团队有信心,它能够跟上网易的规模。“经过2到3个月的评估,我们相信它能满足我们的需求,”冯长健说。 -
-
-
-
- -“我们利用 Kubernetes 的可编程性,构建一个平台,以满足内部客户对升级和部署的需求。”

- 冯长健,网易云和容器托管平台架构师
-
-
-
-
- -该团队于 2015 年开始采用 Kubernetes,那会它甚至还不是1.0版本,因为它相对易于使用,并且使 DevOps 在公司中得以实现。“我们放弃了 Kubernetes 的一些概念;我们只想使用标准化框架,”冯长健说。“我们利用 Kubernetes 的可编程性,构建一个平台,以满足内部客户对升级和部署的需求。”

- -团队首先专注于构建容器平台以更好地管理资源,然后通过添加内部系统(如监视)来改进对微服务的支持。这意味着整合了 CNCF 项目 PrometheusEnvoyHarborgRPCHelm。“我们正在努力提供简化和标准化的流程,以便我们的用户和客户能够利用我们的最佳实践,”冯长健说。

- -团队正在继续改进。例如,企业的电子商务部分需要利用混合部署,过去需要使用两个单独的平台:基础架构即服务平台和 Kubernetes 平台。最近,网易创建了一个跨平台应用程序,支持将两者同时使用单命令部署。 -
-
-
-
- -“只要公司拥有成熟的团队和足够的开发人员,我认为 Kubernetes 是一个很好的有所助力的技术。”

- 李兰青, 网易 Kubernetes 开发人员
-
-
- - -
-
- -“系统可以在单个群集中支持 30000 个节点。在生产中,我们在单个群集中获取到了 10000 个节点的数据。整个内部系统都在使用该系统进行开发、测试和生产。”

- -网易团队报告说,Kubernetes 已经提高了研发效率一倍多。部署效率提高了 2.8倍。“过去,如果我们想要进行升级,我们需要与其他团队合作,甚至加入其他部门,”冯长健说。“我们需要专人来准备一切,需要花费约半个小时。现在我们只需 5 分钟即可完成。”新平台还允许使用 GPU 和 CPU 资源进行混合部署。“以前,如果我们将所有资源都用于 GPU,则 CPU 的备用资源将没有。但是现在,由于混合部署,我们有了很大的改进,”他说。这些改进也提高了资源的利用率。 - -
- -
-
- -“通过与这个社区接触,我们可以从中获得一些经验,我们也可以从中获益。我们可以看到社区所关心的问题和挑战,以便我们参与其中。”

- 李兰青, 网易 Kubernetes 开发人员
-
-
-
- -基于使用内部平台的成果和学习,公司向外部客户推出了基于 Kubernetes 的云和微服务型 PaaS 产品,网易轻舟微服务。“我们的想法是,我们可以找到我们的游戏和电子商务以及云音乐提供商遇到的问题,所以我们可以整合他们的体验,并提供一个平台,以满足所有用户的需求,”曾宇兴说。

- -无论是否使用网易产品,该团队鼓励其他公司尝试 Kubernetes。Kubernetes 开发者李兰青表示:“只要公司拥有成熟的团队和足够的开发人员,我认为 Kubernetes 是一个很好的技术,可以帮助他们。”

- -作为最终用户和供应商,网易已经更多地参与社区,向其他公司学习,分享他们所做的工作。该团队一直在为 Harbor 和 Envoy 项目做出贡献,在网易进行规模测试技术时提供反馈。“我们是一个团队,专注于应对微服务架构的挑战,”冯长健说。“通过与这个社区接触,我们可以从中获得一些经验,我们也可以从中获益。我们可以看到社区所关心的问题和挑战,以便我们参与其中。” -
-
+ +{{< case-studies/quote + image="/images/case-studies/netease/banner3.jpg" + author="冯长健,NetEase Cloud 和容器托管平台架构师" +>}} +“我们利用 Kubernetes 的可编程性,构建一个平台,以满足内部客户对升级和部署的需求。” +{{< /case-studies/quote >}} + + +

该团队于 2015 年开始采用 Kubernetes,那会它甚至还不是 1.0 版本,因为它相对易于使用,并且使 DevOps 在公司中得以实现。“我们放弃了 Kubernetes 的一些概念;我们只想使用标准化框架,”冯长健说。“我们利用 Kubernetes 的可编程性,构建一个平台,以满足内部客户对升级和部署的需求。”

+ + +

团队首先专注于构建容器平台以更好地管理资源,然后通过添加内部系统(如监视)来改进对微服务的支持。这意味着整合了 CNCF 项目 Prometheus EnvoyHarborgRPC Helm。“我们正在努力提供简化和标准化的流程,以便我们的用户和客户能够利用我们的最佳实践,”冯长健说。

+ + +

团队正在继续改进。例如,企业的电子商务部分需要利用混合部署,过去需要使用两个单独的平台:基础架构即服务平台和 Kubernetes 平台。最近,NetEase 创建了一个跨平台应用程序,支持将两者同时使用单命令部署。

+ + +{{< case-studies/quote + image="/images/case-studies/netease/banner4.jpg" + author="李兰青,NetEase Kubernetes 开发人员" +>}} +“只要公司拥有成熟的团队和足够的开发人员,我认为 Kubernetes 是一个很好的有所助力的技术。” +{{< /case-studies/quote >}} + + +

“系统可以在单个群集中支持 30000 个节点。在生产中,我们在单个群集中获取到了 10000 个节点的数据。整个内部系统都在使用该系统进行开发、测试和生产。”

+ + +

NetEase 团队报告说,Kubernetes 已经提高了研发效率一倍多。部署效率提高了 2.8 倍。“过去,如果我们想要进行升级,我们需要与其他团队合作,甚至加入其他部门,”冯长健说。“我们需要专人来准备一切,需要花费约半个小时。现在我们只需 5 分钟即可完成。”新平台还允许使用 GPU 和 CPU 资源进行混合部署。“以前,如果我们将所有资源都用于 GPU,则 CPU 的备用资源将没有。但是现在,由于混合部署,我们有了很大的改进,”他说。这些改进也提高了资源的利用率。

+ + +{{< case-studies/quote author="李兰青,NetEase Kubernetes 开发人员">}} +“通过与这个社区接触,我们可以从中获得一些经验,我们也可以从中获益。我们可以看到社区所关心的问题和挑战,以便我们参与其中。” +{{< /case-studies/quote >}} + + +

基于使用内部平台的成果和学习,公司向外部客户推出了基于 Kubernetes 的云和微服务型 PaaS 产品, NetEase 轻舟微服务。“我们的想法是,我们可以找到我们的游戏和电子商务以及云音乐提供商遇到的问题,所以我们可以整合他们的体验,并提供一个平台,以满足所有用户的需求,”曾宇兴说。


无论是否使用 NetEase 产品,该团队鼓励其他公司尝试 Kubernetes。Kubernetes 开发者李兰青表示:“只要公司拥有成熟的团队和足够的开发人员,我认为 Kubernetes 是一个很好的技术,可以帮助他们。”


作为最终用户和供应商,NetEase 已经更多地参与社区,向其他公司学习,分享他们所做的工作。该团队一直在为 Harbor 和 Envoy 项目做出贡献,在 NetEase 进行规模测试技术时提供反馈。“我们是一个团队,专注于应对微服务架构的挑战,”冯长健说。“通过与这个社区接触,我们可以从中获得一些经验,我们也可以从中获益。我们可以看到社区所关心的问题和挑战,以便我们参与其中。”

diff --git a/content/zh-cn/case-studies/netease/netease_featured_logo.svg b/content/zh-cn/case-studies/netease/netease_featured_logo.svg new file mode 100644 index 0000000000000..0ea176812dd65 --- /dev/null +++ b/content/zh-cn/case-studies/netease/netease_featured_logo.svg @@ -0,0 +1 @@ +kubernetes.io-logos \ No newline at end of file From 00984b38b76a3e241f9a95f720b789e1ab9c618e Mon Sep 17 00:00:00 2001 From: Tom Kivlin <52716470+tomkivlin@users.noreply.github.com> Date: Thu, 21 Jul 2022 07:49:41 +0100 Subject: [PATCH 118/292] Update content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md Co-authored-by: Tim Bannister --- .../tasks/configure-pod-container/configure-pod-configmap.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md index c13fc2ce6a573..668db33f091a3 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -684,7 +684,8 @@ If the ConfigMap exists, but the referenced key is non-existent the path will be #### Optional ConfigMap in environment variables There might be situations where environment variables are not always required. -These environment variables can be marked as optional in a pod like so: +You can mark an environment variables for a container as optional, +like this: ```yaml apiVersion: v1 From 9513b63d2e21816a9bc37cbc34fc7997223c99c0 Mon Sep 17 00:00:00 2001 From: Tom Kivlin <52716470+tomkivlin@users.noreply.github.com> Date: Thu, 21 Jul 2022 07:49:51 +0100 Subject: [PATCH 119/292] Update content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md Co-authored-by: Tim Bannister --- .../tasks/configure-pod-container/configure-pod-configmap.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md index 668db33f091a3..f0055771b963d 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -703,7 +703,7 @@ spec: configMapKeyRef: name: a-config key: akey - optional: true + optional: true # mark the variable as optional restartPolicy: Never ``` From 9289136116ae969e1e1657eb59274341aa63edc2 Mon Sep 17 00:00:00 2001 From: Tom Kivlin <52716470+tomkivlin@users.noreply.github.com> Date: Thu, 21 Jul 2022 07:49:59 +0100 Subject: [PATCH 120/292] Update content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md Co-authored-by: Tim Bannister --- .../tasks/configure-pod-container/configure-pod-configmap.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md index f0055771b963d..538fddeed6255 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -732,7 +732,7 @@ spec: - name: config-volume configMap: name: no-config - optional: true + optional: true # mark the source ConfigMap as optional restartPolicy: Never ``` From 8066c73a871f247ec1f17156d54383de03f5e4c9 Mon Sep 17 00:00:00 2001 From: Tom Kivlin 
<52716470+tomkivlin@users.noreply.github.com> Date: Thu, 21 Jul 2022 07:50:09 +0100 Subject: [PATCH 121/292] Update content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md Co-authored-by: Tim Bannister --- .../tasks/configure-pod-container/configure-pod-configmap.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md index 538fddeed6255..2785177312117 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -736,7 +736,7 @@ spec: restartPolicy: Never ``` -When this pod is run, the output will be: +If you run this pod, and there is no ConfigMap named `no-config`, the output is: ```shell ``` From a24f7c6febee21e979f8d2490d40fbc6ead73ddf Mon Sep 17 00:00:00 2001 From: Tom Kivlin <52716470+tomkivlin@users.noreply.github.com> Date: Thu, 21 Jul 2022 07:50:27 +0100 Subject: [PATCH 122/292] Update content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md Co-authored-by: Tim Bannister --- .../tasks/configure-pod-container/configure-pod-configmap.md | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md index 2785177312117..b3cba70557571 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -707,7 +707,10 @@ spec: restartPolicy: Never ``` -When this Pod is run, the output will be empty. +If you run this pod, and there is no ConfigMap named `a-config`, the output is empty. +If you run this pod, and there is a ConfigMap named `a-config` but that ConfigMap doesn't have +a key named `akey`, the output is also empty. If you do set a value for `akey` in the `a-config` +ConfigMap, this pod prints that value and then terminates. #### Optional ConfigMap via volume plugin From 461f5c72e73d4a6858b541953b4bd8ba32f06192 Mon Sep 17 00:00:00 2001 From: Tom Kivlin <52716470+tomkivlin@users.noreply.github.com> Date: Thu, 21 Jul 2022 07:50:36 +0100 Subject: [PATCH 123/292] Update content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md Co-authored-by: Tim Bannister --- .../tasks/configure-pod-container/configure-pod-configmap.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md index b3cba70557571..215eb7b65c7bc 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -10,7 +10,7 @@ card: Many applications rely on configuration which is used during either application initialization or runtime. Most of the times there is a requirement to adjust values assigned to configuration parameters. -ConfigMaps is the Kubernetes way to inject application pods with configuration data. +ConfigMaps are the Kubernetes way to inject application pods with configuration data. ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable. 
This page provides a series of usage examples demonstrating how to create ConfigMaps and configure Pods using data stored in ConfigMaps. From b0b5f5f2640446a8cd30a9d7e5b6eb6df731a11e Mon Sep 17 00:00:00 2001 From: Tom Kivlin <52716470+tomkivlin@users.noreply.github.com> Date: Thu, 21 Jul 2022 07:50:45 +0100 Subject: [PATCH 124/292] Update content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md Co-authored-by: Tim Bannister --- .../tasks/configure-pod-container/configure-pod-configmap.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md index 215eb7b65c7bc..11959ff8f232a 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -677,7 +677,7 @@ data: ### Optional ConfigMaps -A ConfigMap reference may be marked "optional". +In a Pod, or pod template, you can mark a reference to a ConfigMap as _optional_. If the ConfigMap is non-existent, the mounted volume will be empty. If the ConfigMap exists, but the referenced key is non-existent the path will be absent beneath the mount point. From 6d3dcd0f675b33bd2cabba951e44d4072d55c087 Mon Sep 17 00:00:00 2001 From: Oliver Radwell Date: Thu, 21 Jul 2022 08:55:20 +0100 Subject: [PATCH 125/292] [en] Fix containerd config link --- .../en/docs/setup/production-environment/container-runtimes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/setup/production-environment/container-runtimes.md b/content/en/docs/setup/production-environment/container-runtimes.md index b9b38521c4987..44f098ccfcf69 100644 --- a/content/en/docs/setup/production-environment/container-runtimes.md +++ b/content/en/docs/setup/production-environment/container-runtimes.md @@ -217,7 +217,7 @@ When using kubeadm, manually configure the #### Overriding the sandbox (pause) image {#override-pause-image-containerd} -In your [containerd config](https://github.com/containerd/cri/blob/master/docs/config.md) you can overwrite the +In your [containerd config](https://github.com/containerd/containerd/blob/main/docs/cri/config.md) you can overwrite the sandbox image by setting the following config: ```toml From 7e16543b9dbe6771017730199b62e2b99c18803e Mon Sep 17 00:00:00 2001 From: Tom Kivlin <52716470+tomkivlin@users.noreply.github.com> Date: Thu, 21 Jul 2022 08:58:21 +0100 Subject: [PATCH 126/292] Update configure-pod-configmap.md --- .../configure-pod-container/configure-pod-configmap.md | 9 +++------ 1 file changed, 3 insertions(+), 6 deletions(-) diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md index 11959ff8f232a..b6679980ad8c4 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -678,8 +678,8 @@ data: ### Optional ConfigMaps In a Pod, or pod template, you can mark a reference to a ConfigMap as _optional_. -If the ConfigMap is non-existent, the mounted volume will be empty. -If the ConfigMap exists, but the referenced key is non-existent the path will be absent beneath the mount point. +If the ConfigMap is non-existent, the configuration for which it provides data in the Pod (e.g. environment variable, mounted volume) will be empty. 
+If the ConfigMap exists, but the referenced key is non-existent the data is also empty. #### Optional ConfigMap in environment variables @@ -739,10 +739,7 @@ spec: restartPolicy: Never ``` -If you run this pod, and there is no ConfigMap named `no-config`, the output is: - -```shell -``` +If you run this pod, and there is no ConfigMap named `no-config`, the mounted volume will be empty. ### Mounted ConfigMaps are updated automatically From f7a73a151bf32b5171daad86632f7785088cc799 Mon Sep 17 00:00:00 2001 From: Michael Date: Thu, 21 Jul 2022 22:00:18 +0800 Subject: [PATCH 127/292] [zh-cn] Pick a nit from /tasks/tools/install-kubectl-linux.md --- content/zh-cn/docs/tasks/tools/install-kubectl-linux.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/content/zh-cn/docs/tasks/tools/install-kubectl-linux.md b/content/zh-cn/docs/tasks/tools/install-kubectl-linux.md index 7d088e8371c0d..45705fb314379 100644 --- a/content/zh-cn/docs/tasks/tools/install-kubectl-linux.md +++ b/content/zh-cn/docs/tasks/tools/install-kubectl-linux.md @@ -217,10 +217,9 @@ Or use this for detailed view of version: cat < Date: Fri, 22 Jul 2022 00:47:00 +0800 Subject: [PATCH 128/292] Fix link in glossary/api-eviction.md --- content/en/docs/reference/glossary/api-eviction.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/reference/glossary/api-eviction.md b/content/en/docs/reference/glossary/api-eviction.md index d450f907439bf..e6db562461478 100644 --- a/content/en/docs/reference/glossary/api-eviction.md +++ b/content/en/docs/reference/glossary/api-eviction.md @@ -22,6 +22,6 @@ When an `Eviction` object is created, the API server terminates the Pod. API-initiated evictions respect your configured [`PodDisruptionBudgets`](/docs/tasks/run-application/configure-pdb/) and [`terminationGracePeriodSeconds`](/docs/concepts/workloads/pods/pod-lifecycle#pod-termination). -API-initiated eviction is not the same as [node-pressure eviction](/docs/concepts/scheduling-eviction/eviction/#kubelet-eviction). +API-initiated eviction is not the same as [node-pressure eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/). * See [API-initiated eviction](/docs/concepts/scheduling-eviction/api-eviction/) for more information. 
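
The glossary entry patched above describes API-initiated eviction only in prose. A minimal sketch of such a request, assuming a Pod named `my-pod` in the `default` namespace (both placeholder names, not taken from the patch), is an `Eviction` object submitted to that Pod's `eviction` subresource:

```yaml
# Illustrative only; the Pod name and namespace below are placeholders.
# Submitting this manifest to /api/v1/namespaces/default/pods/my-pod/eviction
# asks the API server to evict the Pod.
apiVersion: policy/v1
kind: Eviction
metadata:
  name: my-pod        # name of the Pod to evict
  namespace: default  # namespace of that Pod
```

Before terminating the Pod, the API server checks any matching PodDisruptionBudget and honours the Pod's `terminationGracePeriodSeconds`, which is the behaviour the amended glossary text refers to.
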
From a1035d6a4a13f1c5ebd9851268a6e35d188639c2 Mon Sep 17 00:00:00 2001 From: Oliver Radwell Date: Thu, 21 Jul 2022 22:26:45 +0100 Subject: [PATCH 129/292] Apply the same fix to runtime-class.md --- content/en/docs/concepts/containers/runtime-class.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/concepts/containers/runtime-class.md b/content/en/docs/concepts/containers/runtime-class.md index 6366ee05519e1..cd8f74aa29238 100644 --- a/content/en/docs/concepts/containers/runtime-class.md +++ b/content/en/docs/concepts/containers/runtime-class.md @@ -116,7 +116,7 @@ Runtime handlers are configured through containerd's configuration at [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.${HANDLER_NAME}] ``` -See containerd's [config documentation](https://github.com/containerd/cri/blob/master/docs/config.md) +See containerd's [config documentation](https://github.com/containerd/containerd/blob/main/docs/cri/config.md) for more details: #### {{< glossary_tooltip term_id="cri-o" >}} From 35231abc79a899f41a523212947c5d88a1dadad1 Mon Sep 17 00:00:00 2001 From: Sean Wei Date: Fri, 22 Jul 2022 12:33:00 +0800 Subject: [PATCH 130/292] [zh-cn] Sync api-eviction.md --- .../zh-cn/docs/reference/glossary/api-eviction.md | 15 +++++++-------- 1 file changed, 7 insertions(+), 8 deletions(-) diff --git a/content/zh-cn/docs/reference/glossary/api-eviction.md b/content/zh-cn/docs/reference/glossary/api-eviction.md index 4ed41b8648f00..6288b317b7d75 100644 --- a/content/zh-cn/docs/reference/glossary/api-eviction.md +++ b/content/zh-cn/docs/reference/glossary/api-eviction.md @@ -2,27 +2,26 @@ title: API 发起的驱逐 id: api-eviction date: 2021-04-27 -full_link: /zh-cn/docs/concepts/scheduling-eviction/pod-eviction/#api-eviction +full_link: /zh-cn/docs/concepts/scheduling-eviction/api-eviction/ short_description: > API 发起的驱逐是一个先调用 Eviction API 创建驱逐对象,再由该对象体面地中止 Pod 的过程。 aka: tags: - operation --- - - + 你可以通过 kube-apiserver 的客户端,比如 `kubectl drain` 这样的命令,直接调用 Eviction API 发起驱逐。 -当 `Eviction` 对象创建出来之后,该对象将驱动 API 服务器终止选定的Pod。 +当 `Eviction` 对象创建出来之后,该对象将驱动 API 服务器终止选定的 Pod。 API 发起的驱逐取决于你配置的 [`PodDisruptionBudgets`](/zh-cn/docs/tasks/run-application/configure-pdb/) 和 [`terminationGracePeriodSeconds`](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle#pod-termination)。 API 发起的驱逐不同于 -[节点压力引发的驱逐](/zh-cn/docs/concepts/scheduling-eviction/eviction/#kubelet-eviction)。 +[节点压力引发的驱逐](/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction/)。 ## 背景 {#background} -Docker 也有 [卷(Volume)](https://docs.docker.com/storage/) 的概念,但对它只有少量且松散的管理。 +Docker 也有[卷(Volume)](https://docs.docker.com/storage/) 的概念,但对它只有少量且松散的管理。 Docker 卷是磁盘上或者另外一个容器内的一个目录。 Docker 提供卷驱动程序,但是其功能非常有限。 @@ -133,9 +133,9 @@ EBS volume can be pre-populated with data, and that data can be shared between p {{< feature-state for_k8s_version="v1.17" state="deprecated" >}} -`awsElasticBlockStore` 卷将 Amazon Web服务(AWS)[EBS 卷](https://aws.amazon.com/ebs/) -挂载到你的 Pod 中。与 `emptyDir` 在 Pod 被删除时也被删除不同,EBS 卷的内容在删除 Pod -时会被保留,卷只是被卸载掉了。 +`awsElasticBlockStore` 卷将 Amazon Web 服务(AWS)[EBS 卷](https://aws.amazon.com/ebs/)挂载到你的 +Pod 中。与 `emptyDir` 在 Pod 被删除时也被删除不同,EBS 卷的内容在删除 +Pod 时会被保留,卷只是被卸载掉了。 这意味着 EBS 卷可以预先填充数据,并且该数据可以在 Pod 之间共享。 -确保该区域与你的群集所在的区域相匹配。还要检查卷的大小和 EBS 卷类型都适合你的用途。 +确保该区域与你的集群所在的区域相匹配。还要检查卷的大小和 EBS 卷类型都适合你的用途。 -[区域持久盘](https://cloud.google.com/compute/docs/disks/#repds) -特性允许你创建能在同一区域的两个可用区中使用的持久盘。 +[区域持久盘](https://cloud.google.com/compute/docs/disks/#repds)特性允许你创建能在同一区域的两个可用区中使用的持久盘。 要使用这个特性,必须以持久卷(PersistentVolume)的方式提供卷;直接从 Pod 
引用这种卷是不可以的。 @@ -1063,8 +1062,8 @@ Watch out when using this type of volume, because: * 具有相同配置(例如基于同一 PodTemplate 创建)的多个 Pod 会由于节点上文件的不同而在不同节点上有不同的行为。 * 下层主机上创建的文件或目录只能由 root 用户写入。你需要在 - [特权容器](/zh-cn/docs/tasks/configure-pod-container/security-context/) - 中以 root 身份运行进程,或者修改主机上的文件权限以便容器能够写入 `hostPath` 卷。 + [特权容器](/zh-cn/docs/tasks/configure-pod-container/security-context/)中以 + root 身份运行进程,或者修改主机上的文件权限以便容器能够写入 `hostPath` 卷。 Quobyte 支持{{< glossary_tooltip text="容器存储接口(CSI)" term_id="csi" >}}。 推荐使用 CSI 插件以在 Kubernetes 中使用 Quobyte 卷。 -Quobyte 的 GitHub 项目包含以 CSI 形式部署 Quobyte 的 -[说明](https://github.com/quobyte/quobyte-csi#quobyte-csi) -及使用示例。 +Quobyte 的 GitHub 项目包含以 CSI 形式部署 Quobyte +的[说明](https://github.com/quobyte/quobyte-csi#quobyte-csi)及使用示例。 ### rbd @@ -1672,8 +1670,7 @@ must be installed on the cluster and the `CSIMigration` and `CSIMigrationvSphere You can find additional advice on how to migrate in VMware's documentation page [Migrating In-Tree vSphere Volumes to vSphere Container Storage Plug-in](https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/2.0/vmware-vsphere-csp-getting-started/GUID-968D421F-D464-4E22-8127-6CB9FF54423F.html). --> -你可以在 VMware 的文档页面 -[迁移树内 vSphere 卷插件到 vSphere 容器存储插件](https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/2.0/vmware-vsphere-csp-getting-started/GUID-968D421F-D464-4E22-8127-6CB9FF54423F.html) +你可以在 VMware 的文档页面[迁移树内 vSphere 卷插件到 vSphere 容器存储插件](https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/2.0/vmware-vsphere-csp-getting-started/GUID-968D421F-D464-4E22-8127-6CB9FF54423F.html) 中找到有关如何迁移的其他建议。 在本教程中,你会学到如何以及为什么要实现外部化微服务应用配置。 具体来说,你将学习如何使用 Kubernetes ConfigMaps 和 Secrets 设置环境变量, @@ -24,7 +26,12 @@ In this tutorial you will learn how and why to externalize your microservice’s ### 创建 Kubernetes ConfigMaps 和 Secrets {#creating-kubernetes-configmaps-secrets} 在 Kubernetes 中,为 docker 容器设置环境变量有几种不同的方式,比如: @@ -34,9 +41,16 @@ Dockerfile、kubernetes.yml、Kubernetes ConfigMaps、和 Kubernetes Secrets。 比如赋值给不同的容器中的不同环境变量。 ConfigMaps 是存储非机密键值对的 API 对象。 在互动教程中,你会学到如何用 ConfigMap 来保存应用名字。 @@ -49,7 +63,10 @@ Secrets 的更多信息,你可以在[这里](/zh-cn/docs/concepts/configuratio ### 从代码外部化配置 外部化应用配置之所以有用处,是因为配置常常根据环境的不同而变化。 @@ -58,9 +75,18 @@ MicroProfile config 是 MicroProfile 的功能特性, 是一组开放 Java 技术,用于开发、部署云原生微服务。 CDI 提供一套标准的依赖注入能力,使得应用程序可以由相互协作的、松耦合的 beans 组装而成。 MicroProfile Config 为 app 和微服务提供从各种来源,比如应用、运行时、环境,获取配置参数的标准方法。 @@ -87,7 +113,9 @@ CDI & MicroProfile 都会被用在互动教程中, ## 示例:使用 MicroProfile、ConfigMaps、Secrets 实现外部化应用配置 -### [启动互动教程](/zh-cn/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive/) + +[启动互动教程](/zh-cn/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive/) From 9bebc0e4fa732f1a4ac6a129cd6476ad55c6eaa7 Mon Sep 17 00:00:00 2001 From: 0xff-dev Date: Thu, 21 Jul 2022 14:34:28 +0800 Subject: [PATCH 134/292] fix rendering errors --- .../custom-resource-definitions.md | 126 +++++++++++------- 1 file changed, 78 insertions(+), 48 deletions(-) diff --git a/content/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md b/content/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md index df078bdb74740..67db6aa683951 100644 --- a/content/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md +++ b/content/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md @@ -310,7 +310,8 @@ kubectl get crontabs ``` ```none -Error from server 
(NotFound): Unable to list {"stable.example.com" "v1" "crontabs"}: the server could not find the requested resource (get crontabs.stable.example.com) +Error from server (NotFound): Unable to list {"stable.example.com" "v1" "crontabs"}: the server could not +find the requested resource (get crontabs.stable.example.com) ``` 结构化模式本身是一个 [OpenAPI v3.0 验证模式](#validation),其中: @@ -509,7 +514,9 @@ Violations of the structural schema rules are reported in the `NonStructural` co ### 字段剪裁 {#field-pruning} @@ -521,9 +528,8 @@ CustomResourceDefinition 在集群的持久性存储 被 _剪裁(Pruned)_ 掉(删除)。 输出类似于: -```console +```yaml apiVersion: stable.example.com/v1 kind: CronTab metadata: @@ -618,7 +624,9 @@ to clients, `kubectl` also checks for unknown fields and rejects those objects w #### 控制剪裁 {#controlling-pruning} @@ -731,9 +739,8 @@ properties: ``` 此外,所有这类节点也不再受规则 3 约束,也就是说,下面两种模式是被允许的 (注意,仅限于这两种模式,不支持添加新字段的任何其他变种): @@ -776,7 +783,8 @@ RawExtensions (as in `runtime.RawExtension` defined in [k8s.io/apimachinery](https://github.com/kubernetes/apimachinery/blob/03ac7a9ade429d715a1a46ceaa3724c18ebae54f/pkg/runtime/types.go#L94)) holds complete Kubernetes objects, i.e. with `apiVersion` and `kind` fields. -It is possible to specify those embedded objects (both completely without constraints or partially specified) by setting `x-kubernetes-embedded-resource: true`. For example: +It is possible to specify those embedded objects (both completely without constraints or partially specified) +by setting `x-kubernetes-embedded-resource: true`. For example: --> RawExtensions(就像在 [k8s.io/apimachinery](https://github.com/kubernetes/apimachinery/blob/03ac7a9ade429d715a1a46ceaa3724c18ebae54f/pkg/runtime/types.go#L94) @@ -809,9 +817,8 @@ foo: ``` @@ -1250,14 +1257,14 @@ Compilation process includes type checking as well. The compilation failure: - `no_matching_overload`: this function has no overload for the types of the arguments. - e.g. Rule like `self == true` against a field of integer type will get error: + e.g. Rule like `self == true` against a field of integer type will get error: ``` Invalid value: apiextensions.ValidationRule{Rule:"self == true", Message:""}: compilation failed: ERROR: \:1:6: found no matching overload for '_==_' applied to '(int, bool)' ``` - `no_such_field`: does not contain the desired field. - e.g. Rule like `self.nonExistingField > 0` against a non-existing field will return the error: + e.g. Rule like `self.nonExistingField > 0` against a non-existing field will return the error: ``` Invalid value: apiextensions.ValidationRule{Rule:"self.nonExistingField > 0", Message:""}: compilation failed: ERROR: \:1:5: undefined field 'nonExistingField' ``` @@ -1303,7 +1310,7 @@ Validation Rules Examples: | `has(self.expired) && self.created + self.ttl < self.expired` | Validate that 'expired' date is after a 'create' date plus a 'ttl' duration | | `self.health.startsWith('ok')` | Validate a 'health' string field has the prefix 'ok' | | `self.widgets.exists(w, w.key == 'x' && w.foo < 10)` | Validate that the 'foo' property of a listMap item with a key 'x' is less than 10 | -| `type(self) == string ? self == '100%' : self == 1000` | Validate an int-or-string field for both the the int and string cases | +| `type(self) == string ? 
self == '100%' : self == 1000` | Validate an int-or-string field for both the int and string cases | | `self.metadata.name.startsWith(self.prefix)` | Validate that an object's name has the prefix of another field value | | `self.set1.all(e, !(e in self.set2))` | Validate that two listSets are disjoint | | `size(self.names) == size(self.details) && self.names.all(n, n in self.details)` | Validate the 'details' map is keyed by the items in the 'names' listSet | @@ -1469,7 +1476,7 @@ Examples: The `apiVersion`, `kind`, `metadata.name` and `metadata.generateName` are always accessible from the root of the object and from any x-kubernetes-embedded-resource annotated objects. No other metadata properties are accessible. --> -`apiVersion`、`kind``metadata.name` 和 `metadata.generateName` 始终可以从对象的根目录和任何 +`apiVersion`、`kind`、`metadata.name` 和 `metadata.generateName` 始终可以从对象的根目录和任何 带有 `x-kubernetes-embedded-resource` 注解的对象访问。 其他元数据属性都不可访问。 @@ -1605,8 +1612,9 @@ Here is the declarations type mapping between OpenAPIv3 and CEL type: | 带有 format=duration 字符串 | duration (google.protobuf.Duration) | 参考:[CEL 类型](https://github.com/google/cel-spec/blob/v0.6.0/doc/langdef.md#values), [OpenAPI 类型](https://swagger.io/specification/#data-types), @@ -1619,10 +1627,10 @@ types](https://swagger.io/specification/#data-types), [Kubernetes Structural Sch 可用的函数包括: - CEL 标准函数,在[标准定义列表](https://github.com/google/cel-spec/blob/v0.7.0/doc/langdef.md#list-of-standard-definitions)中定义 @@ -1690,7 +1698,8 @@ schema is not mergeable"。 转换规则只允许在模式的“可关联部分(Correlatable Portions)”中使用。 如果所有 `array` 父模式都是 `x-kubernetes-list-type=map`类型的,那么该模式的一部分就是可关联的; @@ -1766,16 +1775,17 @@ longer to execute depending on how long `foo` is. 但是,如果 `foo` 是一个字符串,而你定义了一个验证规则 `self.foo.contains("someString")`, 这个规则需要更长的时间来执行,取决于 `foo` 有多长。 另一个例子是如果 `foo` 是一个数组,而你指定了验证规则 `self.foo.all(x, x > 5)`。 如果没有给出 `foo` 的长度限制,成本系统总是假设最坏的情况,这将发生在任何可以被迭代的事物上(list、map 等)。 因此,通过 `maxItems`,`maxProperties` 和 `maxLength` 进行限制被认为是最佳实践, 以在验证规则中处理任何内容,以防止在成本估算期间验证错误。例如,给定具有一个规则的模式: @@ -1797,9 +1807,9 @@ then the API server rejects this rule on validation budget grounds with error: --> API 服务器以验证预算为由拒绝该规则,并显示错误: ``` - spec.validation.openAPIV3Schema.properties[spec].properties[foo].x-kubernetes-validations[0].rule: Forbidden: - CEL rule exceeded budget by more than 100x (try simplifying the rule, or adding maxItems, maxProperties, and - maxLength where arrays, maps, and strings are used) +spec.validation.openAPIV3Schema.properties[spec].properties[foo].x-kubernetes-validations[0].rule: Forbidden: +CEL rule exceeded budget by more than 100x (try simplifying the rule, or adding maxItems, maxProperties, and +maxLength where arrays, maps, and strings are used) ``` 如果在一个列表内部的一个列表有一个使用 `self.all` 的验证规则,那就会比具有相同规则的非嵌套列表的成本高得多。 @@ -1993,7 +2004,8 @@ Defaulting happens on the object * when reading from etcd using the storage version defaults, * after mutating admission plugins with non-empty patches using the admission webhook object version defaults. -Defaults applied when reading data from etcd are not automatically written back to etcd. An update request via the API is required to persist those defaults back into etcd. +Defaults applied when reading data from etcd are not automatically written back to etcd. +An update request via the API is required to persist those defaults back into etcd. 
--> 默认值设定的行为发生在定制对象上: @@ -2008,7 +2020,9 @@ Defaults applied when reading data from etcd are not automatically written back 默认值一定会被剪裁(除了 `metadata` 字段的默认值设置),且必须通过所提供 的模式定义的检查。 @@ -2020,7 +2034,9 @@ Default values for `metadata` fields of `x-kubernetes-embedded-resources: true` @@ -2074,7 +2090,9 @@ spec: ``` 其中的 `foo` 字段被剪裁掉并重新设置默认值,因为该字段是不可为空的。 `bar` 字段的 `nullable: true` 使得其能够保有其空值。 @@ -2083,9 +2101,14 @@ with `foo` pruned and defaulted because the field is non-nullable, `bar` maintai ### 以 OpenAPI v2 形式发布合法性检查模式 {#publish-validation-schema-in-openapi-v2} @@ -2117,9 +2140,13 @@ OpenAPI v3 合法性检查模式定义会被转换为 OpenAPI v2 模式定义, 的[合法性检查](#validation)。 1. 以下字段会被移除,因为它们在 OpenAPI v2 中不支持(在将来版本中将使用 OpenAPI v3, 因而不会有这些限制) @@ -2251,7 +2278,8 @@ View)和宽视图(Wide View)(使用 `-o wide` 标志)中显示的列 - `labelSelectorPath` 指定定制资源内与 `scale.status.selector` 对应的 JSON 路径。 @@ -2695,6 +2724,7 @@ crontabs/my-new-cron-object 3s * Serve [multiple versions](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/) of a CustomResourceDefinition. + --> * 阅读了解[定制资源](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/) * 参阅 [CustomResourceDefinition](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#customresourcedefinition-v1-apiextensions-k8s-io) From 97d7b02613abbba40e2eeeae788a4bbb197ded04 Mon Sep 17 00:00:00 2001 From: Qiming Teng Date: Sun, 26 Jun 2022 13:23:41 +0800 Subject: [PATCH 135/292] [zh-cn] Resync and normalize configure service account page --- .../configure-service-account.md | 64 ++++++++++--------- 1 file changed, 35 insertions(+), 29 deletions(-) diff --git a/content/zh-cn/docs/tasks/configure-pod-container/configure-service-account.md b/content/zh-cn/docs/tasks/configure-pod-container/configure-service-account.md index badb1f5de2f3d..63f29d00e5e50 100644 --- a/content/zh-cn/docs/tasks/configure-pod-container/configure-service-account.md +++ b/content/zh-cn/docs/tasks/configure-pod-container/configure-service-account.md @@ -26,7 +26,7 @@ not apply. 服务账户为 Pod 中运行的进程提供了一个标识。 {{< note >}} -本文是服务账户的用户使用介绍,描述服务账号在集群中如何起作用。 +本文是服务账户的用户使用介绍,描述服务账户在集群中如何起作用。 你的集群管理员可能已经对你的集群做了定制,因此导致本文中所讲述的内容并不适用。 {{< /note >}} @@ -69,17 +69,16 @@ You can access the API from inside a pod using automatically mounted service acc as described in [Accessing the Cluster](/docs/tasks/accessing-application-cluster/access-cluster/). The API permissions of the service account depend on the [authorization plugin and policy](/docs/reference/access-authn-authz/authorization/#authorization-modules) in use. -In version 1.6+, you can opt out of automounting API credentials for a service account by setting -`automountServiceAccountToken: false` on the service account: +You can opt out of automounting API credentials on `/var/run/secrets/kubernetes.io/serviceaccount/token` for a service account by setting `automountServiceAccountToken: false` on the ServiceAccount: --> 你可以使用自动挂载给 Pod 的服务账户凭据访问 API, [访问集群](/zh-cn/docs/tasks/access-application-cluster/access-cluster/)页面中有相关描述。 服务账户的 API 许可取决于你所使用的 [鉴权插件和策略](/zh-cn/docs/reference/access-authn-authz/authorization/#authorization-modules)。 -在 1.6 以上版本中,你可以通过在服务账户上设置 `automountServiceAccountToken: false` -来实现不给服务账号自动挂载 API 凭据: - +你可以通过在 ServiceAccount 上设置 `automountServiceAccountToken: false` +来实现不给服务账户自动挂载 API 凭据到 `/var/run/secrets/kubernetes.io/serviceaccount/token` +的目的: ```yaml apiVersion: v1 @@ -194,8 +193,7 @@ field of a pod to the name of the service account you wish to use. 
--> 那么你就能看到系统已经自动创建了一个令牌并且被服务账户所引用。 -你可以使用授权插件来 -[设置服务账户的访问许可](/zh-cn/docs/reference/access-authn-authz/rbac/#service-account-permissions)。 +你可以使用授权插件来[设置服务账户的访问许可](/zh-cn/docs/reference/access-authn-authz/rbac/#service-account-permissions)。 要使用非默认的服务账户,将 Pod 的 `spec.serviceAccountName` 字段设置为你想用的服务账户名称。 @@ -224,7 +222,7 @@ a new secret manually. --> ## 手动创建服务账户 API 令牌 -假设我们有一个上面提到的名为 "build-robot" 的服务账户,然后我们手动创建一个新的 Secret。 +假设我们有一个上面提到的名为 "build-robot" 的服务账户,现在我们手动创建一个新的 Secret。 ```shell kubectl create -f - <}} -{{< note >}} 这里省略了 `token` 的内容。 {{< /note >}} @@ -289,7 +287,7 @@ The content of `token` is elided here. ### 创建 ImagePullSecret -- 创建一个 ImagePullSecret,如同[为 Pod 设置 ImagePullSecret](/zh-cn/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod)所述。 +- 创建一个 ImagePullSecret,如[为 Pod 设置 ImagePullSecret](/zh-cn/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod)所述。 ```shell kubectl create secret docker-registry myregistrykey --docker-server=DUMMY_SERVER \ @@ -306,7 +304,9 @@ The content of `token` is elided here. kubectl get secrets myregistrykey ``` - + 输出类似于: ``` @@ -319,9 +319,9 @@ The content of `token` is elided here. Next, modify the default service account for the namespace to use this secret as an imagePullSecret. --> -### 将镜像拉取 Secret 添加到服务账号 +### 将镜像拉取 Secret 添加到服务账户 -接着修改命名空间的 `default` 服务帐户,以将该 Secret 用作 `imagePullSecret`。 +接着修改命名空间的 `default` 服务帐户,令其使用该 Secret 用作 `imagePullSecret`。 ```shell kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}' @@ -336,7 +336,11 @@ You can instead use `kubectl edit`, or manually edit the YAML manifests as shown kubectl get serviceaccounts default -o yaml > ./sa.yaml ``` -`sa.yaml` 文件的内容类似于: + + +`sa.yaml` 文件的输出类似这样: ```yaml apiVersion: v1 @@ -378,7 +382,7 @@ imagePullSecrets: -最后,用新的更新的 `sa.yaml` 文件替换服务账号。 +最后,用新的更新的 `sa.yaml` 文件替换服务账户。 ```shell kubectl replace serviceaccount default -f ./sa.yaml @@ -391,7 +395,7 @@ Now, when a new Pod is created in the current namespace and using the default Se --> ### 验证镜像拉取 Secret 已经被添加到 Pod 规约 -现在,在当前命名空间中创建使用默认服务账号的新 Pod 时,新 Pod +现在,在当前命名空间中创建使用默认服务账户的新 Pod 时,新 Pod 会自动设置其 `.spec.imagePullSecrets` 字段: ```shell @@ -399,7 +403,9 @@ kubectl run nginx --image=nginx --restart=Never kubectl get pod nginx -o=jsonpath='{.spec.imagePullSecrets[0].name}{"\n"}' ``` - + 输出为: ``` @@ -469,7 +475,7 @@ command line arguments to `kube-apiserver`: --> * `--api-audiences` (can be omitted) - 服务账号令牌身份检查组件会检查针对 API 访问所使用的令牌, + 服务账户令牌身份检查组件会检查针对 API 访问所使用的令牌, 确认令牌至少是被绑定到这里所给的受众(audiences)之一。 如果此参数被多次指定,则针对所给的多个受众中任何目标的令牌都会被 Kubernetes API 服务器当做合法的令牌。如果 `--service-account-issuer` @@ -528,7 +534,7 @@ The application is responsible for reloading the token when it rotates. Periodic -## 发现服务账号分发者 +## 发现服务账户分发者 {{< feature-state for_k8s_version="v1.21" state="stable" >}} @@ -537,7 +543,7 @@ The Service Account Issuer Discovery feature is enabled when the Service Account Token Projection feature is enabled, as described [above](#service-account-token-volume-projection). 
--> -当启用服务账号令牌投射时启用发现服务账号分发者(Service Account Issuer Discovery) +当启用服务账户令牌投射时启用发现服务账户分发者(Service Account Issuer Discovery) 这一功能特性,如[上文所述](#service-account-token-volume-projection)。 -发现服务账号分发者这一功能使得用户能够用联邦的方式结合使用 Kubernetes +发现服务账户分发者这一功能使得用户能够用联邦的方式结合使用 Kubernetes 集群(“Identity Provider”,标识提供者)与外部系统(“Relying Parties”, -依赖方)所分发的服务账号令牌。 +依赖方)所分发的服务账户令牌。 当此功能被启用时,Kubernetes API 服务器会在 `/.well-known/openid-configuration` 提供一个 OpenID 提供者配置文档,并在 `/openid/v1/jwks` 处提供与之关联的 @@ -600,9 +606,9 @@ The responses served at `/.well-known/openid-configuration` and compliant. Those documents contain only the parameters necessary to perform validation of Kubernetes service account tokens. --> -对 `/.well-known/openid-configuration` 和 `/openid/v1/jwks` 路径请求的响应 -被设计为与 OIDC 兼容,但不是完全与其一致。 -返回的文档仅包含对 Kubernetes 服务账号令牌进行验证所必须的参数。 +对 `/.well-known/openid-configuration` 和 `/openid/v1/jwks` 路径请求的响应被设计为与 +OIDC 兼容,但不是与其完全一致。 +返回的文档仅包含对 Kubernetes 服务账户令牌进行验证所必须的参数。 -JWKS 响应包含依赖方可以用来验证 Kubernetes 服务账号令牌的公钥数据。 +JWKS 响应包含依赖方可以用来验证 Kubernetes 服务账户令牌的公钥数据。 依赖方先会查询 OpenID 提供者配置,之后使用返回响应中的 `jwks_uri` 来查找 JWKS。 另请参见: -- [服务账号的集群管理员指南](/zh-cn/docs/reference/access-authn-authz/service-accounts-admin/) -- [服务账号签署密钥检索 KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/1393-oidc-discovery) +- [服务账户的集群管理员指南](/zh-cn/docs/reference/access-authn-authz/service-accounts-admin/) +- [服务账户签署密钥检索 KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/1393-oidc-discovery) - [OIDC 发现规范](https://openid.net/specs/openid-connect-discovery-1_0.html) From fc851ff0af767ce098ab5fd4bea26866bb85cf8a Mon Sep 17 00:00:00 2001 From: Kinzhi Date: Wed, 20 Jul 2022 00:36:10 +0800 Subject: [PATCH 136/292] [zh-cn]Update content/zh-cn/docs/concepts/configuration/manage-resources-containers.md [zh-cn]Update content/zh-cn/docs/concepts/configuration/manage-resources-containers.md --- .../configuration/manage-resources-containers.md | 15 ++++++++++++--- 1 file changed, 12 insertions(+), 3 deletions(-) diff --git a/content/zh-cn/docs/concepts/configuration/manage-resources-containers.md b/content/zh-cn/docs/concepts/configuration/manage-resources-containers.md index 4a0cc2e521693..9d71bcdf21af1 100644 --- a/content/zh-cn/docs/concepts/configuration/manage-resources-containers.md +++ b/content/zh-cn/docs/concepts/configuration/manage-resources-containers.md @@ -235,7 +235,7 @@ Kubernetes 不允许设置精度小于 `1m` 的 CPU 资源。 Limits and requests for `memory` are measured in bytes. You can express memory as a plain integer or as a fixed-point number using one of these [quantity](/docs/reference/kubernetes-api/common-definitions/quantity/) suffixes: -E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, +E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value: --> ## 内存资源单位 {#meaning-of-memory} @@ -256,8 +256,8 @@ Pay attention to the case of the suffixes. If you request `400m` of memory, this for 0.4 bytes. Someone who types that probably meant to ask for 400 mebibytes (`400Mi`) or 400 megabytes (`400M`). 
--> -请注意后缀的大小写。如果你请求 `400m` 内存,实际上请求的是 0.4 字节。 -如果有人这样设定资源请求或限制,可能他的实际想法是申请 400 兆字节(`400Mi`) +请注意后缀的大小写。如果你请求 `400m` 临时存储,实际上所请求的是 0.4 字节。 +如果有人这样设定资源请求或限制,可能他的实际想法是申请 400Mi 字节(`400Mi`) 或者 400M 字节。 +请注意后缀的大小写。如果你请求 `400m` 临时存储,实际上所请求的是 0.4 字节。 +如果有人这样设定资源请求或限制,可能他的实际想法是申请 400Mi 字节(`400Mi`) +或者 400M 字节。 + -除了[内置的 admission 插件](/zh/docs/reference/access-authn-authz/admission-controllers/), +除了[内置的 admission 插件](/zh-cn/docs/reference/access-authn-authz/admission-controllers/), 准入插件可以作为扩展独立开发,并以运行时所配置的 Webhook 的形式运行。 此页面描述了如何构建、配置、使用和监视准入 Webhook。 @@ -36,8 +36,8 @@ Mutating admission Webhooks are invoked first, and can modify objects sent to th --> 准入 Webhook 是一种用于接收准入请求并对其进行处理的 HTTP 回调机制。 可以定义两种类型的准入 webhook,即 -[验证性质的准入 Webhook](/zh/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook) 和 -[修改性质的准入 Webhook](/zh/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook)。 +[验证性质的准入 Webhook](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook) 和 +[修改性质的准入 Webhook](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook)。 修改性质的准入 Webhook 会先被调用。它们可以更改发送到 API 服务器的对象以执行自定义的设置默认值操作。 @@ -57,11 +57,9 @@ should use a validating admission webhook, since objects can be modified after b 则应使用验证性质的准入 Webhook,因为对象被修改性质 Webhook 看到之后仍然可能被修改。 {{< /note >}} - ### 先决条件 {#prerequisites} -* 确保 Kubernetes 集群版本至少为 v1.16(以便使用 `admissionregistration.k8s.io/v1` API) 或者 v1.9 (以便使用 `admissionregistration.k8s.io/v1beta1` API)。 - * 确保启用 MutatingAdmissionWebhook 和 ValidatingAdmissionWebhook 控制器。 - [这里](/zh/docs/reference/access-authn-authz/admission-controllers/#is-there-a-recommended-set-of-admission-controllers-to-use) + [这里](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#is-there-a-recommended-set-of-admission-controllers-to-use) 是一组推荐的 admission 控制器,通常可以启用。 -* 确保启用了 `admissionregistration.k8s.io/v1beta1` API。 +* 确保启用了 `admissionregistration.k8s.io/v1` API。 请参阅 Kubernetes e2e 测试中的 [admission webhook 服务器](https://github.com/kubernetes/kubernetes/blob/release-1.21/test/images/agnhost/webhook/main.go) -的实现。webhook 处理由 apiserver 发送的 `AdmissionReview` 请求,并且将其决定 +的实现。webhook 处理由 API 服务器发送的 `AdmissionReview` 请求,并且将其决定 作为 `AdmissionReview` 对象以相同版本发送回去。 示例准入 Webhook 服务器置 `ClientAuth` 字段为 [空](https://github.com/kubernetes/kubernetes/blob/v1.22.0/test/images/agnhost/webhook/config.go#L38-L39), @@ -163,7 +155,7 @@ your webhook configurations accordingly. 
你也可以在集群外部署 webhook。这样做需要相应地更新你的 webhook 配置。 ### 即时配置准入 Webhook @@ -184,8 +176,6 @@ See the [webhook configuration](#webhook-configuration) section for details abou --> 以下是一个 `ValidatingWebhookConfiguration` 示例,mutating webhook 配置与此类似。有关每个配置字段的详细信息,请参阅 [webhook 配置](#webhook-configuration) 部分。 -{{< tabs name="ValidatingWebhookConfiguration_example_1" >}} -{{% tab name="admissionregistration.k8s.io/v1" %}} ```yaml apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration @@ -203,43 +193,26 @@ webhooks: service: namespace: "example-namespace" name: "example-service" - caBundle: "Ci0tLS0tQk......tLS0K" - admissionReviewVersions: ["v1", "v1beta1"] + caBundle: + admissionReviewVersions: ["v1"] sideEffects: None timeoutSeconds: 5 ``` -{{% /tab %}} -{{% tab name="admissionregistration.k8s.io/v1beta1" %}} -```yaml -# 1.16 中被淘汰,推荐使用 admissionregistration.k8s.io/v1 -apiVersion: admissionregistration.k8s.io/v1beta1 -kind: ValidatingWebhookConfiguration -metadata: - name: "pod-policy.example.com" -webhooks: -- name: "pod-policy.example.com" - rules: - - apiGroups: [""] - apiVersions: ["v1"] - operations: ["CREATE"] - resources: ["pods"] - scope: "Namespaced" - clientConfig: - service: - namespace: "example-namespace" - name: "example-service" - caBundle: "Ci0tLS0tQk......tLS0K" - admissionReviewVersions: ["v1beta1"] - timeoutSeconds: 5 -``` -{{% /tab %}} -{{< /tabs >}} +{{< note >}} + +你必须在以上示例中将 `` 替换为一个有效的 VA 证书包, +这是一个用 PEM 编码的 CA 证书包,用于校验 Webhook 的服务器证书。 +{{< /note >}} - -scope 字段指定是仅集群范围的资源(Cluster)还是名字空间范围的资源资源(Namespaced)将与此规则匹配。`*` 表示没有范围限制。 +scope 字段指定是仅集群范围的资源(Cluster)还是名字空间范围的资源资源(Namespaced)将与此规则匹配。 +`*` 表示没有范围限制。 {{< note >}} -对于使用 `admissionregistration.k8s.io/v1` 创建的 webhook 而言,其 webhook 调用的默认超时是 10 秒; -对于使用 `admissionregistration.k8s.io/v1beta1` 创建的 webhook 而言,其默认超时是 30 秒。 -从 kubernetes 1.14 开始,可以设置超时。建议对 webhooks 设置较短的超时时间。 +Webhook 调用的默认超时是 10 秒,你可以设置 `timeout` 并建议对 webhook 设置较短的超时时间。 如果 webhook 调用超时,则根据 webhook 的失败策略处理请求。 {{< /note >}} -当 apiserver 收到与 `rules` 相匹配的请求时,apiserver 按照 `clientConfig` 中指定的方式向 webhook 发送一个 `admissionReview` 请求。 +当一个 API 服务器收到与 `rules` 相匹配的请求时, +该 API 服务器将按照 `clientConfig` 中指定的方式向 webhook 发送一个 `admissionReview` 请求。 -创建 webhook 配置后,系统将花费几秒钟使新配置生效。 +创建 Webhook 配置后,系统将花费几秒钟使新配置生效。 -### 对 apiservers 进行身份认证 {#authenticate-apiservers} +### 对 API 服务器进行身份认证 {#authenticate-apiservers} -如果你的 webhook 需要身份验证,则可以将 apiserver 配置为使用基本身份验证、持有者令牌或证书来向 webhook 提供身份证明。完成此配置需要三个步骤。 +如果你的 Webhook 需要身份验证,则可以将 API 服务器配置为使用基本身份验证、持有者令牌或证书来向 +Webhook 提供身份证明。完成此配置需要三个步骤。 -* 启动 apiserver 时,通过 `--admission-control-config-file` 参数指定准入控制配置文件的位置。 +* 启动 API 服务器时,通过 `--admission-control-config-file` 参数指定准入控制配置文件的位置。 * 在准入控制配置文件中,指定 MutatingAdmissionWebhook 控制器和 ValidatingAdmissionWebhook 控制器应该读取凭据的位置。 凭证存储在 kubeConfig 文件中(是​​的,与 kubectl 使用的模式相同),因此字段名称为 `kubeConfigFile`。 以下是一个准入控制配置文件示例: - - - {{< tabs name="admissionconfiguration_example1" >}} {{% tab name="apiserver.config.k8s.io/v1" %}} ```yaml @@ -329,6 +294,7 @@ plugins: ``` {{% /tab %}} {{% tab name="apiserver.k8s.io/v1alpha1" %}} + ```yaml # 1.17 中被淘汰,推荐使用 apiserver.config.k8s.io/v1 apiVersion: apiserver.k8s.io/v1alpha1 @@ -347,6 +313,7 @@ plugins: kind: WebhookAdmission kubeConfigFile: "" ``` + {{% /tab %}} {{< /tabs >}} @@ -409,9 +376,9 @@ See the [webhook configuration](#webhook-configuration) section for details abou token: "" ``` -当然,你需要设置 webhook 服务器来处理这些身份验证。 +当然,你需要设置 Webhook 服务器来处理这些身份验证请求。 -创建 `admissionregistration.k8s.io/v1` webhook 配置时,`admissionReviewVersions` 是必填字段。 -Webhook 必须支持至少一个当前和以前的 
apiserver 都可以解析的 `AdmissionReview` 版本。 -{{% /tab %}} -{{% tab name="admissionregistration.k8s.io/v1beta1" %}} -```yaml -# v1.16 中被淘汰,推荐使用 admissionregistration.k8s.io/v1 -apiVersion: admissionregistration.k8s.io/v1beta1 -kind: ValidatingWebhookConfiguration -... -webhooks: -- name: my-webhook.example.com - admissionReviewVersions: ["v1beta1"] - ... -``` - - -如果未指定 `admissionReviewVersions`,则创建 `admissionregistration.k8s.io/v1beta1` Webhook 配置时的默认值为 `v1beta1`。 -{{% /tab %}} -{{< /tabs >}} +创建 webhook 配置时,`admissionReviewVersions` 是必填字段。 +Webhook 必须支持至少一个当前和以前的 API 服务器都可以解析的 `AdmissionReview` 版本。 @@ -647,7 +545,7 @@ Example of a minimal response from a webhook to allow a request: --> `response` 至少必须包含以下字段: -* `uid`,从发送到 webhook 的 `request.uid` 中复制而来 +* `uid`,从发送到 Webhook 的 `request.uid` 中复制而来 * `allowed`,设置为 `true` 或 `false` Webhook 允许请求的最简单响应示例: -{{< tabs name="AdmissionReview_response_allow" >}} -{{% tab name="admission.k8s.io/v1" %}} ```json { "apiVersion": "admission.k8s.io/v1", @@ -667,28 +563,12 @@ Webhook 允许请求的最简单响应示例: } } ``` -{{% /tab %}} -{{% tab name="admission.k8s.io/v1beta1" %}} -```json -{ - "apiVersion": "admission.k8s.io/v1beta1", - "kind": "AdmissionReview", - "response": { - "uid": "", - "allowed": true - } -} -``` -{{% /tab %}} -{{< /tabs >}} Webhook 禁止请求的最简单响应示例: -{{< tabs name="AdmissionReview_response_forbid_minimal" >}} -{{% tab name="admission.k8s.io/v1" %}} ```json { "apiVersion": "admission.k8s.io/v1", @@ -699,24 +579,11 @@ Webhook 禁止请求的最简单响应示例: } } ``` -{{% /tab %}} -{{% tab name="admission.k8s.io/v1beta1" %}} -```json -{ - "apiVersion": "admission.k8s.io/v1beta1", - "kind": "AdmissionReview", - "response": { - "uid": "", - "allowed": false - } -} -``` -{{% /tab %}} -{{< /tabs >}} + 当拒绝请求时,Webhook 可以使用 `status` 字段自定义 http 响应码和返回给用户的消息。 @@ -724,8 +591,6 @@ Example of a response to forbid a request, customizing the HTTP status code and [API 文档](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#status-v1-meta)。 禁止请求的响应示例,它定制了向用户显示的 HTTP 状态码和消息: -{{< tabs name="AdmissionReview_response_forbid_details" >}} -{{% tab name="admission.k8s.io/v1" %}} ```json { "apiVersion": "admission.k8s.io/v1", @@ -740,30 +605,12 @@ Example of a response to forbid a request, customizing the HTTP status code and } } ``` -{{% /tab %}} -{{% tab name="admission.k8s.io/v1beta1" %}} -```json -{ - "apiVersion": "admission.k8s.io/v1beta1", - "kind": "AdmissionReview", - "response": { - "uid": "", - "allowed": false, - "status": { - "code": 403, - "message": "You cannot do this because it is Tuesday and your name starts with A" - } - } -} -``` -{{% /tab %}} -{{< /tabs >}} 当允许请求时,mutating准入 Webhook 也可以选择修改传入的对象。 @@ -786,9 +633,8 @@ Base64-encoded, this would be `W3sib3AiOiAiYWRkIiwgInBhdGgiOiAiL3NwZWMvcmVwbGljY -因此,添加该标签的 webhook 响应为: -{{< tabs name="AdmissionReview_response_modify" >}} -{{% tab name="admission.k8s.io/v1" %}} +因此,添加该标签的 Webhook 响应为: + ```json { "apiVersion": "admission.k8s.io/v1", @@ -801,22 +647,50 @@ So a webhook response to add that label would be: } } ``` -{{% /tab %}} -{{% tab name="admission.k8s.io/v1beta1" %}} + + +准入 Webhook 可以选择性地返回在 HTTP `Warning` 头中返回给请求客户端的警告消息,警告代码为 299。 +警告可以与允许或拒绝的准入响应一起发送。 + + +如果你正在实现返回一条警告的 webhook,则: + +* 不要在消息中包括 "Warning:" 前缀 +* 使用警告消息描述该客户端进行 API 请求时会遇到或应意识到的问题 +* 如果可能,将警告限制为 120 个字符 + +{{< caution >}} + +超过 256 个字符的单条警告消息在返回给客户之前可能会被 API 服务器截断。 +如果超过 4096 个字符的警告消息(来自所有来源),则额外的警告消息会被忽略。 +{{< /caution >}} + ```json { - "apiVersion": "admission.k8s.io/v1beta1", + "apiVersion": "admission.k8s.io/v1", "kind": "AdmissionReview", 
"response": { "uid": "", "allowed": true, - "patchType": "JSONPatch", - "patch": "W3sib3AiOiAiYWRkIiwgInBhdGgiOiAiL3NwZWMvcmVwbGljYXMiLCAidmFsdWUiOiAzfV0=" + "warnings": [ + "duplicate envvar entries specified with name MY_ENV", + "memory request less than 4MB specified for container mycontainer, which will not start successfully" + ] } } ``` -{{% /tab %}} -{{< /tabs >}} + @@ -824,22 +698,23 @@ So a webhook response to add that label would be: -要注册准入 Webhook,请创建 `MutatingWebhookConfiguration` 或 -`ValidatingWebhookConfiguration` API 对象。 +要注册准入 Webhook,请创建 `MutatingWebhookConfiguration` 或 `ValidatingWebhookConfiguration` API 对象。 +`MutatingWebhookConfiguration` 或`ValidatingWebhookConfiguration` 对象的名称必须是有效的 +[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 每种配置可以包含一个或多个 Webhook。如果在单个配置中指定了多个 -Webhook,则应为每个 webhook 赋予一个唯一的名称。 -这在 `admissionregistration.k8s.io/v1` 中是必需的,但是在使用 -`admissionregistration.k8s.io/v1beta1` 时强烈建议使用, -以使生成的审核日志和指标更易于与活动配置相匹配。 +Webhook,则应为每个 Webhook 赋予一个唯一的名称。 +这是必需的,以使生成的审计日志和指标更易于与激活的配置相匹配。 每个 Webhook 定义以下内容。 @@ -864,7 +739,7 @@ Each rule specifies one or more operations, apiGroups, apiVersions, and resource * `"*/*"` matches all resources and subresources. * `"pods/*"` matches all subresources of pods. * `"*/status"` matches all status subresources. -* `scope` specifies a scope to match. Valid values are `"Cluster"`, `"Namespaced"`, and `"*"`. Subresources match the scope of their parent resource. Supported in v1.14+. Default is `"*"`, matching pre-1.14 behavior. +* `scope` specifies a scope to match. Valid values are `"Cluster"`, `"Namespaced"`, and `"*"`. Subresources match the scope of their parent resource. Default is `"*"`. * `"Cluster"` means that only cluster-scoped resources will match this rule (Namespace API objects are cluster-scoped). * `"Namespaced"` means that only namespaced resources will match this rule. * `"*"` means that there are no scope restrictions. @@ -879,28 +754,27 @@ Each rule specifies one or more operations, apiGroups, apiVersions, and resource * `"pods/*"` 匹配 pod 的所有子资源。 * `"*/status"` 匹配所有 status 子资源。 * `scope` 指定要匹配的范围。有效值为 `"Cluster"`、`"Namespaced"` 和 `"*"`。 - 子资源匹配其父资源的范围。在 Kubernetes v1.14+ 版本中才被支持。 - 默认值为 `"*"`,对应 1.14 版本之前的行为。 + 子资源匹配其父资源的范围。默认值为 `"*"`。 * `"Cluster"` 表示只有集群作用域的资源才能匹配此规则(API 对象 Namespace 是集群作用域的)。 * `"Namespaced"` 意味着仅具有名字空间的资源才符合此规则。 - * `"*"` 表示没有范围限制。 + * `"*"` 表示没有作用域限制。 -如果传入请求与任何 Webhook 规则的指定操作、组、版本、资源和范围匹配,则该请求将发送到 Webhook。 +如果传入请求与任何 Webhook `rules` 的指定 `operations`、`groups`、`versions`、 +`resources` 和 `scope` 匹配,则该请求将发送到 Webhook。 以下是可用于指定应拦截哪些资源的规则的其他示例。 匹配针对 `apps/v1` 和 `apps/v1beta1` 组中 `deployments` 和 `replicasets` 资源的 `CREATE` 或 `UPDATE` 请求: -{{< tabs name="ValidatingWebhookConfiguration_rules_1" >}} -{{% tab name="admissionregistration.k8s.io/v1" %}} ```yaml apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration @@ -915,130 +789,64 @@ webhooks: scope: "Namespaced" ... ``` -{{% /tab %}} -{{% tab name="admissionregistration.k8s.io/v1beta1" %}} -```yaml -# v1.16 中被废弃,推荐使用 admissionregistration.k8s.io/v1 -apiVersion: admissionregistration.k8s.io/v1beta1 -kind: ValidatingWebhookConfiguration -... -webhooks: -- name: my-webhook.example.com - rules: - - operations: ["CREATE", "UPDATE"] - apiGroups: ["apps"] - apiVersions: ["v1", "v1beta1"] - resources: ["deployments", "replicasets"] - scope: "Namespaced" - ... 
-``` -{{% /tab %}} -{{< /tabs >}} 匹配所有 API 组和版本中的所有资源(但不包括子资源)的创建请求: -{{< tabs name="ValidatingWebhookConfiguration_rules_2" >}} -{{% tab name="admissionregistration.k8s.io/v1" %}} ```yaml apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration -... webhooks: -- name: my-webhook.example.com - rules: - - operations: ["CREATE"] - apiGroups: ["*"] - apiVersions: ["*"] - resources: ["*"] - scope: "*" - ... -``` -{{% /tab %}} -{{% tab name="admissionregistration.k8s.io/v1beta1" %}} -```yaml -# v1.16 中被废弃,推荐使用 admissionregistration.k8s.io/v1 -apiVersion: admissionregistration.k8s.io/v1beta1 -kind: ValidatingWebhookConfiguration -... -webhooks: -- name: my-webhook.example.com - rules: - - operations: ["CREATE"] - apiGroups: ["*"] - apiVersions: ["*"] - resources: ["*"] - scope: "*" - ... + - name: my-webhook.example.com + rules: + - operations: ["CREATE"] + apiGroups: ["*"] + apiVersions: ["*"] + resources: ["*"] + scope: "*" ``` -{{% /tab %}} -{{< /tabs >}} 匹配所有 API 组和版本中所有 `status` 子资源的更新请求: -{{< tabs name="ValidatingWebhookConfiguration_rules_2" >}} -{{% tab name="admissionregistration.k8s.io/v1" %}} ```yaml apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration -... -webhooks: -- name: my-webhook.example.com - rules: - - operations: ["UPDATE"] - apiGroups: ["*"] - apiVersions: ["*"] - resources: ["*/status"] - scope: "*" - ... -``` -{{% /tab %}} -{{% tab name="admissionregistration.k8s.io/v1beta1" %}} -```yaml -# v1.16 中被废弃,推荐使用 admissionregistration.k8s.io/v1 -apiVersion: admissionregistration.k8s.io/v1beta1 -kind: ValidatingWebhookConfiguration -... webhooks: -- name: my-webhook.example.com - rules: - - operations: ["UPDATE"] - apiGroups: ["*"] - apiVersions: ["*"] - resources: ["*/status"] - scope: "*" - ... + - name: my-webhook.example.com + rules: + - operations: ["UPDATE"] + apiGroups: ["*"] + apiVersions: ["*"] + resources: ["*/status"] + scope: "*" ``` -{{% /tab %}} -{{< /tabs >}} -### 匹配请求:objectSelector{#matching-requests-objectselector} +### 匹配请求:objectSelector {#matching-requests-objectselector} -在版本 v1.15+ 中, 通过指定 `objectSelector`,Webhook 能够根据 -可能发送的对象的标签来限制哪些请求被拦截。 +通过指定 `objectSelector`,Webhook 能够根据可能发送的对象的标签来限制哪些请求被拦截。 如果指定,则将对 `objectSelector` 和可能发送到 Webhook 的 object 和 oldObject 进行评估。如果两个对象之一与选择器匹配,则认为该请求已匹配。 -空对象(对于创建操作而言为 oldObject,对于删除操作而言为 newObject), +空对象(对于创建操作而言为 `oldObject`,对于删除操作而言为 `newObject`), 或不能带标签的对象(例如 `DeploymentRollback` 或 `PodProxyOptions` 对象) 被认为不匹配。 @@ -1054,12 +862,9 @@ This example shows a mutating webhook that would match a `CREATE` of any resourc 这个例子展示了一个 mutating webhook,它将匹配带有标签 `foo:bar` 的任何资源的 `CREATE` 的操作: -{{< tabs name="objectSelector_example" >}} -{{% tab name="admissionregistration.k8s.io/v1" %}} ```yaml apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration -... webhooks: - name: my-webhook.example.com objectSelector: @@ -1071,34 +876,13 @@ webhooks: apiVersions: ["*"] resources: ["*"] scope: "*" - ... ``` -{{% /tab %}} -{{% tab name="admissionregistration.k8s.io/v1beta1" %}} -```yaml -# v1.16 中被废弃,推荐使用 admissionregistration.k8s.io/v1 -apiVersion: admissionregistration.k8s.io/v1beta1 -kind: MutatingWebhookConfiguration -... -webhooks: -- name: my-webhook.example.com - objectSelector: - matchLabels: - foo: bar - rules: - - operations: ["CREATE"] - apiGroups: ["*"] - apiVersions: ["*"] - resources: ["*"] - scope: "*" - ... 
-``` -{{% /tab %}} -{{< /tabs >}} + -有关标签选择器的更多示例,请参见[标签](/zh/docs/concepts/overview/working-with-objects/labels)。 +有关标签选择器的更多示例,请参见[标签](/zh-cn/docs/concepts/overview/working-with-objects/labels)。 -通过指定 `namespaceSelector`,Webhook 可以根据具有名字空间的资源所处的 -名字空间的标签来选择拦截哪些资源的操作。 +通过指定 `namespaceSelector`, +Webhook 可以根据具有名字空间的资源所处的名字空间的标签来选择拦截哪些资源的操作。 有关标签选择器的更多示例,请参见 -[标签](/zh/docs/concepts/overview/working-with-objects/labels)。 +[标签](/zh-cn/docs/concepts/overview/working-with-objects/labels)。 API 服务器可以通过多个 API 组或版本来提供对象。 -例如,Kubernetes API 服务器允许通过 `extensions/v1beta1`、`apps/v1beta1`、 -`apps/v1beta2` 和 `apps/v1` API 创建和修改 `Deployment` 对象。 例如,如果一个 webhook 仅为某些 API 组/版本指定了规则(例如 -`apiGroups:["apps"], apiVersions:["v1","v1beta1"]`),而修改资源的请求 -是通过另一个 API 组/版本(例如 `extensions/v1beta1`)发出的, -该请求将不会被发送到 Webhook。 +`apiGroups:["apps"], apiVersions:["v1","v1beta1"]`),而修改资源的请求是通过另一个 +API 组/版本(例如 `extensions/v1beta1`)发出的,该请求将不会被发送到 Webhook。 -在 v1.15+ 中,`matchPolicy` 允许 webhook 定义如何使用其 `rules` 匹配传入的请求。 +`matchPolicy` 允许 webhook 定义如何使用其 `rules` 匹配传入的请求。 允许的值为 `Exact` 或 `Equivalent`。 当 API 服务器停止提供某资源时,该资源不再被视为等同于该资源的其他仍在提供服务的版本。 -例如,`extensions/v1beta1` 中的 Deployment 已被废弃,计划在 v1.16 中默认停止使用。 -在这种情况下,带有 `apiGroups:["extensions"], apiVersions:["v1beta1"], resources: ["deployments"]` -规则的 Webhook 将不再拦截通过 `apps/v1` API 来创建 Deployment 的请求。 -["deployments"] 规则将不再拦截通过 `apps/v1` API 创建的部署。 +例如,`extensions/v1beta1` 中的 Deployment 已被废弃,计划在 v1.16 中移除。 + +移除后,带有 `apiGroups:["extensions"], apiVersions:["v1beta1"], resources: ["deployments"]` +规则的 Webhook 将不再拦截通过 `apps/v1` API 来创建的 Deployment。 +因此,Webhook 应该优先注册稳定版本的资源。 -使用 `admissionregistration.k8s.io/v1` 创建的 admission webhhok 默认为 `Equivalent`。 - -{{% /tab %}} -{{% tab name="admissionregistration.k8s.io/v1beta1" %}} -```yaml -# v1.16 中被废弃,推荐使用 admissionregistration.k8s.io/v1 -apiVersion: admissionregistration.k8s.io/v1beta1 -kind: ValidatingWebhookConfiguration -... webhooks: - name: my-webhook.example.com matchPolicy: Equivalent @@ -1347,15 +1048,12 @@ webhooks: apiVersions: ["v1"] resources: ["deployments"] scope: "Namespaced" - ... ``` - -使用 `admissionregistration.k8s.io/v1beta1` 创建的准入 Webhook 默认为 `Exact`。 -{{% /tab %}} -{{< /tabs >}} +准入 Webhook 所用的 `matchPolicy` 默认为 `Equivalent`。 `host` 不应引用集群中运行的服务;通过指定 `service` 字段来使用服务引用。 -主机可以通过某些 apiserver 中的外部 DNS 进行解析。 +主机可以通过某些 API 服务器中的外部 DNS 进行解析。 (例如,`kube-apiserver` 无法解析集群内 DNS,因为这将违反分层规则)。`host` 也可以是 IP 地址。 +你必须在以上示例中将 `` 替换为一个有效的 VA 证书包, +这是一个用 PEM 编码的 CA 证书包,用于校验 Webhook 的服务器证书。 +{{< /note >}} @@ -1554,65 +1217,30 @@ or the dry-run request will not be sent to the webhook and the API request will Webhook 使用 webhook 配置中的 `sideEffects` 字段显示它们是否有副作用: -* `Unknown`:有关调用 Webhook 的副作用的信息是不可知的。 -如果带有 `dryRun:true` 的请求将触发对该 Webhook 的调用,则该请求将失败,并且不会调用该 Webhook。 + * `None`:调用 webhook 没有副作用。 -* `Some`:调用 webhook 可能会有副作用。 - 如果请求具有 `dry-run` 属性将触发对此 Webhook 的调用, - 则该请求将会失败,并且不会调用该 Webhook。 * `NoneOnDryRun`:调用 webhook 可能会有副作用,但是如果将带有 `dryRun: true` 属性的请求发送到 webhook,则 webhook 将抑制副作用(该 webhook 可识别 `dryRun`)。 - -允许值: -* 在 `admissionregistration.k8s.io/v1beta1` 中,`sideEffects` 可以设置为 - `Unknown`、`None`、`Some` 或者 `NoneOnDryRun`,并且默认值为 `Unknown`。 -* 在 `admissionregistration.k8s.io/v1` 中, `sideEffects` 必须设置为 - `None` 或者 `NoneOnDryRun`。 - 这是一个 validating webhook 的示例,表明它对 `dryRun: true` 请求没有副作用: -{{< tabs name="ValidatingWebhookConfiguration_sideEffects" >}} -{{% tab name="admissionregistration.k8s.io/v1" %}} ```yaml apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration -... 
-webhooks: -- name: my-webhook.example.com - sideEffects: NoneOnDryRun - ... -``` -{{% /tab %}} -{{% tab name="admissionregistration.k8s.io/v1beta1" %}} -```yaml -# v1.16 中被废弃,推荐使用 admissionregistration.k8s.io/v1 -apiVersion: admissionregistration.k8s.io/v1beta1 -kind: ValidatingWebhookConfiguration -... webhooks: -- name: my-webhook.example.com - sideEffects: NoneOnDryRun - ... + - name: my-webhook.example.com + sideEffects: NoneOnDryRun ``` -{{% /tab %}} -{{< /tabs >}} -使用 `admissionregistration.k8s.io/v1` 创建的准入 Webhook 默认超时为 10 秒。 -{{% /tab %}} -{{% tab name="admissionregistration.k8s.io/v1beta1" %}} -```yaml -# v1.16 中被废弃,推荐使用 admissionregistration.k8s.io/v1 -apiVersion: admissionregistration.k8s.io/v1beta1 -kind: ValidatingWebhookConfiguration -... webhooks: -- name: my-webhook.example.com - timeoutSeconds: 2 - ... + - name: my-webhook.example.com + timeoutSeconds: 2 ``` - -使用 `admissionregistration.k8s.io/v1beta1` 创建的准入 Webhook 默认超时为 30 秒。 -{{% /tab %}} -{{< /tabs >}} +准入 Webhook 所用的超时时间默认为 10 秒。 -在 v1.15+ 中,允许修改性质的准入插件感应到其他插件所做的更改, +要允许修改性质的准入插件感应到其他插件所做的更改, 如果修改性质的 Webhook 修改了一个对象,则会重新运行内置的修改性质的准入插件, 并且修改性质的 Webhook 可以指定 `reinvocationPolicy` 来控制是否也重新调用它们。 @@ -1740,31 +1345,13 @@ Here is an example of a mutating webhook opting into being re-invoked if later a --> 这是一个修改性质的 Webhook 的示例,该 Webhook 在以后的准入插件修改对象时被重新调用: -{{< tabs name="MutatingWebhookConfiguration_reinvocationPolicy" >}} -{{% tab name="admissionregistration.k8s.io/v1" %}} ```yaml apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration -... webhooks: - name: my-webhook.example.com reinvocationPolicy: IfNeeded - ... -``` -{{% /tab %}} -{{% tab name="admissionregistration.k8s.io/v1beta1" %}} -```yaml -# v1.16 中被废弃,推荐使用 admissionregistration.k8s.io/v1 -apiVersion: admissionregistration.k8s.io/v1beta1 -kind: MutatingWebhookConfiguration -... -webhooks: -- name: my-webhook.example.com - reinvocationPolicy: IfNeeded - ... ``` -{{% /tab %}} -{{< /tabs >}} -使用 `admissionregistration.k8s.io/v1` 创建的准入 Webhook 将 -`failurePolicy` 默认设置为 `Fail`。 - -{{% /tab %}} -{{% tab name="admissionregistration.k8s.io/v1beta1" %}} -```yaml -# v1.16 中被废弃,推荐使用 admissionregistration.k8s.io/v1 -apiVersion: admissionregistration.k8s.io/v1beta1 -kind: MutatingWebhookConfiguration -... -webhooks: -- name: my-webhook.example.com - failurePolicy: Fail - ... -``` - -使用 `admissionregistration.k8s.io/v1beta1` 创建的准入 Webhook 将 -`failurePolicy` 默认设置为 `Ignore`。 -{{% /tab %}} -{{< /tabs >}} +准入 Webhook 所用的默认 `failurePolicy` 是 `Fail`。 -API 服务器提供了监视准入 Webhook 行为的方法。这些监视机制可帮助集群管理员 -回答以下问题: +API 服务器提供了监视准入 Webhook 行为的方法。这些监视机制可帮助集群管理员回答以下问题: 1. 哪个修改性质的 webhook 改变了 API 请求中的对象? 2. 修改性质的 Webhook 对对象做了哪些更改? @@ -1868,19 +1429,19 @@ API 服务器提供了监视准入 Webhook 行为的方法。这些监视机制 Sometimes it's useful to know which mutating webhook mutated the object in a API request, and what change did the webhook apply. 
--> -有时,了解 API 请求中的哪个修改性质的 Webhook 使对象改变以及该 -Webhook 应用了哪些更改很有用。 +有时,了解 API 请求中的哪个修改性质的 Webhook 使对象改变以及该 Webhook 应用了哪些更改很有用。 -在 v1.16+ 中,kube-apiserver 针对每个修改性质的 Webhook 调用执行[审计](/zh/docs/tasks/debug/debug-cluster/audit/)操作。 +Kubernetes API 服务器针对每个修改性质的 Webhook 调用执行[审计](/zh-cn/docs/tasks/debug/debug-cluster/audit/)操作。 每个调用都会生成一个审计注解,记述请求对象是否发生改变, -可选地还可以根据 webhook 的准入响应生成一个注解,记述所应用的修补。 +可选地还可以根据 Webhook 的准入响应生成一个注解,记述所应用的修补。 针对给定请求的给定执行阶段,注解被添加到审计事件中, 然后根据特定策略进行预处理并写入后端。 @@ -1891,122 +1452,124 @@ The audit level of a event determines which annotations get recorded: -在 `Metadata` 或更高审计级别上,将使用 JSON 负载记录带有键名 +- 在 `Metadata` 或更高审计级别上,将使用 JSON 负载记录带有键名 `mutation.webhook.admission.k8s.io/round_{round idx}_index_{order idx}` 的注解, 该注解表示针对给定请求调用了 Webhook,以及该 Webhook 是否更改了对象。 - -例如,对于正在被重新调用的某 Webhook,所记录的注解如下。 -Webhook 在 mutating Webhook 链中排在第三个位置,并且在调用期间未改变请求对象。 - -```yaml -# 审计事件相关记录 -{ - "kind": "Event", - "apiVersion": "audit.k8s.io/v1", - "annotations": { - "mutation.webhook.admission.k8s.io/round_1_index_2": "{\"configuration\":\"my-mutating-webhook-configuration.example.com\",\"webhook\":\"my-webhook.example.com\",\"mutated\": false}" - # 其他注解 - ... - } - # 其他字段 - ... -} -``` - -```yaml -# 反序列化的注解值 -{ - "configuration": "my-mutating-webhook-configuration.example.com", - "webhook": "my-webhook.example.com", - "mutated": false -} -``` - - -对于在第一轮中调用的 Webhook,所记录的注解如下。 -Webhook 在 mutating Webhook 链中排在第一位,并在调用期间改变了请求对象。 - -```yaml -# 审计事件相关记录 -{ - "kind": "Event", - "apiVersion": "audit.k8s.io/v1", - "annotations": { - "mutation.webhook.admission.k8s.io/round_0_index_0": "{\"configuration\":\"my-mutating-webhook-configuration.example.com\",\"webhook\":\"my-webhook-always-mutate.example.com\",\"mutated\": true}" - # 其他注解 - ... - } - # 其他字段 - ... -} -``` - -```yaml -# 反序列化的注解值 -{ - "configuration": "my-mutating-webhook-configuration.example.com", - "webhook": "my-webhook-always-mutate.example.com", - "mutated": true -} -``` + + 例如,对于正在被重新调用的某 Webhook,所记录的注解如下。 + Webhook 在 mutating Webhook 链中排在第三个位置,并且在调用期间未改变请求对象。 + + ```yaml + # 审计事件相关记录 + { + "kind": "Event", + "apiVersion": "audit.k8s.io/v1", + "annotations": { + "mutation.webhook.admission.k8s.io/round_1_index_2": "{\"configuration\":\"my-mutating-webhook-configuration.example.com\",\"webhook\":\"my-webhook.example.com\",\"mutated\": false}" + # 其他注解 + ... + } + # 其他字段 + ... + } + ``` + + ```yaml + # 反序列化的注解值 + { + "configuration": "my-mutating-webhook-configuration.example.com", + "webhook": "my-webhook.example.com", + "mutated": false + } + ``` + + + 对于在第一轮中调用的 Webhook,所记录的注解如下。 + Webhook 在 mutating Webhook 链中排在第一位,并在调用期间改变了请求对象。 + + ```yaml + # 审计事件相关记录 + { + "kind": "Event", + "apiVersion": "audit.k8s.io/v1", + "annotations": { + "mutation.webhook.admission.k8s.io/round_0_index_0": "{\"configuration\":\"my-mutating-webhook-configuration.example.com\",\"webhook\":\"my-webhook-always-mutate.example.com\",\"mutated\": true}" + # 其他注解 + ... + } + # 其他字段 + ... 
+ } + ``` + + ```yaml + # 反序列化的注解值 + { + "configuration": "my-mutating-webhook-configuration.example.com", + "webhook": "my-webhook-always-mutate.example.com", + "mutated": true + } + ``` -在 `Request` 或更高审计级别上,将使用 JSON 负载记录带有键名为 -`patch.webhook.admission.k8s.io/round_{round idx}_index_{order idx}` 的注解, -该注解表明针对给定请求调用了 Webhook 以及应用于请求对象之上的修改。 - - -例如,以下是针对正在被重新调用的某 Webhook 所记录的注解。 -Webhook 在修改性质的 Webhook 链中排在第四,并在其响应中包含一个 JSON 补丁, -该补丁已被应用于请求对象。 - -```yaml -# 审计事件相关记录 -{ - "kind": "Event", - "apiVersion": "audit.k8s.io/v1", - "annotations": { - "patch.webhook.admission.k8s.io/round_1_index_3": "{\"configuration\":\"my-other-mutating-webhook-configuration.example.com\",\"webhook\":\"my-webhook-always-mutate.example.com\",\"patch\":[{\"op\":\"add\",\"path\":\"/data/mutation-stage\",\"value\":\"yes\"}],\"patchType\":\"JSONPatch\"}" - # 其他注解 - ... - } - # 其他字段 - ... -} -``` - -```yaml -# 反序列化的注解值 -{ - "configuration": "my-other-mutating-webhook-configuration.example.com", - "webhook": "my-webhook-always-mutate.example.com", - "patchType": "JSONPatch", - "patch": [ - { - "op": "add", - "path": "/data/mutation-stage", - "value": "yes" - } - ] -} -``` + `patch.webhook.admission.k8s.io/round_{round idx}_index_{order idx}` gets logged with JSON payload indicating + a webhook gets invoked for given request and what patch gets applied to the request object. +--> +- 在 `Request` 或更高审计级别上,将使用 JSON 负载记录带有键名为 + `patch.webhook.admission.k8s.io/round_{round idx}_index_{order idx}` 的注解, + 该注解表明针对给定请求调用了 Webhook 以及应用于请求对象之上的修改。 + + + 例如,以下是针对正在被重新调用的某 Webhook 所记录的注解。 + Webhook 在修改性质的 Webhook 链中排在第四,并在其响应中包含一个 JSON 补丁, + 该补丁已被应用于请求对象。 + + ```yaml + # 审计事件相关记录 + { + "kind": "Event", + "apiVersion": "audit.k8s.io/v1", + "annotations": { + "patch.webhook.admission.k8s.io/round_1_index_3": "{\"configuration\":\"my-other-mutating-webhook-configuration.example.com\",\"webhook\":\"my-webhook-always-mutate.example.com\",\"patch\":[{\"op\":\"add\",\"path\":\"/data/mutation-stage\",\"value\":\"yes\"}],\"patchType\":\"JSONPatch\"}" + # 其他注解 + ... + } + # 其他字段 + ... + } + ``` + + ```yaml + # 反序列化的注解值 + { + "configuration": "my-other-mutating-webhook-configuration.example.com", + "webhook": "my-webhook-always-mutate.example.com", + "patchType": "JSONPatch", + "patch": [ + { + "op": "add", + "path": "/data/mutation-stage", + "value": "yes" + } + ] + } + ``` -Kube-apiserver 从 `/metrics` 端点公开 Prometheus 指标,这些指标可用于监控和诊断 -apiserver 状态。以下指标记录了与准入 Webhook 相关的状态。 +API 服务器从 `/metrics` 端点公开 Prometheus 指标,这些指标可用于监控和诊断 API 服务器状态。 +以下指标记录了与准入 Webhook 相关的状态。 `kube-system` 名字空间包含由 Kubernetes 系统创建的对象, 例如用于控制平面组件的服务账号,诸如 `kube-dns` 之类的 Pod 等。 -意外更改或拒绝 `kube-system` 名字空间中的请求可能会导致控制平面组件 -停止运行或者导致未知行为发生。 +意外更改或拒绝 `kube-system` +名字空间中的请求可能会导致控制平面组件停止运行或者导致未知行为发生。 如果你的准入 Webhook 不想修改 Kubernetes 控制平面的行为,请使用 -[`namespaceSelector`](#matching-requests-namespaceselector) 避免 -拦截 `kube-system` 名字空间。 +[`namespaceSelector`](#matching-requests-namespaceselector) +避免拦截 `kube-system` 名字空间。 From 38495ab1c089dc66d628be0af9f795e0105f3f43 Mon Sep 17 00:00:00 2001 From: "Mr. Erlison" Date: Fri, 22 Jul 2022 07:59:03 -0300 Subject: [PATCH 138/292] Update whatsnext translation Signed-off-by: Mr. 
Erlison --- .../pt-br/docs/reference/setup-tools/kubeadm/kubeadm-token.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-token.md b/content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-token.md index 406d3b2983d85..a4ce8074fa87c 100644 --- a/content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-token.md +++ b/content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-token.md @@ -22,6 +22,6 @@ O `kubeadm init` cria um token inicial com um TTL de 24 horas. Os comandos a seg ## Listar um token kubeadm {#cmd-token-list} {{< include "generated/kubeadm_token_list.md" >}} -## {{% heading "O que vem a seguir?" %}} +## {{% heading "whatsnext" %}} -* [kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join) para inicializar um nó `worker` do Kubernetes e associá-lo ao cluster \ No newline at end of file +* [kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join) para inicializar um nó `worker` do Kubernetes e associá-lo ao cluster From 945222d47f4ab191c8305e56165bdd463fc1f2e4 Mon Sep 17 00:00:00 2001 From: "Mr. Erlison" Date: Fri, 22 Jul 2022 08:06:33 -0300 Subject: [PATCH 139/292] Remove reviewers Signed-off-by: Mr. Erlison --- .../docs/reference/setup-tools/kubeadm/kubeadm-version.md | 3 --- 1 file changed, 3 deletions(-) diff --git a/content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-version.md b/content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-version.md index 5bf2ed0e31b6d..c990af8c5fc3c 100644 --- a/content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-version.md +++ b/content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-version.md @@ -1,7 +1,4 @@ --- -reviewers: -- luxas -- jbeda title: kubeadm version content_type: conceito weight: 80 From 6d95407f1066c5b464a3e9ec5bd54e31f250712c Mon Sep 17 00:00:00 2001 From: Michael Date: Fri, 22 Jul 2022 19:14:01 +0800 Subject: [PATCH 140/292] [zh-cn] Fix containerd config link --- content/zh-cn/docs/concepts/containers/runtime-class.md | 4 ++-- .../docs/setup/production-environment/container-runtimes.md | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/content/zh-cn/docs/concepts/containers/runtime-class.md b/content/zh-cn/docs/concepts/containers/runtime-class.md index 5eb922eee5d31..808db309ea539 100644 --- a/content/zh-cn/docs/concepts/containers/runtime-class.md +++ b/content/zh-cn/docs/concepts/containers/runtime-class.md @@ -198,10 +198,10 @@ handler 需要配置在 runtimes 块中: ``` -更详细信息,请查阅 containerd 的[配置指南](https://github.com/containerd/cri/blob/master/docs/config.md) +更详细信息,请查阅 containerd 的[配置指南](https://github.com/containerd/containerd/blob/main/docs/cri/config.md) #### [cri-o](https://cri-o.io/) diff --git a/content/zh-cn/docs/setup/production-environment/container-runtimes.md b/content/zh-cn/docs/setup/production-environment/container-runtimes.md index 5d28727d05897..da4016068a206 100644 --- a/content/zh-cn/docs/setup/production-environment/container-runtimes.md +++ b/content/zh-cn/docs/setup/production-environment/container-runtimes.md @@ -398,12 +398,12 @@ When using kubeadm, manually configure the #### 重载沙箱(pause)镜像 {#override-pause-image-containerd} -在你的 [containerd 配置](https://github.com/containerd/cri/blob/master/docs/config.md)中, +在你的 [containerd 配置](https://github.com/containerd/containerd/blob/main/docs/cri/config.md)中, 你可以通过设置以下选项重载沙箱镜像: ```toml From 83c60fad73cbd01bd11b753e59e6b7758f422b01 Mon Sep 17 00:00:00 2001 From: "Mr. 
Erlison" Date: Fri, 22 Jul 2022 09:32:00 -0300 Subject: [PATCH 141/292] Fix typo Signed-off-by: Mr. Erlison --- content/pt-br/docs/reference/glossary/sig.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/pt-br/docs/reference/glossary/sig.md b/content/pt-br/docs/reference/glossary/sig.md index cde20f5451f56..0a5911c0bf3a0 100644 --- a/content/pt-br/docs/reference/glossary/sig.md +++ b/content/pt-br/docs/reference/glossary/sig.md @@ -4,13 +4,13 @@ id: sig date: 2018-04-12 full_link: https://github.com/kubernetes/community/blob/master/sig-list.md#master-sig-list short_description: > - Membros da comunidade que gerenciam coletivamente e continuamente uma parte ou projeto maior do cõdigo aberto do Kubernetes. + Membros da comunidade que gerenciam coletivamente e continuamente uma parte ou projeto maior do código aberto do Kubernetes. aka: tags: - community --- - {{< glossary_tooltip text="Membros da comunidade" term_id="member" >}} que gerenciam coletivamente e continuamente uma parte ou projeto maior do cõdigo aberto do Kubernetes. + {{< glossary_tooltip text="Membros da comunidade" term_id="member" >}} que gerenciam coletivamente e continuamente uma parte ou projeto maior do código aberto do Kubernetes. From 5035fb3dea88a317b9871796c0252e125d2297ce Mon Sep 17 00:00:00 2001 From: Michael Date: Fri, 22 Jul 2022 20:43:57 +0800 Subject: [PATCH 142/292] [zh-cn] resync /glossary/extensions.md --- content/zh-cn/docs/reference/glossary/extensions.md | 9 ++++----- .../zh-cn/docs/reference/glossary/garbage-collection.md | 5 +++-- 2 files changed, 7 insertions(+), 7 deletions(-) diff --git a/content/zh-cn/docs/reference/glossary/extensions.md b/content/zh-cn/docs/reference/glossary/extensions.md index ae33b45a2ef00..a9d100f98e877 100644 --- a/content/zh-cn/docs/reference/glossary/extensions.md +++ b/content/zh-cn/docs/reference/glossary/extensions.md @@ -2,7 +2,7 @@ title: 扩展组件(Extensions) id: Extensions date: 2019-02-01 -full_link: /zh-cn/docs/concepts/extend-kubernetes/extend-cluster/#extensions +full_link: /zh-cn/docs/concepts/extend-kubernetes/#extensions short_description: > 扩展组件是扩展并与 Kubernetes 深度集成以支持新型硬件的软件组件。 aka: @@ -15,7 +15,7 @@ tags: title: Extensions id: Extensions date: 2019-02-01 -full_link: /docs/concepts/extend-kubernetes/extend-cluster/#extensions +full_link: /docs/concepts/extend-kubernetes/#extensions short_description: > Extensions are software components that extend and deeply integrate with Kubernetes to support new types of hardware. 
@@ -32,10 +32,9 @@ tags: 许多集群管理员会使用托管的 Kubernetes 或其某种发行包,这些集群预装了扩展。 -因此,大多数 Kubernetes 用户将不需要 -安装[扩展组件](/zh-cn/docs/concepts/extend-kubernetes/extend-cluster/#extensions), +因此,大多数 Kubernetes 用户将不需要安装[扩展组件](/zh-cn/docs/concepts/extend-kubernetes/), 需要编写新的扩展组件的用户就更少了。 diff --git a/content/zh-cn/docs/reference/glossary/garbage-collection.md b/content/zh-cn/docs/reference/glossary/garbage-collection.md index f6ca64f58d061..3974bc47661b9 100644 --- a/content/zh-cn/docs/reference/glossary/garbage-collection.md +++ b/content/zh-cn/docs/reference/glossary/garbage-collection.md @@ -35,14 +35,15 @@ tags: Kubernetes 使用垃圾收集机制来清理资源,例如: -[未使用的容器和镜像](/zh-cn/docs/concepts/workloads/controllers/garbage-collection/#containers-images)、 +[未使用的容器和镜像](/zh-cn/docs/concepts/architecture/garbage-collection/#containers-images)、 [失败的 Pod](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection)、 [目标资源拥有的对象](/zh-cn/docs/concepts/overview/working-with-objects/owners-dependents/)、 [已完成的 Job](/zh-cn/docs/concepts/workloads/controllers/ttlafterfinished/)、 From f8aa712c5f2c8fe50cd1ed75a53e2ac18c198d99 Mon Sep 17 00:00:00 2001 From: Michael Date: Fri, 22 Jul 2022 22:28:22 +0800 Subject: [PATCH 143/292] [zh-cn] updated /access-authn-authz/service-accounts-admin.md --- .../service-accounts-admin.md | 45 +++++++++---------- 1 file changed, 22 insertions(+), 23 deletions(-) diff --git a/content/zh-cn/docs/reference/access-authn-authz/service-accounts-admin.md b/content/zh-cn/docs/reference/access-authn-authz/service-accounts-admin.md index 1fec09b6a95c4..74af35317df01 100644 --- a/content/zh-cn/docs/reference/access-authn-authz/service-accounts-admin.md +++ b/content/zh-cn/docs/reference/access-authn-authz/service-accounts-admin.md @@ -17,14 +17,14 @@ weight: 50 这是一篇针对服务账号的集群管理员指南。你应该熟悉 -[配置 Kubernetes 服务账号](/zh/docs/tasks/configure-pod-container/configure-service-account/)。 +[配置 Kubernetes 服务账号](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/)。 对鉴权和用户账号的支持已在规划中,当前并不完备。 为了更好地描述服务账号,有时这些不完善的特性也会被提及。 @@ -56,16 +56,15 @@ Kubernetes 区分用户账号和服务账号的概念,主要基于以下原因 accounts for components of that system. Because service accounts can be created without many constraints and have namespaced names, such config is portable. --> -- 用户账号是针对人而言的。 服务账号是针对运行在 Pod 中的进程而言的。 -- 用户账号是全局性的。其名称跨集群中名字空间唯一的。服务账号是名字空间作用域的。 +- 用户账号是针对人而言的。而服务账号是针对运行在 Pod 中的进程而言的。 +- 用户账号是全局性的。其名称在某集群中的所有名字空间中必须是唯一的。服务账号是名字空间作用域的。 - 通常情况下,集群的用户账号可能会从企业数据库进行同步,其创建需要特殊权限, 并且涉及到复杂的业务流程。 - 服务账号创建有意做得更轻量,允许集群用户为了具体的任务创建服务账号 - 以遵从权限最小化原则。 + 服务账号创建有意做得更轻量,允许集群用户为了具体的任务创建服务账号以遵从权限最小化原则。 - 对人员和服务账号审计所考虑的因素可能不同。 -- 针对复杂系统的配置包可能包含系统组件相关的各种服务账号的定义。因为服务账号 - 的创建约束不多并且有名字空间域的名称,这种配置是很轻量的。 - +- 针对复杂系统的配置包可能包含系统组件相关的各种服务账号的定义。 + 因为服务账号的创建约束不多并且有名字空间域的名称,这种配置是很轻量的。 + ## 服务账号的自动化 {#service-account-automation} -三个独立组件协作完成服务账号相关的自动化: +以下三个独立组件协作完成服务账号相关的自动化: - `ServiceAccount` 准入控制器 - Token 控制器 @@ -95,11 +94,11 @@ It acts synchronously to modify pods as they are created or updated. When this p ### ServiceAccount 准入控制器 {#serviceaccount-admission-controller} 对 Pod 的改动通过一个被称为 -[准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/) +[准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/) 的插件来实现。它是 API 服务器的一部分。 当 Pod 被创建或更新时,它会同步地修改 Pod。 -如果该插件处于激活状态(在大多数发行版中都是默认激活的),当 Pod 被创建 -或更新时它会进行以下操作: +如果该插件处于激活状态(在大多数发行版中都是默认激活的), +当 Pod 被创建或更新时它会进行以下操作: @@ -133,8 +132,8 @@ It acts synchronously to modify pods as they are created or updated. 
When this p -ServiceAccount 准入控制器将添加如下投射卷,而不是为令牌控制器 -所生成的不过期的服务账号令牌而创建的基于 Secret 的卷。 +ServiceAccount 准入控制器将添加如下投射卷, +而不是为令牌控制器所生成的不过期的服务账号令牌而创建的基于 Secret 的卷。 ```yaml - name: kube-api-access-<随机后缀> @@ -161,7 +160,7 @@ ServiceAccount 准入控制器将添加如下投射卷,而不是为令牌控 This projected volume consists of three sources: 1. A ServiceAccountToken acquired from kube-apiserver via TokenRequest API. It will expire after 1 hour by default or when the pod is deleted. It is bound to the pod and has kube-apiserver as the audience. -1. A ConfigMap containing a CA bundle used for verifying connections to the kube-apiserver. This feature depends on the `RootCAConfigMap` feature gate, which publishes a "kube-root-ca.crt" ConfigMap to every namespace. `RootCAConfigMap` feature gate is graduated to GA in 1.21 and default to true. (This feature will be removed from --feature-gate arg in 1.22). +1. A ConfigMap containing a CA bundle used for verifying connections to the kube-apiserver. This feature depends on the `RootCAConfigMap` feature gate, which publishes a "kube-root-ca.crt" ConfigMap to every namespace. `RootCAConfigMap` feature gate is graduated to GA in 1.21 and default to true. (This flag will be removed from --feature-gate arg in 1.22) 1. A DownwardAPI that references the namespace of the pod. --> 此投射卷有三个数据源: @@ -179,7 +178,7 @@ This projected volume consists of three sources: -参阅[投射卷](/zh/docs/tasks/configure-pod-container/configure-projected-volume-storage/) +参阅[投射卷](/zh-cn/docs/tasks/configure-pod-container/configure-projected-volume-storage/) 了解进一步的细节。 ### 服务账号控制器 {#serviceaccount-controller} -服务账号控制器管理各名字空间下的 ServiceAccount 对象,并且保证每个活跃的 -名字空间下存在一个名为 "default" 的 ServiceAccount。 +服务账号控制器管理各名字空间下的 ServiceAccount 对象, +并且保证每个活跃的名字空间下存在一个名为 "default" 的 ServiceAccount。 From 5760a94491b97dc9acb6d08f822873a79b638e94 Mon Sep 17 00:00:00 2001 From: "donghui.jiang" Date: Wed, 6 Jul 2022 10:59:41 +0800 Subject: [PATCH 144/292] [zh-cn] update resource-quota-v1.md Chinese version --- .../policy-resources/resource-quota-v1.md | 832 ++++++++++++++++++ 1 file changed, 832 insertions(+) create mode 100644 content/zh-cn/docs/reference/kubernetes-api/policy-resources/resource-quota-v1.md diff --git a/content/zh-cn/docs/reference/kubernetes-api/policy-resources/resource-quota-v1.md b/content/zh-cn/docs/reference/kubernetes-api/policy-resources/resource-quota-v1.md new file mode 100644 index 0000000000000..044e865468ac6 --- /dev/null +++ b/content/zh-cn/docs/reference/kubernetes-api/policy-resources/resource-quota-v1.md @@ -0,0 +1,832 @@ +--- +api_metadata: + apiVersion: "v1" + import: "k8s.io/api/core/v1" + kind: "ResourceQuota" +content_type: "api_reference" +description: "ResourceQuota 设置每个命名空间强制执行的聚合配额限制。" +title: "ResourceQuota" +weight: 2 +--- + + + +`apiVersion: v1` + +`import "k8s.io/api/core/v1"` + +## ResourceQuota {#ResourceQuota} + + +ResourceQuota 设置每个命名空间强制执行的聚合配额限制。 + +
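Before the field-by-field reference that follows, a rough sketch of what a ResourceQuota manifest can look like; the name, namespace, and quota values below are made up for illustration:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota            # hypothetical name
  namespace: my-namespace     # hypothetical namespace
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
  scopeSelector:
    matchExpressions:
      # Only count objects in the "low" priority class toward this quota.
      - operator: In
        scopeName: PriorityClass
        values: ["low"]
```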
+ +- **apiVersion**: v1 + +- **kind**: ResourceQuota + +- **metadata** (}}">ObjectMeta) + + + + 标准的对象元数据。 + 更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata + +- **spec** (}}">ResourceQuotaSpec) + + + + spec 定义所需的配额。 + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + +- **status** (}}">ResourceQuotaStatus) + + + + status 定义实际执行的配额及其当前使用情况。 + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + +## ResourceQuotaSpec {#ResourceQuotaSpec} + +ResourceQuotaSpec 定义为 Quota 强制执行所需的硬限制。 + +
+ +- **hard** (map[string]}}">Quantity) + + + + hard 是每种指定资源所需的硬性限制集合。 + 更多信息: https://kubernetes.io/docs/concepts/policy/resource-quotas/ + +- **scopeSelector** (ScopeSelector) + + + + scopeSelector 也是一组过滤器的集合,和 scopes 类似, + 必须匹配配额所跟踪的每个对象,但使用 ScopeSelectorOperator 结合可能的值来表示。 + 对于要匹配的资源,必须同时匹配 scopes 和 scopeSelector(如果在 spec 中设置了的话)。 + + + + + scope 选择算符表示的是由限定范围的资源选择算符进行 **逻辑与** 计算得出的结果。 + + - **scopeSelector.matchExpressions** ([]ScopedResourceSelectorRequirement) + + + + 按资源范围划分的范围选择算符需求列表。 + + + + + 限定范围的资源选择算符需求是一种选择算符,包含值、范围名称和将二者关联起来的运算符。 + + - **scopeSelector.matchExpressions.operator** (string),必需 + + + + 表示范围与一组值之间的关系。有效的运算符为 In、NotIn、Exists、DoesNotExist。 + + - **scopeSelector.matchExpressions.scopeName** (string),必需 + + + + 选择器所适用的范围的名称。 + + - **scopeSelector.matchExpressions.values** ([]string) + + + + 字符串值数组。 + 如果操作符是 In 或 NotIn,values 数组必须是非空的。 + 如果操作符是 Exists 或 DoesNotExist,values 数组必须为空。 + 该数组将在策略性合并补丁操作期间被替换。 + +- **scopes** ([]string) + + + + 一个匹配被配额跟踪的所有对象的过滤器集合。 + 如果没有指定,则默认匹配所有对象。 + +## ResourceQuotaStatus {#ResourceQuotaStatus} + + +ResourceQuotaStatus 定义硬性限制和观测到的用量。 + +
+ +- **hard** (map[string]}}">Quantity) + + + + hard 是每种指定资源所强制实施的硬性限制集合。 + 更多信息: https://kubernetes.io/docs/concepts/policy/resource-quotas/ + +- **used** (map[string]}}">Quantity) + + + + used 是当前命名空间中所观察到的资源总用量。 + +## ResourceQuotaList {#ResourceQuotaList} + + +ResourceQuotaList 是 ResourceQuota 列表。 + +
+ +- **apiVersion**:v1 + +- **kind**:ResourceQuotaList + +- **metadata** (}}">ListMeta) + + + + 标准列表元数据。 + 更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + +- **items** ([]}}">ResourceQuota),必需 + + + + items 是 ResourceQuota 对象的列表。 + 更多信息: https://kubernetes.io/docs/concepts/policy/resource-quotas/ + + +## 操作 {#Operations} + +
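The operations below are the raw HTTP endpoints; in day-to-day use they are usually reached through `kubectl`, roughly like this (the quota and namespace names are made up):

```shell
# Read one quota and list all quotas; names are illustrative only.
kubectl get resourcequota demo-quota --namespace=my-namespace -o yaml
kubectl get resourcequotas --all-namespaces
```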
+ + +### `get` 读取指定的 ResourceQuota + + +#### HTTP 请求 + +GET /api/v1/namespaces/{namespace}/resourcequotas/{name} + + +#### 参数 + +- **name** (**路径参数**): string, 必需 + + ResourceQuota 的名称 + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">ResourceQuota): OK + +401: Unauthorized + + +### `get` 读取指定的 ResourceQuota 的状态 + + +#### HTTP 请求 + +GET /api/v1/namespaces/{namespace}/resourcequotas/{name}/status + + +#### 参数 + +- **name** (**路径参数**): string, 必需 + + ResourceQuota 的名称 + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">ResourceQuota): OK + +401: Unauthorized + + +### `list` 列出或监视 ResourceQuota 类别的对象 + + +#### HTTP 请求 + +GET /api/v1/namespaces/{namespace}/resourcequotas + + +#### 参数 + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + +- **allowWatchBookmarks** (**查询参数**): boolean + + }}">allowWatchBookmarks + +- **continue** (**查询参数**): string + + }}">continue + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + +- **limit** (**查询参数**): integer + + }}">limit + +- **pretty** (**查询参数**): string + + }}">pretty + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + +- **timeoutSeconds** (**查询参数**): integer + + }}">timeoutSeconds + +- **watch** (**查询参数**): boolean + + }}">watch + + +#### 响应 + +200 (}}">ResourceQuotaList): OK + +401: Unauthorized + + +### `list` 列出或监视 ResourceQuota 类别的对象 + + +#### HTTP 请求 + +GET /api/v1/resourcequotas + + +#### 参数 + +- **allowWatchBookmarks** (**查询参数**): boolean + + }}">allowWatchBookmarks + +- **continue** (**查询参数**): string + + }}">continue + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + +- **limit** (**查询参数**): integer + + }}">limit + +- **pretty** (**查询参数**): string + + }}">pretty + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + +- **timeoutSeconds** (**查询参数**): integer + + }}">timeoutSeconds + +- **watch** (**查询参数**): boolean + + }}">watch + + +#### 响应 + +200 (}}">ResourceQuotaList): OK + +401: Unauthorized + + +### `create` 创建一个 ResourceQuota + + +#### HTTP 请求 + +POST /api/v1/namespaces/{namespace}/resourcequotas + + +#### 参数 + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + +- **body**: }}">ResourceQuota, 必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">ResourceQuota): OK + +201 (}}">ResourceQuota): Created + +202 (}}">ResourceQuota): Accepted + +401: Unauthorized + + +### `update` 更新指定的 ResourceQuota + + +#### HTTP 请求 + +PUT /api/v1/namespaces/{namespace}/resourcequotas/{name} + + +#### 参数 + +- **name** (**路径参数**): string, 必需 + + ResourceQuota 的名称 + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + +- **body**: }}">ResourceQuota, 必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">ResourceQuota): OK + +201 
(}}">ResourceQuota): Created + +401: Unauthorized + + +### `update` 更新指定 ResourceQuota 的状态 + + +#### HTTP 请求 + +PUT /api/v1/namespaces/{namespace}/resourcequotas/{name}/status + + +#### 参数 + +- **name** (**路径参数**): string, 必需 + + ResourceQuota 的名称 + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + +- **body**: }}">ResourceQuota, 必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">ResourceQuota): OK + +201 (}}">ResourceQuota): Created + +401: Unauthorized + + +### `patch` 部分更新指定的 ResourceQuota + + +#### HTTP 请求 + +PATCH /api/v1/namespaces/{namespace}/resourcequotas/{name} + + +#### 参数 + +- **name** (**路径参数**): string, 必需 + + ResourceQuota 的名称 + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + +- **body**: }}">Patch, 必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **force** (**查询参数**): boolean + + }}">force + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">ResourceQuota): OK + +201 (}}">ResourceQuota): Created + +401: Unauthorized + + +### `patch` 部分更新指定 ResourceQuota 的状态 + + +#### HTTP 请求 + +PATCH /api/v1/namespaces/{namespace}/resourcequotas/{name}/status + + +#### 参数 + +- **name** (**路径参数**): string, 必需 + + ResourceQuota 的名称 + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + +- **body**: }}">Patch, 必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **force** (**查询参数**): boolean + + }}">force + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">ResourceQuota): OK + +201 (}}">ResourceQuota): Created + +401: Unauthorized + + +### `delete` 删除 ResourceQuota + + +#### HTTP 请求 + +DELETE /api/v1/namespaces/{namespace}/resourcequotas/{name} + + +#### 参数 + +- **name** (**路径参数**): string, 必需 + + ResourceQuota 的名称 + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + +- **body**: }}">DeleteOptions + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **gracePeriodSeconds** (**查询参数**): integer + + }}">gracePeriodSeconds + +- **pretty** (**查询参数**): string + + }}">pretty + +- **propagationPolicy** (**查询参数**): string + + }}">propagationPolicy + + +#### 响应 + +200 (}}">ResourceQuota): OK + +202 (}}">ResourceQuota): Accepted + +401: Unauthorized + + +### `deletecollection` 删除 ResourceQuota 的集合 + + +#### HTTP 请求 + +DELETE /api/v1/namespaces/{namespace}/resourcequotas + + +#### 参数 + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + +- **body**: }}">DeleteOptions + +- **continue** (**查询参数**): string + + }}">continue + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + +- **gracePeriodSeconds** (**查询参数**): integer + + }}">gracePeriodSeconds + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + +- **limit** (**查询参数**): integer + + }}">limit + +- **pretty** (**查询参数**): string + + }}">pretty + +- **propagationPolicy** (**查询参数**): string + + }}">propagationPolicy + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + +- **timeoutSeconds** (**查询参数**): integer + + 
}}">timeoutSeconds + + +#### 响应 + +200 (}}">Status): OK + +401: Unauthorized + From 38893467ee9e035c2efe6a28158d4bfc8dcf7f4a Mon Sep 17 00:00:00 2001 From: yanrongshi Date: Fri, 22 Jul 2022 21:39:07 +0800 Subject: [PATCH 145/292] Update _index.md --- .../zh-cn/docs/concepts/workloads/_index.md | 43 +++++++++---------- 1 file changed, 20 insertions(+), 23 deletions(-) diff --git a/content/zh-cn/docs/concepts/workloads/_index.md b/content/zh-cn/docs/concepts/workloads/_index.md index 9f95e580f9879..a9e0ccc89d3d5 100644 --- a/content/zh-cn/docs/concepts/workloads/_index.md +++ b/content/zh-cn/docs/concepts/workloads/_index.md @@ -25,10 +25,10 @@ a critical failure on the {{< glossary_tooltip text="node" term_id="node" >}} wh Pod is running means that all the Pods on that node fail. Kubernetes treats that level of failure as final: you would need to create a new Pod even if the node later recovers. --> -无论你的负载是单一组件还是由多个一同工作的组件构成,在 Kubernetes 中你 -可以在一组 [Pods](/zh-cn/docs/concepts/workloads/pods) 中运行它。 +在 Kubernetes 中,无论你的负载是由单个组件还是由多个一同工作的组件构成, +你都可以在一组 [Pod](/zh-cn/docs/concepts/workloads/pods) 中运行它。 在 Kubernetes 中,Pod 代表的是集群上处于运行状态的一组 -{{< glossary_tooltip text="容器" term_id="container" >}}。 +{{< glossary_tooltip text="容器" term_id="container" >}} 的集合。 -Kubernetes Pods 有[确定的生命周期](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/)。 -例如,当某 Pod 在你的集群中运行时,Pod 运行所在的 +Kubernetes Pods 遵循[预定义的生命周期](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/)。 +例如,当在你的集群中运行了某个 Pod,但是 Pod 所在的 {{< glossary_tooltip text="节点" term_id="node" >}} 出现致命错误时, -所有该节点上的 Pods 都会失败。Kubernetes 将这类失败视为最终状态: -即使该节点后来恢复正常运行,你也需要创建新的 Pod 来恢复应用。 +所有该节点上的 Pods 的状态都会变成失败。Kubernetes 将这类失败视为最终状态: +即使该节点后来恢复正常运行,你也需要创建新的 Pod 以恢复应用。 -不过,为了让用户的日子略微好过一些,你并不需要直接管理每个 Pod。 -相反,你可以使用 _负载资源_ 来替你管理一组 Pods。 -这些资源配置 {{< glossary_tooltip term_id="controller" text="控制器" >}} -来确保合适类型的、处于运行状态的 Pod 个数是正确的,与你所指定的状态相一致。 +不过,为了减轻用户的使用负担,通常不需要用户直接管理每个 Pod。 +而是使用**负载资源**来替用户管理一组 Pod。 +这些负载资源通过配置 {{< glossary_tooltip term_id="controller" text="控制器" >}} +来确保正确类型的、处于运行状态的 Pod 个数是正确的,与用户所指定的状态相一致。 Kubernetes 提供若干种内置的工作负载资源: @@ -76,7 +76,7 @@ Kubernetes 提供若干种内置的工作负载资源: [ReplicaSet](/zh-cn/docs/concepts/workloads/controllers/replicaset/) (替换原来的资源 {{< glossary_tooltip text="ReplicationController" term_id="replication-controller" >}})。 `Deployment` 很适合用来管理你的集群上的无状态应用,`Deployment` 中的所有 - `Pod` 都是相互等价的,并且在需要的时候被换掉。 + `Pod` 都是相互等价的,并且在需要的时候被替换。 * [StatefulSet](/zh-cn/docs/concepts/workloads/controllers/statefulset/) 让你能够运行一个或者多个以某种方式跟踪应用状态的 Pods。 例如,如果你的负载会将数据作持久存储,你可以运行一个 `StatefulSet`,将每个 @@ -115,13 +115,12 @@ of Kubernetes' core. For example, if you wanted to run a group of `Pods` for you stop work unless _all_ the Pods are available (perhaps for some high-throughput distributed task), then you can implement or install an extension that does provide that feature. 
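A rough sketch of how such a third-party workload resource could be registered through a CustomResourceDefinition; the group, kind, and schema here are invented for illustration:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # Must be <plural>.<group>; everything in this example is made up.
  name: podgroups.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: PodGroup
    plural: podgroups
    singular: podgroup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                # Hypothetical field: how many Pods must be available before work starts.
                minAvailable:
                  type: integer
```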
--> -在庞大的 Kubernetes 生态系统中,你还可以找到一些提供额外操作的第三方 -工作负载资源。通过使用 -[定制资源定义(CRD)](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/), +在庞大的 Kubernetes 生态系统中,你还可以找到一些提供额外操作的第三方工作负载相关的资源。 +通过使用[定制资源定义(CRD)](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/), 你可以添加第三方工作负载资源,以完成原本不是 Kubernetes 核心功能的工作。 例如,如果你希望运行一组 `Pods`,但要求所有 Pods 都可用时才执行操作 -(比如针对某种高吞吐量的分布式任务),你可以实现一个能够满足这一需求 -的扩展,并将其安装到集群中运行。 +(比如针对某种高吞吐量的分布式任务),你可以基于定制资源实现一个能够满足这一需求的扩展, +并将其安装到集群中运行。 ## {{% heading "whatsnext" %}} @@ -136,8 +135,7 @@ As well as reading about each resource, you can learn about specific tasks that 除了阅读了解每类资源外,你还可以了解与这些资源相关的任务: * [使用 Deployment 运行一个无状态的应用](/zh-cn/docs/tasks/run-application/run-stateless-application-deployment/) -* 以[单实例](/zh-cn/docs/tasks/run-application/run-single-instance-stateful-application/) - 或者[多副本集合](/zh-cn/docs/tasks/run-application/run-replicated-stateful-application/) +* 以[单实例](/zh-cn/docs/tasks/run-application/run-single-instance-stateful-application/)或者[多副本集合](/zh-cn/docs/tasks/run-application/run-replicated-stateful-application/) 的形式运行有状态的应用; * [使用 `CronJob` 运行自动化的任务](/zh-cn/docs/tasks/job/automated-tasks-with-cron-jobs/) @@ -145,8 +143,7 @@ As well as reading about each resource, you can learn about specific tasks that To learn about Kubernetes' mechanisms for separating code from configuration, visit [Configuration](/docs/concepts/configuration/). --> -要了解 Kubernetes 将代码与配置分离的实现机制,可参阅 -[配置部分](/zh-cn/docs/concepts/configuration/)。 +要了解 Kubernetes 将代码与配置分离的实现机制,可参阅[配置部分](/zh-cn/docs/concepts/configuration/)。 -使用 `certificates.k8s.io` API 创建的证书由指定 [CA](#a-note-to-cluster-administrators) 颁发。 +使用 `certificates.k8s.io` API 创建的证书由指定 [CA](#configuring-your-cluster-to-provide-signing) 颁发。 将集群配置为使用集群根目录 CA 可以达到这个目的,但是你永远不要依赖这一假定。 不要以为这些证书将针对群根目录 CA 进行验证。 {{< /note >}} @@ -62,7 +62,7 @@ install it via your operating system's software sources, or fetch it from ## 集群中的 TLS 信任 -信任 Pod 中运行的应用程序所提供的[自定义 CA](#a-note-to-cluster-administrators) 通常需要一些额外的应用程序配置。 +信任 Pod 中运行的应用程序所提供的[自定义 CA](#configuring-your-cluster-to-provide-signing) 通常需要一些额外的应用程序配置。 你需要将 CA 证书包添加到 TLS 客户端或服务器信任的 CA 证书列表中。 例如,你可以使用 Golang TLS 配置通过解析证书链并将解析的证书添加到 [`tls.Config`](https://pkg.go.dev/crypto/tls#Config) 结构中的 `RootCAs` From 35260abc77b75f35da173a80ed0eefb37abba959 Mon Sep 17 00:00:00 2001 From: Sean Wei Date: Sat, 23 Jul 2022 11:15:00 +0800 Subject: [PATCH 147/292] [zh-cn] Resync labels.md --- .../overview/working-with-objects/labels.md | 115 +++++++++++------- 1 file changed, 73 insertions(+), 42 deletions(-) diff --git a/content/zh-cn/docs/concepts/overview/working-with-objects/labels.md b/content/zh-cn/docs/concepts/overview/working-with-objects/labels.md index 650eb02a4ee94..df23d8cf18d38 100644 --- a/content/zh-cn/docs/concepts/overview/working-with-objects/labels.md +++ b/content/zh-cn/docs/concepts/overview/working-with-objects/labels.md @@ -16,11 +16,10 @@ weight: 40 -_标签(Labels)_ 是附加到 Kubernetes 对象(比如 Pods)上的键值对。 +**标签(Labels)**是附加到 Kubernetes 对象(比如 Pods)上的键值对。 标签旨在用于指定对用户有意义且相关的对象的标识属性,但不直接对核心系统有语义含义。 标签可以用于组织和选择对象的子集。标签可以在创建时附加到对象,随后可以随时添加和修改。 每个对象都可以定义一组键/值标签。每个键对于给定对象必须是唯一的。 @@ -49,7 +48,7 @@ and CLIs. Non-identifying information should be recorded using Labels enable users to map their own organizational structures onto system objects in a loosely coupled fashion, without requiring clients to store these mappings. 
--> -## 动机 +## 动机 {#motivation} 标签使用户能够以松散耦合的方式将他们自己的组织结构映射到系统对象,而无需客户端存储这些映射。 @@ -78,15 +77,15 @@ These are examples of [commonly used labels](/docs/concepts/overview/working-wit -## 语法和字符集 +## 语法和字符集 {#syntax-and-character-set} -_标签_ 是键值对。有效的标签键有两个段:可选的前缀和名称,用斜杠(`/`)分隔。 +**标签**是键值对。有效的标签键有两个段:可选的前缀和名称,用斜杠(`/`)分隔。 名称段是必需的,必须小于等于 63 个字符,以字母数字字符(`[a-z0-9A-Z]`)开头和结尾, 带有破折号(`-`),下划线(`_`),点( `.`)和之间的字母数字。 前缀是可选的。如果指定,前缀必须是 DNS 子域:由点(`.`)分隔的一系列 DNS 标签,总共不超过 253 个字符, @@ -111,10 +110,33 @@ Valid label value: * 除非标签值为空,必须以字母数字字符(`[a-z0-9A-Z]`)开头和结尾 * 包含破折号(`-`)、下划线(`_`)、点(`.`)和字母或数字 + +例如,这是一个有 `environment: production` 和 `app: nginx` 标签的 Pod 配置文件: + +```yaml + +apiVersion: v1 +kind: Pod +metadata: + name: label-demo + labels: + environment: production + app: nginx +spec: + containers: + - name: nginx + image: nginx:1.14.2 + ports: + - containerPort: 80 + +``` + ## 标签选择算符 {#label-selectors} @@ -124,15 +146,15 @@ Unlike [names and UIDs](/docs/user-guide/identifiers), labels do not provide uni -通过 _标签选择算符_,客户端/用户可以识别一组对象。标签选择算符是 Kubernetes 中的核心分组原语。 +通过**标签选择算符**,客户端/用户可以识别一组对象。标签选择算符是 Kubernetes 中的核心分组原语。 -API 目前支持两种类型的选择算符:_基于等值的_ 和 _基于集合的_。 -标签选择算符可以由逗号分隔的多个 _需求_ 组成。 -在多个需求的情况下,必须满足所有要求,因此逗号分隔符充当逻辑 _与_(`&&`)运算符。 +API 目前支持两种类型的选择算符:**基于等值的**和**基于集合的**。 +标签选择算符可以由逗号分隔的多个**需求**组成。 +在多个需求的情况下,必须满足所有要求,因此逗号分隔符充当逻辑**与**(`&&`)运算符。 -{{< note >}} 对于某些 API 类别(例如 ReplicaSet)而言,两个实例的标签选择算符不得在命名空间内重叠, 否则它们的控制器将互相冲突,无法确定应该存在的副本个数。 {{< /note >}} +{{< caution >}} -{{< caution >}} 对于基于等值的和基于集合的条件而言,不存在逻辑或(`||`)操作符。 你要确保你的过滤语句按合适的方式组织。 {{< /caution >}} @@ -162,14 +184,14 @@ For both equality-based and set-based conditions there is no logical _OR_ (`||`) ### _Equality-based_ requirement _Equality-_ or _inequality-based_ requirements allow filtering by label keys and values. Matching objects must satisfy all of the specified label constraints, though they may have additional labels as well. -Three kinds of operators are admitted `=`,`==`,`!=`. The first two represent _equality_ (and are simply synonyms), while the latter represents _inequality_. For example: +Three kinds of operators are admitted `=`,`==`,`!=`. The first two represent _equality_ (and are synonyms), while the latter represents _inequality_. For example: --> -### _基于等值的_ 需求 +### **基于等值的**需求 -_基于等值_ 或 _基于不等值_ 的需求允许按标签键和值进行过滤。 +**基于等值**或**基于不等值**的需求允许按标签键和值进行过滤。 匹配对象必须满足所有指定的标签约束,尽管它们也可能具有其他标签。 -可接受的运算符有 `=`、`==` 和 `!=` 三种。 -前两个表示 _相等_(并且只是同义词),而后者表示 _不相等_。例如: +可接受的运算符有 `=`、`==` 和 `!=` 三种。 +前两个表示**相等**(并且是同义词),而后者表示**不相等**。例如: ``` environment = production @@ -214,9 +236,9 @@ spec: _Set-based_ label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: `in`,`notin` and `exists` (only the key identifier). For example: --> -### _基于集合_ 的需求 +### **基于集合**的需求 -_基于集合_ 的标签需求允许你通过一组值来过滤键。 +**基于集合**的标签需求允许你通过一组值来过滤键。 支持三种操作符:`in`、`notin` 和 `exists`(只可以用在键标识符上)。例如: ``` @@ -240,19 +262,19 @@ Similarly the comma separator acts as an _AND_ operator. 
So filtering resources * 第三个示例选择了所有包含了有 `partition` 标签的资源;没有校验它的值。 * 第四个示例选择了所有没有 `partition` 标签的资源;没有校验它的值。 -类似地,逗号分隔符充当 _与_ 运算符。因此,使用 `partition` 键(无论为何值)和 +类似地,逗号分隔符充当 **与**运算符。因此,使用 `partition` 键(无论为何值)和 `environment` 不同于 `qa` 来过滤资源可以使用 `partition, environment notin (qa)` 来实现。 -_基于集合_ 的标签选择算符是相等标签选择算符的一般形式,因为 `environment=production` +**基于集合**的标签选择算符是相等标签选择算符的一般形式,因为 `environment=production` 等同于 `environment in (production)`;`!=` 和 `notin` 也是类似的。 -_基于集合_ 的要求可以与基于 _相等_ 的要求混合使用。例如:`partition in (customerA, customerB),environment!=qa`。 +**基于集合**的要求可以与基于**相等**的要求混合使用。例如:`partition in (customerA, customerB),environment!=qa`。 ## API @@ -270,22 +292,24 @@ LIST 和 WATCH 操作可以使用查询参数指定标签选择算符过滤一 * _equality-based_ requirements: `?labelSelector=environment%3Dproduction,tier%3Dfrontend` * _set-based_ requirements: `?labelSelector=environment+in+%28production%2Cqa%29%2Ctier+in+%28frontend%29` --> -* _基于等值_ 的需求:`?labelSelector=environment%3Dproduction,tier%3Dfrontend` -* _基于集合_ 的需求:`?labelSelector=environment+in+%28production%2Cqa%29%2Ctier+in+%28frontend%29` +* **基于等值**的需求:`?labelSelector=environment%3Dproduction,tier%3Dfrontend` +* **基于集合**的需求:`?labelSelector=environment+in+%28production%2Cqa%29%2Ctier+in+%28frontend%29` 两种标签选择算符都可以通过 REST 客户端用于 list 或者 watch 资源。 -例如,使用 `kubectl` 定位 `apiserver`,可以使用 _基于等值_ 的标签选择算符可以这么写: +例如,使用 `kubectl` 定位 `apiserver`,可以使用**基于等值**的标签选择算符可以这么写: ```shell kubectl get pods -l environment=production,tier=frontend ``` - -或者使用 _基于集合的_ 需求: + +或者使用**基于集合的**需求: ```shell kubectl get pods -l 'environment in (production),tier in (frontend)' @@ -294,14 +318,16 @@ kubectl get pods -l 'environment in (production),tier in (frontend)' -正如刚才提到的,_基于集合_ 的需求更具有表达力。例如,它们可以实现值的 _或_ 操作: +正如刚才提到的,**基于集合**的需求更具有表达力。例如,它们可以实现值的**或**操作: ```shell kubectl get pods -l 'environment in (production, qa)' ``` - -或者通过 _exists_ 运算符限制不匹配: + +或者通过**exists**运算符限制不匹配: ```shell kubectl get pods -l 'environment,environment notin (frontend)' @@ -318,7 +344,7 @@ also use label selectors to specify sets of other resources, such as ### 在 API 对象中设置引用 一些 Kubernetes 对象,例如 [`services`](/zh-cn/docs/concepts/services-networking/service/) -和 [`replicationcontrollers`](/zh-cn/docs/concepts/workloads/controllers/replicationcontroller/) , +和 [`replicationcontrollers`](/zh-cn/docs/concepts/workloads/controllers/replicationcontroller/), 也使用了标签选择算符去指定了其他资源的集合,例如 [pods](/zh-cn/docs/concepts/workloads/pods/)。 @@ -335,7 +361,7 @@ Labels selectors for both objects are defined in `json` or `yaml` files using ma 应该管理的 pods 的数量也是由标签选择算符定义的。 两个对象的标签选择算符都是在 `json` 或者 `yaml` 文件中使用映射定义的,并且只支持 -_基于等值_ 需求的选择算符: +**基于等值**需求的选择算符: ```json "selector": { @@ -343,7 +369,9 @@ _基于等值_ 需求的选择算符: } ``` - + 或者 ```yaml @@ -359,15 +387,19 @@ this selector (respectively in `json` or `yaml` format) is equivalent to `compon #### 支持基于集合需求的资源 比较新的资源,例如 [`Job`](/zh-cn/docs/concepts/workloads/controllers/job/)、 [`Deployment`](/zh-cn/docs/concepts/workloads/controllers/deployment/)、 -[`Replica Set`](/zh-cn/docs/concepts/workloads/controllers/replicaset/) 和 +[`ReplicaSet`](/zh-cn/docs/concepts/workloads/controllers/replicaset/) 和 [`DaemonSet`](/zh-cn/docs/concepts/workloads/controllers/daemonset/), -也支持 _基于集合的_ 需求。 +也支持**基于集合的**需求。 ```yaml selector: @@ -379,7 +411,7 @@ selector: ``` `matchLabels` 是由 `{key,value}` 对组成的映射。 @@ -395,10 +427,9 @@ selector: #### Selecting sets of nodes One use case for selecting over labels is to constrain the set of nodes onto which a pod can schedule. 
-See the documentation on [node selection](/docs/concepts/configuration/assign-pod-node/) for more information. +See the documentation on [node selection](/docs/concepts/scheduling-eviction/assign-pod-node/) for more information. --> #### 选择节点集 通过标签进行选择的一个用例是确定节点集,方便 Pod 调度。 有关更多信息,请参阅[选择节点](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/)文档。 - From 121f0419fa4f00363b92a4b21d814eed381d0719 Mon Sep 17 00:00:00 2001 From: windsonsea Date: Sat, 23 Jul 2022 14:00:06 +0800 Subject: [PATCH 148/292] updated text on the home page --- content/en/_index.html | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/en/_index.html b/content/en/_index.html index 09b4f8d06aedc..831e27b27fc60 100644 --- a/content/en/_index.html +++ b/content/en/_index.html @@ -16,7 +16,7 @@ {{% blocks/feature image="scalable" %}} #### Planet Scale -Designed on the same principles that allows Google to run billions of containers a week, Kubernetes can scale without increasing your ops team. +Designed on the same principles that allow Google to run billions of containers a week, Kubernetes can scale without increasing your operations team. {{% /blocks/feature %}} @@ -43,12 +43,12 @@

The Challenges of Migrating 150+ Microservices to Kubernetes



- Attend KubeCon North America on October 24-28, 2022
+ Attend KubeCon North America on October 24-28, 2022



- Attend KubeCon Europe on April 17-21, 2023
+ Attend KubeCon Europe on April 17-21, 2023
From 4405b558e2ae8a50dbf4391dc0c0c2a218db3c67 Mon Sep 17 00:00:00 2001 From: Michael Date: Sat, 23 Jul 2022 08:32:02 +0800 Subject: [PATCH 149/292] [zh-cn] resync content/zh-cn/_index.html --- content/zh-cn/_index.html | 32 ++++++++++++++++---------------- 1 file changed, 16 insertions(+), 16 deletions(-) diff --git a/content/zh-cn/_index.html b/content/zh-cn/_index.html index 71688a2e67a3d..3594f0900de68 100644 --- a/content/zh-cn/_index.html +++ b/content/zh-cn/_index.html @@ -60,21 +60,21 @@ {{< blocks/section id="video" background-image="kub_video_banner_homepage" >}}
- -

将 150+ 微服务迁移到 Kubernetes 上的挑战

- -

Sarah Wells, 运营和可靠性技术总监, 金融时报

- -
-
-
- 参加11月13日到15日的上海 KubeCon
-
-
-
-
-
- 参加12月11日到13日的西雅图 KubeCon
+
+

将 150+ 微服务迁移到 Kubernetes 上的挑战

+ +

Sarah Wells, 运营和可靠性技术总监, 金融时报

+ +
+
+
+ 参加 2022 年 10 月 24-28 日的北美 KubeCon
+
+
+
+
+
+ 参加 2023 年 4 月 17-21 日的欧洲 KubeCon
@@ -84,4 +84,4 @@

将 150+ 微服务迁移到 Kubernetes 上的挑战

{{< blocks/kubernetes-features >}} -{{< blocks/case-studies >}} \ No newline at end of file +{{< blocks/case-studies >}} From 8385803d9d9bc769bdd3ba42ecfab646ccc5e216 Mon Sep 17 00:00:00 2001 From: Kinzhi Date: Sat, 23 Jul 2022 14:22:24 +0800 Subject: [PATCH 150/292] [zh-cn]Update content/zh-cn/docs/concepts/services-networking/dual-stack.md --- .../zh-cn/docs/concepts/services-networking/dual-stack.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/content/zh-cn/docs/concepts/services-networking/dual-stack.md b/content/zh-cn/docs/concepts/services-networking/dual-stack.md index 9476214ef3c55..a95af75e13a0e 100644 --- a/content/zh-cn/docs/concepts/services-networking/dual-stack.md +++ b/content/zh-cn/docs/concepts/services-networking/dual-stack.md @@ -335,7 +335,7 @@ dual-stack.) kind: Service metadata: labels: - app: MyApp + app.kubernetes.io/name: MyApp name: my-service spec: clusterIP: 10.0.197.123 @@ -349,7 +349,7 @@ dual-stack.) protocol: TCP targetPort: 80 selector: - app: MyApp + app.kubernetes.io/name: MyApp type: ClusterIP status: loadBalancer: {} @@ -385,7 +385,7 @@ dual-stack.) kind: Service metadata: labels: - app: MyApp + app.kubernetes.io/name: MyApp name: my-service spec: clusterIP: None @@ -399,7 +399,7 @@ dual-stack.) protocol: TCP targetPort: 80 selector: - app: MyApp + app.kubernetes.io/name: MyApp ``` 此快速入门有助于使用 [Kubespray](https://github.com/kubernetes-sigs/kubespray) -安装在 GCE、Azure、OpenStack、AWS、vSphere、Packet(裸机)、Oracle Cloud +安装在 GCE、Azure、OpenStack、AWS、vSphere、Equinix Metal(曾用名 Packet)、Oracle Cloud Infrastructure(实验性)或 Baremetal 上托管的 Kubernetes 集群。 Kubespray 是一个由 [Ansible](https://docs.ansible.com/) playbooks、 -[清单(inventory)](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md)、 +[清单(inventory)](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md#inventory)、 制备工具和通用 OS/Kubernetes 集群配置管理任务的领域知识组成的。 + Kubespray 提供: * 高可用性集群 -* 可组合属性 +* 可组合属性(例如可选择网络插件) * 支持大多数流行的 Linux 发行版 + * Kinvolk 的 Flatcar Container Linux + * Debian Bullseye、Buster、Jessie、Stretch * Ubuntu 16.04、18.04、20.04, 22.04 - * CentOS / RHEL / Oracle Linux 7、8 - * Debian Buster、Jessie、Stretch、Wheezy + * CentOS/RHEL 7、8 * Fedora 34、35 * Fedora CoreOS - * openSUSE Leap 15 - * Kinvolk 的 Flatcar Container Linux + * openSUSE Leap 15.x/Tumbleweed + * Oracle Linux 7、8 + * Alma Linux 8 + * Rocky Linux 8 + * Amazon Linux 2 * 持续集成测试 -* 在将运行 Ansible 命令的计算机上安装 Ansible v2.11 和 python-netaddr -* **运行 Ansible Playbook 需要 Jinja 2.11(或更高版本)** -* 目标服务器必须有权访问 Internet 才能拉取 Docker 镜像。否则, +* **Kubernetes** 的最低版本要求为 V1.22 +* **在将运行 Ansible 命令的计算机上安装 Ansible v2.11(或更高版本)、Jinja 2.11(或更高版本)和 python-netaddr** +* 目标服务器必须**能够访问 Internet** 才能拉取 Docker 镜像。否则, 需要其他配置([请参见离线环境](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md)) -* 目标服务器配置为允许 IPv4 转发 -* **你的 SSH 密钥必须复制**到部署集群的所有服务器中 +* 目标服务器配置为允许 **IPv4 转发** +* 如果针对 Pod 和 Service 使用 IPv6,则目标服务器配置为允许 **IPv6 转发** * **防火墙不是由 kubespray 管理的**。你需要根据需求设置适当的规则策略。为了避免部署过程中出现问题,可以禁用防火墙。 -* 如果从非 root 用户帐户运行 kubespray,则应在目标服务器中配置正确的特权升级方法 -并指定 `ansible_become` 标志或命令参数 `--become` 或 `-b` +* 如果从非 root 用户帐户运行 kubespray,则应在目标服务器中配置正确的特权升级方法并指定 + `ansible_become` 标志或命令参数 `--become` 或 `-b` Kubespray 提供以下实用程序来帮助你设置环境: * 为以下云驱动提供的 [Terraform](https://www.terraform.io/) 脚本: -* [AWS](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/aws) -* [OpenStack](http://sitebeskuethree/contrigetbernform/contribeskubernform/contribeskupernform/https/sitebesku/master/) -* 
[Packet](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/packet) + * [AWS](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/aws) + * [OpenStack](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/openstack) + * [Equinix Metal](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/metal) 可以修改[变量文件](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html) 以进行 Kubespray 定制。 -如果你刚刚开始使用 Kubespray,请考虑使用 Kubespray 默认设置来部署你的集群 -并探索 Kubernetes 。 +如果你刚刚开始使用 Kubespray,请考虑使用 Kubespray 默认设置来部署你的集群并探索 Kubernetes。 ## 集群操作 -Kubespray 提供了其他 Playbooks 来管理集群: _scale_ 和 _upgrade_。 +Kubespray 提供了其他 Playbooks 来管理集群: **scale** 和 **upgrade**。 ## 反馈 * Slack 频道:[#kubespray](https://kubernetes.slack.com/messages/kubespray/) - (你可以在[此处](https://slack.k8s.io/)获得邀请) -* [GitHub 问题](https://github.com/kubernetes-sigs/kubespray/issues) + (你可以在[此处](https://slack.k8s.io/)获得邀请)。 +* [GitHub 问题](https://github.com/kubernetes-sigs/kubespray/issues)。 ## {{% heading "whatsnext" %}} -查看有关 Kubespray 的 -[路线图](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/roadmap.md) -的计划工作。 +* 查看有关 Kubespray 的 + [路线图](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/roadmap.md)的计划工作。 +* 查阅有关 [Kubespray](https://github.com/kubernetes-sigs/kubespray) 的更多信息。 From a2166e41b581e26bd2b03c31cec2864c0ef47edc Mon Sep 17 00:00:00 2001 From: Kinzhi Date: Sat, 23 Jul 2022 14:49:22 +0800 Subject: [PATCH 152/292] [zh-cn]Update content/zh-cn/docs/concepts/workloads/controllers/job.md --- content/zh-cn/docs/concepts/workloads/controllers/job.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/zh-cn/docs/concepts/workloads/controllers/job.md b/content/zh-cn/docs/concepts/workloads/controllers/job.md index 430cc4627ea9d..f722c19cf9a1c 100644 --- a/content/zh-cn/docs/concepts/workloads/controllers/job.md +++ b/content/zh-cn/docs/concepts/workloads/controllers/job.md @@ -118,7 +118,7 @@ Pod Template: job-name=pi Containers: pi: - Image: perl + Image: perl:5.34.0 Port: Host Port: Command: @@ -561,7 +561,7 @@ spec: spec: containers: - name: pi - image: perl + image: perl:5.34.0 command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] restartPolicy: Never ``` @@ -639,7 +639,7 @@ spec: spec: containers: - name: pi - image: perl + image: perl:5.34.0 command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] restartPolicy: Never ``` From af07764779e146c1b69fbfada4b4868171e76962 Mon Sep 17 00:00:00 2001 From: "yanrong.shi" Date: Sat, 23 Jul 2022 14:57:32 +0800 Subject: [PATCH 153/292] Update user-guide.md --- content/en/docs/concepts/windows/user-guide.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/en/docs/concepts/windows/user-guide.md b/content/en/docs/concepts/windows/user-guide.md index c9e5775da8582..da5b8a6fec7fd 100644 --- a/content/en/docs/concepts/windows/user-guide.md +++ b/content/en/docs/concepts/windows/user-guide.md @@ -105,12 +105,12 @@ port 80 of the container directly to the Service. 
* Node-to-pod communication across the network, `curl` port 80 of your pod IPs from the Linux control plane node to check for a web server response * Pod-to-pod communication, ping between pods (and across hosts, if you have more than one Windows node) - using docker exec or kubectl exec + using `docker exec` or `kubectl exec` * Service-to-pod communication, `curl` the virtual service IP (seen under `kubectl get services`) from the Linux control plane node and from individual pods * Service discovery, `curl` the service name with the Kubernetes [default DNS suffix](/docs/concepts/services-networking/dns-pod-service/#services) * Inbound connectivity, `curl` the NodePort from the Linux control plane node or machines outside of the cluster - * Outbound connectivity, `curl` external IPs from inside the pod using kubectl exec + * Outbound connectivity, `curl` external IPs from inside the pod using `kubectl exec` {{< note >}} Windows container hosts are not able to access the IP of services scheduled on them due to current platform limitations of the Windows networking stack. From 7eaecb8cd54d0a6c450d2c2bbe53a703539baf82 Mon Sep 17 00:00:00 2001 From: "yanrong.shi" Date: Sat, 16 Jul 2022 00:54:36 +0800 Subject: [PATCH 154/292] Update automated-tasks-with-cron-jobs.md --- .../docs/tasks/job/automated-tasks-with-cron-jobs.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/content/zh-cn/docs/tasks/job/automated-tasks-with-cron-jobs.md b/content/zh-cn/docs/tasks/job/automated-tasks-with-cron-jobs.md index 5e67e5bb39d46..c8bc0e3c04afb 100644 --- a/content/zh-cn/docs/tasks/job/automated-tasks-with-cron-jobs.md +++ b/content/zh-cn/docs/tasks/job/automated-tasks-with-cron-jobs.md @@ -199,10 +199,9 @@ kubectl delete cronjob hello -删除 CronJob 会清除它创建的所有任务和 Pod,并阻止它创建额外的任务。你可以查阅 -[垃圾收集](/zh-cn/docs/concepts/workloads/controllers/garbage-collection/)。 +删除 CronJob 会清除它创建的所有任务和 Pod,并阻止它创建额外的任务。你可以查阅[垃圾收集](/zh-cn/docs/concepts/architecture/garbage-collection/)。 {{< note >}} -调度中的问号 (`?`) 和星号 `*` 含义相同,表示给定字段的任何可用值。 +调度中的问号 (`?`) 和星号 `*` 含义相同,它们用来表示给定字段的任何可用值。 {{< /note >}} * [Pod](/zh-cn/docs/concepts/workloads/pods/) @@ -218,7 +218,7 @@ Kubernetes 关键组件在 Windows 上的工作方式与在 Linux 上相同。 * CronJob * ReplicationController * {{< glossary_tooltip text="Services" term_id="service" >}} - See [Load balancing and Services](#load-balancing-and-services) for more details. + See [Load balancing and Services](/docs/concepts/services-networking/windows-networking/#load-balancing-and-services) for more details. 
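The connectivity checks listed earlier in this user-guide patch can be driven with commands along these lines; every Pod name, IP address, and port below is invented for illustration:

```shell
# Node-to-pod and pod-to-pod checks (run from the Linux control plane node).
kubectl get pods -o wide                                            # discover Pod IPs
curl http://10.244.1.5:80                                           # node-to-pod
kubectl exec win-webserver-12345 -- curl -s http://10.244.1.6:80    # pod-to-pod

# Service-to-pod, service discovery, inbound and outbound checks.
kubectl get services                                                # discover cluster IP and NodePort
curl http://10.96.0.20:80                                           # virtual (cluster) service IP
curl http://win-webserver.default.svc.cluster.local:80              # default DNS suffix
curl http://192.168.1.10:31234                                      # NodePort from outside the cluster
kubectl exec win-webserver-12345 -- curl -s http://example.com      # outbound from inside the Pod
```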
--> * [工作负载资源](/zh-cn/docs/concepts/workloads/controllers/)包括: @@ -232,7 +232,7 @@ Kubernetes 关键组件在 Windows 上的工作方式与在 Linux 上相同。 * {{< glossary_tooltip text="Services" term_id="service" >}} - 有关更多详细信息,请参考[负载均衡和 Service](#load-balancing-and-services)。 + 有关更多详细信息,请参考[负载均衡和 Service](/zh-cn/docs/concepts/services-networking/windows-networking/#load-balancing-and-services)。 ### 更新清单 {#update-manifests} -升级到新版本 Kubernetes 就可以提供新的 API。 +升级到新版本 Kubernetes 就可以获取到新的 API。 你可以使用 `kubectl convert` 命令在不同 API 版本之间转换清单。 例如: From 5e77d3298bb8e108b9305163c86566a906ebb35f Mon Sep 17 00:00:00 2001 From: Kinzhi Date: Sat, 23 Jul 2022 14:15:56 +0800 Subject: [PATCH 157/292] [zh-cn]Update content/zh-cn/docs/concepts/configuration/overview.md [zh-cn]Update content/zh-cn/docs/concepts/configuration/overview.md --- content/zh-cn/docs/concepts/configuration/overview.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/zh-cn/docs/concepts/configuration/overview.md b/content/zh-cn/docs/concepts/configuration/overview.md index b0d9c2a033040..e3be0085dd853 100644 --- a/content/zh-cn/docs/concepts/configuration/overview.md +++ b/content/zh-cn/docs/concepts/configuration/overview.md @@ -163,12 +163,12 @@ services) (which have a `ClusterIP` of `None`) for service discovery when you do ## 使用标签 {#using-labels} - 定义并使用[标签](/zh-cn/docs/concepts/overview/working-with-objects/labels/)来识别应用程序 - 或 Deployment 的 __语义属性__,例如`{ app: myapp, tier: frontend, phase: test, deployment: v3 }`。 + 或 Deployment 的 **语义属性**,例如`{ app.kubernetes.io/name: MyApp, tier: frontend, phase: test, deployment: v3 }`。 你可以使用这些标签为其他资源选择合适的 Pod; - 例如,一个选择所有 `tier: frontend` Pod 的服务,或者 `app: myapp` 的所有 `phase: test` 组件。 + 例如,一个选择所有 `tier: frontend` Pod 的服务,或者 `app.kubernetes.io/name: MyApp` 的所有 `phase: test` 组件。 有关此方法的示例,请参阅 [guestbook](https://github.com/kubernetes/examples/tree/master/guestbook/) 。 1. 检查部署是否成功。请验证: * 使用 `kubectl get pods` 从 Linux 控制平面节点能够列出两个 Pod * 跨网络的节点到 Pod 通信,从 Linux 控制平面节点上执行 `curl` 访问 Pod IP 的 80 端口以检查 Web 服务器响应 - * Pod 间通信,使用 docker exec 或 kubectl exec + * Pod 间通信,使用 `docker exec` 或 `kubectl exec` 在 Pod 之间(以及跨主机,如果你有多个 Windows 节点)互 ping * Service 到 Pod 的通信,在 Linux 控制平面节点以及独立的 Pod 中执行 `curl` 访问虚拟的服务 IP(在 `kubectl get services` 下查看) * 服务发现,使用 Kubernetes [默认 DNS 后缀](/zh-cn/docs/concepts/services-networking/dns-pod-service/#services)的服务名称, 用 `curl` 访问服务名称 * 入站连接,在 Linux 控制平面节点或集群外的机器上执行 `curl` 来访问 NodePort 服务 - * 出站连接,使用 kubectl exec,从 Pod 内部执行 `curl` 访问外部 IP + * 出站连接,使用 `kubectl exec`,从 Pod 内部执行 `curl` 访问外部 IP {{< note >}} -“发布管理员(Release Managers)” 是一个总称,包括一批负责维护发布分支、标记发行版本以及构建/打包 -Kubernetes 的 Kubernetes 贡献者。 +“发布管理员(Release Managers)” 是一个总称,通过使用 SIG Release 提供的工具, +负责维护发布分支、标记发行版本以及创建发行版本的贡献者。 每个角色的职责如下所述。 From 6905bc24436478f46ceef6b78f409b5657b51927 Mon Sep 17 00:00:00 2001 From: windsonsea Date: Sat, 23 Jul 2022 20:46:23 +0800 Subject: [PATCH 162/292] [zh-cn] updated /concepts/services-networking/service.md --- .../concepts/services-networking/service.md | 191 ++++++++---------- 1 file changed, 87 insertions(+), 104 deletions(-) diff --git a/content/zh-cn/docs/concepts/services-networking/service.md b/content/zh-cn/docs/concepts/services-networking/service.md index bedb122737dd7..3964b190b57f2 100644 --- a/content/zh-cn/docs/concepts/services-networking/service.md +++ b/content/zh-cn/docs/concepts/services-networking/service.md @@ -30,7 +30,7 @@ Kubernetes gives Pods their own IP addresses and a single DNS name for a set of and can load-balance across them. 
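For the `kubectl convert` step mentioned in the Windows upgrade notes earlier in this series (the hunk there ends right after “例如:”), an invocation might look like this; the file name and target API version are placeholders:

```shell
# Requires the kubectl-convert plugin; names are illustrative.
kubectl convert -f pod.yaml --output-version v1
```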
--> 使用 Kubernetes,你无需修改应用程序即可使用不熟悉的服务发现机制。 -Kubernetes 为 Pods 提供自己的 IP 地址,并为一组 Pod 提供相同的 DNS 名, +Kubernetes 为 Pod 提供自己的 IP 地址,并为一组 Pod 提供相同的 DNS 名, 并且可以在它们之间进行负载均衡。 @@ -67,7 +67,7 @@ Pod 是非永久性资源。 这导致了一个问题: 如果一组 Pod(称为“后端”)为集群内的其他 Pod(称为“前端”)提供功能, 那么前端如何找出并跟踪要连接的 IP 地址,以便前端可以使用提供工作负载的后端部分? -进入 _Services_。 +进入 **Services**。 Kubernetes Service 定义了这样一种抽象:逻辑上的一组 Pod,一种可以访问它们的策略 —— 通常称为微服务。 -Service 所针对的 Pods 集合通常是通过{{< glossary_tooltip text="选择算符" term_id="selector" >}}来确定的。 +Service 所针对的 Pod 集合通常是通过{{< glossary_tooltip text="选择算符" term_id="selector" >}}来确定的。 要了解定义服务端点的其他方法,请参阅[不带选择算符的服务](#services-without-selectors)。 -举个例子,考虑一个图片处理后端,它运行了 3 个副本。这些副本是可互换的 —— +举个例子,考虑一个图片处理后端,它运行了 3 个副本。这些副本是可互换的 —— 前端不需要关心它们调用了哪个后端副本。 然而组成这一组后端程序的 Pod 实际上可能会发生变化, 前端客户端不应该也没必要知道,而且也不需要跟踪这一组后端的状态。 @@ -138,7 +138,7 @@ and contains a label `app=MyApp`: Service 在 Kubernetes 中是一个 REST 对象,和 Pod 类似。 像所有的 REST 对象一样,Service 定义可以基于 `POST` 方式,请求 API server 创建新的实例。 Service 对象的名称必须是合法的 -[RFC 1035 标签名称](/docs/concepts/overview/working-with-objects/names#rfc-1035-label-names).。 +[RFC 1035 标签名称](/zh-cn/docs/concepts/overview/working-with-objects/names#rfc-1035-label-names)。 例如,假定有一组 Pod,它们对外暴露了 9376 端口,同时还被打上 `app=MyApp` 标签: @@ -155,7 +155,7 @@ spec: port: 80 targetPort: 9376 ``` - + - 服务的默认协议是 TCP;你还可以使用任何其他[受支持的协议](#protocol-support)。 由于许多服务需要公开多个端口,因此 Kubernetes 在服务对象上支持多个端口定义。 @@ -271,14 +269,14 @@ For example: --> ### 没有选择算符的 Service {#services-without-selectors} -由于选择器的存在,服务最常见的用法是为 Kubernetes Pod 的访问提供抽象, -但是当与相应的 Endpoints 对象一起使用且没有选择器时, +由于选择算符的存在,服务最常见的用法是为 Kubernetes Pod 的访问提供抽象, +但是当与相应的 Endpoints 对象一起使用且没有选择算符时, 服务也可以为其他类型的后端提供抽象,包括在集群外运行的后端。 例如: * 希望在生产环境中使用外部的数据库集群,但测试环境使用自己的数据库。 * 希望服务指向另一个 {{< glossary_tooltip term_id="namespace" >}} 中或其它集群中的服务。 - * 你正在将工作负载迁移到 Kubernetes。 在评估该方法时,你仅在 Kubernetes 中运行一部分后端。 + * 你正在将工作负载迁移到 Kubernetes。在评估该方法时,你仅在 Kubernetes 中运行一部分后端。 在任何这些场景中,都能够定义没有选择算符的 Service。 实例: @@ -300,8 +298,8 @@ Because this Service has no selector, the corresponding Endpoints object is not created automatically. You can manually map the Service to the network address and port where it's running, by adding an Endpoints object manually: --> -由于此服务没有选择算符,因此不会自动创建相应的 Endpoint 对象。 -你可以通过手动添加 Endpoint 对象,将服务手动映射到运行该服务的网络地址和端口: +由于此服务没有选择算符,因此不会自动创建相应的 Endpoints 对象。 +你可以通过手动添加 Endpoints 对象,将服务手动映射到运行该服务的网络地址和端口: ```yaml apiVersion: v1 @@ -425,7 +423,7 @@ domain prefixed names such as `mycompany.com/my-custom-protocol`. 该字段遵循标准的 Kubernetes 标签语法。 其值可以是 [IANA 标准服务名称](https://www.iana.org/assignments/service-names) -或以域名为前缀的名称,如 `mycompany.com/my-custom-protocol`。 +或以域名为前缀的名称,如 `mycompany.com/my-custom-protocol`。 在 `ipvs` 模式下,kube-proxy 监视 Kubernetes 服务和端点,调用 `netlink` 接口相应地创建 IPVS 规则, -并定期将 IPVS 规则与 Kubernetes 服务和端点同步。 该控制循环可确保IPVS -状态与所需状态匹配。访问服务时,IPVS 将流量定向到后端Pod之一。 +并定期将 IPVS 规则与 Kubernetes 服务和端点同步。该控制循环可确保 IPVS +状态与所需状态匹配。访问服务时,IPVS 将流量定向到后端 Pod 之一。 -IPVS代理模式基于类似于 iptables 模式的 netfilter 挂钩函数, +IPVS 代理模式基于类似于 iptables 模式的 netfilter 挂钩函数, 但是使用哈希表作为基础数据结构,并且在内核空间中工作。 这意味着,与 iptables 模式下的 kube-proxy 相比,IPVS 模式下的 kube-proxy 重定向通信的延迟要短,并且在同步代理规则时具有更好的性能。 与其他代理模式相比,IPVS 模式还支持更高的网络流量吞吐量。 -IPVS 提供了更多选项来平衡后端 Pod 的流量。 这些是: +IPVS 提供了更多选项来平衡后端 Pod 的流量。这些是: * `rr`:轮替(Round-Robin) * `lc`:最少链接(Least Connection),即打开链接数量最少者优先 @@ -628,17 +625,16 @@ You can also set the maximum session sticky time by setting (the default value is 10800, which works out to be 3 hours). 
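A sketch of the session-affinity settings just described, folded into a Service shaped like this page's example; the name and the one-hour timeout are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service                 # illustrative name
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  # Route repeat connections from the same client IP to the same Pod.
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 3600         # illustrative; the default is 10800 (3 hours)
```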
--> -![IPVS代理的 Services 概述图](/images/docs/services-ipvs-overview.svg) +![IPVS 代理的 Services 概述图](/images/docs/services-ipvs-overview.svg) 在这些代理模型中,绑定到服务 IP 的流量: 在客户端不了解 Kubernetes 或服务或 Pod 的任何信息的情况下,将 Port 代理到适当的后端。 如果要确保每次都将来自特定客户端的连接传递到同一 Pod, -则可以通过将 `service.spec.sessionAffinity` 设置为 "ClientIP" -(默认值是 "None"),来基于客户端的 IP 地址选择会话关联。 -你还可以通过适当设置 `service.spec.sessionAffinityConfig.clientIP.timeoutSeconds` -来设置最大会话停留时间。 -(默认值为 10800 秒,即 3 小时)。 +则可以通过将 `service.spec.sessionAffinity` 设置为 "ClientIP" +(默认值是 "None"),来基于客户端的 IP 地址选择会话亲和性。 +你还可以通过适当设置 `service.spec.sessionAffinityConfig.clientIP.timeoutSeconds` +来设置最大会话停留时间。(默认值为 10800 秒,即 3 小时)。 {{< note >}} -与一般的Kubernetes名称一样,端口名称只能包含小写字母数字字符 和 `-`。 +与一般的Kubernetes名称一样,端口名称只能包含小写字母数字字符 和 `-`。 端口名称还必须以字母数字字符开头和结尾。 例如,名称 `123-abc` 和 `web` 有效,但是 `123_abc` 和 `-web` 无效。 @@ -763,8 +758,8 @@ as if the external traffic policy were set to `Cluster`. --> 如果本地有端点,而且所有端点处于终止中的状态,那么 kube-proxy 会忽略任何设为 `Local` 的外部流量策略。 -在所有本地端点处于终止中的状态的同时,kube-proxy 将请求指定服务的流量转发到位于其它节点的 -状态健康的端点,如同外部流量策略设为 `Cluster`。 +在所有本地端点处于终止中的状态的同时,kube-proxy 将请求指定服务的流量转发到位于其它节点的状态健康的端点, +如同外部流量策略设为 `Cluster`。 {{< note >}} 当你具有需要访问服务的 Pod 时,并且你正在使用环境变量方法将端口和集群 IP 发布到客户端 -Pod 时,必须在客户端 Pod 出现 *之前* 创建服务。 +Pod 时,必须在客户端 Pod 出现 **之前** 创建服务。 否则,这些客户端 Pod 将不会设定其环境变量。 如果仅使用 DNS 查找服务的集群 IP,则无需担心此设定问题。 @@ -900,8 +895,7 @@ You can find more information about `ExternalName` resolution in --> Kubernetes 还支持命名端口的 DNS SRV(服务)记录。 如果 `my-service.my-ns` 服务具有名为 `http` 的端口,且协议设置为 TCP, -则可以对 `_http._tcp.my-service.my-ns` 执行 DNS SRV 查询查询以发现该端口号, -`"http"` 以及 IP 地址。 +则可以对 `_http._tcp.my-service.my-ns` 执行 DNS SRV 查询以发现该端口号、`"http"` 以及 IP 地址。 Kubernetes DNS 服务器是唯一的一种能够访问 `ExternalName` 类型的 Service 的方式。 更多关于 `ExternalName` 信息可以查看 @@ -928,10 +922,9 @@ selectors defined: 遇到这种情况,可以通过指定 Cluster IP(`spec.clusterIP`)的值为 `"None"` 来创建 `Headless` Service。 -你可以使用无头 Service 与其他服务发现机制进行接口,而不必与 Kubernetes -的实现捆绑在一起。 +你可以使用一个无头 Service 与其他服务发现机制进行接口,而不必与 Kubernetes 的实现捆绑在一起。 -对这无头 Service 并不会分配 Cluster IP,kube-proxy 不会处理它们, +对于无头 `Services` 并不会分配 Cluster IP,kube-proxy 不会处理它们, 而且平台也不会为它们进行负载均衡和路由。 DNS 如何实现自动配置,依赖于 Service 是否定义了选择算符。 @@ -944,7 +937,7 @@ A records (IP addresses) that point directly to the `Pods` backing the `Service` --> ### 带选择算符的服务 {#with-selectors} -对定义了选择算符的无头服务,Endpoint 控制器在 API 中创建了 Endpoints 记录, +对定义了选择算符的无头服务,Endpoints 控制器在 API 中创建了 `Endpoints` 记录, 并且修改 DNS 配置返回 A 记录(IP 地址),通过这个地址直接到达 `Service` 的后端 Pod 上。 ### 无选择算符的服务 {#without-selectors} -对没有定义选择算符的无头服务,Endpoint 控制器不会创建 `Endpoints` 记录。 +对没有定义选择算符的无头服务,Endpoints 控制器不会创建 `Endpoints` 记录。 然而 DNS 系统会查找和配置,无论是: * 对于 [`ExternalName`](#external-name) 类型的服务,查找其 CNAME 记录 @@ -979,8 +972,7 @@ The default is `ClusterIP`. 
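As a rough sketch of choosing a non-default Service type, the following manifest asks for a NodePort Service; the name and port numbers are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service      # hypothetical name
spec:
  type: NodePort                 # ClusterIP would be used if this field were omitted
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80                   # port exposed on the Service's cluster IP
      targetPort: 9376           # port the backend Pods listen on
      nodePort: 30007            # optional; must fall inside the cluster's node port range
```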
--> ## 发布服务(服务类型) {#publishing-services-service-types} -对一些应用的某些部分(如前端),可能希望将其暴露给 Kubernetes 集群外部 -的 IP 地址。 +对一些应用的某些部分(如前端),可能希望将其暴露给 Kubernetes 集群外部的 IP 地址。 Kubernetes `ServiceTypes` 允许指定你所需要的 Service 类型,默认是 `ClusterIP`。 @@ -1025,7 +1017,7 @@ You can also use [Ingress](/docs/concepts/services-networking/ingress/) to expos --> 你也可以使用 [Ingress](/zh-cn/docs/concepts/services-networking/ingress/) 来暴露自己的服务。 Ingress 不是一种服务类型,但它充当集群的入口点。 -它可以将路由规则整合到一个资源中,因为它可以在同一IP地址下公开多个服务。 +它可以将路由规则整合到一个资源中,因为它可以在同一 IP 地址下公开多个服务。 {{< note >}} -在 **Azure** 上,如果要使用用户指定的公共类型 `loadBalancerIP`,则 -首先需要创建静态类型的公共 IP 地址资源。 +在 **Azure** 上,如果要使用用户指定的公共类型 `loadBalancerIP`, +则首先需要创建静态类型的公共 IP 地址资源。 此公共 IP 地址资源应与集群中其他自动创建的资源位于同一资源组中。 例如,`MC_myResourceGroup_myAKSCluster_eastus`。 -将分配的 IP 地址设置为 loadBalancerIP。确保你已更新云提供程序配置文件中的 -securityGroupName。 +将分配的 IP 地址设置为 loadBalancerIP。确保你已更新云提供程序配置文件中的 securityGroupName。 有关对 `CreatingLoadBalancerFailed` 权限问题进行故障排除的信息, -请参阅 [与 Azure Kubernetes 服务(AKS)负载平衡器一起使用静态 IP 地址](https://docs.microsoft.com/en-us/azure/aks/static-ip) +请参阅[与 Azure Kubernetes 服务(AKS)负载平衡器一起使用静态 IP 地址](https://docs.microsoft.com/zh-cn/azure/aks/static-ip) 或[在 AKS 集群上使用高级联网时出现 CreatingLoadBalancerFailed](https://github.com/Azure/AKS/issues/357)。 {{< /note >}} #### 混合协议类型的负载均衡器 {{< feature-state for_k8s_version="v1.20" state="alpha" >}} -默认情况下,对于 LoadBalancer 类型的服务,当定义了多个端口时,所有 -端口必须具有相同的协议,并且该协议必须是受云提供商支持的协议。 +默认情况下,对于 LoadBalancer 类型的服务,当定义了多个端口时, +所有端口必须具有相同的协议,并且该协议必须是受云提供商支持的协议。 当服务中定义了多个端口时,特性门控 `MixedProtocolLBService`(在 kube-apiserver 1.24 版本默认为启用)允许 LoadBalancer 类型的服务使用不同的协议。 @@ -1243,7 +1232,7 @@ is `true` and type LoadBalancer Services will continue to allocate node ports. I is set to `false` on an existing Service with allocated node ports, those node ports will **not** be de-allocated automatically. You must explicitly remove the `nodePorts` entry in every Service port to de-allocate those node ports. --> -你可以通过设置 `spec.allocateLoadBalancerNodePorts` 为 `false` +你可以通过设置 `spec.allocateLoadBalancerNodePorts` 为 `false` 对类型为 LoadBalancer 的服务禁用节点端口分配。 这仅适用于直接将流量路由到 Pod 而不是使用节点端口的负载均衡器实现。 默认情况下,`spec.allocateLoadBalancerNodePorts` 为 `true`, @@ -1274,11 +1263,8 @@ Once set, it cannot be changed. `spec.loadBalancerClass` 允许你不使用云提供商的默认负载均衡器实现,转而使用指定的负载均衡器实现。 默认情况下,`.spec.loadBalancerClass` 的取值是 `nil`,如果集群使用 `--cloud-provider` 配置了云提供商, `LoadBalancer` 类型服务会使用云提供商的默认负载均衡器实现。 -如果设置了 `.spec.loadBalancerClass`,则假定存在某个与所指定的类相匹配的 -负载均衡器实现在监视服务变化。 -所有默认的负载均衡器实现(例如,由云提供商所提供的)都会忽略设置了此字段 -的服务。`.spec.loadBalancerClass` 只能设置到类型为 `LoadBalancer` 的 Service -之上,而且一旦设置之后不可变更。 +如果设置了 `.spec.loadBalancerClass`,则假定存在某个与所指定的类相匹配的负载均衡器实现在监视服务变化。 +所有默认的负载均衡器实现(例如,由云提供商所提供的)都会忽略设置了此字段的服务。`.spec.loadBalancerClass` 只能设置到类型为 `LoadBalancer` 的 Service 之上,而且一旦设置之后不可变更。 `.spec.loadBalancerClass` 的值必须是一个标签风格的标识符, -可以有选择地带有类似 "`internal-vip`" 或 "`example.com/internal-vip`" 这类 -前缀。没有前缀的名字是保留给最终用户的。 +可以有选择地带有类似 "`internal-vip`" 或 "`example.com/internal-vip`" 这类前缀。 +没有前缀的名字是保留给最终用户的。 -选择一个标签 +选择一个标签。 {{% /tab %}} {{% tab name="GCP" %}} + ```yaml [...] metadata: @@ -1470,8 +1457,8 @@ modifying the headers. 
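As a hedged sketch of combining those annotations on a single Service, the manifest below terminates TLS at the ELB on port 443 only; the certificate ARN is a placeholder and the exact values depend on your AWS account:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-tls-service           # hypothetical name
  annotations:
    # Placeholder ARN; replace with a certificate uploaded to IAM or created in ACM.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:123456789012:certificate/example"
    # Protocol spoken by the backend Pods behind the ELB.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    # Terminate TLS only on port 443; port 80 stays plain HTTP.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - name: https
      port: 443
      targetPort: 8080
    - name: http
      port: 80
      targetPort: 8080
```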
In a mixed-use environment where some ports are secured and others are left unencrypted, you can use the following annotations: --> -第二个注解指定 Pod 使用哪种协议。 对于 HTTPS 和 SSL,ELB 希望 Pod 使用证书 -通过加密连接对自己进行身份验证。 +第二个注解指定 Pod 使用哪种协议。对于 HTTPS 和 SSL,ELB 希望 Pod +使用证书通过加密连接对自己进行身份验证。 HTTP 和 HTTPS 选择第7层代理:ELB 终止与用户的连接,解析标头,并在转发请求时向 `X-Forwarded-For` 标头注入用户的 IP 地址(Pod 仅在连接的另一端看到 ELB 的 IP 地址)。 @@ -1499,7 +1486,7 @@ To see which policies are available for use, you can use the `aws` command line 而 `80` 端口将转发 HTTP 数据包。 从 Kubernetes v1.9 起可以使用 -[预定义的 AWS SSL 策略](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) +[预定义的 AWS SSL 策略](https://docs.aws.amazon.com/zh_cn/elasticloadbalancing/latest/classic/elb-security-policy-table.html) 为你的服务使用 HTTPS 或 SSL 侦听器。 要查看可以使用哪些策略,可以使用 `aws` 命令行工具: @@ -1572,10 +1559,10 @@ specifies the logical hierarchy you created for your Amazon S3 bucket. 注解 `service.beta.kubernetes.io/aws-load-balancer-access-log-enabled` 控制是否启用访问日志。 -注解 `service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval` +注解 `service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval` 控制发布访问日志的时间间隔(以分钟为单位)。你可以指定 5 分钟或 60 分钟的间隔。 -注解 `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name` +注解 `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name` 控制存储负载均衡器访问日志的 Amazon S3 存储桶的名称。 注解 `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix` @@ -1709,7 +1696,7 @@ on Elastic Load Balancing for a list of supported instance types. {{< note >}} NLB 仅适用于某些实例类。有关受支持的实例类型的列表, 请参见 -[AWS文档](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets) +[AWS 文档](https://docs.aws.amazon.com/zh_cn/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets) 中关于所支持的实例类型的 Elastic Load Balancing 说明。 {{< /note >}} @@ -1757,14 +1744,14 @@ groups are modified with the following IP rules: | Rule | Protocol | Port(s) | IpRange(s) | IpRange Description | |------|----------|---------|------------|---------------------| | Health Check | TCP | NodePort(s) (`.spec.healthCheckNodePort` for `.spec.externalTrafficPolicy = Local`) | Subnet CIDR | kubernetes.io/rule/nlb/health=\ | -| Client Traffic | TCP | NodePort(s) | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/client=\ | -| MTU Discovery | ICMP | 3,4 | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/mtu=\ | +| Client Traffic | TCP | NodePort(s) | `.spec.loadBalancerSourceRanges` (默认值为 `0.0.0.0/0`) | kubernetes.io/rule/nlb/client=\ | +| MTU Discovery | ICMP | 3,4 | `.spec.loadBalancerSourceRanges` (默认值为 `0.0.0.0/0`) | kubernetes.io/rule/nlb/mtu=\ | -为了限制哪些客户端IP可以访问网络负载平衡器,请指定 `loadBalancerSourceRanges`。 +为了限制哪些客户端 IP 可以访问网络负载平衡器,请指定 `loadBalancerSourceRanges`。 ```yaml spec: @@ -1788,7 +1775,7 @@ Further documentation on annotations for Elastic IPs and other common use-cases in the [AWS Load Balancer Controller documentation](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/). 
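To make the NLB-related settings above concrete, here is an illustrative Service that requests an AWS Network Load Balancer, preserves the client source IP, and restricts which clients may connect; the name and CIDR are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nlb-service           # hypothetical name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # keep the client source IP visible to the backend Pods
  loadBalancerSourceRanges:
    - "143.231.0.0/16"           # example CIDR; only these clients may reach the load balancer
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
```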
--> 有关弹性 IP 注解和更多其他常见用例, -请参阅[AWS负载均衡控制器文档](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/)。 +请参阅[AWS 负载均衡控制器文档](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/)。 {{< warning >}} -对于一些常见的协议,包括 HTTP 和 HTTPS, -你使用 ExternalName 可能会遇到问题。 -如果你使用 ExternalName,那么集群内客户端使用的主机名 -与 ExternalName 引用的名称不同。 +对于一些常见的协议,包括 HTTP 和 HTTPS,你使用 ExternalName 可能会遇到问题。 +如果你使用 ExternalName,那么集群内客户端使用的主机名与 ExternalName 引用的名称不同。 对于使用主机名的协议,此差异可能会导致错误或意外响应。 -HTTP 请求将具有源服务器无法识别的 `Host:` 标头;TLS 服 -务器将无法提供与客户端连接的主机名匹配的证书。 +HTTP 请求将具有源服务器无法识别的 `Host:` 标头; +TLS 服务器将无法提供与客户端连接的主机名匹配的证书。 {{< /warning >}} {{< note >}} -本部分感谢 [Alen Komljen](https://akomljen.com/)的 -[Kubernetes Tips - Part1](https://akomljen.com/kubernetes-tips-part-1/) 博客文章。 +有关这部分内容,我们要感谢 [Alen Komljen](https://akomljen.com/) 刊登的 +[Kubernetes Tips - Part1](https://akomljen.com/kubernetes-tips-part-1/) 这篇博文。 {{< /note >}} ### 外部 IP {#external-ips} -如果外部的 IP 路由到集群中一个或多个 Node 上,Kubernetes Service 会被暴露给这些 externalIPs。 +如果外部的 IP 路由到集群中一个或多个 Node 上,Kubernetes Service 会被暴露给这些 `externalIPs`。 通过外部 IP(作为目的 IP 地址)进入到集群,打到 Service 的端口上的流量, 将会被路由到 Service 的 Endpoint 上。 `externalIPs` 不会被 Kubernetes 管理,它属于集群管理员的职责范畴。 @@ -1970,7 +1955,7 @@ metadata: name: my-service spec: selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - name: http protocol: TCP @@ -2007,8 +1992,8 @@ but the current API requires it. 使用用户空间代理,隐藏了访问 Service 的数据包的源 IP 地址。 这使得一些类型的防火墙无法起作用。 -iptables 代理不会隐藏 Kubernetes 集群内部的 IP 地址,但却要求客户端请求 -必须通过一个负载均衡器或 Node 端口。 +iptables 代理不会隐藏 Kubernetes 集群内部的 IP 地址, +但却要求客户端请求必须通过一个负载均衡器或 Node 端口。 `Type` 字段支持嵌套功能 —— 每一层需要添加到上一层里面。 不会严格要求所有云提供商(例如,GCE 就没必要为了使一个 `LoadBalancer` @@ -2125,14 +2110,13 @@ each operate slightly differently. --> ### Service IP 地址 {#ips-and-vips} -不像 Pod 的 IP 地址,它实际路由到一个固定的目的地,Service 的 IP 实际上 -不能通过单个主机来进行应答。 -相反,我们使用 `iptables`(Linux 中的数据包处理逻辑)来定义一个 -虚拟 IP 地址(VIP),它可以根据需要透明地进行重定向。 +不像 Pod 的 IP 地址,它实际路由到一个固定的目的地,Service 的 IP 实际上不能通过单个主机来进行应答。 +相反,我们使用 `iptables`(Linux 中的数据包处理逻辑)来定义一个虚拟 IP 地址(VIP), +它可以根据需要透明地进行重定向。 当客户端连接到 VIP 时,它们的流量会自动地传输到一个合适的 Endpoint。 环境变量和 DNS,实际上会根据 Service 的 VIP 和端口来进行填充。 -kube-proxy支持三种代理模式: 用户空间,iptables和IPVS;它们各自的操作略有不同。 +kube-proxy 支持三种代理模式: 用户空间、iptables 和 IPVS;它们各自的操作略有不同。 #### Userspace {#userspace} @@ -2157,8 +2141,8 @@ of which Pods they are actually accessing. 作为一个例子,考虑前面提到的图片处理应用程序。 当创建后端 Service 时,Kubernetes master 会给它指派一个虚拟 IP 地址,比如 10.0.0.1。 假设 Service 的端口是 1234,该 Service 会被集群中所有的 `kube-proxy` 实例观察到。 -当代理看到一个新的 Service, 它会打开一个新的端口,建立一个从该 VIP 重定向到 -新端口的 iptables,并开始接收请求连接。 +当代理看到一个新的 Service,它会打开一个新的端口, +建立一个从该 VIP 重定向到新端口的 iptables,并开始接收请求连接。 当一个客户端连接到一个 VIP,iptables 规则开始起作用,它会重定向该数据包到 "服务代理" 的端口。 @@ -2209,11 +2193,10 @@ through a load-balancer, though in those cases the client IP does get altered. iptables operations slow down dramatically in large scale cluster e.g 10,000 Services. IPVS is designed for load balancing and based on in-kernel hash tables. So you can achieve performance consistency in large number of Services from IPVS-based kube-proxy. Meanwhile, IPVS-based kube-proxy has more sophisticated load balancing algorithms (least conns, locality, weighted, persistence). 
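As a small illustration of switching kube-proxy to IPVS mode and picking one of the schedulers listed earlier, a kube-proxy configuration fragment might look like the following (this assumes the IPVS kernel modules are available on the node):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"        # kube-proxy falls back to iptables if IPVS cannot be used on the node
ipvs:
  scheduler: "lc"   # least connection; round-robin (rr) is used when no scheduler is set
```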
--> -在大规模集群(例如 10000 个服务)中,iptables 操作会显着降低速度。 IPVS -专为负载平衡而设计,并基于内核内哈希表。 +在大规模集群(例如 10000 个服务)中,iptables 操作会显着降低速度。 +IPVS 专为负载平衡而设计,并基于内核内哈希表。 因此,你可以通过基于 IPVS 的 kube-proxy 在大量服务中实现性能一致性。 -同时,基于 IPVS 的 kube-proxy 具有更复杂的负载均衡算法(最小连接、局部性、 -加权、持久性)。 +同时,基于 IPVS 的 kube-proxy 具有更复杂的负载均衡算法(最小连接、局部性、加权、持久性)。 ## API 对象 @@ -2275,7 +2258,7 @@ NAT for multihomed SCTP associations requires special logic in the corresponding {{< /warning >}} --> {{< warning >}} -支持多宿主SCTP关联要求 CNI 插件能够支持为一个 Pod 分配多个接口和IP地址。 +支持多宿主SCTP关联要求 CNI 插件能够支持为一个 Pod 分配多个接口和 IP 地址。 用于多宿主 SCTP 关联的 NAT 在相应的内核模块中需要特殊的逻辑。 {{< /warning >}} @@ -2344,7 +2327,7 @@ incoming connection, similar to this example [PROXY 协议](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt) 的连接。 -负载平衡器将发送一系列初始字节,描述传入的连接,类似于此示例 +负载平衡器将发送一系列初始字节,描述传入的连接,类似于此示例: ``` PROXY TCP4 192.0.2.202 10.0.42.7 12345 7\r\n From 1099fb7849096dc52187b2854a4c679328c6de7e Mon Sep 17 00:00:00 2001 From: windsonsea Date: Sat, 23 Jul 2022 21:37:03 +0800 Subject: [PATCH 163/292] [zh-cn] updated /concepts/configuration/secret.md --- .../docs/concepts/configuration/secret.md | 39 +++++++++---------- 1 file changed, 19 insertions(+), 20 deletions(-) diff --git a/content/zh-cn/docs/concepts/configuration/secret.md b/content/zh-cn/docs/concepts/configuration/secret.md index b67534c23848c..59ac5e6313b5c 100644 --- a/content/zh-cn/docs/concepts/configuration/secret.md +++ b/content/zh-cn/docs/concepts/configuration/secret.md @@ -4,7 +4,7 @@ content_type: concept feature: title: Secret 和配置管理 description: > - 部署和更新 Secrets 和应用程序的配置而不必重新构建容器镜像,且 + 部署和更新 Secret 和应用程序的配置而不必重新构建容器镜像,且 不必将软件堆栈配置中的秘密信息暴露出来。 weight: 30 --- @@ -295,7 +295,7 @@ You can package many keys and values into one Secret, or use many Secrets, which --> 这一示例清单定义了一个 Secret,其 `data` 字段中包含两个主键:`username` 和 `password`。 清单中的字段值是 Base64 字符串,不过,当你在 Pod 中使用 Secret 时,kubelet 为 Pod -及其中的容器提供的是解码后的数据。 +及其中的容器提供的是**解码**后的数据。 你可以在一个 Secret 中打包多个主键和数值,也可以选择使用多个 Secret, 完全取决于哪种方式最方便。 @@ -437,7 +437,7 @@ invalidated when the Pod they are mounted into is deleted. Kubernetes v1.22 版本之前都会自动创建用来访问 Kubernetes API 的凭证。 这一老的机制是基于创建可被挂载到 Pod 中的令牌 Secret 来实现的。 在最近的版本中,包括 Kubernetes v{{< skew currentVersion >}} 中,API 凭据是直接通过 -[TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) +[TokenRequest](/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) API 来获得的,这一凭据会使用[投射卷](/zh-cn/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-token-volume) 挂载到 Pod 中。使用这种方式获得的令牌有确定的生命期,并且在挂载它们的 Pod 被删除时自动作废。 @@ -452,7 +452,7 @@ command to obtain a token from the `TokenRequest` API. --> 你仍然可以[手动创建](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#manually-create-a-service-account-api-token) 服务账号令牌。例如,当你需要一个永远都不过期的令牌时。 -不过,仍然建议使用 [TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) +不过,仍然建议使用 [TokenRequest](/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) 子资源来获得访问 API 服务器的令牌。 你可以使用 [`kubectl create token`](/docs/reference/generated/kubectl/kubectl-commands#-em-token-em-) 命令调用 `TokenRequest` API 获得令牌。 @@ -845,7 +845,7 @@ level. 
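For illustration, a Pod that pulls its image using a previously created registry credential Secret might be declared like this; the Secret name `regcred` and the image path are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod        # hypothetical name
spec:
  containers:
    - name: app
      image: registry.example.com/my-app:1.0   # placeholder image in a private registry
  imagePullSecrets:
    - name: regcred              # a kubernetes.io/dockerconfigjson Secret created beforehand
```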
### 容器镜像拉取 Secret {#using-imagepullsecrets} 如果你尝试从私有仓库拉取容器镜像,你需要一种方式让每个节点上的 kubelet -能够完成与镜像库的身份认证。你可以配置 *镜像拉取 Secret* 来实现这点。 +能够完成与镜像库的身份认证。你可以配置 **镜像拉取 Secret** 来实现这点。 Secret 是在 Pod 层面来配置的。 特殊字符(例如 `$`、`\`、`*`、`=` 和 `!`)会被你的 -[Shell](https://en.wikipedia.org/wiki/Shell_(computing))解释,因此需要转义。 +[Shell](https://zh.wikipedia.org/wiki/Shell_(computing)) 解释,因此需要转义。 ## Secret 的类型 {#secret-types} -创建 Secret 时,你可以使用 [Secret](/docs/reference/kubernetes-api/config-and-storage-resources/secret-v1/) +创建 Secret 时,你可以使用 [Secret](/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/secret-v1/) 资源的 `type` 字段,或者与其等价的 `kubectl` 命令行参数(如果有的话)为其设置类型。 Secret 类型有助于对 Secret 数据进行编程处理。 @@ -1420,8 +1420,8 @@ command creates an empty Secret of type `Opaque`. ### Opaque Secret 当 Secret 配置文件中未作显式设定时,默认的 Secret 类型是 `Opaque`。 -当你使用 `kubectl` 来创建一个 Secret 时,你会使用 `generic` 子命令来标明 -要创建的是一个 `Opaque` 类型 Secret。 +当你使用 `kubectl` 来创建一个 Secret 时,你会使用 `generic` +子命令来标明要创建的是一个 `Opaque` 类型 Secret。 例如,下面的命令会创建一个空的 `Opaque` 类型 Secret 对象: ```shell @@ -1626,11 +1626,10 @@ server doesn't validate if the JSON actually is a Docker config file. When you do not have a Docker config file, or you want to use `kubectl` to create a Secret for accessing a container registry, you can do: --> -当你使用清单文件来创建这两类 Secret 时,API 服务器会检查 `data` 字段中是否 -存在所期望的主键,并且验证其中所提供的键值是否是合法的 JSON 数据。 +当你使用清单文件来创建这两类 Secret 时,API 服务器会检查 `data` 字段中是否存在所期望的主键, +并且验证其中所提供的键值是否是合法的 JSON 数据。 不过,API 服务器不会检查 JSON 数据本身是否是一个合法的 Docker 配置文件内容。 - 当你没有 Docker 配置文件,或者你想使用 `kubectl` 创建一个 Secret 来访问容器倉庫时,你可以这样做: @@ -1750,8 +1749,8 @@ key authentication: --> ### SSH 身份认证 Secret {#ssh-authentication-secrets} -Kubernetes 所提供的内置类型 `kubernetes.io/ssh-auth` 用来存放 SSH 身份认证中 -所需要的凭据。使用这种 Secret 类型时,你就必须在其 `data` (或 `stringData`) +Kubernetes 所提供的内置类型 `kubernetes.io/ssh-auth` 用来存放 SSH 身份认证中所需要的凭据。 +使用这种 Secret 类型时,你就必须在其 `data` (或 `stringData`) 字段中提供一个 `ssh-privatekey` 键值对,作为要使用的 SSH 凭据。 下面的清单是一个 SSH 公钥/私钥身份认证的 Secret 示例: @@ -1900,8 +1899,8 @@ well-known ConfigMaps. 
--> ### 启动引导令牌 Secret {#bootstrap-token-secrets} -通过将 Secret 的 `type` 设置为 `bootstrap.kubernetes.io/token` 可以创建 -启动引导令牌类型的 Secret。这种类型的 Secret 被设计用来支持节点的启动引导过程。 +通过将 Secret 的 `type` 设置为 `bootstrap.kubernetes.io/token` +可以创建启动引导令牌类型的 Secret。这种类型的 Secret 被设计用来支持节点的启动引导过程。 其中包含用来为周知的 ConfigMap 签名的令牌。 启动引导令牌 Secret 通常创建于 `kube-system` 名字空间内,并以 -`bootstrap-token-<令牌 ID>` 的形式命名;其中 `<令牌 ID>` 是一个由 6 个字符组成 -的字符串,用作令牌的标识。 +`bootstrap-token-<令牌 ID>` 的形式命名; +其中 `<令牌 ID>` 是一个由 6 个字符组成的字符串,用作令牌的标识。 以 Kubernetes 清单文件的形式,某启动引导令牌 Secret 可能看起来像下面这样: From 47023485874a7e2909c4603baebf1e97380e9f85 Mon Sep 17 00:00:00 2001 From: windsonsea Date: Sat, 23 Jul 2022 21:06:09 +0800 Subject: [PATCH 164/292] [zh-cn] updated /concepts/cluster-administration/logging.md --- .../cluster-administration/logging.md | 43 ++++++++----------- 1 file changed, 19 insertions(+), 24 deletions(-) diff --git a/content/zh-cn/docs/concepts/cluster-administration/logging.md b/content/zh-cn/docs/concepts/cluster-administration/logging.md index e233529ba2b34..d56eb04b0a24a 100644 --- a/content/zh-cn/docs/concepts/cluster-administration/logging.md +++ b/content/zh-cn/docs/concepts/cluster-administration/logging.md @@ -28,7 +28,7 @@ In a cluster, logs should have a separate storage and lifecycle independent of n 但是,由容器引擎或运行时提供的原生功能通常不足以构成完整的日志记录方案。 例如,如果发生容器崩溃、Pod 被逐出或节点宕机等情况,你可能想访问应用日志。 在集群中,日志应该具有独立的存储和生命周期,与节点、Pod 或容器的生命周期相独立。 -这个概念叫 _集群级的日志_ 。 +这个概念叫 **集群级的日志**。 @@ -175,11 +175,11 @@ The two kubelet parameters [`containerLogMaxSize` and `containerLogMaxFiles`](/d in [kubelet config file](/docs/tasks/administer-cluster/kubelet-config-file/) can be used to configure the maximum size for each log file and the maximum number of files allowed for each container respectively. --> -当使用某 *CRI 容器运行时* 时,kubelet 要负责对日志进行轮换,并 -管理日志目录的结构。kubelet 将此信息发送给 CRI 容器运行时,后者 -将容器日志写入到指定的位置。在 [kubelet 配置文件](/docs/tasks/administer-cluster/kubelet-config-file/) +当使用某 **CRI 容器运行时** 时,kubelet 要负责对日志进行轮换,并管理日志目录的结构。 +kubelet 将此信息发送给 CRI 容器运行时,后者将容器日志写入到指定的位置。 +在 [kubelet 配置文件](/zh-cn/docs/tasks/administer-cluster/kubelet-config-file/) 中的两个 kubelet 参数 -[`containerLogMaxSize` 和 `containerLogMaxFiles`](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) +[`containerLogMaxSize` 和 `containerLogMaxFiles`](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) 可以用来配置每个日志文件的最大长度和每个容器可以生成的日志文件个数上限。 {{< note >}} -如果有外部系统执行日志轮转或者使用了 CRI 容器运行时,那么 `kubectl logs` +如果有外部系统执行日志轮转或者使用了 CRI 容器运行时,那么 `kubectl logs` 仅可查询到最新的日志内容。 比如,对于一个 10MB 大小的文件,通过 `logrotate` 执行轮转后生成两个文件, -一个 10MB 大小,一个为空,`kubectl logs` 返回最新的日志文件,而该日志文件 -在这个例子中为空。 +一个 10MB 大小,一个为空,`kubectl logs` 返回最新的日志文件,而该日志文件在这个例子中为空。 {{< /note >}} -你可以通过在每个节点上使用 _节点级的日志记录代理_ 来实现集群级日志记录。 +你可以通过在每个节点上使用 **节点级的日志记录代理** 来实现集群级日志记录。 日志记录代理是一种用于暴露日志或将日志推送到后端的专用工具。 通常,日志记录代理程序是一个容器,它可以访问包含该节点上所有应用程序容器的日志文件的目录。 @@ -294,8 +293,7 @@ Node-level logging creates only one agent per node, and doesn't require any chan Containers write to stdout and stderr, but with no agreed format. A node-level agent collects these logs and forwards them for aggregation. 
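A minimal sketch of such a node-level agent, assuming it is packaged as a DaemonSet that reads log files from each node, might look like the following; the image and names are placeholders:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent                # hypothetical name
spec:
  selector:
    matchLabels:
      name: log-agent
  template:
    metadata:
      labels:
        name: log-agent
    spec:
      containers:
        - name: agent
          image: registry.example.com/log-agent:1.0   # placeholder logging agent image
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log       # where the kubelet and container runtime write container logs
```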
--> 容器向标准输出和标准错误输出写出数据,但在格式上并不统一。 -节点级代理 -收集这些日志并将其进行转发以完成汇总。 +节点级代理收集这些日志并将其进行转发以完成汇总。 这种方法允许你将日志流从应用程序的不同部分分离开,其中一些可能缺乏对写入 `stdout` 或 `stderr` 的支持。重定向日志背后的逻辑是最小的,因此它的开销几乎可以忽略不计。 -另外,因为 `stdout`、`stderr` 由 kubelet 处理,你可以使用内置的工具 `kubectl logs`。 +另外,因为 `stdout` 和 `stderr` 由 kubelet 处理,所以你可以使用内置的工具 `kubectl logs`。 例如,某 Pod 中运行一个容器,该容器向两个文件写不同格式的日志。 -下面是这个 pod 的配置文件: +下面是这个 Pod 的配置文件: {{< codenew file="admin/logging/two-files-counter-pod.yaml" >}} @@ -361,9 +359,9 @@ the container. Instead, you can create two sidecar containers. Each sidecar container could tail a particular log file from a shared volume and then redirect the logs to its own `stdout` stream. --> -不建议在同一个日志流中写入不同格式的日志条目,即使你成功地将其重定向到容器的 -`stdout` 流。相反,你可以创建两个边车容器。每个边车容器可以从共享卷 -跟踪特定的日志文件,并将文件内容重定向到各自的 `stdout` 流。 +不建议在同一个日志流中写入不同格式的日志条目,即使你成功地将其重定向到容器的 `stdout` 流。 +相反,你可以创建两个边车容器。每个边车容器可以从共享卷跟踪特定的日志文件, +并将文件内容重定向到各自的 `stdout` 流。 应用本身如果不具备轮转日志文件的功能,可以通过边车容器实现。 该方式的一个例子是运行一个小的、定期轮转日志的容器。 -然而,还是推荐直接使用 `stdout` 和 `stderr`,将日志的轮转和保留策略 -交给 kubelet。 +然而,还是推荐直接使用 `stdout` 和 `stderr`,将日志的轮转和保留策略交给 kubelet。 -如果节点级日志记录代理程序对于你的场景来说不够灵活,你可以创建一个 -带有单独日志记录代理的边车容器,将代理程序专门配置为与你的应用程序一起运行。 +如果节点级日志记录代理程序对于你的场景来说不够灵活, +你可以创建一个带有单独日志记录代理的边车容器,将代理程序专门配置为与你的应用程序一起运行。 {{< note >}} -在示例配置中,你可以将 fluentd 替换为任何日志代理,从应用容器内 -的任何来源读取数据。 +在示例配置中,你可以将 fluentd 替换为任何日志代理,从应用容器内的任何来源读取数据。 -从各个应用中直接暴露和推送日志数据的集群日志机制 -已超出 Kubernetes 的范围。 +从各个应用中直接暴露和推送日志数据的集群日志机制已超出 Kubernetes 的范围。 From 35285abc006fdada85c30ecf8133cdba5ca45ca6 Mon Sep 17 00:00:00 2001 From: Sean Wei Date: Sat, 23 Jul 2022 22:10:00 +0800 Subject: [PATCH 165/292] [zh-cn] Resync apparmor.md --- .../zh-cn/docs/tutorials/security/apparmor.md | 226 +++++++++++------- 1 file changed, 138 insertions(+), 88 deletions(-) diff --git a/content/zh-cn/docs/tutorials/security/apparmor.md b/content/zh-cn/docs/tutorials/security/apparmor.md index 00048de976112..a1669d2a5d60b 100644 --- a/content/zh-cn/docs/tutorials/security/apparmor.md +++ b/content/zh-cn/docs/tutorials/security/apparmor.md @@ -4,6 +4,8 @@ content_type: tutorial weight: 10 --- AppArmor 是一个 Linux 内核安全模块, 它补充了基于标准 Linux 用户和组的权限,将程序限制在一组有限的资源中。 @@ -31,7 +33,7 @@ AppArmor 可以配置为任何应用程序减少潜在的攻击面,并且提 *强制(enforcing)* 模式(阻止访问不允许的资源)或 *投诉(complain)* 模式(仅报告冲突)下运行。 - * 查看如何在节点上加载配置文件示例 * 了解如何在 Pod 上强制执行配置文件 @@ -61,10 +63,12 @@ AppArmor 可以通过限制允许容器执行的操作, ## {{% heading "prerequisites" %}} - + 确保: - 只要 Kubelet 版本包含 AppArmor 支持(>=v1.4), 如果不满足这些先决条件,Kubelet 将拒绝带有 AppArmor 选项的 Pod。 @@ -201,11 +205,13 @@ gke-test-default-pool-239f5d02-xwux: kubelet is posting ready status. 
AppArmor e - + ## 保护 Pod {#securing-a-pod} {{< note >}} - AppArmor 配置文件是按 *逐个容器* 的形式来设置的。 要指定用来运行 Pod 容器的 AppArmor 配置文件,请向 Pod 的 metadata 添加注解: @@ -226,38 +232,38 @@ AppArmor 配置文件是按 *逐个容器* 的形式来设置的。 container.apparmor.security.beta.kubernetes.io/: ``` - `` 的名称是配置文件所针对的容器的名称,`` 则设置要应用的配置文件。 `` 可以是以下取值之一: - * `runtime/default` 应用运行时的默认配置 * `localhost/` 应用在主机上加载的名为 `` 的配置文件 * `unconfined` 表示不加载配置文件 - -有关注解和配置文件名称格式的详细信息,请参阅[API 参考](#api-reference)。 +有关注解和配置文件名称格式的详细信息,请参阅 [API 参考](#api-reference)。 - Kubernetes AppArmor 强制执行机制首先检查所有先决条件都已满足, 然后将所选的配置文件转发到容器运行时进行强制执行。 如果未满足先决条件,Pod 将被拒绝,并且不会运行。 - 要验证是否应用了配置文件,可以在容器创建事件中查找所列出的 AppArmor 安全选项: @@ -268,8 +274,8 @@ kubectl get events | grep Created 22s 22s 1 hello-apparmor Pod spec.containers{hello} Normal Created {kubelet e2e-test-stclair-node-pool-31nt} Created container with docker id 269a53b202d3; Security:[seccomp=unconfined apparmor=k8s-apparmor-example-deny-write] ``` - 你还可以通过检查容器的 proc attr,直接验证容器的根进程是否以正确的配置文件运行: @@ -280,14 +286,18 @@ kubectl exec cat /proc/1/attr/current k8s-apparmor-example-deny-write (enforce) ``` - + ## 举例 {#example} - -*本例假设你已经设置了一个集群使用 AppArmor 支持。* + +**本例假设你已经设置了一个集群使用 AppArmor 支持。** - 首先,我们需要将要使用的配置文件加载到节点上。配置文件拒绝所有文件写入: @@ -304,10 +314,10 @@ profile k8s-apparmor-example-deny-write flags=(attach_disconnected) { } ``` - 由于我们不知道 Pod 将被调度到哪里,我们需要在所有节点上加载配置文件。 在本例中,我们将使用 SSH 来安装概要文件, @@ -334,7 +344,9 @@ EOF' done ``` - + 接下来,我们将运行一个带有拒绝写入配置文件的简单 “Hello AppArmor” Pod: {{< codenew file="pods/security/hello-apparmor.yaml" >}} @@ -343,9 +355,9 @@ done kubectl create -f ./hello-apparmor.yaml ``` - 如果我们查看 Pod 事件,我们可以看到 Pod 容器是用 AppArmor 配置文件 “k8s-apparmor-example-deny-write” 所创建的: @@ -361,7 +373,9 @@ kubectl get events | grep hello-apparmor 13s 13s 1 hello-apparmor Pod spec.containers{hello} Normal Started {kubelet gke-test-default-pool-239f5d02-gyn2} Started container with docker id 06b6cd1c0989 ``` - + 我们可以通过检查该配置文件的 proc attr 来验证容器是否实际使用该配置文件运行: ```shell @@ -371,7 +385,9 @@ kubectl exec hello-apparmor -- cat /proc/1/attr/current k8s-apparmor-example-deny-write (enforce) ``` - + 最后,我们可以看到,如果我们尝试通过写入文件来违反配置文件会发生什么: ```shell @@ -382,7 +398,9 @@ touch: /tmp/test: Permission denied error: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1 ``` - + 最后,让我们看看如果我们试图指定一个尚未加载的配置文件会发生什么: ```shell @@ -456,35 +474,39 @@ Events: 23s 23s 1 {kubelet e2e-test-stclair-node-pool-t1f5} Warning AppArmor Cannot enforce AppArmor: profile "k8s-apparmor-example-allow-write" is not loaded ``` - 注意 Pod 呈现 Pending 状态,并且显示一条有用的错误信息: `Pod Cannot enforce AppArmor: profile "k8s-apparmor-example-allow-write" is not loaded`。 还用相同的消息记录了一个事件。 - + ## 管理 {#administration} - + ### 使用配置文件设置节点 {#setting-up-nodes-with-profiles} - Kubernetes 目前不提供任何本地机制来将 AppArmor 配置文件加载到节点上。 有很多方法可以设置配置文件,例如: - * 通过在每个节点上运行 Pod 的 [DaemonSet](/zh-cn/docs/concepts/workloads/controllers/daemonset/)来确保加载了正确的配置文件。 @@ -492,23 +514,25 @@ Kubernetes 目前不提供任何本地机制来将 AppArmor 配置文件加载 * 在节点初始化时,使用节点初始化脚本(例如 Salt、Ansible 等)或镜像。 * 通过将配置文件复制到每个节点并通过 SSH 加载它们,如[示例](#example)。 - 调度程序不知道哪些配置文件加载到哪个节点上,因此必须将全套配置文件加载到每个节点上。 另一种方法是为节点上的每个配置文件(或配置文件类)添加节点标签, -并使用[节点选择器](/zh-cn/docs/concepts/configuration/assign-pod-node/)确保 +并使用[节点选择器](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/)确保 Pod 在具有所需配置文件的节点上运行。 - + ### 使用 PodSecurityPolicy 限制配置文件 {#restricting-profiles-with-the-podsecuritypolicy} {{< note >}} - @@ -516,9 +540,9 @@ PodSecurityPolicy 在 Kubernetes v1.21 版本中已被废弃,将在 v1.25 版 查看 
[PodSecurityPolicy](/zh-cn/docs/concepts/security/pod-security-policy/) 文档获取更多信息。 {{< /note >}} - 如果启用了 PodSecurityPolicy 扩展,则可以应用集群范围的 AppArmor 限制。 要启用 PodSecurityPolicy,必须在 `apiserver` 上设置以下标志: @@ -527,7 +551,9 @@ enable the PodSecurityPolicy, the following flag must be set on the `apiserver`: --enable-admission-plugins=PodSecurityPolicy[,others...] ``` - + AppArmor 选项可以指定为 PodSecurityPolicy 上的注解: ```yaml @@ -535,31 +561,35 @@ apparmor.security.beta.kubernetes.io/defaultProfileName: apparmor.security.beta.kubernetes.io/allowedProfileNames: [,others...] ``` - 默认配置文件名选项指定默认情况下在未指定任何配置文件时应用于容器的配置文件。 所允许的配置文件名称选项指定允许 Pod 容器运行期间所对应的配置文件列表。 如果同时提供了这两个选项,则必须允许默认值。 配置文件的指定格式与容器上的相同。有关完整规范,请参阅 [API 参考](#api-reference)。 - + ### 禁用 AppArmor {#disabling-apparmor} - + 如果你不希望 AppArmor 在集群上可用,可以通过命令行标志禁用它: ``` --feature-gates=AppArmor=false ``` - 禁用时,任何包含 AppArmor 配置文件的 Pod 都将导致验证失败,且返回 “Forbidden” 错误。 @@ -575,21 +605,23 @@ availability (GA). {{}} - + ## 编写配置文件 {#authoring-profiles} - 获得正确指定的 AppArmor 配置文件可能是一件棘手的事情。幸运的是,有一些工具可以帮助你做到这一点: - * `aa-genprof` 和 `aa-logprof` 通过监视应用程序的活动和日志并准许它所执行的操作来生成配置文件规则。 @@ -597,41 +629,49 @@ tools to help with that: * [bane](https://github.com/jfrazelle/bane) 是一个用于 Docker的 AppArmor 配置文件生成器,它使用一种简化的画像语言(profile language) - 想要调试 AppArmor 的问题,你可以检查系统日志,查看具体拒绝了什么。 AppArmor 将详细消息记录到 `dmesg`, 错误通常可以在系统日志中或通过 `journalctl` 找到。 更多详细信息见 [AppArmor 失败](https://gitlab.com/apparmor/apparmor/wikis/AppArmor_Failures)。 - + ## API 参考 {#api-reference} - + ### Pod 注解 {#pod-annotation} - + 指定容器将使用的配置文件: - - **键名**: `container.apparmor.security.beta.kubernetes.io/` ,其中 `` 与 Pod 中某容器的名称匹配。 可以为 Pod 中的每个容器指定单独的配置文件。 - **键值**: 对配置文件的引用,如下所述 - + ### 配置文件引用 {#profile-reference} - - `runtime/default`: 指默认运行时配置文件。 - 等同于不指定配置文件(没有 PodSecurityPolicy 默认值),只是它仍然需要启用 AppArmor。 @@ -650,30 +690,38 @@ AppArmor 将详细消息记录到 `dmesg`, - 可能的配置文件名在[核心策略参考](https://gitlab.com/apparmor/apparmor/wikis/AppArmor_Core_Policy_Reference#profile-names-and-attachment-specifications)。 - `unconfined`: 这相当于为容器禁用 AppArmor。 - + 任何其他配置文件引用格式无效。 - + ### PodSecurityPolicy 注解 {#podsecuritypolicy-annotations} - + 指定在未提供容器时应用于容器的默认配置文件: - * **键名**: `apparmor.security.beta.kubernetes.io/defaultProfileName` * **键值**: 如上述文件参考所述 - + 上面描述的指定配置文件,Pod 容器列表的配置文件引用允许指定: - * **键名**: `apparmor.security.beta.kubernetes.io/allowedProfileNames` * **键值**: 配置文件引用的逗号分隔列表(如上所述) @@ -681,12 +729,14 @@ AppArmor 将详细消息记录到 `dmesg`, ## {{% heading "whatsnext" %}} - + 其他资源: - * [Apparmor 配置文件语言快速指南](https://gitlab.com/apparmor/apparmor/wikis/QuickProfileLanguage) * [Apparmor 核心策略参考](https://gitlab.com/apparmor/apparmor/wikis/Policy_Layout) From babc0bc934084e1aad22881df67bd7867214bc9c Mon Sep 17 00:00:00 2001 From: windsonsea Date: Sat, 23 Jul 2022 22:32:24 +0800 Subject: [PATCH 166/292] [zh-cn] updated blog/2022-05-03-kubernetes-release-1.24.md --- .../2022-05-03-kubernetes-release-1.24.md | 38 +++++++++---------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/content/zh-cn/blog/_posts/2022-05-03-kubernetes-release-1.24.md b/content/zh-cn/blog/_posts/2022-05-03-kubernetes-release-1.24.md index 887c5b9f8ee99..78d2489495532 100644 --- a/content/zh-cn/blog/_posts/2022-05-03-kubernetes-release-1.24.md +++ b/content/zh-cn/blog/_posts/2022-05-03-kubernetes-release-1.24.md @@ -44,10 +44,10 @@ see [this guide](/blog/2022/03/31/ready-for-dockershim-removal/). 
### 从 kubelet 中删除 Dockershim 在 v1.20 版本中被废弃后,dockershim 组件已被从 Kubernetes v1.24 版本的 kubelet 中移除。 -从v1.24开始,如果你依赖 Docker Engine 作为容器运行时, +从 v1.24 开始,如果你依赖 Docker Engine 作为容器运行时, 则需要使用其他[受支持的运行时](/zh-cn/docs/setup/production-environment/container-runtimes/)之一 (如 containerd 或 CRI-O)或使用 CRI dockerd。 -有关确保群集已准备好进行此删除的更多信息,请参阅[本指南](/zh-cn/blog/2022/03/31/ready-for-dockershim-removal/)。 +有关确保集群已准备好进行此删除的更多信息,请参阅[本指南](/zh-cn/blog/2022/03/31/ready-for-dockershim-removal/)。 -### 从 Kubelet 中删除动态 Kubelet 配置 +### 从 Kubelet 中移除动态 Kubelet 配置 -在 Kubernetes 1.22 中被弃用后,动态 Kubelet 配置已从 kubelet 中删除。 -该功能将从 Kubernetes 1.26 的 API 服务器中删除。 +在 Kubernetes 1.22 中被弃用后,动态 Kubelet 配置已从 kubelet 中移除。 +该功能将从 Kubernetes 1.26 的 API 服务器中移除。 当 CNI 插件尚未升级和/或 CNI 配置版本未在 CNI 配置文件中声明时,在 containerd v1.6.0–v1.6.3 -中存在 pod CNI 网络设置和拆除的服务问题。containerd 团队报告说,“这些问题在 containerd v1.6.4 中得到解决。” +中存在 Pod CNI 网络设置和拆除的服务问题。containerd 团队报告说,“这些问题在 containerd v1.6.4 中得到解决。” 在 containerd v1.6.0-v1.6.3 版本中,如果你不升级 CNI 插件和/或声明 CNI 配置版本, 你可能会遇到以下 “Incompatible CNI versions” 或 “Failed to destroy network for sandbox” 的错误情况。 @@ -256,7 +256,7 @@ Volume snapshot and restore functionality for Kubernetes and the Container Stora [VolumeSnapshot v1beta1 CRD 已被移除](https://github.com/kubernetes/enhancements/issues/177)。 Kubernetes 和容器存储接口 (CSI) 的卷快照和恢复功能,提供标准化的 API 设计 (CRD) 并添加了对 CSI 卷驱动程序的 -PV 快照/恢复支持,在 v1.20 中移至 GA。VolumeSnapshot v1beta1 在 v1.20 中被弃用,现在不受支持。 +PV 快照/恢复支持,在 v1.20 中升级至 GA。VolumeSnapshot v1beta1 在 v1.20 中被弃用,现在不受支持。 有关详细信息,请参阅 [KEP-177: CSI 快照](https://git.k8s.io/enhancements/keps/sig-storage/177-volume-snapshot#kep-177-csi-snapshot) 和[卷快照 GA 博客](/blog/2020/12/10/kubernetes-1.20-volume-snapshot-moves-to-ga/)。 @@ -269,7 +269,7 @@ This release saw fourteen enhancements promoted to stable: --> ## 其他更新 -### 毕业到稳定 +### 毕业到稳定版 在此版本中,有 14 项增强功能升级为稳定版: @@ -297,7 +297,7 @@ This release saw fourteen enhancements promoted to stable: * [高效的监视恢复](https://github.com/kubernetes/enhancements/issues/1904): kube-apiserver 重新启动后,可以高效地恢复监视。 * [Service Type=LoadBalancer 类字段](https://github.com/kubernetes/enhancements/issues/1959): - 引入新的服务注解 `service.kubernetes.io/load-balancer-class` , + 引入新的服务注解 `service.kubernetes.io/load-balancer-class`, 允许在同一个集群中提供 `type: LoadBalancer` 服务的多个实现。 * [带索引的 Job](https://github.com/kubernetes/enhancements/issues/2214):为带有固定完成计数的 Job 的 Pod 添加完成索引。 * [在 Job API 中增加 suspend 字段](https://github.com/kubernetes/enhancements/issues/2232): @@ -317,12 +317,12 @@ This release saw two major changes: * [Dockershim Removal](https://github.com/kubernetes/enhancements/issues/2221) * [Beta APIs are off by Default](https://github.com/kubernetes/enhancements/issues/3136) --> -### 主要变化 +### 主要变更 -此版本有两个主要变化: +此版本有两个主要变更: -* [Dockershim 移除](https://github.com/kubernetes/enhancements/issues/2221) -* [Beta APIs 默认关闭](https://github.com/kubernetes/enhancements/issues/3136) +* [移除 Dockershim](https://github.com/kubernetes/enhancements/issues/2221) +* [默认关闭 Beta API](https://github.com/kubernetes/enhancements/issues/3136) ### 发布团队 -如果没有组成 Kubernetes 1.24 发布团队的坚定个人的共同努力,这个版本是不可能实现的。 +如果没有 Kubernetes 1.24 发布团队每个人做出的共同努力,这个版本是不可能实现的。 该团队齐心协力交付每个 Kubernetes 版本中的所有组件,包括代码、文档、发行说明等。 特别感谢我们的发布负责人 James Laverack 指导我们完成了一个成功的发布周期, @@ -394,7 +394,7 @@ is the work of hundreds of contributors across the globe and thousands of end-us applications that serve millions. Every one is a star in our sky, helping us chart our course. 
--> 古代天文学家到建造 James Webb 太空望远镜的科学家,几代人都怀着敬畏和惊奇的心情仰望星空。 -星星启发了我们,点燃了我们的想象力,并引导我们在艰难的海上度过了漫长的夜晚。 +是这些星辰启发了我们,点燃了我们的想象力,引导我们在艰难的海上度过了漫长的夜晚。 通过此版本,我们向上凝视,当我们的社区聚集在一起时可能发生的事情。 Kubernetes 是全球数百名贡献者和数千名最终用户支持的成果, @@ -450,7 +450,7 @@ all the stargazers out there. ✨ --> ### 生态系统更新 -* KubeCon + CloudNativeCon Europe 2022 将于 2022 年 5 月 16 日至 20 日在西班牙巴伦西亚举行! +* KubeCon + CloudNativeCon Europe 2022 于 2022 年 5 月 16 日至 20 日在西班牙巴伦西亚举行! 你可以在[活动网站](https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/)上找到有关会议和注册的更多信息。 * 在 [2021 年云原生调查](https://www.cncf.io/announcements/2022/02/10/cncf-sees-record-kubernetes-and-container-adoption-in-2021-cloud-native-survey/) 中,CNCF 看到了创纪录的 Kubernetes 和容器采用。参阅[调查结果](https://www.cncf.io/reports/cncf-annual-survey-2021/)。 @@ -509,7 +509,7 @@ Have something you’d like to broadcast to the Kubernetes community? Share your --> ## 参与进来 -参与 Kubernetes 的最简单方法是加入符合你兴趣的众多 [特别兴趣组](https://git.k8s.io/community/sig-list.md)(SIG) 之一。 +参与 Kubernetes 的最简单方法是加入符合你兴趣的众多[特别兴趣组](https://git.k8s.io/community/sig-list.md)(SIG)之一。 你有什么想向 Kubernetes 社区广播的内容吗? 在我们的每周的[社区会议](https://git.k8s.io/community/communication)上分享你的声音,并通过以下渠道: From 574dc44cfe3acaf5957908cfc1ef51c04e9ff92d Mon Sep 17 00:00:00 2001 From: windsonsea Date: Sat, 23 Jul 2022 23:04:03 +0800 Subject: [PATCH 167/292] [zh-cn] sync comments in /examples/examples_test.go --- content/zh-cn/examples/examples_test.go | 38 ++++++++++++------------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/content/zh-cn/examples/examples_test.go b/content/zh-cn/examples/examples_test.go index 27eae2eadf0cf..550b33f319308 100644 --- a/content/zh-cn/examples/examples_test.go +++ b/content/zh-cn/examples/examples_test.go @@ -61,7 +61,7 @@ import ( "k8s.io/kubernetes/pkg/capabilities" "k8s.io/kubernetes/pkg/registry/batch/job" - // initialize install packages + // 初始化安装包 _ "k8s.io/kubernetes/pkg/apis/apps/install" _ "k8s.io/kubernetes/pkg/apis/autoscaling/install" _ "k8s.io/kubernetes/pkg/apis/batch/install" @@ -77,18 +77,18 @@ var ( serializer runtime.SerializerInfo ) -// TestGroup contains GroupVersion to uniquely identify the API +// TestGroup 包含 GroupVersion 以唯一地标识该 API type TestGroup struct { externalGroupVersion schema.GroupVersion } -// GroupVersion makes copy of schema.GroupVersion +// GroupVersion 制作 schema.GroupVersion 的副本 func (g TestGroup) GroupVersion() *schema.GroupVersion { copyOfGroupVersion := g.externalGroupVersion return ©OfGroupVersion } -// Codec returns the codec for the API version to test against +// Codec 为要测试的 API 版本返回编解码器 func (g TestGroup) Codec() runtime.Codec { if serializer.Serializer == nil { return legacyscheme.Codecs.LegacyCodec(g.externalGroupVersion) @@ -136,7 +136,7 @@ func getCodecForObject(obj runtime.Object) (runtime.Codec, error) { return group.Codec(), nil } } - // Codec used for unversioned types + // 还未版本化的类别所用的 Codec if legacyscheme.Scheme.Recognizes(kind) { serializer, ok := runtime.SerializerInfoForMediaType(legacyscheme.Codecs.SupportedMediaTypes(), runtime.ContentTypeJSON) if !ok { @@ -160,7 +160,7 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) { AllowPodAffinityNamespaceSelector: true, } - // Enable CustomPodDNS for testing + // 为测试启用 CustomPodDNS // feature.DefaultFeatureGate.Set("CustomPodDNS=true") switch t := obj.(type) { case *api.ConfigMap: @@ -230,7 +230,7 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) { if t.Namespace == "" { t.Namespace = api.NamespaceDefault } - // handle clusterIPs, logic copied from service strategy + // 处理几个 
ClusterIP,根据服务策略进行逻辑复制 if len(t.Spec.ClusterIP) > 0 && len(t.Spec.ClusterIPs) == 0 { t.Spec.ClusterIPs = []string{t.Spec.ClusterIP} } @@ -258,8 +258,8 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) { if t.Namespace == "" { t.Namespace = api.NamespaceDefault } - // Job needs generateSelector called before validation, and job.Validate does this. - // See: https://github.com/kubernetes/kubernetes/issues/20951#issuecomment-187787040 + // Job 需要在校验前调用 generateSelector,然后 job.Validate 执行校验。 + // 请参阅:https://github.com/kubernetes/kubernetes/issues/20951#issuecomment-187787040 t.ObjectMeta.UID = types.UID("fakeuid") if strings.Index(t.ObjectMeta.Name, "$") > -1 { t.ObjectMeta.Name = "skip-for-good" @@ -315,13 +315,13 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) { } errors = policy_validation.ValidatePodDisruptionBudget(t) case *rbac.ClusterRole: - // clusterole does not accept namespace + // ClusterRole 不接受名字空间 errors = rbac_validation.ValidateClusterRole(t) case *rbac.ClusterRoleBinding: - // clusterolebinding does not accept namespace + // ClusterRoleBinding 不接受名字空间 errors = rbac_validation.ValidateClusterRoleBinding(t) case *storage.StorageClass: - // storageclass does not accept namespace + // StorageClass 不接受名字空间 errors = storage_validation.ValidateStorageClass(t) default: errors = field.ErrorList{} @@ -330,8 +330,8 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) { return errors } -// Walks inDir for any json/yaml files. Converts yaml to json, and calls fn for -// each file found with the contents in data. +// 遍历 inDir 目录查找所有 json/yaml 文件。将 yaml 转换为 json, +// 并根据 data 中的内容找到的每个文件来调用 fn。 func walkConfigFiles(inDir string, t *testing.T, fn func(name, path string, data [][]byte)) error { return filepath.Walk(inDir, func(path string, info os.FileInfo, err error) error { if err != nil { @@ -352,7 +352,7 @@ func walkConfigFiles(inDir string, t *testing.T, fn func(name, path string, data var docs [][]byte if ext == ".yaml" { - // YAML can contain multiple documents. + // YAML 可以包含多个文档。 splitter := yaml.NewYAMLReader(bufio.NewReader(bytes.NewBuffer(data))) for { doc, err := splitter.Read() @@ -366,7 +366,7 @@ func walkConfigFiles(inDir string, t *testing.T, fn func(name, path string, data if err != nil { return fmt.Errorf("%s: %v", path, err) } - // deal with "empty" document (e.g. 
pure comments) + // 处理 "空白" 文档(例如纯注释) if string(out) != "null" { docs = append(docs, out) } @@ -385,7 +385,7 @@ func walkConfigFiles(inDir string, t *testing.T, fn func(name, path string, data func TestExampleObjectSchemas(t *testing.T) { initGroups() - // Please help maintain the alphabeta order in the map + // 请帮助保持映射图中的 alphabeta 顺序 cases := map[string]map[string][]runtime.Object{ "admin": { "namespace-dev": {&api.Namespace{}}, @@ -691,7 +691,7 @@ func TestExampleObjectSchemas(t *testing.T) { }, } - // Note a key in the following map has to be complete relative path + // 请注意,以下映射中的某个键必须是完整的相对路径 filesIgnore := map[string]map[string]bool{ "audit": { "audit-policy": true, @@ -705,7 +705,7 @@ func TestExampleObjectSchemas(t *testing.T) { tested := 0 numExpected := 0 path := dir - // Test if artifacts do exist + // 测试这些工件是否存在 for name := range expected { fn := path + "/" + name _, err1 := os.Stat(fn + ".yaml") From 7fd2fee01fca1c7c69cc6ff1fc15c3bb23975eca Mon Sep 17 00:00:00 2001 From: yanrongshi Date: Sat, 23 Jul 2022 23:15:12 +0800 Subject: [PATCH 168/292] Update deployment.md --- .../zh-cn/docs/concepts/workloads/controllers/deployment.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/zh-cn/docs/concepts/workloads/controllers/deployment.md b/content/zh-cn/docs/concepts/workloads/controllers/deployment.md index 50e3ef0b89e81..180a62b83c6d0 100644 --- a/content/zh-cn/docs/concepts/workloads/controllers/deployment.md +++ b/content/zh-cn/docs/concepts/workloads/controllers/deployment.md @@ -31,7 +31,7 @@ A _Deployment_ provides declarative updates for {{< glossary_tooltip text="Pods" -你负责描述 Deployment 中的 _目标状态_,而 Deployment {{< glossary_tooltip term_id="controller" >}} +你负责描述 Deployment 中的 **目标状态**,而 Deployment {{< glossary_tooltip term_id="controller" >}} 以受控速率更改实际状态, 使其变为期望状态。你可以定义 Deployment 以创建新的 ReplicaSet,或删除现有 Deployment, 并通过新的 Deployment 收养其资源。 @@ -1470,7 +1470,7 @@ Kubernetes marks a Deployment as _progressing_ when one of the following tasks i --> ### 进行中的 Deployment {#progressing-deployment} -执行下面的任务期间,Kubernetes 标记 Deployment 为 _进行中(Progressing)_: +执行下面的任务期间,Kubernetes 标记 Deployment 为**进行中**(Progressing)_: ### 完成的 Deployment {#complete-deployment} -当 Deployment 具有以下特征时,Kubernetes 将其标记为 _完成(Complete)_: +当 Deployment 具有以下特征时,Kubernetes 将其标记为**完成(Complete)**; ### 示例:调试关闭/无法访问的节点 {#example-debugging-a-down-unreachable-node} -有时在调试时查看节点的状态很有用——例如,因为你注意到在节点上运行的 Pod 的奇怪行为, +有时在调试时查看节点的状态很有用 —— 例如,因为你注意到在节点上运行的 Pod 的奇怪行为, 或者找出为什么 Pod 不会调度到节点上。与 Pod 一样,你可以使用 `kubectl describe node` 和 `kubectl get node -o yaml` 来检索有关节点的详细信息。 例如,如果节点关闭(与网络断开连接,或者 kubelet 进程挂起并且不会重新启动等), @@ -260,28 +260,30 @@ of the relevant log files. 
On systemd-based systems, you may need to use `journ ### 控制平面节点 {#control-plane-nodes} - * `/var/log/kube-apiserver.log` —— API 服务器 API - * `/var/log/kube-scheduler.log` —— 调度器,负责制定调度决策 - * `/var/log/kube-controller-manager.log` —— 运行大多数 Kubernetes - 内置{{}}的组件,除了调度(kube-scheduler 处理调度)。 +* `/var/log/kube-apiserver.log` —— API 服务器,负责提供 API 服务 +* `/var/log/kube-scheduler.log` —— 调度器,负责制定调度决策 +* `/var/log/kube-controller-manager.log` —— 运行大多数 Kubernetes + 内置{{}}的组件,除了调度(kube-scheduler 处理调度)。 ### 工作节点 {#worker-nodes} - * `/var/log/kubelet.log` —— 来自 `kubelet` 的日志,负责在节点运行容器 - * `/var/log/kube-proxy.log` —— 来自 `kube-proxy` 的日志,负责将流量转发到服务端点 +* `/var/log/kubelet.log` —— 来自 `kubelet` 的日志,负责在节点运行容器 +* `/var/log/kube-proxy.log` —— 来自 `kube-proxy` 的日志,负责将流量转发到服务端点 -### 造成原因 {#contributing-causes} +### 故障原因 {#contributing-causes} - - 虚拟机关闭 - - 集群内或集群与用户之间的网络分区 - - Kubernetes 软件崩溃 - - 持久存储(例如 GCE PD 或 AWS EBS 卷)的数据丢失或不可用 - - 操作员错误,例如配置错误的 Kubernetes 软件或应用程序软件 +- 虚拟机关闭 +- 集群内或集群与用户之间的网络分区 +- Kubernetes 软件崩溃 +- 持久存储(例如 GCE PD 或 AWS EBS 卷)的数据丢失或不可用 +- 操作员错误,例如配置错误的 Kubernetes 软件或应用程序软件 ### 具体情况 {#specific-scenarios} @@ -334,16 +336,17 @@ This is an incomplete list of things that could go wrong, and how to adjust your - kubelet 将不能访问 API 服务器,但是能够继续运行之前的 Pod 和提供相同的服务代理 - 在 API 服务器重启之前,需要手动恢复或者重建 API 服务器的状态 - Kubernetes 服务组件(节点控制器、副本控制器管理器、调度器等)所在的 VM 关机或者崩溃 - 当前,这些控制器是和 API 服务器在一起运行的,它们不可用的现象是与 API 服务器类似的 @@ -357,18 +360,18 @@ This is an incomplete list of things that could go wrong, and how to adjust your - 分区 A 认为分区 B 中所有的节点都已宕机;分区 B 认为 API 服务器宕机 (假定主控节点所在的 VM 位于分区 A 内)。 - kubelet 软件故障 - 结果 @@ -380,11 +383,11 @@ This is an incomplete list of things that could go wrong, and how to adjust your - 结果 - 丢失 Pod 或服务等等 - 丢失 API 服务器的后端存储 - - 用户无法读取API + - 用户无法读取 API - 等等 -- 措施:定期对 API 服务器的 PDs/EBS 卷执行快照操作 +- 措施:定期对 API 服务器的 PD 或 EBS 卷执行快照操作 - 缓解:API 服务器后端存储丢失 - 缓解:一些操作错误的场景 - 缓解:一些 Kubernetes 软件本身故障的场景 @@ -444,16 +447,19 @@ This is an incomplete list of things that could go wrong, and how to adjust your ## {{% heading "whatsnext" %}} -* 了解[资源指标管道](resource-metrics-pipeline)中可用的指标 -* 发现用于[监控资源使用](resource-usage-monitoring)的其他工具 -* 使用节点问题检测器[监控节点健康](monitor-node-health) -* 使用 `crictl` 来[调试 Kubernetes 节点](crictl) -* 获取更多关于 [Kubernetes 审计](audit)的信息 -* 使用 `telepresence` [本地开发和调试服务](local-debugging) \ No newline at end of file +* 了解[资源指标管道](/zh-cn/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/)中可用的指标 +* 发现用于[监控资源使用](/zh-cn/docs/tasks/debug/debug-cluster/resource-usage-monitoring/)的其他工具 +* 使用节点问题检测器[监控节点健康](/zh-cn/docs/tasks/debug/debug-cluster/monitor-node-health/) +* 使用 `crictl` 来[调试 Kubernetes 节点](/zh-cn/docs/tasks/debug/debug-cluster/crictl/) +* 获取更多关于 [Kubernetes 审计](/zh-cn/docs/tasks/debug/debug-cluster/audit/)的信息 +* 使用 `telepresence` [本地开发和调试服务](/zh-cn/docs/tasks/debug/debug-cluster/local-debugging/) \ No newline at end of file From 2213bc3da0241ebdded416935bf98cabd4f7fc36 Mon Sep 17 00:00:00 2001 From: windsonsea Date: Sun, 24 Jul 2022 12:11:55 +0800 Subject: [PATCH 171/292] [zh-cn] resync /tasks/network/validate-dual-stack.md --- .../docs/tasks/network/validate-dual-stack.md | 33 +++++++++---------- 1 file changed, 16 insertions(+), 17 deletions(-) diff --git a/content/zh-cn/docs/tasks/network/validate-dual-stack.md b/content/zh-cn/docs/tasks/network/validate-dual-stack.md index bf490460505ff..a38c206d44733 100644 --- a/content/zh-cn/docs/tasks/network/validate-dual-stack.md +++ b/content/zh-cn/docs/tasks/network/validate-dual-stack.md @@ -24,11 +24,9 @@ This document shares how to validate IPv4/IPv6 
dual-stack enabled Kubernetes clu * A [network plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) that supports dual-stack networking. * [Dual-stack enabled](/docs/concepts/services-networking/dual-stack/) cluster --> -* 提供程序对双协议栈网络的支持 (云供应商或其他方式必须能够为 Kubernetes 节点 - 提供可路由的 IPv4/IPv6 网络接口) -* 一个能够支持[双协议栈](/zh-cn/docs/concepts/services-networking/dual-stack/)的 +* 驱动程序对双协议栈网络的支持 (云驱动或其他方式必须能够为 Kubernetes 节点提供可路由的 IPv4/IPv6 网络接口) +* 一个能够支持[双协议栈](/zh-cn/docs/concepts/services-networking/dual-stack/)网络的 [网络插件](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)。 - * [启用双协议栈](/zh-cn/docs/concepts/services-networking/dual-stack/) 集群 {{< version-check >}} @@ -79,8 +77,9 @@ Validate that the node has an IPv4 and IPv6 interface detected. Replace node nam 在此示例中,节点名称为 `k8s-linuxpool1-34450317-0`: ```shell -kubectl get nodes k8s-linuxpool1-34450317-0 -o go-template --template='{{range .status.addresses}}{{printf "%s: %s \n" .type .address}}{{end}}' +kubectl get nodes k8s-linuxpool1-34450317-0 -o go-template --template='{{range .status.addresses}}{{printf "%s: %s\n" .type .address}}{{end}}' ``` + ``` Hostname: k8s-linuxpool1-34450317-0 InternalIP: 10.240.0.5 @@ -98,7 +97,7 @@ Validate that a Pod has an IPv4 and IPv6 address assigned. Replace the Pod name 在此示例中,Pod 名称为 `pod01`: ```shell -kubectl get pods pod01 -o go-template --template='{{range .status.podIPs}}{{printf "%s \n" .ip}}{{end}}' +kubectl get pods pod01 -o go-template --template='{{range .status.podIPs}}{{printf "%s\n" .ip}}{{end}}' ``` ``` @@ -204,7 +203,7 @@ spec: protocol: TCP targetPort: 9376 selector: - app: MyApp + app.kubernetes.io/name: MyApp sessionAffinity: None type: ClusterIP status: @@ -241,7 +240,7 @@ apiVersion: v1 kind: Service metadata: labels: - app: MyApp + app.kubernetes.io/name: MyApp name: my-service spec: clusterIP: fd00::5118 @@ -255,7 +254,7 @@ spec: protocol: TCP targetPort: 80 selector: - app: MyApp + app.kubernetes.io/name: MyApp sessionAffinity: None type: ClusterIP status: @@ -279,10 +278,10 @@ The `kubectl get svc` command will only show the primary IP in the `CLUSTER-IP` `kubectl get svc` 命令将仅在 `CLUSTER-IP` 字段中显示主 IP。 ```shell -kubectl get svc -l app=MyApp +kubectl get svc -l app.kubernetes.io/name=MyApp -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -my-service ClusterIP fe80:20d::d06b 80/TCP 9s +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +my-service ClusterIP 10.0.216.242 80/TCP 5s ``` {{< /note >}} @@ -293,15 +292,15 @@ Validate that the Service gets cluster IPs from the IPv4 and IPv6 address blocks 然后你就可以通过 IP 和端口,验证对服务的访问。 ```shell -kubectl describe svc -l app=MyApp +kubectl describe svc -l app.kubernetes.io/name=MyApp ``` ``` Name: my-service Namespace: default -Labels: app=MyApp +Labels: app.kubernetes.io/name=MyApp Annotations: -Selector: app=MyApp +Selector: app.kubernetes.io/name=MyApp Type: ClusterIP IP Family Policy: PreferDualStack IP Families: IPv4,IPv6 @@ -333,7 +332,7 @@ Check the Service: 检查服务: ```shell -kubectl get svc -l app=MyApp +kubectl get svc -l app.kubernetes.io/name=MyApp ``` -你可以在 `kustomization.yaml` 中定义 `secreteGenerator`,并在定义中引用其他现成的文件,生成 Secret。 +你可以在 `kustomization.yaml` 中定义 `secreteGenerator` 字段,并在定义中引用其它本地文件生成 Secret。 例如:下面的 kustomization 文件 引用了 `./username.txt` 和 `./password.txt` 文件: ```yaml @@ -57,7 +57,7 @@ file by providing some literals. 
For example, the following `kustomization.yaml` file contains two literals for `username` and `password` respectively: --> -你也可以在 `kustomization.yaml` 文件中指定一些字面量定义 `secretGenerator`。 +你也可以在 `kustomization.yaml` 文件中指定一些字面量定义 `secretGenerator` 字段。 例如:下面的 `kustomization.yaml` 文件中包含了 `username` 和 `password` 两个字面量: ```yaml @@ -93,7 +93,7 @@ Note that in all cases, you don't need to base64 encode the values. ## 创建 Secret {#create-the-secret} -使用 `kubectl apply` 命令应用包含 `kustomization.yaml` 文件的目录创建 Secret。 +在包含 `kustomization.yaml` 文件的目录下使用 `kubectl apply` 命令创建 Secret。 ```shell kubectl apply -k . @@ -166,7 +166,6 @@ To check the actual content of the encoded data, please refer to `kubectl get` 和 `kubectl describe` 命令默认不显示 `Secret` 的内容。 这是为了防止 `Secret` 被意外暴露给旁观者或存储在终端日志中。 检查编码后的实际内容,请参考[解码 secret](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl/#decoding-secret)。 ---> From b921eb3005a2906c870c4412db69c6e86473dac5 Mon Sep 17 00:00:00 2001 From: Michael Date: Sun, 24 Jul 2022 21:02:35 +0800 Subject: [PATCH 174/292] [zh-cn] resync /concepts/scheduling-eviction/_index.md --- .../concepts/scheduling-eviction/_index.md | 28 ++++++++++++------- 1 file changed, 18 insertions(+), 10 deletions(-) diff --git a/content/zh-cn/docs/concepts/scheduling-eviction/_index.md b/content/zh-cn/docs/concepts/scheduling-eviction/_index.md index 5ff8e6e6ed494..9a86ebd564944 100644 --- a/content/zh-cn/docs/concepts/scheduling-eviction/_index.md +++ b/content/zh-cn/docs/concepts/scheduling-eviction/_index.md @@ -1,11 +1,11 @@ --- -title: 调度,抢占和驱逐 +title: 调度、抢占和驱逐 weight: 90 content_type: concept description: > - 在Kubernetes中,调度 (scheduling) 指的是确保 Pods 匹配到合适的节点, - 以便 kubelet 能够运行它们。抢占 (Preemption) 指的是终止低优先级的 Pods 以便高优先级的 Pods 可以 - 调度运行的过程。驱逐 (Eviction) 是在资源匮乏的节点上,主动让一个或多个 Pods 失效的过程。 + 在 Kubernetes 中,调度 (scheduling) 指的是确保 Pod 匹配到合适的节点, + 以便 kubelet 能够运行它们。抢占 (Preemption) 指的是终止低优先级的 Pod 以便高优先级的 Pod + 可以调度运行的过程。驱逐 (Eviction) 是在资源匮乏的节点上,主动让一个或多个 Pod 失效的过程。 no_list: true --- @@ -15,8 +15,8 @@ weight: 90 content_type: concept description: > In Kubernetes, scheduling refers to making sure that Pods are matched to Nodes - so that the kubelet can run them. Preemption is the process of terminating - Pods with lower Priority so that Pods with higher Priority can schedule on + so that the kubelet can run them. Preemption is the process of terminating + Pods with lower Priority so that Pods with higher Priority can schedule on Nodes. Eviction is the process of proactively terminating one or more Pods on resource-starved Nodes. 
no_list: true @@ -30,6 +30,12 @@ is the process of terminating Pods with lower {{}} +匹配到合适的{{}}, +以便 {{}} 能够运行它们。 +抢占 (Preemption) 指的是终止低{{}}的 Pod +以便高优先级的 Pod 可以调度运行的过程。 +驱逐 (Eviction) 是在资源匮乏的节点上,主动让一个或多个 Pod 失效的过程。 Kubernetes 最初是为了支持在 Linux 主机上运行本机应用程序的 Docker 容器而创建的。 -从 Kubernetes 1.3中的 [rkt](https://kubernetes.io/blog/2016/07/rktnetes-brings-rkt-container-engine-to-kubernetes/) 开始,更多的运行时间开始涌现, +从 Kubernetes 1.3 中的 [rkt](https://kubernetes.io/blog/2016/07/rktnetes-brings-rkt-container-engine-to-kubernetes/) 开始,更多的运行时间开始涌现, 这导致了[容器运行时接口(Container Runtime Interface)](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/)(CRI)的开发。 从那时起,备用运行时集合越来越大: 为了加强工作负载隔离,[Kata Containers](https://katacontainers.io/) 和 [gVisor](https://github.com/google/gvisor) 等项目被发起, -并且 Kubernetes 对 Windows 的支持正在 [稳步发展](https://kubernetes.io/blog/2018/01/kubernetes-v19-beta-windows-support/) 。 +并且 Kubernetes 对 Windows 的支持正在[稳步发展](https://kubernetes.io/blog/2018/01/kubernetes-v19-beta-windows-support/)。 最近,RuntimeClass 在 Kubernetes 1.12 中作为 alpha 功能引入。 -最初的实现侧重于提供运行时选择 API ,并为解决其他未解决的问题铺平道路。 +最初的实现侧重于提供运行时选择 API,并为解决其他未解决的问题铺平道路。 RuntimeClass 资源是将运行时属性显示到控制平面的重要基础。 例如,要对具有支持不同运行时间的异构节点的集群实施调度程序支持,我们可以在 RuntimeClass 定义中添加 -[NodeAffinity](/zh-cn/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)条件。 +[NodeAffinity](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) 条件。 另一个需要解决的领域是管理可变资源需求以运行不同运行时的 Pod。 -[Pod Overhead 提案](https://docs.google.com/document/d/1EJKT4gyl58-kzt2bnwkv08MIUZ6lkDpXcxkHqCvvAp4/preview) -是一项较早的尝试,与 RuntimeClass 设计非常吻合,并且可能会进一步推广。 +[Pod Overhead 提案](https://docs.google.com/document/d/1EJKT4gyl58-kzt2bnwkv08MIUZ6lkDpXcxkHqCvvAp4/preview)是一项较早的尝试,与 +RuntimeClass 设计非常吻合,并且可能会进一步推广。 -至少要到2019年,RuntimeClass 才会得到积极的开发,我们很高兴看到从 Kubernetes 1.12 中的 RuntimeClass alpha 开始,此功能得以形成。 +至少要到 2019 年,RuntimeClass 才会得到积极的开发,我们很高兴看到从 Kubernetes 1.12 中的 RuntimeClass alpha 开始,此功能得以形成。 -- 试试吧! 作为Alpha功能,还有一些其他设置步骤可以使用RuntimeClass。 - 有关如何使其运行,请参考 [RuntimeClass文档](/zh-cn/docs/concepts/containers/runtime-class/#runtime-class) 。 -- 查看 [RuntimeClass Kubernetes 增强建议](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/runtime-class.md) 以获取更多细节设计细节。 -- [沙盒隔离级别决策](https://docs.google.com/document/d/1fe7lQUjYKR0cijRmSbH_y0_l3CYPkwtQa5ViywuNo8Q/preview) - 记录了最初使 RuntimeClass 成为 Pod 级别选项的思考过程。 -- 加入讨论,并通过 [SIG-Node社区](https://github.com/kubernetes/community/tree/master/sig-node) 帮助塑造 RuntimeClass 的未来。 - +- 试试吧!作为 Alpha 功能,还有一些其他设置步骤可以使用 RuntimeClass。 + 有关如何使其运行,请参考 [RuntimeClass 文档](/zh-cn/docs/concepts/containers/runtime-class/#runtime-class)。 +- 查看 [RuntimeClass Kubernetes 增强建议](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/runtime-class.md)以获取更多细节设计细节。 +- [沙盒隔离级别决策](https://docs.google.com/document/d/1fe7lQUjYKR0cijRmSbH_y0_l3CYPkwtQa5ViywuNo8Q/preview)记录了最初使 + RuntimeClass 成为 Pod 级别选项的思考过程。 +- 加入讨论,并通过 [SIG-Node 社区](https://github.com/kubernetes/community/tree/master/sig-node)帮助塑造 RuntimeClass 的未来。 diff --git a/content/zh-cn/docs/concepts/containers/runtime-class.md b/content/zh-cn/docs/concepts/containers/runtime-class.md index 808db309ea539..f528d81e28d9e 100644 --- a/content/zh-cn/docs/concepts/containers/runtime-class.md +++ b/content/zh-cn/docs/concepts/containers/runtime-class.md @@ -16,7 +16,7 @@ weight: 20 {{< feature-state for_k8s_version="v1.20" state="stable" >}} - - 你还可以使用 RuntimeClass 运行具有相同容器运行时但具有不同设置的 Pod。 - @@ -77,12 +77,12 @@ CRI implementation for how to configure. 
RuntimeClass 的配置依赖于 运行时接口(CRI)的实现。 根据你使用的 CRI 实现,查阅相关的文档([下方](#cri-configuration))来了解如何配置。 +{{< note >}} -{{< note >}} RuntimeClass 假设集群中的节点配置是同构的(换言之,所有的节点在容器运行时方面的配置是相同的)。 如果需要支持异构节点,配置方法请参阅下面的 [调度](#scheduling)。 {{< /note >}} @@ -102,7 +102,7 @@ the configuration. For each handler, create a corresponding RuntimeClass object. --> ### 2. 创建相应的 RuntimeClass 资源 -在上面步骤 1 中,每个配置都需要有一个用于标识配置的 `handler`。 +在上面步骤 1 中,每个配置都需要有一个用于标识配置的 `handler`。 针对每个 handler 需要创建一个 RuntimeClass 对象。 +RuntimeClass 对象的名称必须是有效的 +[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 + +{{< note >}} -{{< note >}} 建议将 RuntimeClass 写操作(create、update、patch 和 delete)限定于集群管理员使用。 通常这是默认配置。参阅[授权概述](/zh-cn/docs/reference/access-authn-authz/authorization/)了解更多信息。 {{< /note >}} @@ -172,9 +179,9 @@ error message. If no `runtimeClassName` is specified, the default RuntimeHandler will be used, which is equivalent to the behavior when the RuntimeClass feature is disabled. --> -如果未指定 `runtimeClassName` ,则将使用默认的 RuntimeHandler,相当于禁用 RuntimeClass 功能特性。 +如果未指定 `runtimeClassName`,则将使用默认的 RuntimeHandler,相当于禁用 RuntimeClass 功能特性。 - 通过 containerd 的 `/etc/containerd/config.toml` 配置文件来配置运行时 handler。 -handler 需要配置在 runtimes 块中: +handler 需要配置在 runtimes 块中: ``` [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.${HANDLER_NAME}] @@ -203,17 +210,16 @@ for more details: --> 更详细信息,请查阅 containerd 的[配置指南](https://github.com/containerd/containerd/blob/main/docs/cri/config.md) -#### [cri-o](https://cri-o.io/) +#### {{< glossary_tooltip term_id="cri-o" >}} -通过 cri-o 的 `/etc/crio/crio.conf` 配置文件来配置运行时 handler。 +通过 CRI-O 的 `/etc/crio/crio.conf` 配置文件来配置运行时 handler。 handler 需要配置在 -[crio.runtime 表](https://github.com/kubernetes-sigs/cri-o/blob/master/docs/crio.conf.5.md#crioruntime-table) -下面: +[crio.runtime 表](https://github.com/cri-o/cri-o/blob/master/docs/crio.conf.5.md#crioruntime-table)之下: ``` [crio.runtime.runtimes.${HANDLER_NAME}] @@ -225,9 +231,9 @@ See CRI-O's [config documentation](https://github.com/cri-o/cri-o/blob/master/do --> 更详细信息,请查阅 CRI-O [配置文档](https://github.com/cri-o/cri-o/blob/master/docs/crio.conf.5.md)。 - +--> ## 调度 {#scheduling} {{< feature-state for_k8s_version="v1.16" state="beta" >}} @@ -240,7 +246,7 @@ If `scheduling` is not set, this RuntimeClass is assumed to be supported by all 通过为 RuntimeClass 指定 `scheduling` 字段, 你可以通过设置约束,确保运行该 RuntimeClass 的 Pod 被调度到支持该 RuntimeClass 的节点上。 -如果未设置 `scheduling`,则假定所有节点均支持此 RuntimeClass 。 +如果未设置 `scheduling`,则假定所有节点均支持此 RuntimeClass。 -为了确保 pod 会被调度到支持指定运行时的 node 上,每个 node 需要设置一个通用的 label 用于被 +为了确保 pod 会被调度到支持指定运行时的 node 上,每个 node 需要设置一个通用的 label 用于被 `runtimeclass.scheduling.nodeSelector` 挑选。在 admission 阶段,RuntimeClass 的 nodeSelector 将会与 pod 的 nodeSelector 合并,取二者的交集。如果有冲突,pod 将会被拒绝。 @@ -263,22 +269,22 @@ by each. 
与 `nodeSelector` 一样,tolerations 也在 admission 阶段与 pod 的 tolerations 合并,取二者的并集。 -更多有关 node selector 和 tolerations 的配置信息,请查阅 +更多有关 node selector 和 tolerations 的配置信息,请查阅 [将 Pod 分派到节点](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/)。 - +--> ### Pod 开销 {#pod-overhead} {{< feature-state for_k8s_version="v1.24" state="stable" >}} 你可以指定与运行 Pod 相关的 _开销_ 资源。声明开销即允许集群(包括调度器)在决策 Pod 和资源时将其考虑在内。 From 8cc45e347dba5e094c1be5d40453b709b2321166 Mon Sep 17 00:00:00 2001 From: kartik494 Date: Sun, 24 Jul 2022 19:15:05 +0530 Subject: [PATCH 176/292] Add hyperlink for is-default-class annotation --- content/en/docs/concepts/storage/dynamic-provisioning.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/concepts/storage/dynamic-provisioning.md b/content/en/docs/concepts/storage/dynamic-provisioning.md index 63263fb370890..c8bdf8840976d 100644 --- a/content/en/docs/concepts/storage/dynamic-provisioning.md +++ b/content/en/docs/concepts/storage/dynamic-provisioning.md @@ -116,7 +116,7 @@ can enable this behavior by: is enabled on the API server. An administrator can mark a specific `StorageClass` as default by adding the -`storageclass.kubernetes.io/is-default-class` annotation to it. +[`storageclass.kubernetes.io/is-default-class`](/docs/reference/labels-annotations-taints/#storageclass-kubernetes-io-is-default-class) annotation to it. When a default `StorageClass` exists in a cluster and a user creates a `PersistentVolumeClaim` with `storageClassName` unspecified, the `DefaultStorageClass` admission controller automatically adds the From 97856abd67ce925d8e745b31b9ae949b99c074b3 Mon Sep 17 00:00:00 2001 From: windsonsea Date: Sat, 23 Jul 2022 16:20:03 +0800 Subject: [PATCH 177/292] [zh-cn] sync /blog/_posts/2022-07-13-gateway-api-in-beta.md --- .../_posts/2022-07-13-gateway-api-in-beta.md | 334 ++++++++++++++++++ 1 file changed, 334 insertions(+) create mode 100644 content/zh-cn/blog/_posts/2022-07-13-gateway-api-in-beta.md diff --git a/content/zh-cn/blog/_posts/2022-07-13-gateway-api-in-beta.md b/content/zh-cn/blog/_posts/2022-07-13-gateway-api-in-beta.md new file mode 100644 index 0000000000000..570b55f3015b3 --- /dev/null +++ b/content/zh-cn/blog/_posts/2022-07-13-gateway-api-in-beta.md @@ -0,0 +1,334 @@ +--- +layout: blog +title: Kubernetes Gateway API 进入 Beta 阶段 +date: 2022-07-13 +slug: gateway-api-graduates-to-beta +--- + + + +**作者:** Shane Utt (Kong)、Rob Scott (Google)、Nick Young (VMware)、Jeff Apple (HashiCorp) + + +我们很高兴地宣布 Gateway API 的 v0.5.0 版本发布。 +我们最重要的几个 Gateway API 资源首次进入 Beta 阶段。 +此外,我们正在启动一项新的倡议,探索如何将 Gateway API 用于网格,还引入了 URL 重写等新的实验性概念。 +下文涵盖了这部分内容和更多说明。 + + +## 什么是 Gateway API? 
+ + +Gateway API 是以 [Gateway][gw] 资源(代表底层网络网关/代理服务器)为中心的资源集合, +Kubernetes 服务网络的健壮性得益于众多供应商实现、得到广泛行业支持且极具表达力、可扩展和面向角色的各个接口。 + + +Gateway API 最初被认为是知名 [Ingress][ing] API 的继任者, +Gateway API 的好处包括(但不限于)对许多常用网络协议的显式支持 +(例如 `HTTP`、`TLS`、`TCP `、`UDP`) 以及对传输层安全 (TLS) 的紧密集成支持。 +特别是 `Gateway` 资源能够实现作为 Kubernetes API 来管理网络网关的生命周期。 + + +如果你是对 Gateway API 的某些优势感兴趣的终端用户,我们邀请你加入并找到适合你的实现方式。 +值此版本发布之时,对于流行的 API 网关和服务网格有十多种[实现][impl],还提供了操作指南便于快速开始探索。 + + +[gw]:https://gateway-api.sigs.k8s.io/api-types/gateway/ +[ing]:/zh-cn/docs/reference/kubernetes-api/service-resources/ingress-v1/ +[impl]:https://gateway-api.sigs.k8s.io/implementations/ + + +### 入门 + + +Gateway API 是一个类似 [Ingress](/zh-cn/docs/concepts/services-networking/ingress/) +的正式 Kubernetes API。Gateway API 代表了 Ingress 功能的一个父集,使得一些更高级的概念成为可能。 +与 Ingress 类似,Kubernetes 中没有内置 Gateway API 的默认实现。 +相反,有许多不同的[实现][impl]可用,在提供一致且可移植体验的同时,还在底层技术方面提供了重要的选择。 + + +查看 [API 概念文档][concepts] 并查阅一些[指南][guides]以开始熟悉这些 API 及其工作方式。 +当你准备好一个实用的应用程序时, +请打开[实现页面][impl]并选择属于你可能已经熟悉的现有技术或集群提供商默认使用的技术(如果适用)的实现。 +Gateway API 是一个基于 [CRD][crd] 的 API,因此你将需要[安装 CRD][install-crds] 到集群上才能使用该 API。 + + +如果你对 Gateway API 做贡献特别有兴趣,我们非常欢迎你的加入! +你可以随时在仓库上[提一个新的 issue][issue],或[加入讨论][disc]。 +另请查阅[社区页面][community]以了解 Slack 频道和社区会议的链接。 + + +[crd]:/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/ +[concepts]:https://gateway-api.sigs.k8s.io/concepts/api-overview/ +[guides]:https://gateway-api.sigs.k8s.io/guides/getting-started/ +[impl]:https://gateway-api.sigs.k8s.io/implementations/ +[install-crds]:https://gateway-api.sigs.k8s.io/guides/getting-started/#install-the-crds +[issue]:https://github.com/kubernetes-sigs/gateway-api/issues/new/choose +[disc]:https://github.com/kubernetes-sigs/gateway-api/discussions +[community]:https://gateway-api.sigs.k8s.io/contributing/community/ + + +## 发布亮点 + +### 进入 Beta 阶段 + + +`v0.5.0` 版本特别具有历史意义,因为它标志着一些关键 API 成长至 Beta API 版本(`v1beta1`): + +- [GatewayClass](https://gateway-api.sigs.k8s.io/api-types/gatewayclass/) +- [Gateway](https://gateway-api.sigs.k8s.io/api-types/gateway/) +- [HTTPRoute](https://gateway-api.sigs.k8s.io/api-types/httproute/) + + +这一成就的标志是达到了以下几个进入标准: + +- API 已[广泛实现][impl]。 +- 合规性测试基本覆盖了所有资源且可以让多种实现通过测试。 +- 大多数 API 接口正被积极地使用。 +- Kubernetes SIG Network API 评审团队已批准其进入 Beta 阶段。 + + +有关 Gateway API 版本控制的更多信息,请参阅[官方文档](https://gateway-api.sigs.k8s.io/concepts/versioning/)。 +要查看未来版本的计划,请查看[下一步](#next-steps)。 + +[impl]:https://gateway-api.sigs.k8s.io/implementations/ + + +### 发布渠道 + +此版本引入了 `experimental` 和 `standard` [发布渠道][ch], +这样能够更好地保持平衡,在确保稳定性的同时,还能支持实验和迭代开发。 + + +`standard` 发布渠道包括: + +- 已进入 Beta 阶段的资源 +- 已进入 standard 的字段(不再被视为 experimental) + + +`experimental` 发布渠道包括 `standard` 发布渠道的所有内容,另外还有: + +- `alpha` API 资源 +- 视为 experimental 且还未进入 `standard` 渠道的字段 + + +使用发布渠道能让内部实现快速流转的迭代开发,且能让外部实现者和最终用户标示功能稳定性。 + + +本次发布新增了以下实验性的功能特性: + +- [路由通过指定端口号可以挂接到 Gateway](https://gateway-api.sigs.k8s.io/geps/gep-957/) +- [URL 重写和路径重定向](https://gateway-api.sigs.k8s.io/geps/gep-726/) + +[ch]:https://gateway-api.sigs.k8s.io/concepts/versioning/#release-channels-eg-experimental-standard + + +### 其他改进 + +有关 `v0.5.0` 版本中包括的完整变更清单,请参阅 +[v0.5.0 发布说明](https://github.com/kubernetes-sigs/gateway-api/releases/tag/v0.5.0)。 + + +## 适用于服务网格的 Gateway API:GAMMA 倡议 + +某些服务网格项目[已实现对 Gateway API 的支持](https://gateway-api.sigs.k8s.io/implementations/)。 +服务网格接口 (Service Mesh Interface,SMI) API 和 Gateway API 之间的显著重叠 +[已激发了 SMI 社区讨论](https://github.com/servicemeshinterface/smi-spec/issues/249)可能的集成方式。 + + +我们很高兴地宣布,来自 Cilium Service 
Mesh、Consul、Istio、Kuma、Linkerd、NGINX Service Mesh +和 Open Service Mesh 等服务网格社区的代表汇聚一堂组成 +[GAMMA 倡议小组](https://gateway-api.sigs.k8s.io/contributing/gamma/), +这是 Gateway API 子项目内一个专门的工作流,专注于网格管理所用的 Gateway API。 + + +这个小组将交付[增强提案](https://gateway-api.sigs.k8s.io/v1beta1/contributing/gep/), +包括对网格和网格相关用例适用的 Gateway API 规约的资源、添加和修改。 + +这项工作已从 +[探索针对服务间流量使用 Gateway API](https://docs.google.com/document/d/1T_DtMQoq2tccLAtJTpo3c0ohjm25vRS35MsestSL9QU/edit#heading=h.jt37re3yi6k5) +开始,并将继续增强身份验证和鉴权策略等领域。 + + +## 下一步 + +随着我们不断完善用于生产用例的 API,以下是我们将为下一个 Gateway API 版本所做的一些重点工作: + +- 针对 [gRPC][grpc] 流量路由的 [GRPCRoute][gep1016] +- [路由代理][pr1085] +- 4 层 API 成熟度:[TCPRoute][tcpr]、[UDPRoute][udpr] 和 [TLSRoute][tlsr] 正进入 Beta 阶段 +- [GAMMA 倡议](https://gateway-api.sigs.k8s.io/contributing/gamma/) - 针对服务网格的 Gateway API + + +如果你想参与此列表中的某些工作,或者你想倡导加入路线图的内容不在此列表中, +请通过 Kubernetes Slack 的 #sig-network-gateway-api 频道或我们每周的 +[社区电话会议](https://gateway-api.sigs.k8s.io/contributing/community/#meetings)加入我们。 + +[gep1016]:https://github.com/kubernetes-sigs/gateway-api/blob/master/site-src/geps/gep-1016.md +[grpc]:https://grpc.io/ +[pr1085]:https://github.com/kubernetes-sigs/gateway-api/pull/1085 +[tcpr]:https://github.com/kubernetes-sigs/gateway-api/blob/main/apis/v1alpha2/tcproute_types.go +[udpr]:https://github.com/kubernetes-sigs/gateway-api/blob/main/apis/v1alpha2/udproute_types.go +[tlsr]:https://github.com/kubernetes-sigs/gateway-api/blob/main/apis/v1alpha2/tlsroute_types.go +[community]:https://gateway-api.sigs.k8s.io/contributing/community/ From 697c6ded7ad9db8c77e2db97f47f28b96c088299 Mon Sep 17 00:00:00 2001 From: "yanrong.shi" Date: Sun, 24 Jul 2022 17:30:07 +0800 Subject: [PATCH 178/292] Update rbac.md --- .../docs/reference/access-authn-authz/rbac.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/content/zh-cn/docs/reference/access-authn-authz/rbac.md b/content/zh-cn/docs/reference/access-authn-authz/rbac.md index a57208518c089..d6c72e156a67e 100644 --- a/content/zh-cn/docs/reference/access-authn-authz/rbac.md +++ b/content/zh-cn/docs/reference/access-authn-authz/rbac.md @@ -64,8 +64,8 @@ or amend them, using tools such as `kubectl`, just like any other Kubernetes obj --> ## API 对象 {#api-overview} -RBAC API 声明了四种 Kubernetes 对象:_Role_、_ClusterRole_、_RoleBinding_ 和 -_ClusterRoleBinding_。你可以像使用其他 Kubernetes 对象一样,通过类似 `kubectl` +RBAC API 声明了四种 Kubernetes 对象:**Role**、**ClusterRole**、**RoleBinding** 和 +**ClusterRoleBinding**。你可以像使用其他 Kubernetes 对象一样,通过类似 `kubectl` 这类工具[描述对象](/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects/#understanding-kubernetes-objects), 或修补对象。 @@ -96,7 +96,7 @@ it can't be both. --> ### Role 和 ClusterRole {#role-and-clusterole} -RBAC 的 _Role_ 或 _ClusterRole_ 中包含一组代表相关权限的规则。 +RBAC 的 **Role** 或 **ClusterRole** 中包含一组代表相关权限的规则。 这些权限是纯粹累加的(不存在拒绝某操作的规则)。 Role 总是用来在某个{{< glossary_tooltip text="名字空间" term_id="namespace" >}}内设置访问权限; @@ -108,8 +108,8 @@ Role 总是用来在某个{{< glossary_tooltip text="名字空间" term_id="name ClusterRole 有若干用法。你可以用它来: -1. 定义对某名字空间域对象的访问权限,并将在各个名字空间内完成授权; -1. 为名字空间作用域的对象设置访问权限,并跨所有名字空间执行授权; +1. 定义对某名字空间域对象的访问权限,并将在个别名字空间内被授予访问权限; +1. 为名字空间作用域的对象设置访问权限,并被授予跨所有名字空间的访问权限; 1. 
为集群作用域的资源定义访问权限。 如果你希望在名字空间内定义角色,应该使用 Role; From 94365a01e1b4d453061d95eb035ced015a2f1c6d Mon Sep 17 00:00:00 2001 From: "yanrong.shi" Date: Mon, 25 Jul 2022 00:59:51 +0800 Subject: [PATCH 179/292] Update optional-kubectl-configs-bash-mac.md --- .../included/optional-kubectl-configs-bash-mac.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/content/zh-cn/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md b/content/zh-cn/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md index 6990b943ae135..e277ad3103376 100644 --- a/content/zh-cn/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md +++ b/content/zh-cn/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md @@ -12,7 +12,7 @@ headless: true -### 简介 +### 简介 {#introduction} -### 升级 Bash +### 升级 Bash {#upgrade-bash} -### 安装 bash-completion +### 安装 bash-completion {#install-bash-completion} {{< note >}} @@ -116,7 +116,7 @@ Reload your shell and verify that bash-completion v2 is correctly installed with -### 启用 kubectl 自动补全功能 +### 启用 kubectl 自动补全功能 {#enable-kubectl-autocompletion} - 如果你是用 Homebrew 安装的 kubectl(如 [此页面](/zh-cn/docs/tasks/install-with-homebrew-on-macos/#install-with-homebrew-on-macos) - 所描述),则kubectl 补全脚本应该已经安装到目录 `/usr/local/etc/bash_completion.d/kubectl` + 所描述),则 kubectl 补全脚本应该已经安装到目录 `/usr/local/etc/bash_completion.d/kubectl` 中了。这种情况下,你什么都不需要做。 {{< note >}} From ab255f8477f76acee12c12d9cb97896d7d338e25 Mon Sep 17 00:00:00 2001 From: "yanrong.shi" Date: Mon, 25 Jul 2022 01:17:17 +0800 Subject: [PATCH 180/292] Update index.md --- .../blog/_posts/2020-09-30-writing-crl-scheduler/index.md | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/index.md b/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/index.md index d2f1fa1e407f4..c9e6f768f7b7f 100644 --- a/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/index.md +++ b/content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/index.md @@ -188,7 +188,7 @@ To correct the latter issue, we now employ a "hunt and peck" approach to removin ### 1. Upgrade to kubernetes 1.18 and make use of Pod Topology Spread Constraints While this seems like it could have been the perfect solution, at the time of writing Kubernetes 1.18 was unavailable on the two most common managed Kubernetes services in public cloud, EKS and GKE. -Furthermore, [pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/) were still a [beta feature in 1.18](https://v1-18.docs.kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/) which meant that it [wasn't guaranteed to be available in managed clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters#kubernetes_feature_choices) even when v1.18 became available. +Furthermore, [pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/) were still a beta feature in 1.18 which meant that it [wasn't guaranteed to be available in managed clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters#kubernetes_feature_choices) even when v1.18 became available. The entire endeavour was concerningly reminiscent of checking [caniuse.com](https://caniuse.com/) when Internet Explorer 8 was still around. 
--> ## 一场头脑风暴后我们有了 3 个选择。 @@ -197,8 +197,7 @@ The entire endeavour was concerningly reminiscent of checking [caniuse.com](http 虽然这似乎是一个完美的解决方案,但在写这篇文章的时候,Kubernetes 1.18 在公有云中两个最常见的 托管 Kubernetes 服务( EKS 和 GKE )上是不可用的。 -此外,[Pod 拓扑分布约束](/zh-cn/docs/concepts/workloads/pods/pod-topology-spread-constraints/)在 -[1.18 中仍是测试版功能](https://v1-18.docs.kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/), +此外,[Pod 拓扑分布约束](/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/)在 1.18 中仍是测试版功能, 这意味着即使在 v1.18 可用时,它[也不能保证在托管集群中可用](https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters#kubernetes_feature_choices)。 整个努力让人联想到在 Internet Explorer 8 还存在的时候访问 [caniuse.com](https://caniuse.com/)。 From 7813d8449d5d5374d8eef018b937a23311b52eac Mon Sep 17 00:00:00 2001 From: "yanrong.shi" Date: Mon, 25 Jul 2022 01:23:27 +0800 Subject: [PATCH 181/292] Update _index.md --- content/zh-cn/docs/concepts/scheduling-eviction/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/zh-cn/docs/concepts/scheduling-eviction/_index.md b/content/zh-cn/docs/concepts/scheduling-eviction/_index.md index 9a86ebd564944..2a95b678a5409 100644 --- a/content/zh-cn/docs/concepts/scheduling-eviction/_index.md +++ b/content/zh-cn/docs/concepts/scheduling-eviction/_index.md @@ -55,7 +55,7 @@ of terminating one or more Pods on Nodes. * [Kubernetes 调度器](/zh-cn/docs/concepts/scheduling-eviction/kube-scheduler/) * [将 Pod 指派到节点](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/) * [Pod 开销](/zh-cn/docs/concepts/scheduling-eviction/pod-overhead/) -* [Pod 拓扑分布约束](/docs/concepts/scheduling-eviction/topology-spread-constraints/) +* [Pod 拓扑分布约束](/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/) * [污点和容忍度](/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/) * [调度框架](/zh-cn/docs/concepts/scheduling-eviction/scheduling-framework) * [调度器性能调试](/zh-cn/docs/concepts/scheduling-eviction/scheduler-perf-tuning/) From bff11f7014dcac447e920e2a4ac8dddfe8bdd66b Mon Sep 17 00:00:00 2001 From: "yanrong.shi" Date: Mon, 25 Jul 2022 01:41:32 +0800 Subject: [PATCH 182/292] Update feature-gates.md --- .../feature-gates.md | 42 +++++++++---------- 1 file changed, 21 insertions(+), 21 deletions(-) diff --git a/content/zh-cn/docs/reference/command-line-tools-reference/feature-gates.md b/content/zh-cn/docs/reference/command-line-tools-reference/feature-gates.md index 8e3b0e8d12fb4..5957e2f77925e 100644 --- a/content/zh-cn/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/zh-cn/docs/reference/command-line-tools-reference/feature-gates.md @@ -36,10 +36,10 @@ Feature gates are a set of key=value pairs that describe Kubernetes features. You can turn these features on or off using the `--feature-gates` command line flag on each Kubernetes component. --> -## 概述 +## 概述 {#overview} 特性门控是描述 Kubernetes 特性的一组键值对。你可以在 Kubernetes 的各个组件中使用 -`--feature-gates` flag 来启用或禁用这些特性。 +`--feature-gates` 标志来启用或禁用这些特性。 -处于 *Alpha* 、*Beta* 、 *GA* 阶段的特性。 +处于 **Alpha** 、**Beta** 、 **GA** 阶段的特性。 -*Alpha* 特性代表: +**Alpha** 特性代表: -*Beta* 特性代表: +**Beta** 特性代表: -请试用 *Beta* 特性并提供相关反馈! +请试用 **Beta** 特性并提供相关反馈! 
一旦特性结束 Beta 状态,我们就不太可能再对特性进行大幅修改。 {{< /note >}} -*General Availability* (GA) 特性也称为 *稳定* 特性,*GA* 特性代表着: +**General Availability** (GA) 特性也称为 **稳定** 特性,**GA** 特性代表着: -### 特性门控列表 +### 特性门控列表 {#feature-gates} 每个特性门控均用于启用或禁用某个特定的特性: @@ -1107,7 +1107,7 @@ Each feature gate is designed for enabling/disabling a specific feature: 参阅[对 DaemonSet 执行滚动更新](/zh-cn/docs/tasks/manage-daemon/update-daemon-set/)。 - `DefaultPodTopologySpread`: 启用 `PodTopologySpread` 调度插件来完成 - [默认的调度传播](/zh-cn/docs/concepts/workloads/pods/pod-topology-spread-constraints/#internal-default-constraints). + [默认的调度传播](/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/#internal-default-constraints). - `DelegateFSGroupToCSIDriver`: 如果 CSI 驱动程序支持,则通过 NodeStageVolume 和 NodePublishVolume CSI 调用传递 `fsGroup` ,将应用 `fsGroup` 从 Pod 的 `securityContext` 的角色委托给驱动。 @@ -1201,7 +1201,7 @@ Each feature gate is designed for enabling/disabling a specific feature: {{< glossary_tooltip text="ephemeral containers" term_id="ephemeral-container" >}} to running pods. - `EvenPodsSpread`: Enable pods to be scheduled evenly across topology domains. See - [Pod Topology Spread Constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/). + [Pod Topology Spread Constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/). - `ExecProbeTimeout`: Ensure kubelet respects exec probe timeouts. This feature gate exists in case any of your existing workloads depend on a now-corrected fault where Kubernetes ignored exec probe timeouts. See @@ -1211,7 +1211,7 @@ Each feature gate is designed for enabling/disabling a specific feature: {{< glossary_tooltip text="临时容器" term_id="ephemeral-container" >}} 到正在运行的 Pod 的特性。 - `EvenPodsSpread`:使 Pod 能够在拓扑域之间平衡调度。请参阅 - [Pod 拓扑扩展约束](/zh-cn/docs/concepts/workloads/pods/pod-topology-spread-constraints/)。 + [Pod 拓扑扩展约束](/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/)。 - `ExecProbeTimeout`:确保 kubelet 会遵从 exec 探针的超时值设置。 此特性门控的主要目的是方便你处理现有的、依赖于已被修复的缺陷的工作负载; 该缺陷导致 Kubernetes 会忽略 exec 探针的超时值设置。 @@ -1239,7 +1239,7 @@ Each feature gate is designed for enabling/disabling a specific feature: [调整使用中的 PersistentVolumeClaim 的大小](/zh-cn/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim)。 - `ExpandPersistentVolumes`:允许扩充持久卷。请查阅 [扩展持久卷申领](/zh-cn/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims)。 -- `ExperimentalCriticalPodAnnotation`:启用将特定 Pod 注解为 *critical* 的方式,用于 +- `ExperimentalCriticalPodAnnotation`:启用将特定 Pod 注解为 **critical** 的方式,用于 [确保其被调度](/zh-cn/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/)。 从 v1.13 开始已弃用此特性,转而使用 Pod 优先级和抢占功能。 * 阅读关于 [调度器性能调优](/zh-cn/docs/concepts/scheduling-eviction/scheduler-perf-tuning/) -* 阅读关于 [Pod 拓扑分布约束](/zh-cn/docs/concepts/workloads/pods/pod-topology-spread-constraints/) +* 阅读关于 [Pod 拓扑分布约束](/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/) * 阅读关于 kube-scheduler 的 [参考文档](/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/) * 阅读 [kube-scheduler 配置参考 (v1beta3)](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/) * 了解关于 [配置多个调度器](/zh-cn/docs/tasks/extend-kubernetes/configure-multiple-schedulers/) 的方式 From eca7a91024475d46e979057cdf98a610af5966f9 Mon Sep 17 00:00:00 2001 From: Kinzhi Date: Mon, 25 Jul 2022 03:09:24 +0800 Subject: [PATCH 184/292] [zh-cn]Update content/zh-cn/docs/concepts/workloads/pods/_index.md --- content/zh-cn/docs/concepts/workloads/pods/_index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 
deletions(-) diff --git a/content/zh-cn/docs/concepts/workloads/pods/_index.md b/content/zh-cn/docs/concepts/workloads/pods/_index.md index 0d09c4e435e21..818dcda9bf6aa 100644 --- a/content/zh-cn/docs/concepts/workloads/pods/_index.md +++ b/content/zh-cn/docs/concepts/workloads/pods/_index.md @@ -604,17 +604,16 @@ in the Pod Lifecycle documentation. * Learn about the [lifecycle of a Pod](/docs/concepts/workloads/pods/pod-lifecycle/). * Learn about [RuntimeClass](/docs/concepts/containers/runtime-class/) and how you can use it to configure different Pods with different container runtime configurations. -* Read about [Pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/). * Read about [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/) and how you can use it to manage application availability during disruptions. * Pod is a top-level resource in the Kubernetes REST API. The {{< api-reference page="workload-resources/pod-v1" >}} object definition describes the object in detail. * [The Distributed System Toolkit: Patterns for Composite Containers](/blog/2015/06/the-distributed-system-toolkit-patterns/) explains common layouts for Pods with more than one container. +* Read about [Pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/). --> * 了解 [Pod 生命周期](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/)。 * 了解 [RuntimeClass](/zh-cn/docs/concepts/containers/runtime-class/),以及如何使用它 来配置不同的 Pod 使用不同的容器运行时配置。 -* 了解 [Pod 拓扑分布约束](/zh-cn/docs/concepts/workloads/pods/pod-topology-spread-constraints/)。 * 了解 [PodDisruptionBudget](/zh-cn/docs/concepts/workloads/pods/disruptions/),以及你 如何可以利用它在出现干扰因素时管理应用的可用性。 * Pod 在 Kubernetes REST API 中是一个顶层资源。 @@ -622,6 +621,7 @@ in the Pod Lifecycle documentation. 对象的定义中包含了更多的细节信息。 * 博客 [分布式系统工具箱:复合容器模式](/blog/2015/06/the-distributed-system-toolkit-patterns/) 中解释了在同一 Pod 中包含多个容器时的几种常见布局。 +* 了解 [Pod 拓扑分布约束](/zh-cn/docs/concepts/workloads/pods/pod-topology-spread-constraints/)。 通过将 `--cascade=orphan` 传递给 `kubectl delete`,在删除 StatefulSet 对象之后, -StatefulSet 管理的 Pod 会被保留下来。如果 Pod 具有标签 `app=myapp`,则可以按照 +StatefulSet 管理的 Pod 会被保留下来。如果 Pod 具有标签 `app.kubernetes.io/name=MyApp`,则可以按照 如下方式删除它们: ```shell -kubectl delete pods -l app=myapp +kubectl delete pods -l app.kubernetes.io/name=MyApp ``` @@ -120,15 +120,15 @@ To simply delete everything in a StatefulSet, including the associated pods, you ```shell grace=$(kubectl get pods --template '{{.spec.terminationGracePeriodSeconds}}') -kubectl delete statefulset -l app=myapp +kubectl delete statefulset -l app.kubernetes.io/name=MyApp sleep $grace -kubectl delete pvc -l app=myapp +kubectl delete pvc -l app.kubernetes.io/name=MyApp ``` -在上面的例子中,Pod 的标签为 `app=myapp`;适当地替换你自己的标签。 +在上面的例子中,Pod 的标签为 `app.kubernetes.io/name=MyApp`;适当地替换你自己的标签。 @@ -10,18 +10,18 @@ Os Bootstrap tokens são usados para estabelecer uma relação de confiança bid O `kubeadm init` cria um token inicial com um TTL de 24 horas. Os comandos a seguir permitem que você gerencie esse token e também crie e gerencie os novos. 
-## Criar um token kubeadm {#cmd-token-create} +## kubeadm token create {#cmd-token-create} {{< include "generated/kubeadm_token_create.md" >}} -## Excluir um token kubeadm {#cmd-token-delete} +## kubeadm token delete {#cmd-token-delete} {{< include "generated/kubeadm_token_delete.md" >}} -## Gerar um token kubeadm {#cmd-token-generate} +## kubeadm token generate {#cmd-token-generate} {{< include "generated/kubeadm_token_generate.md" >}} -## Listar um token kubeadm {#cmd-token-list} +## kubeadm token list {#cmd-token-list} {{< include "generated/kubeadm_token_list.md" >}} ## {{% heading "whatsnext" %}} -* [kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join) para inicializar um nó `worker` do Kubernetes e associá-lo ao cluster +* [kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join) para inicializar um nó de carga de trabalho do Kubernetes e associá-lo ao cluster From d0a3611553b2471db7a45c2969bfb1e81a490c3d Mon Sep 17 00:00:00 2001 From: "Mr. Erlison" Date: Sun, 24 Jul 2022 18:03:53 -0300 Subject: [PATCH 188/292] Update translation Signed-off-by: Mr. Erlison --- .../pt-br/docs/reference/setup-tools/kubeadm/kubeadm-version.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-version.md b/content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-version.md index c990af8c5fc3c..ca2a6a84ce619 100644 --- a/content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-version.md +++ b/content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-version.md @@ -1,6 +1,6 @@ --- title: kubeadm version -content_type: conceito +content_type: concept weight: 80 --- From 9ec8b998305de4636e77e828252eadb4df860281 Mon Sep 17 00:00:00 2001 From: Michael Date: Mon, 25 Jul 2022 08:30:22 +0800 Subject: [PATCH 189/292] [zh-cn] resync /concepts/scheduling-eviction/assign-pod-node.md --- .../scheduling-eviction/assign-pod-node.md | 97 +++++++++++++------ 1 file changed, 67 insertions(+), 30 deletions(-) diff --git a/content/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node.md index 4eedc0c9edf36..f3bb98c2b2123 100644 --- a/content/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node.md +++ b/content/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node.md @@ -17,40 +17,45 @@ weight: 20 你可以约束一个 {{< glossary_tooltip text="Pod" term_id="pod" >}} -只能在特定的{{< glossary_tooltip text="节点" term_id="node" >}}上运行。 +以便 **限制** 其只能在特定的{{< glossary_tooltip text="节点" term_id="node" >}}上运行, +或优先在特定的节点上运行。 有几种方法可以实现这点,推荐的方法都是用 [标签选择算符](/zh-cn/docs/concepts/overview/working-with-objects/labels/)来进行选择。 通常这样的约束不是必须的,因为调度器将自动进行合理的放置(比如,将 Pod 分散到节点上, 而不是将 Pod 放置在可用资源不足的节点上等等)。但在某些情况下,你可能需要进一步控制 -Pod 被部署到的节点。例如,确保 Pod 最终落在连接了 SSD 的机器上, +Pod 被部署到哪个节点。例如,确保 Pod 最终落在连接了 SSD 的机器上, 或者将来自两个不同的服务且有大量通信的 Pods 被放置在同一个可用区。 你可以使用下列方法中的任何一种来选择 Kubernetes 对特定 Pod 的调度: * 与[节点标签](#built-in-node-labels)匹配的 [nodeSelector](#nodeSelector) * [亲和性与反亲和性](#affinity-and-anti-affinity) * [nodeName](#nodename) 字段 +* [Pod 拓扑分布约束](#pod-topology-spread-constraints) -这里的 `addedAffinity` 除遵从 Pod 规约中设置的节点亲和性之外,还 -适用于将 `.spec.schedulerName` 设置为 `foo-scheduler`。 +这里的 `addedAffinity` 除遵从 Pod 规约中设置的节点亲和性之外, +还适用于将 `.spec.schedulerName` 设置为 `foo-scheduler`。 换言之,为了匹配 Pod,节点需要满足 `addedAffinity` 和 Pod 的 `.spec.NodeAffinity`。 用户也可以使用 `namespaceSelector` 选择匹配的名字空间,`namespaceSelector` 是对名字空间集合进行标签查询的机制。 -亲和性条件会应用到 `namespaceSelector` 所选择的名字空间和 `namespaces` 字段中 -所列举的名字空间之上。 +亲和性条件会应用到 `namespaceSelector` 所选择的名字空间和 
`namespaces` 字段中所列举的名字空间之上。 注意,空的 `namespaceSelector`(`{}`)会匹配所有名字空间,而 null 或者空的 `namespaces` 列表以及 null 值 `namespaceSelector` 意味着“当前 Pod 的名字空间”。 - #### 更实际的用例 Pod 间亲和性与反亲和性在与更高级别的集合(例如 ReplicaSet、StatefulSet、 Deployment 等)一起使用时,它们可能更加有用。 -这些规则使得你可以配置一组工作负载,使其位于相同定义拓扑(例如,节点)中。 +这些规则使得你可以配置一组工作负载,使其位于所定义的同一拓扑中; +例如优先将两个相关的 Pod 置于相同的节点上。 -以一个三节点的集群为例,该集群运行一个带有 Redis 这种内存缓存的 Web 应用程序。 -你可以使用节点间的亲和性和反亲和性来尽可能地将 Web 服务器与缓存并置。 +以一个三节点的集群为例。你使用该集群运行一个带有内存缓存(例如 Redis)的 Web 应用程序。 +在此例中,还假设 Web 应用程序和内存缓存之间的延迟应尽可能低。 +你可以使用 Pod 间的亲和性和反亲和性来尽可能地将该 Web 服务器与缓存并置。 -下面的 Deployment 用来提供 Web 服务器服务,会创建带有标签 `app=web-store` 的副本。 -Pod 亲和性规则告诉调度器将副本放到运行有标签包含 `app=store` Pod 的节点上。 -Pod 反亲和性规则告诉调度器不要在同一节点上放置多个 `app=web-store` 的服务器。 +下例的 Deployment 为 Web 服务器创建带有标签 `app=web-store` 的副本。 +Pod 亲和性规则告诉调度器将每个副本放到存在标签为 `app=store` 的 Pod 的节点上。 +Pod 反亲和性规则告诉调度器决不要在单个节点上放置多个 `app=web-store` 服务器。 ```yaml apiVersion: apps/v1 @@ -756,11 +763,20 @@ where each web server is co-located with a cache, on three separate nodes. | *webserver-1* | *webserver-2* | *webserver-3* | | *cache-1* | *cache-2* | *cache-3* | + +总体效果是每个缓存实例都非常可能被在同一个节点上运行的某个客户端访问。 +这种方法旨在最大限度地减少偏差(负载不平衡)和延迟。 + +你可能还有使用 Pod 反亲和性的一些其他原因。 参阅 [ZooKeeper 教程](/zh-cn/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure) 了解一个 StatefulSet 的示例,该 StatefulSet 配置了反亲和性以实现高可用, 所使用的是与此例相同的技术。 @@ -820,6 +836,27 @@ The above Pod will only run on the node `kube-01`. --> 上面的 Pod 只能运行在节点 `kube-01` 之上。 + +## Pod 拓扑分布约束 {#pod-topology-spread-constraints} + +你可以使用 **拓扑分布约束(Topology Spread Constraints)** 来控制 +{{< glossary_tooltip text="Pod" term_id="Pod" >}} 在集群内故障域之间的分布, +故障域的示例有区域(Region)、可用区(Zone)、节点和其他用户自定义的拓扑域。 +这样做有助于提升性能、实现高可用或提升资源利用率。 + +阅读 [Pod 拓扑分布约束](/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/) +以进一步了解这些约束的工作方式。 + ## {{% heading "whatsnext" %}} -在 Kubernetes 中,_调度_ 是指将 {{< glossary_tooltip text="Pod" term_id="pod" >}} 放置到合适的 -{{< glossary_tooltip text="Node" term_id="node" >}} 上,然后对应 Node 上的 -{{< glossary_tooltip term_id="kubelet" >}} 才能够运行这些 pod。 +在 Kubernetes 中,**调度** 是指将 {{< glossary_tooltip text="Pod" term_id="pod" >}} +放置到合适的{{< glossary_tooltip text="节点" term_id="node" >}}上,以便对应节点上的 +{{< glossary_tooltip term_id="kubelet" >}} 能够运行这些 Pod。 -调度器通过 kubernetes 的监测(Watch)机制来发现集群中新创建且尚未被调度到 Node 上的 Pod。 -调度器会将发现的每一个未调度的 Pod 调度到一个合适的 Node 上来运行。 +调度器通过 Kubernetes 的监测(Watch)机制来发现集群中新创建且尚未被调度到节点上的 Pod。 +调度器会将所发现的每一个未调度的 Pod 调度到一个合适的节点上来运行。 调度器会依据下文的调度原则来做出调度选择。 -如果你想要理解 Pod 为什么会被调度到特定的 Node 上,或者你想要尝试实现 -一个自定义的调度器,这篇文章将帮助你了解调度。 +如果你想要理解 Pod 为什么会被调度到特定的节点上, +或者你想要尝试实现一个自定义的调度器,这篇文章将帮助你了解调度。 -对每一个新创建的 Pod 或者是未被调度的 Pod,kube-scheduler 会选择一个最优的 -Node 去运行这个 Pod。然而,Pod 内的每一个容器对资源都有不同的需求,而且 -Pod 本身也有不同的资源需求。因此,Pod 在被调度到 Node 上之前, -根据这些特定的资源调度需求,需要对集群中的 Node 进行一次过滤。 +对每一个新创建的 Pod 或者是未被调度的 Pod,kube-scheduler 会选择一个最优的节点去运行这个 Pod。 +然而,Pod 内的每一个容器对资源都有不同的需求, +而且 Pod 本身也有不同的需求。因此,Pod 在被调度到节点上之前, +根据这些特定的调度需求,需要对集群中的节点进行一次过滤。 -在一个集群中,满足一个 Pod 调度请求的所有 Node 称之为 _可调度节点_。 -如果没有任何一个 Node 能满足 Pod 的资源请求,那么这个 Pod 将一直停留在 -未调度状态直到调度器能够找到合适的 Node。 +在一个集群中,满足一个 Pod 调度请求的所有节点称之为 **可调度节点**。 +如果没有任何一个节点能满足 Pod 的资源请求, +那么这个 Pod 将一直停留在未调度状态直到调度器能够找到合适的 Node。 调度器先在集群中找到一个 Pod 的所有可调度节点,然后根据一系列函数对这些可调度节点打分, -选出其中得分最高的 Node 来运行 Pod。之后,调度器将这个调度决定通知给 -kube-apiserver,这个过程叫做 _绑定_。 +选出其中得分最高的节点来运行 Pod。之后,调度器将这个调度决定通知给 +kube-apiserver,这个过程叫做 **绑定**。 -kube-scheduler 给一个 pod 做调度选择包含两个步骤: +kube-scheduler 给一个 Pod 做调度选择时包含两个步骤: 1. 过滤 2. 打分 @@ -127,17 +127,17 @@ resource requests. After this step, the node list contains any suitable Nodes; often, there will be more than one. 
If the list is empty, that Pod isn't (yet) schedulable. --> -过滤阶段会将所有满足 Pod 调度需求的 Node 选出来。 -例如,PodFitsResources 过滤函数会检查候选 Node 的可用资源能否满足 Pod 的资源请求。 -在过滤之后,得出一个 Node 列表,里面包含了所有可调度节点;通常情况下, -这个 Node 列表包含不止一个 Node。如果这个列表是空的,代表这个 Pod 不可调度。 +过滤阶段会将所有满足 Pod 调度需求的节点选出来。 +例如,PodFitsResources 过滤函数会检查候选节点的可用资源能否满足 Pod 的资源请求。 +在过滤之后,得出一个节点列表,里面包含了所有可调度节点;通常情况下, +这个节点列表包含不止一个节点。如果这个列表是空的,代表这个 Pod 不可调度。 -在打分阶段,调度器会为 Pod 从所有可调度节点中选取一个最合适的 Node。 +在打分阶段,调度器会为 Pod 从所有可调度节点中选取一个最合适的节点。 根据当前启用的打分规则,调度器会给每一个可调度节点进行打分。 -最后,kube-scheduler 会将 Pod 调度到得分最高的 Node 上。 -如果存在多个得分最高的 Node,kube-scheduler 会从中随机选取一个。 +最后,kube-scheduler 会将 Pod 调度到得分最高的节点上。 +如果存在多个得分最高的节点,kube-scheduler 会从中随机选取一个。 -1. [调度策略](/zh-cn/docs/reference/scheduling/policies) 允许你配置过滤的 _断言(Predicates)_ - 和打分的 _优先级(Priorities)_ 。 +1. [调度策略](/zh-cn/docs/reference/scheduling/policies) + 允许你配置过滤所用的 **断言(Predicates)** 和打分所用的 **优先级(Priorities)**。 2. [调度配置](/zh-cn/docs/reference/scheduling/config/#profiles) 允许你配置实现不同调度阶段的插件, - 包括:`QueueSort`, `Filter`, `Score`, `Bind`, `Reserve`, `Permit` 等等。 + 包括:`QueueSort`、`Filter`、`Score`、`Bind`、`Reserve`、`Permit` 等等。 你也可以配置 kube-scheduler 运行不同的配置文件。 ## {{% heading "whatsnext" %}} @@ -177,11 +177,19 @@ of the scheduler: * Learn about [configuring multiple schedulers](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/) * Learn about [topology management policies](/docs/tasks/administer-cluster/topology-manager/) * Learn about [Pod Overhead](/docs/concepts/scheduling-eviction/pod-overhead/) ---> -* 阅读关于 [调度器性能调优](/zh-cn/docs/concepts/scheduling-eviction/scheduler-perf-tuning/) -* 阅读关于 [Pod 拓扑分布约束](/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/) -* 阅读关于 kube-scheduler 的 [参考文档](/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/) +* Learn about scheduling of Pods that use volumes in: + * [Volume Topology Support](/docs/concepts/storage/storage-classes/#volume-binding-mode) + * [Storage Capacity Tracking](/docs/concepts/storage/storage-capacity/) + * [Node-specific Volume Limits](/docs/concepts/storage/storage-limits/) +--> +* 阅读关于[调度器性能调优](/zh-cn/docs/concepts/scheduling-eviction/scheduler-perf-tuning/) +* 阅读关于 [Pod 拓扑分布约束](/docs/concepts/scheduling-eviction/topology-spread-constraints/) +* 阅读关于 kube-scheduler 的[参考文档](/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/) * 阅读 [kube-scheduler 配置参考 (v1beta3)](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/) -* 了解关于 [配置多个调度器](/zh-cn/docs/tasks/extend-kubernetes/configure-multiple-schedulers/) 的方式 -* 了解关于 [拓扑结构管理策略](/zh-cn/docs/tasks/administer-cluster/topology-manager/) -* 了解关于 [Pod 额外开销](/zh-cn/docs/concepts/scheduling-eviction/pod-overhead/) +* 了解关于[配置多个调度器](/zh-cn/docs/tasks/extend-kubernetes/configure-multiple-schedulers/) 的方式 +* 了解关于[拓扑结构管理策略](/zh-cn/docs/tasks/administer-cluster/topology-manager/) +* 了解关于 [Pod 开销](/zh-cn/docs/concepts/scheduling-eviction/pod-overhead/) +* 了解关于如何在以下情形使用卷来调度 Pod: + * [卷拓扑支持](/zh-cn/docs/concepts/storage/storage-classes/#volume-binding-mode) + * [存储容量跟踪](/zh-cn/docs/concepts/storage/storage-capacity/) + * [特定于节点的卷数限制](/zh-cn/docs/concepts/storage/storage-limits/) From 5782342f6dd06f0e6dffbd937ab6829b1c62e1b4 Mon Sep 17 00:00:00 2001 From: ydFu Date: Mon, 18 Jul 2022 14:40:14 +0800 Subject: [PATCH 192/292] [zh-cn] resync workload-resources/pod-v1.md Signed-off-by: ydFu Co-authored-by: Qiming Teng --- .../workload-resources/pod-v1.md | 6944 +++++++++++++++++ 1 file changed, 6944 insertions(+) create mode 100644 
content/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1.md diff --git a/content/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1.md b/content/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1.md new file mode 100644 index 0000000000000..1e64fd7b0ae50 --- /dev/null +++ b/content/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1.md @@ -0,0 +1,6944 @@ +--- +api_metadata: + apiVersion: "v1" + import: "k8s.io/api/core/v1" + kind: "Pod" +content_type: "api_reference" +description: "Pod 是可以在主机上运行的容器的集合。" +title: "Pod" +weight: 1 +--- + + +`apiVersion: v1` + +`import "k8s.io/api/core/v1"` + +## Pod {#Pod} + + +Pod 是可以在主机上运行的容器的集合。此资源由客户端创建并调度到主机上。 + +
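下面用一份最小的 Pod 清单作一个简要示意,展示下文将逐一说明的顶级字段(apiVersion、kind、metadata、spec)如何组合成一个完整对象;其中的名称、标签和镜像均为假设的示例值,实际取值应根据你的工作负载调整。

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo            # 假设的示例名称
  labels:
    app: nginx-demo
spec:
  restartPolicy: Always       # Pod 级别的重启策略,详见下文 PodSpec
  containers:                 # 必需:Pod 中至少包含一个容器
  - name: nginx
    image: nginx:1.22         # 假设的示例镜像
    ports:
    - containerPort: 80
```

可以使用 `kubectl apply -f <文件名>.yaml` 创建上述示例 Pod;spec 与 status 的详细字段说明见下文。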
+ +- **apiVersion**: v1 + +- **kind**: Pod + + +- **metadata** (}}">ObjectMeta) + + 标准的对象元数据。更多信息: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata + + +- **spec** (}}">PodSpec) + + 对 Pod 预期行为的规约。更多信息: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + + +- **status** (}}">PodStatus) + + 最近观察到的 Pod 状态。这些数据可能不是最新的。由系统填充。只读。更多信息: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + +## PodSpec {#PodSpec} + + +PodSpec 是对 Pod 的描述。 + +
+ + +### 容器 {#containers} + + +- **containers** ([]}}">Container),必需 + + **补丁策略:基于 `name` 键合并** + + 属于 Pod 的容器列表。当前无法添加或删除容器。Pod 中必须至少有一个容器。无法更新。 + + +- **initContainers** ([]}}">Container) + + **补丁策略:基于 `name` 键合并** + + 属于 Pod 的 Init 容器列表。Init 容器在容器启动之前按顺序执行。 + 如果任何一个 Init 容器发生故障,则认为该 Pod 失败,并根据其 restartPolicy 处理。 + Init 容器或普通容器的名称在所有容器中必须是唯一的。 + Init 容器不可以有生命周期操作、就绪态探针、存活态探针或启动探针。 + 在调度过程中会考虑 Init 容器的资源需求,方法是查找每种资源类型的最高请求/限制, + 然后使用该值的最大值或正常容器的资源请求的总和。 + 对资源限制以类似的方式应用于 Init 容器。当前无法添加或删除 Init 容器。无法更新。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/workloads/pods/init-containers/ + + +- **imagePullSecrets** ([]}}">LocalObjectReference) + + **补丁策略:基于 `name` 键合并** + + imagePullSecrets 是对同一名字空间中 Secret 的引用的列表,用于拉取此 Pod 规约中使用的任何镜像,此字段可选。 + 如果指定,这些 Secret 将被传递给各个镜像拉取组件(Puller)实现供其使用。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod + + +- **enableServiceLinks** (boolean) + + enableServiceLinks 指示是否应将有关服务的信息注入到 Pod 的环境变量中,服务连接的语法与 + Docker links 的语法相匹配。可选。默认为 true。 + + +- **os** (PodOS) + + 指定 Pod 中容器的操作系统。如果设置了此属性,则某些 Pod 和容器字段会受到限制。 + + 如果 os 字段设置为 `linux`,则必须不能设置以下字段: + + - `securityContext.windowsOptions` + + + 如果 os 字段设置为 `windows`,则必须不能设置以下字段: + + - `spec.hostPID` + - `spec.hostIPC` + - `spec.securityContext.seLinuxOptions` + - `spec.securityContext.seccompProfile` + - `spec.securityContext.fsGroup` + - `spec.securityContext.fsGroupChangePolicy` + - `spec.securityContext.sysctls` + - `spec.shareProcessNamespace` + - `spec.securityContext.runAsUser` + - `spec.securityContext.runAsGroup` + - `spec.securityContext.supplementalGroups` + - `spec.containers[*].securityContext.seLinuxOptions` + - `spec.containers[*].securityContext.seccompProfile` + - `spec.containers[*].securityContext.capabilities` + - `spec.containers[*].securityContext.readOnlyRootFilesystem` + - `spec.containers[*].securityContext.privileged` + - `spec.containers[*].securityContext.allowPrivilegeEscalation` + - `spec.containers[*].securityContext.procMount` + - `spec.containers[*].securityContext.runAsUser` + - `spec.containers[*].securityContext.runAsGroup` + + 此字段为 Beta 字段,需要启用 `IdentifyPodOS` 特性门控。 + + + + **PodOS 定义一个 Pod 的操作系统参数。** + + + + - **os.name** (string),必需 + + name 是操作系统的名称。当前支持的值是 `linux` 和 `windows`。 + 将来可能会定义附加值,并且可以是以下之一: + https://github.com/opencontainers/runtime-spec/blob/master/config.md#platform-specific-configuration + 客户端应该期望处理附加值并将此字段无法识别时视其为 `os: null`。 + + +### 卷 + + +- **volumes** ([]}}">Volume) + + **补丁策略:retainKeys,基于键 `name` 合并** + + 可以由属于 Pod 的容器挂载的卷列表。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/storage/volumes + + +### 调度 + + +- **nodeSelector** (map[string]string) + + nodeSelector 是一个选择算符,这些算符必须取值为 true 才能认为 Pod 适合在节点上运行。 + 选择算符必须与节点的标签匹配,以便在该节点上调度 Pod。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/configuration/assign-pod-node/ + + +- **nodeName** (string) + + nodeName 是将此 Pod 调度到特定节点的请求。 + 如果字段值不为空,调度器只是直接将这个 Pod 调度到所指定节点上,假设节点符合资源要求。 + + +- **affinity** (Affinity) + + 如果指定了,则作为 Pod 的调度约束。 + + + **Affinity 是一组亲和性调度规则。** + + + + - **affinity.nodeAffinity** (}}">NodeAffinity) + + 描述 Pod 的节点亲和性调度规则。 + + - **affinity.podAffinity** (}}">PodAffinity) + + 描述 Pod 亲和性调度规则(例如,将此 Pod 与其他一些 Pod 放在同一节点、区域等)。 + + - **affinity.podAntiAffinity** (}}">PodAntiAffinity) + + 描述 Pod 反亲和性调度规则(例如,避免将此 Pod 与其他一些 Pod 放在相同的节点、区域等)。 + + +- **tolerations** ([]Toleration) + + 如果设置了此字段,则作为 Pod 的容忍度。 + + + **这个 Toleration 所附加到的 Pod 能够容忍任何使用匹配运算符 `` 匹配三元组 `` 所得到的污点。** + + + + - **tolerations.key** (string) + + key 是容忍度所适用的污点的键名。此字段为空意味着匹配所有的污点键。 + 如果 key 
为空,则 operator 必须为 `Exists`;这种组合意味着匹配所有值和所有键。 + + + + - **tolerations.operator** (string) + + operator 表示 key 与 value 之间的关系。有效的 operator 取值是 `Exists` 和 `Equal`。默认为 `Equal`。 + `Exists` 相当于 value 为某种通配符,因此 Pod 可以容忍特定类别的所有污点。 + + + + - **tolerations.value** (string) + + value 是容忍度所匹配的污点值。如果 operator 为 `Exists`,则此 value 值应该为空, + 否则 value 值应该是一个正常的字符串。 + + + + - **tolerations.effect** (string) + + effect 指示要匹配的污点效果。空值意味著匹配所有污点效果。如果要设置此字段,允许的值为 + `NoSchedule`、`PreferNoSchedule` 和 `NoExecute` 之一。 + + + + - **tolerations.tolerationSeconds** (int64) + + tolerationSeconds 表示容忍度(effect 必须是 `NoExecute`,否则此字段被忽略)容忍污点的时间长度。 + 默认情况下,此字段未被设置,这意味着会一直能够容忍对应污点(不会发生驱逐操作)。 + 零值和负值会被系统当做 0 值处理(立即触发驱逐)。 + + +- **schedulerName** (string) + + 如果设置了此字段,则 Pod 将由指定的调度器调度。如果未指定,则使用默认调度器来调度 Pod。 + + +- **runtimeClassName** (string) + + runtimeClassName 引用 `node.k8s.io` 组中的一个 RuntimeClass 对象,该 RuntimeClass 将被用来运行这个 Pod。 + 如果没有 RuntimeClass 资源与所设置的类匹配,则 Pod 将不会运行。 + 如果此字段未设置或为空,将使用 "旧版" RuntimeClass。 + "旧版" RuntimeClass 可以视作一个隐式的运行时类,其定义为空,会使用默认运行时处理程序。 + 更多信息: + https://git.k8s.io/enhancements/keps/sig-node/585-runtime-class + + +- **priorityClassName** (string) + + 如果设置了此字段,则用来标明 Pod 的优先级。 + `"system-node-critical"` 和 `"system-cluster-critical"` 是两个特殊关键字, + 分别用来表示两个最高优先级,前者优先级更高一些。 + 任何其他名称都必须通过创建具有该名称的 PriorityClass 对象来定义。 + 如果未指定此字段,则 Pod 优先级将为默认值。如果没有默认值,则为零。 + + +- **priority** (int32) + + 优先级值。各种系统组件使用该字段来确定 Pod 的优先级。当启用 Priority 准入控制器时, + 该控制器会阻止用户设置此字段。准入控制器基于 priorityClassName 设置来填充此字段。 + 字段值越高,优先级越高。 + + +- **topologySpreadConstraints** ([]TopologySpreadConstraint) + + **补丁策略:基于 `topologyKey` 键合并** + + **映射:`topologyKey, whenUnsatisfiable` 键组合的唯一值 將在合并期间保留** + + TopologySpreadConstraints 描述一组 Pod 应该如何跨拓扑域来分布。调度器将以遵从此约束的方式来调度 Pod。 + 所有 topologySpreadConstraints 条目会通过逻辑与操作进行组合。 + + + **TopologySpreadConstraint 指定如何在规定的拓扑下分布匹配的 Pod。** + + + + - **topologySpreadConstraints.maxSkew** (int32),必需 + + maxSkew 描述 Pod 可能分布不均衡的程度。当 `whenUnsatisfiable=DoNotSchedule` 时, + 此字段值是目标拓扑中匹配的 Pod 数量与全局最小值之间的最大允许差值。 + 全局最小值是候选域中匹配 Pod 的最小数量,如果候选域的数量小于 `minDomains`,则为零。 + 例如,在一个包含三个可用区的集群中,maxSkew 设置为 1,具有相同 `labelSelector` 的 Pod 分布为 2/2/1: + 在这种情况下,全局最小值为 1。 + + ``` + | zone1 | zone2 | zone3 | + | PP | PP | P | + ``` + + - 如果 maxSkew 为 1,传入的 Pod 只能调度到 "zone3",变成 2/2/2; + 将其调度到 "zone1"("zone2")将使"zone1"("zone2")上的实际偏差(Actual Skew)为 3-1,进而违反 + maxSkew 限制(1)。 + - 如果 maxSkew 为 2,则可以将传入的 Pod 调度到任何区域。 + + 当 `whenUnsatisfiable=ScheduleAnyway` 时,此字段被用来给满足此约束的拓扑域更高的优先级。 + + 此字段是一个必填字段。默认值为 1,不允许为 0。 + + + + - **topologySpreadConstraints.topologyKey** (string),必需 + + topologyKey 是节点标签的键名。如果节点的标签中包含此键名且键值亦相同,则被认为在相同的拓扑域中。 + 我们将每个 `<键, 值>` 视为一个 "桶(Bucket)",并尝试将数量均衡的 Pod 放入每个桶中。 + 我们定义域(Domain)为拓扑域的特定实例。此外,我们定义候选域(Eligible Domain)为其节点与节点选择算符匹配的域。 + 例如,如果 topologyKey 是 `"kubernetes.io/hostname"`,则每个 Node 都是该拓扑的域。 + 而如果 topologyKey 是 `"topology.kubernetes.io/zone"`,则每个区域都是该拓扑的一个域。 + 这是一个必填字段。 + + + + - **topologySpreadConstraints.whenUnsatisfiable** (string),必需 + + whenUnsatisfiable 表示如果 Pod 不满足分布约束,如何处理它。 + + - `DoNotSchedule`(默认):告诉调度器不要调度它。 + - `ScheduleAnyway`:告诉调度器将 Pod 调度到任何位置,但给予能够降低偏差的拓扑更高的优先级。 + + 当且仅当该 Pod 的每个可能的节点分配都会违反某些拓扑对应的 "maxSkew" 时, + 才认为传入 Pod 的约束是 "不可满足的"。 + + 例如,在一个包含三个区域的集群中,maxSkew 设置为 1,具有相同 labelSelector 的 Pod 分布为 3/1/1: + + ``` + | zone1 | zone2 | zone3 | + | P P P | P | P | + ``` + + 如果 whenUnsatisfiable 设置为 `DoNotSchedule`,则传入的 Pod 只能调度到 "zone2"("zone3"), + Pod 分布变成 3/2/1(3/1/2),因为 "zone2"("zone3")上的实际偏差(Actual Skew) 为 2-1, + 满足 maxSkew 约束(1)。 + 换句话说,集群仍然可以不平衡,但调度器不会使其**更加地**不平衡。 + + 这是一个必填字段。 + + + + - 
**topologySpreadConstraints.labelSelector** (}}">LabelSelector) + + labelSelector 用于识别匹配的 Pod。对匹配此标签选择算符的 Pod 进行计数, + 以确定其相应拓扑域中的 Pod 数量。 + + + + - **topologySpreadConstraints.minDomains** (int32) + + minDomains 表示符合条件的域的最小数量。当符合拓扑键的候选域个数小于 minDomains 时, + Pod 拓扑分布特性会将 "全局最小值" 视为 0,然后进行偏差的计算。 + 当匹配拓扑键的候选域的数量等于或大于 minDomains 时,此字段的值对调度没有影响。 + 因此,当候选域的数量少于 minDomains 时,调度程序不会将超过 maxSkew 个 Pods 调度到这些域。 + 如果字段值为 nil,所表达的约束为 minDomains 等于 1。 + 字段的有效值为大于 0 的整数。当字段值不为 nil 时,whenUnsatisfiable 必须为 `DoNotSchedule`。 + + 例如,在一个包含三个区域的集群中,maxSkew 设置为 2,minDomains 设置为 5,具有相同 labelSelector + 的 Pod 分布为 2/2/2: + + ``` + | zone1 | zone2 | zone3 | + | PP | PP | PP | + ``` + + 域的数量小于 5(minDomains 取值),因此"全局最小值"被视为 0。 + 在这种情况下,无法调度具有相同 labelSelector 的新 Pod,因为如果基于新 Pod 计算的偏差值将为 + 3(3-0)。将这个 Pod 调度到三个区域中的任何一个,都会违反 maxSkew 约束。 + + 此字段是一个 Alpha 字段,需要启用 MinDomainsInPodTopologySpread 特性门控。 + + + +### 生命周期 + + +- **restartPolicy** (string) + + Pod 内所有容器的重启策略。`Always`、`OnFailure`、`Never` 之一。默认为 `Always`。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy + + +- **terminationGracePeriodSeconds** (int64) + + 可选字段,表示 Pod 需要体面终止的所需的时长(以秒为单位)。字段值可以在删除请求中减少。 + 字段值必须是非负整数。零值表示收到 kill 信号则立即停止(没有机会关闭)。 + 如果此值为 nil,则将使用默认宽限期。 + 宽限期是从 Pod 中运行的进程收到终止信号后,到进程被 kill 信号强制停止之前,Pod 可以继续存在的时间(以秒为单位)。 + 应该将此值设置为比你的进程的预期清理时间更长。默认为 30 秒。 + + +- **activeDeadlineSeconds** (int64) + + 在系统将主动尝试将此 Pod 标记为已失败并杀死相关容器之前,Pod 可能在节点上活跃的时长; + 市场计算基于 startTime 计算间(以秒为单位)。字段值必须是正整数。 + + +- **readinessGate** ([]PodReadinessGate) + + 如果设置了此字段,则将评估所有就绪门控(Readiness Gate)以确定 Pod 就绪状况。 + 当所有容器都已就绪,并且就绪门控中指定的所有状况的 status 都为 "true" 时,Pod 被视为就绪。 + 更多信息: https://git.k8s.io/enhancements/keps/sig-network/580-pod-readiness-gates + + + **PodReadinessGate 包含对 Pod 状况的引用** + + + + - **readinessGates.conditionType** (string),必需 + + conditionType 是指 Pod 的状况列表中类型匹配的状况。 + + +### 主机名和名称解析 + + +- **hostname** (string) + + 指定 Pod 的主机名。如果此字段未指定,则 Pod 的主机名将设置为系统定义的值。 + + +- **setHostnameAsFQDN** (boolean) + + 如果为 true,则 Pod 的主机名将配置为 Pod 的 FQDN,而不是叶名称(默认值)。 + 在 Linux 容器中,这意味着将内核的 hostname 字段(struct utsname 的 nodename 字段)设置为 FQDN。 + 在 Windows 容器中,这意味着将注册表项 `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters` + 的 hostname 键设置为 FQDN。如果 Pod 没有 FQDN,则此字段不起作用。 + 默认为 false。 + + +- **subdomain** (string) + + 如果设置了此字段,则完全限定的 Pod 主机名将是 `...svc.<集群域名>`。 + 如果未设置此字段,则该 Pod 将没有域名。 + + +- **hostAliases** ([]HostAlias) + + **补丁策略:基于 `ip` 键合并** + + hostAliases 是一个可选的列表属性,包含要被注入到 Pod 的 hosts 文件中的主机和 IP 地址。 + 这仅对非 hostNetwork Pod 有效。 + + + **HostAlias 结构保存 IP 和主机名之间的映射,这些映射将作为 Pod 的 hosts 文件中的条目注入。** + + + + - **hostAliases.hostnames** ([]string) + + 指定 IP 地址对应的主机名。 + + - **hostAliases.ip** (string) + + 主机文件条目的 IP 地址。 + + +- **dnsConfig** (PodDNSConfig) + + 指定 Pod 的 DNS 参数。此处指定的参数将被合并到基于 dnsPolicy 生成的 DNS 配置中。 + + + + **PodDNSConfig 定义 Pod 的 DNS 参数,这些参数独立于基于 dnsPolicy 生成的参数。** + + + + - **dnsConfig.nameservers** ([]string) + + DNS 名字服务器的 IP 地址列表。此列表将被追加到基于 dnsPolicy 生成的基本名字服务器列表。 + 重复的名字服务器将被删除。 + + + + - **dnsConfig.options** ([]PodDNSConfigOption) + + DNS 解析器选项列表。此处的选项将与基于 dnsPolicy 所生成的基本选项合并。重复的条目将被删除。 + options 中所给出的解析选项将覆盖基本 dnsPolicy 中出现的对应选项。 + + + + + **PodDNSConfigOption 定义 Pod 的 DNS 解析器选项。** + + + + - **dnsConfig.options.name** (string) + + 必需字段。 + + - **dnsConfig.options.value** (string) + + 选项取值。 + + + + - **dnsConfig.searches** ([]string) + + 用于主机名查找的 DNS 搜索域列表。这一列表将被追加到基于 dnsPolicy 生成的基本搜索路径列表。 + 重复的搜索路径将被删除。 + + +- **dnsPolicy** (string) + + 为 Pod 设置 DNS 策略。默认为 `"ClusterFirst"`。 + 有效值为 
`"ClusterFirstWithHostNet"`、`"ClusterFirst"`、`"Default"` 或 `"None"`。 + dnsConfig 字段中给出的 DNS 参数将与使用 dnsPolicy 字段所选择的策略合并。 + 要针对 hostNetwork 的 Pod 设置 DNS 选项,你必须将 DNS 策略显式设置为 `"ClusterFirstWithHostNet"`。 + + +### 主机名字空间 + + +- **hostNetwork** (boolean) + + 为此 Pod 请求主机层面联网支持。使用主机的网络名字空间。 + 如果设置了此选项,则必须指定将使用的端口。默认为 false。 + +- **hostPID** (boolean) + + 使用主机的 PID 名字空间。可选:默认为 false。 + + +- **hostIPC** (boolean) + + 使用主机的 IPC 名字空间。可选:默认为 false。 + +- **shareProcessNamespace** (boolean) + + 在 Pod 中的所有容器之间共享单个进程名字空间。设置了此字段之后,容器将能够查看来自同一 Pod 中其他容器的进程并发出信号, + 并且每个容器中的第一个进程不会被分配 PID 1。`hostPID` 和 `shareProcessNamespace` 不能同时设置。 + 可选:默认为 false。 + + +### 服务账号 + + +- **serviceAccountName** (string) + + serviceAccountName 是用于运行此 Pod 的服务账号的名称。更多信息: + https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/configure-service-account/ + +- **automountServiceAccountToken** (boolean) + + automountServiceAccountToken 指示是否应自动挂载服务帐户令牌。 + + +### 安全上下文 + + + +- **securityContext** (PodSecurityContext) + + SecurityContext 包含 Pod 级别的安全属性和常见的容器设置。 + 可选:默认为空。每个字段的默认值见类型描述。 + + + + + PodSecurityContext 包含 Pod 级别的安全属性和常用容器设置。 + 一些字段也存在于 `container.securityContext` 中。`container.securityContext` + 中的字段值优先于 PodSecurityContext 的字段值。 + + + + - **securityContext.runAsUser** (int64) + + 运行容器进程入口点(Entrypoint)的 UID。如果未指定,则默认为镜像元数据中指定的用户。 + 也可以在 SecurityContext 中设置。 + 如果同时在 SecurityContext 和 PodSecurityContext 中设置,则在对应容器中所设置的 SecurityContext 值优先。 + 注意,`spec.os.name` 为 "windows" 时不能设置此字段。 + + + + - **securityContext.runAsNonRoot** (boolean) + + 指示容器必须以非 root 用户身份运行。如果为 true,kubelet 将在运行时验证镜像, + 以确保它不会以 UID 0(root)身份运行。如果镜像中确实使用 root 账号启动,则容器无法被启动。 + 如果此字段未设置或为 false,则不会执行此类验证。也可以在 SecurityContext 中设置。 + 如果同时在 SecurityContext 和 PodSecurityContext 中设置,则在 SecurityContext 中指定的值优先。 + + + + - **securityContext.runAsGroup** (int64) + + 运行容器进程入口点(Entrypoint)的 GID。如果未设置,则使用运行时的默认值。 + 也可以在 SecurityContext 中设置。如果同时在 SecurityContext 和 PodSecurityContext 中设置, + 则在对应容器中设置的 SecurityContext 值优先。 + 注意,`spec.os.name` 为 "windows" 时不能设置该字段。 + + + + - **securityContext.supplementalGroups** ([]int64) + + 在容器的主 GID 之外,应用于每个容器中运行的第一个进程的组列表。 + 如果未设置此字段,则不会向任何容器添加额外的组。 + 注意,`spec.os.name` 为 "windows" 时不能设置此字段。 + + + + - **securityContext.fsGroup** (int64) + + 应用到 Pod 中所有容器的特殊补充组。某些卷类型允许 kubelet 将该卷的所有权更改为由 Pod 拥有: + + 1. 文件系统的属主 GID 将是 fsGroup 字段值 + 2. `setgid` 位已设置(在卷中创建的新文件将归 fsGroup 所有) + 3. 
权限位将与 `rw-rw----` 进行按位或操作 + + 如果未设置此字段,kubelet 不会修改任何卷的所有权和权限。 + 注意,`spec.os.name` 为 "windows" 时不能设置此字段。 + + + + - **securityContext.fsGroupChangePolicy** (string) + + fsGroupChangePolicy 定义了在卷被在 Pod 中暴露之前更改其属主和权限的行为。 + 此字段仅适用于支持基于 fsGroup 的属主权(和权限)的卷类型。它不会影响临时卷类型, + 例如:`secret`、`configmap` 和 `emptydir`。 + 有效值为 `"OnRootMismatch"` 和 `"Always"`。如果未设置,则使用 `"Always"`。 + 注意,`spec.os.name` 为 "windows" 时不能设置此字段。 + + + + - **securityContext.seccompProfile** (SeccompProfile) + + 此 Pod 中的容器使用的 seccomp 选项。注意,`spec.os.name` 为 "windows" 时不能设置此字段。 + + + + **SeccompProfile 定义 Pod 或容器的 seccomp 配置文件设置。只能设置一个配置文件源。** + + + + - **securityContext.seccompProfile.type** (string),必需 + + type 标明将应用哪种 seccomp 配置文件。有效的选项有: + + - `Localhost` - 应使用在节点上的文件中定义的配置文件。 + - `RuntimeDefault` - 应使用容器运行时默认配置文件。 + - `Unconfined` - 不应应用任何配置文件。 + + + + - **securityContext.seccompProfile.localhostProfile** (string) + + localhostProfile 指示应使用在节点上的文件中定义的配置文件。该配置文件必须在节点上预先配置才能工作。 + 必须是相对于 kubelet 配置的 seccomp 配置文件位置的下降路径。 + 仅当 type 为 `"Localhost"` 时才必须设置。 + + + + - **securityContext.seLinuxOptions** (SELinuxOptions) + + 应用于所有容器的 SELinux 上下文。如果未设置,容器运行时将为每个容器分配一个随机 SELinux 上下文。 + 也可以在 SecurityContext 中设置。 + 如果同时在 SecurityContext 和 PodSecurityContext 中设置,则在对应容器中设置的 SecurityContext 值优先。 + 注意,`spec.os.name` 为 "windows" 时不能设置该字段。 + + + + + **SELinuxOptions 是要应用于容器的标签** + + + + - **securityContext.seLinuxOptions.level** (string) + + level 是应用于容器的 SELinux 级别标签。 + + - **securityContext.seLinuxOptions.role** (string) + + role 是应用于容器的 SELinux 角色标签。 + + - **securityContext.seLinuxOptions.type** (string) + + type 是适用于容器的 SELinux 类型标签。 + + - **securityContext.seLinuxOptions.user** (string) + + user 是应用于容器的 SELinux 用户标签。 + + + + - **securityContext.sysctls** ([]Sysctl) + + sysctls 包含用于 Pod 的名字空间 sysctl 列表。具有不受(容器运行时)支持的 sysctl 的 Pod 可能无法启动。 + 注意,`spec.os.name` 为 "windows" 时不能设置此字段。 + + + + + **Sysctl 定义要设置的内核参数** + + + + - **securityContext.sysctls.name** (string),必需 + + 要设置的属性的名称。 + + - **securityContext.sysctls.value** (string),必需 + + 要设置的属性值。 + + + + - **securityContext.windowsOptions** (WindowsSecurityContextOptions) + + 要应用到所有容器上的、特定于 Windows 的设置。 + 如果未设置此字段,将使用容器的 SecurityContext 中的选项。 + 如果同时在 SecurityContext 和 PodSecurityContext 中设置,则在 SecurityContext 中指定的值优先。 + 注意,`spec.os.name` 为 "linux" 时不能设置该字段。 + + + + + **WindowsSecurityContextOptions 包含特定于 Windows 的选项和凭据。** + + + + - **securityContext.windowsOptions.gmsaCredentialSpec** (string) + + gmsaCredentialSpec 是 [GMSA 准入 Webhook](https://github.com/kubernetes-sigs/windows-gmsa) + 内嵌由 gmsaCredentialSpecName 字段所指定的 GMSA 凭证规约内容的地方。 + + - **securityContext.windowsOptions.gmsaCredentialSpecName** (string) + + gmsaCredentialSpecName 是要使用的 GMSA 凭证规约的名称。 + + + + - **securityContext.windowsOptions.hostProcess** (boolean) + + hostProcess 确定容器是否应作为"主机进程"容器运行。 + 此字段是 Alpha 级别的,只有启用 WindowsHostProcessContainers 特性门控的组件才会理解此字段。 + 在不启用该功能门控的前提下设置了此字段,将导致验证 Pod 时发生错误。 + 一个 Pod 的所有容器必须具有相同的有效 hostProcess 值(不允许混合设置了 hostProcess + 的容器和未设置 hostProcess 容器)。 + 此外,如果 hostProcess 为 true,则 hostNetwork 也必须设置为 true。 + + + + - **securityContext.windowsOptions.runAsUserName** (string) + + Windows 中用来运行容器进程入口点的用户名。如果未设置,则默认为镜像元数据中指定的用户。 + 也可以在 PodSecurityContext 中设置。 + 如果同时在 SecurityContext 和 PodSecurityContext 中设置,则在 SecurityContext 中指定的值优先。 + + + +### Beta 级别 + + + +- **ephemeralContainers** ([]}}">EphemeralContainer) + + **补丁策略:基于 `name` 键合并** + + 在此 Pod 中运行的临时容器列表。临时容器可以在现有的 Pod 中运行,以执行用户发起的操作,例如调试。 + 此列表在创建 Pod 时不能指定,也不能通过更新 Pod 规约来修改。 + 要将临时容器添加到现有 Pod,请使用 Pod 的 `ephemeralcontainers` 子资源。 + 此字段是 Beta 级别的,可在尚未禁用 
EphemeralContainers 特性门控的集群上使用。 + + + +- **preemptionPolicy** (string) + + PreemptionPolicy 是用来抢占优先级较低的 Pod 的策略。取值为 `"Never"`、`"PreemptLowerPriority"` 之一。 + 如果未设置,则默认为 `"PreemptLowerPriority"`。 + + + +- **overhead** (map[string]}}">Quantity) + + overhead 表示与用指定 RuntimeClass 运行 Pod 相关的资源开销。该字段将由 RuntimeClass 准入控制器在准入时自动填充。 + 如果启用了 RuntimeClass 准入控制器,则不得在 Pod 创建请求中设置 overhead 字段。 + RuntimeClass 准入控制器将拒绝已设置 overhead 字段的 Pod 创建请求。 + 如果在 Pod 规约中配置并选择了 RuntimeClass,overhead 字段将被设置为对应 RuntimeClass + 中定义的值,否则将保持未设置并视为零。更多信息: + https://git.k8s.io/enhancements/keps/sig-node/688-pod-overhead/README.md + + + +### 已弃用的 + +- **serviceAccount** (string) + + deprecatedServiceAccount 是 serviceAccountName 的弃用别名。此字段已被弃用:应改用 serviceAccountName。 + + + +## Container + +要在 Pod 中运行的单个应用容器。 + +
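作为一个简要示意,下面给出 `spec.containers` 中一个容器条目的常见写法,把本节随后描述的名称、镜像、命令与参数、环境变量、端口等字段组合在一起;其中的取值均为假设的示例值。

```yaml
containers:
- name: app                    # DNS_LABEL,在同一 Pod 内必须唯一
  image: busybox:1.36          # 假设的示例镜像
  imagePullPolicy: IfNotPresent
  command: ["sh", "-c"]        # 覆盖镜像的 ENTRYPOINT
  args: ["echo $(MESSAGE); sleep 3600"]
  env:
  - name: MESSAGE              # 环境变量名必须是 C_IDENTIFIER
    value: "hello"
  ports:
  - name: http
    containerPort: 8080
    protocol: TCP
```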
+ +- **name** (string),必需 + + 指定为 DNS_LABEL 的容器的名称。Pod 中的每个容器都必须有一个唯一的名称 (DNS_LABEL)。无法更新。 + + +### 镜像 {#image} + + + +- **image** (string) + + 容器镜像名称。更多信息: https://kubernetes.io/zh-cn/docs/concepts/containers/images。 + 此字段是可选的,以允许更高层的配置管理进行默认设置或覆盖工作负载控制器(如 Deployment 和 StatefulSets) + 中的容器镜像。 + +- **imagePullPolicy** (string) + + 镜像拉取策略。`"Always"`、`"Never"`、`"IfNotPresent"` 之一。如果指定了 `:latest` 标签,则默认为 `"Always"`, + 否则默认为 `"IfNotPresent"`。无法更新。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/containers/images#updating-images + + + +### Entrypoint + + + +- **command** ([]string) + + 入口点数组。不在 Shell 中执行。如果未提供,则使用容器镜像的 `ENTRYPOINT`。 + 变量引用 `$(VAR_NAME)` 使用容器的环境进行扩展。如果无法解析变量,则输入字符串中的引用将保持不变。 + `$$` 被简化为 `$`,这允许转义 `$(VAR_NAME)` 语法:即 `"$$(VAR_NAME)" ` 将产生字符串字面值 `"$(VAR_NAME)"`。 + 无论变量是否存在,转义引用都不会被扩展。无法更新。更多信息: + https://kubernetes.io/zh-cn/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell + + + +- **args** ([]string) + + entrypoint 的参数。如果未提供,则使用容器镜像的 `CMD` 设置。变量引用 `$(VAR_NAME)` 使用容器的环境进行扩展。 + 如果无法解析变量,则输入字符串中的引用将保持不变。`$$` 被简化为 `$`,这允许转义 `$(VAR_NAME)` 语法: + 即 `"$$(VAR_NAME)"` 将产生字符串字面值 `"$(VAR_NAME)"`。无论变量是否存在,转义引用都不会被扩展。无法更新。 + 更多信息: + https://kubernetes.io/zh-cn/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell + + + +- **workingDir** (string) + + 容器的工作目录。如果未指定,将使用容器运行时的默认值,默认值可能在容器镜像中配置。无法更新。 + + + +### 端口 + + + +- **ports**([]ContainerPort) + + **补丁策略:基于 `containerPort` 键合并** + + **映射:键 `containerPort, protocol` 组合的唯一值将在合并期间保留** + + 要从容器公开的端口列表。在此处公开端口可为系统提供有关容器使用的网络连接的附加信息,但主要是信息性的。 + 此处不指定端口不会阻止该端口被暴露。 + 任何侦听容器内默认 `"0.0.0.0"` 地址的端口都可以从网络访问。无法更新。 + + + **ContainerPort 表示单个容器中的网络端口。** + + + + - **ports.containerPort** (int32),必需 + + 要在 Pod 的 IP 地址上公开的端口号。这必须是有效的端口号,0 \< x \< 65536。 + + - **ports.hostIP** (string) + + 绑定外部端口的主机 IP。 + + + + - **ports.hostPort** (int32) + + 要在主机上公开的端口号。如果指定,此字段必须是一个有效的端口号,0 \< x \< 65536。 + 如果设置了 hostNetwork,此字段值必须与 containerPort 匹配。大多数容器不需要设置此字段。 + + - **ports.name** (string) + + 如果设置此字段,这必须是 IANA_SVC_NAME 并且在 Pod 中唯一。 + Pod 中的每个命名端口都必须具有唯一的名称。服务可以引用的端口的名称。 + + + + - **ports.protocol** (string) + + 端口协议。必须是 `UDP`、`TCP` 或 `SCTP`。默认为 `TCP`。 + + + +### 环境变量 + + + +- **env**([]EnvVar) + + **补丁策略:基于 `name` 键合并** + + 要在容器中设置的环境变量列表。无法更新。 + + + + **EnvVar 表示容器中存在的环境变量。** + + + + - **env.name** (string),必需 + + 环境变量的名称。必须是 C_IDENTIFIER。 + + + + - **env.value** (string) + + 变量引用 `$(VAR_NAME)` 使用容器中先前定义的环境变量和任何服务环境变量进行扩展。 + 如果无法解析变量,则输入字符串中的引用将保持不变。 + `$$` 会被简化为 `$`,这允许转义 `$(VAR_NAME)` 语法:即 `"$$(VAR_NAME)"` 将产生字符串字面值 `"$(VAR_NAME)"`。 + 无论变量是否存在,转义引用都不会被扩展。默认为 ""。 + + + + - **env.valueFrom** (EnvVarSource) + + 环境变量值的来源。如果 value 值不为空,则不能使用。 + + + + **EnvVarSource 表示 envVar 值的来源。** + + + + - **env.valueFrom.configMapKeyRef** (ConfigMapKeySelector) + + 选择某个 ConfigMap 的一个主键。 + + + + - **env.valueFrom.configMapKeyRef.key** (string),必需 + + 要选择的主键。 + + - **env.valueFrom.configMapKeyRef.name** (string) + + 被引用者的名称。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/names/#names + + - **env.valueFrom.configMapKeyRef.optional** (boolean) + + 指定 ConfigMap 或其主键是否必须已经定义。 + + + + - **env.valueFrom.fieldRef** (}}">ObjectFieldSelector) + + 选择 Pod 的一个字段:支持 `metadata.name`、`metadata.namespace`、`metadata.labels['']`、 + `metadata.annotations['']`、`spec.nodeName`、`spec.serviceAccountName`、`status.hostIP` + `status.podIP`、`status.podIPs`。 + + + + - **env.valueFrom.resourceFieldRef** (}}">ResourceFieldSelector) + + 
选择容器的资源:目前仅支持资源限制和请求(`limits.cpu`、`limits.memory`、`limits.ephemeral-storage`、 + `requests.cpu`、`requests.memory` 和 `requests.ephemeral-storage`)。 + + + + - **env.valueFrom.secretKeyRef** (SecretKeySelector) + + 在 Pod 的名字空间中选择 Secret 的主键。 + + + + + **SecretKeySelector 选择一个 Secret 的主键。** + + + + - **env.valueFrom.secretKeyRef.key** (string),必需 + + 要选择的 Secret 的主键。必须是有效的主键。 + + - **env.valueFrom.secretKeyRef.name** (string) + + 被引用 Secret 的名称。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/names/#names + + - **env.valueFrom.secretKeyRef.optional** (boolean) + + 指定 Secret 或其主键是否必须已经定义。 + + +- **envFrom** ([]EnvFromSource) + + 用来在容器中填充环境变量的数据源列表。在源中定义的键必须是 C_IDENTIFIER。 + 容器启动时,所有无效主键都将作为事件报告。 + 当一个键存在于多个源中时,与最后一个来源关联的值将优先。 + 由 env 定义的条目中,与此处键名重复者,以 env 中定义为准。无法更新。 + + + + **EnvFromSource 表示一组 ConfigMaps 的来源** + + + + - **envFrom.configMapRef** (ConfigMapEnvSource) + + 要从中选择主键的 ConfigMap。 + + + ConfigMapEnvSource 选择一个 ConfigMap 来填充环境变量。目标 ConfigMap 的 + data 字段的内容将键值对表示为环境变量。 + + + + - **envFrom.configMapRef.name** (string) + + 被引用的 ConfigMap 的名称。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/names/#names + + - **envFrom.configMapRef.optional** (boolean) + + 指定 ConfigMap 是否必须已经定义。 + + + + - **envFrom.prefix** (string) + + 附加到 ConfigMap 中每个键名之前的可选标识符。必须是 C_IDENTIFIER。 + + - **envFrom.secretRef** (SecretEnvSource) + + 要从中选择主键的 Secret。 + + SecretEnvSource 选择一个 Secret 来填充环境变量。 + 目标 Secret 的 data 字段的内容将键值对表示为环境变量。 + + + + - **envFrom.secretRef.name** (string) + + 被引用 Secret 的名称。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/names/#names + + - **envFrom.secretRef.optional** (boolean) + + 指定 Secret 是否必须已经定义。 + + + +### 卷 + + + +- **volumeMounts** ([]VolumeMount) + + **补丁策略:基于 `mountPath` 键合并** + + 要挂载到容器文件系统中的 Pod 卷。无法更新。 + + VolumeMount 描述在容器中安装卷。 + + + + - **volumeMounts.mountPath** (string),必需 + + 容器内卷的挂载路径。不得包含 ':'。 + + - **volumeMounts.name** (string),必需 + + 此字段必须与卷的名称匹配。 + + - **volumeMounts.mountPropagation** (string) + + mountPropagation 确定挂载如何从主机传播到容器,及如何反向传播。 + 如果未设置,则使用 `MountPropagationNone`。该字段在 1.10 中是 Beta 版。 + + + + - **volumeMounts.readOnly** (boolean) + + 如果为 true,则以只读方式挂载,否则(false 或未设置)以读写方式挂载。默认为 false。 + + - **volumeMounts.subPath** (boolean) + + 卷中的路径,容器中的卷应该这一路径安装。默认为 ""(卷的根)。 + + - **volumeMounts.subPathExpr** (string) + + 应安装容器卷的卷内的扩展路径。行为类似于 subPath,但环境变量引用 `$(VAR_NAME)` + 使用容器的环境进行扩展。默认为 ""(卷的根)。`subPathExpr` 和 `subPath` 是互斥的。 + + + +- **volumeDevices** ([]VolumeDevice) + + **补丁策略:基于 `devicePath` 键合并** + + volumeDevices 是容器要使用的块设备列表。 + + + volumeDevice 描述了容器内原始块设备的映射。 + + + + - **volumeDevices.devicePath** (string),必需 + + devicePath 是设备将被映射到的容器内的路径。 + + - **volumeDevices.name** (string),必需 + + name 必须与 Pod 中的 persistentVolumeClaim 的名称匹配 + + + +### 资源 + + + +- **resources**(ResourceRequirements) + + 此容器所需的计算资源。无法更新。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/configuration/manage-resources-containers/ + + ResourceRequirements 描述计算资源需求。 + + + + - **resources.limits** (map[string]}}">Quantity) + + limits 描述所允许的最大计算资源用量。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/configuration/manage-resources-containers/ + + - **resources.requests** (map[string]}}">Quantity) + + requests 描述所需的最小计算资源量。如果容器省略了 requests,但明确设定了 limits, + 则 requests 默认值为 limits 值,否则为实现定义的值。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/configuration/manage-resources-containers/ + + +### 生命周期 + + + +- **lifecycle** (Lifecycle) + + 管理系统应对容器生命周期事件采取的行动。无法更新。 + + Lifecycle 描述管理系统为响应容器生命周期事件应采取的行动。 + 对于 
postStart 和 preStop 生命周期处理程序,容器的管理会阻塞,直到操作完成, + 除非容器进程失败,在这种情况下处理程序被中止。 + + + + - **lifecycle.postStart** (}}">LifecycleHandler) + + 创建容器后立即调用 postStart。如果处理程序失败,则容器将根据其重新启动策略终止并重新启动。 + 容器的其他管理阻塞直到钩子完成。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/containers/container-lifecycle-hooks/#container-hooks + + + + - **lifecycle.preStop** (}}">LifecycleHandler) + + preStop 在容器因 API 请求或管理事件(如存活态探针/启动探针失败、抢占、资源争用等)而终止之前立即调用。 + 如果容器崩溃或退出,则不会调用处理程序。Pod 的终止宽限期倒计时在 preStop 钩子执行之前开始。 + 无论处理程序的结果如何,容器最终都会在 Pod 的终止宽限期内终止(除非被终结器延迟)。 + 容器的其他管理会阻塞,直到钩子完成或达到终止宽限期。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/containers/container-lifecycle-hooks/#container-hooks + + + +- **terminationMessagePath** (string) + + 可选字段。挂载到容器文件系统的一个路径,容器终止消息写入到该路径下的文件中。 + 写入的消息旨在成为简短的最终状态,例如断言失败消息。如果大于 4096 字节,将被节点截断。 + 所有容器的总消息长度将限制为 12 KB。默认为 `/dev/termination-log`。无法更新。 + + +- **terminationMessagePolicy** (string) + + 指示应如何填充终止消息。字段值 `File` 将使用 terminateMessagePath 的内容来填充成功和失败的容器状态消息。 + 如果终止消息文件为空并且容器因错误退出,`FallbackToLogsOnError` 将使用容器日志输出的最后一块。 + 日志输出限制为 2048 字节或 80 行,以较小者为准。默认为 `File`。无法更新。 + + +- **livenessProbe** (}}">Probe) + + 定期探针容器活跃度。如果探针失败,容器将重新启动。无法更新。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/workloads/pods/pod-lifecycle#container-probes + + +- **readinessProbe** (}}">Probe) + + 定期探测容器服务就绪情况。如果探针失败,容器将被从服务端点中删除。无法更新。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/workloads/pods/pod-lifecycle#container-probes + + +- **startupProbe** (}}">Probe) + + startupProbe 表示 Pod 已成功初始化。如果设置了此字段,则此探针成功完成之前不会执行其他探针。 + 如果这个探针失败,Pod 会重新启动,就像存活态探针失败一样。 + 这可用于在 Pod 生命周期开始时提供不同的探针参数,此时加载数据或预热缓存可能需要比稳态操作期间更长的时间。 + 这无法更新。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/workloads/pods/pod-lifecycle#container-probes + + +### 安全上下文 + + +- **securityContext** (SecurityContext) + + SecurityContext 定义了容器应该运行的安全选项。如果设置,SecurityContext 的字段将覆盖 + PodSecurityContext 的等效字段。更多信息: + https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/security-context/ + + SecurityContext 保存将应用于容器的安全配置。某些字段在 SecurityContext 和 PodSecurityContext 中都存在。 + 当两者都设置时,SecurityContext 中的值优先。 + + + + - **securityContext.runAsUser** (int64) + + 运行容器进程入口点的 UID。如果未指定,则默认为镜像元数据中指定的用户。 + 也可以在 PodSecurityContext 中设置。 + 如果同时在 SecurityContext 和 PodSecurityContext 中设置,则在 SecurityContext 中指定的值优先。 + 注意,`spec.os.name` 为 "windows" 时不能设置该字段。 + + + + - **securityContext.runAsNonRoot** (boolean) + + 指示容器必须以非 root 用户身份运行。 + 如果为 true,kubelet 将在运行时验证镜像,以确保它不会以 UID 0(root)身份运行,如果是,则无法启动容器。 + 如果未设置或为 false,则不会执行此类验证。也可以在 PodSecurityContext 中设置。 + 如果同时在 SecurityContext 和 PodSecurityContext 中设置,则在 SecurityContext 中指定的值优先。 + + + + - **securityContext.runAsGroup** (int64) + + 运行容器进程入口点的 GID。如果未设置,则使用运行时默认值。也可以在 PodSecurityContext 中设置。 + 如果同时在 SecurityContext 和 PodSecurityContext 中设置,则在 SecurityContext 中指定的值优先。 + 注意,`spec.os.name` 为 "windows" 时不能设置该字段。 + + + + - **securityContext.readOnlyRootFilesystem** (boolean) + + 此容器是否具有只读根文件系统。默认为 false。注意,`spec.os.name` 为 "windows" 时不能设置该字段。 + + + + - **securityContext.procMount** (string) + + procMount 表示用于容器的 proc 挂载类型。默认值为 `DefaultProcMount`, + 它针对只读路径和掩码路径使用容器运行时的默认值。此字段需要启用 ProcMountType 特性门控。 + 注意,`spec.os.name` 为 "windows" 时不能设置此字段。 + + + + - **securityContext.privileged** (boolean) + + 以特权模式运行容器。特权容器中的进程本质上等同于主机上的 root。默认为 false。 + 注意,`spec.os.name` 为 "windows" 时不能设置此字段。 + + + + - **securityContext.allowPrivilegeEscalation** (boolean) + + allowPrivilegeEscalation 控制进程是否可以获得比其父进程更多的权限。此布尔值直接控制是否在容器进程上设置 + `no_new_privs` 标志。allowPrivilegeEscalation 在容器处于以下状态时始终为 true: + + 1. 以特权身份运行 + 2. 
具有 `CAP_SYS_ADMIN` + + 请注意,当 `spec.os.name` 为 "windows" 时,无法设置此字段。 + + + + - **securityContext.capabilities** (Capabilities) + + 运行容器时添加或放弃的权能(Capabilities)。默认为容器运行时所授予的权能集合。 + 注意,`spec.os.name` 为 "windows" 时不能设置此字段。 + + **在运行中的容器中添加和放弃 POSIX 权能。** + + - **securityContext.capabilities.add** ([]string) + + 新增权能。 + + - **securityContext.capabilities.drop** ([]string) + + 放弃权能。 + + + + - **securityContext.seccompProfile** (SeccompProfile) + + 此容器使用的 seccomp 选项。如果在 Pod 和容器级别都提供了 seccomp 选项,则容器级别的选项会覆盖 Pod 级别的选项设置。 + 注意,`spec.os.name` 为 "windows" 时不能设置此字段。 + + **SeccompProfile 定义 Pod 或容器的 seccomp 配置文件设置。只能设置一个配置文件源。** + + + + - **securityContext.seccompProfile.type** (string),必需 + + type 指示应用哪种 seccomp 配置文件。有效的选项有: + + - `Localhost` - 应使用在节点上的文件中定义的配置文件。 + - `RuntimeDefault` - 应使用容器运行时的默认配置文件。 + - `Unconfined` - 不应用任何配置文件。 + + + + - **securityContext.seccompProfile.localhostProfile** (string) + + localhostProfile 指示应使用的在节点上的文件,文件中定义了配置文件。 + 该配置文件必须在节点上先行配置才能使用。 + 必须是相对于 kubelet 所配置的 seccomp 配置文件位置下的下级路径。 + 仅当 type 为 "Localhost" 时才必须设置。 + + + + - **securityContext.seLinuxOptions** (SELinuxOptions) + + 要应用到容器上的 SELinux 上下文。如果未设置此字段,容器运行时将为每个容器分配一个随机的 SELinux 上下文。 + 也可以在 PodSecurityContext 中设置。如果同时在 SecurityContext 和 PodSecurityContext 中设置, + 则在 SecurityContext 中指定的值优先。注意,`spec.os.name` 为 "windows" 时不能设置此字段。 + + + **SELinuxOptions 是要应用到容器上的标签。** + + + + - **securityContext.seLinuxOptions.level** (string) + + level 是应用于容器的 SELinux 级别标签。 + + - **securityContext.seLinuxOptions.role** (string) + + role 是应用于容器的 SELinux 角色标签。 + + - **securityContext.seLinuxOptions.type** (string) + + type 是适用于容器的 SELinux 类型标签。 + + - **securityContext.seLinuxOptions.user** (string) + + user 是应用于容器的 SELinux 用户标签。 + + + + - **securityContext.windowsOptions** (WindowsSecurityContextOptions) + + 要应用于所有容器上的特定于 Windows 的设置。如果未指定,将使用 PodSecurityContext 中的选项。 + 如果同时在 SecurityContext 和 PodSecurityContext 中设置,则在 SecurityContext 中指定的值优先。 + 注意,`spec.os.name` 为 "linux" 时不能设置此字段。 + + + **WindowsSecurityContextOptions 包含特定于 Windows 的选项和凭据。** + + + + - **securityContext.windowsOptions.gmsaCredentialSpec** (string) + + gmsaCredentialSpec 是 [GMSA 准入 Webhook](https://github.com/kubernetes-sigs/windows-gmsa) + 内嵌由 gmsaCredentialSpecName 字段所指定的 GMSA 凭证规约的内容的地方。 + + + + - **securityContext.windowsOptions.hostProcess** (boolean) + + hostProcess 确定容器是否应作为 "主机进程" 容器运行。 + 此字段是 Alpha 级别的,只有启用 WindowsHostProcessContainers 特性门控的组件才会处理。 + 设置此字段而不启用特性门控是,在验证 Pod 时将发生错误。 + 一个 Pod 的所有容器必须具有相同的有效 hostProcess 值(不允许混合设置了 hostProcess 容器和未设置 hostProcess 的容器)。 + 此外,如果 hostProcess 为 true,则 hostNetwork 也必须设置为 true。 + + + + - **securityContext.windowsOptions.runAsUserName** (string) + + Windows 中运行容器进程入口点的用户名。如果未指定,则默认为镜像元数据中指定的用户。 + 也可以在 PodSecurityContext 中设置。 + 如果同时在 SecurityContext 和 PodSecurityContext 中设置,则在 SecurityContext 中指定的值优先。 + + +### 调试 + + +- **stdin** (boolean) + + 此容器是否应在容器运行时为 stdin 分配缓冲区。如果未设置,从容器中的 stdin 读取将始终导致 EOF。 + 默认为 false。 + + +- **stdinOnce** (boolean) + + 容器运行时是否应在某个 attach 打开 stdin 通道后关闭它。当 stdin 为 true 时,stdin 流将在多个 attach 会话中保持打开状态。 + 如果 stdinOnce 设置为 true,则 stdin 在容器启动时打开,在第一个客户端连接到 stdin 之前为空, + 然后保持打开并接受数据,直到客户端断开连接,此时 stdin 关闭并保持关闭直到容器重新启动。 + 如果此标志为 false,则从 stdin 读取的容器进程将永远不会收到 EOF。 默认为 false。 + +## EphemeralContainer {#EphemeralContainer} + + +EphemeralContainer 是一个临时容器,你可以将其添加到现有 Pod 以用于用户发起的活动,例如调试。 +临时容器没有资源或调度保证,它们在退出或 Pod 被移除或重新启动时不会重新启动。 +如果临时容器导致 Pod 超出其资源分配,kubelet 可能会驱逐 Pod。 + +要添加临时容器,请使用现有 Pod 的 `ephemeralcontainers` 子资源。临时容器不能被删除或重新启动。 + +这是未禁用 EphemeralContainers 特性门控的集群上可用的 Beta 功能。 + +
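下面是一个示意性片段(并非权威示例),展示通过 `ephemeralcontainers` 子资源添加临时容器之后,Pod 规约中可能出现的条目。其中的 Pod 名称 `my-app`、容器名 `debugger`、镜像 `busybox:1.28` 和目标容器名 `app` 都是假设值:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app                  # 假设的 Pod 名称
spec:
  containers:
  - name: app                   # 假设的业务容器
    image: nginx
  ephemeralContainers:          # 只能通过 ephemeralcontainers 子资源更新
  - name: debugger              # 在所有容器、Init 容器和临时容器中必须唯一
    image: busybox:1.28         # 假设使用的调试镜像
    command: ["sh"]
    stdin: true
    tty: true
    targetContainerName: app    # 可选:在目标容器的名字空间(IPC、PID 等)中运行
```

例如,运行 `kubectl debug -it my-app --image=busybox:1.28 --target=app` 大致会产生这样的条目。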
+ + +- **name** (string),必需 + + 以 DNS_LABEL 形式设置的临时容器的名称。此名称在所有容器、Init 容器和临时容器中必须是唯一的。 + + +- **targetContainerName** (string) + + 如果设置,则为 Pod 规约中此临时容器所针对的容器的名称。临时容器将在该容器的名字空间(IPC、PID 等)中运行。 + 如果未设置,则临时容器使用 Pod 规约中配置的名字空间。 + + 容器运行时必须实现对此功能的支持。如果运行时不支持名字空间定位,则设置此字段的结果是未定义的。 + + +### 镜像 + + +- **image** (string) + + 容器镜像名称。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/containers/images + + +- **imagePullPolicy** (string) + + 镜像拉取策略。取值为 `Always`、`Never`、`IfNotPresent` 之一。 + 如果指定了 `:latest` 标签,则默认为 `Always`,否则默认为 `IfNotPresent`。 + 无法更新。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/containers/images#updating-images + + +### 入口点 + + +- **command** ([]string) + + 入口点数组。不在 Shell 中执行。如果未提供,则使用镜像的 `ENTRYPOINT`。 + 变量引用 `$(VAR_NAME)` 使用容器的环境进行扩展。如果无法解析变量,则输入字符串中的引用将保持不变。 + `$$` 被简化为 `$`,这允许转义 `$(VAR_NAME)` 语法:即 `"$$(VAR_NAME)"` 将产生字符串字面值 `"$(VAR_NAME)"`。 + 无论变量是否存在,转义引用都不会被扩展。无法更新。更多信息: + https://kubernetes.io/zh-cn/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell + + +- **args** ([]string) + + entrypoint 的参数。如果未提供,则使用镜像的 `CMD`。 + 变量引用 `$(VAR_NAME)` 使用容器的环境进行扩展。如果无法解析变量,则输入字符串中的引用将保持不变。 + `$$` 被简化为 `$`,这允许转义 `$(VAR_NAME)` 语法:即 `"$$(VAR_NAME)"` 将产生字符串字面值 `"$(VAR_NAME)"`。 + 无论变量是否存在,转义引用都不会被扩展。无法更新。更多信息: + https://kubernetes.io/zh-cn/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell + + +- **workingDir** (string) + + 容器的工作目录。如果未指定,将使用容器运行时的默认值,默认值可能在容器镜像中配置。无法更新。 + + +### 环境变量 + + +- **env**([]EnvVar) + + **补丁策略:基于 `name` 键合并** + + 要在容器中设置的环境变量列表。无法更新。 + + + **EnvVar 表示容器中存在的环境变量。** + + + + - **env.name** (string),必需 + + 环境变量的名称。必须是 C_IDENTIFIER。 + + - **env.value** (string) + + 变量引用 `$(VAR_NAME)` 使用容器中先前定义的环境变量和任何服务环境变量进行扩展。 + 如果无法解析变量,则输入字符串中的引用将保持不变。 + `$$` 被简化为 `$`,这允许转义 `$(VAR_NAME)` 语法:即 `"$$(VAR_NAME)"` 将产生字符串字面值 `"$(VAR_NAME)"`。 + 无论变量是否存在,转义引用都不会被扩展。默认为 ""。 + + + + - **env.valueFrom** (EnvVarSource) + + 环境变量值的来源。如果取值不为空,则不能使用。 + + **EnvVarSource 表示 envVar 值的源。** + + + + - **env.valueFrom.configMapKeyRef** (ConfigMapKeySelector) + + 选择 ConfigMap 的主键。 + + + **选择 ConfigMap 的主键。** + + + + - **env.valueFrom.configMapKeyRef.key** (string),必需 + + 选择的主键。 + + - **env.valueFrom.configMapKeyRef.name**(string) + + 所引用 ConfigMap 的名称。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/names/#names + + + + - **env.valueFrom.configMapKeyRef.optional** (boolean) + + 指定是否 ConfigMap 或其键必须已经被定义。 + + + + - **env.valueFrom.fieldRef** (}}">ObjectFieldSelector) + + 选择 Pod 的一个字段:支持 `metadata.name`、`metadata.namespace`、`metadata.labels['']`、 + `metadata.annotations['']`、`spec.nodeName`、`spec.serviceAccountName`、`status.hostIP`、 + `status.podIP`、`status.podIPs`。 + + + + - **env.valueFrom.resourceFieldRef** (}}">ResourceFieldSelector) + + 选择容器的资源:当前仅支持资源限制和请求(`limits.cpu`、`limits.memory`、`limits.ephemeral-storage`、 + `requests.cpu`、`requests.memory` 和 `requests.ephemeral-storage`)。 + + + + - **env.valueFrom.secretKeyRef** (SecretKeySelector) + + 在 Pod 的名字空间中选择某 Secret 的主键。 + + + **SecretKeySelector 选择某 Secret 的主键。** + + + + - **env.valueFrom.secretKeyRef.key** (string),必需 + + 要从 Secret 中选择的主键。必须是有效的主键。 + + - **env.valueFrom.secretKeyRef.name**(string) + + 被引用 Secret 名称。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/names/#names + + - **env.valueFrom.secretKeyRef.optional** (boolean) + + 指定 Secret 或其主键是否必须已经定义。 + + +- **envFrom** ([]EnvFromSource) + + 在容器中填充环境变量的来源列表。在来源中定义的键名必须是 C_IDENTIFIER。 + 容器启动时,所有无效键都将作为事件报告。当一个键存在于多个来源中时,与最后一个来源关联的值将优先。 + 
如果有重复主键,env 中定义的值将优先。无法更新。 + + + **EnvFromSource 表示一组 ConfigMap 来源** + + + + - **envFrom.configMapRef** (ConfigMapEnvSource) + + 要从中选择的 ConfigMap。 + + + **ConfigMapEnvSource 选择一个 ConfigMap 来填充环境变量。目标 ConfigMap 的 data 字段的内容将键值对表示为环境变量。** + + + + - **envFrom.configMapRef.name**(string) + + 被引用的 ConfigMap 名称。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/names/#names + + - **envFrom.configMapRef.optional** (boolean) + + 指定所引用的 ConfigMap 是否必须已经定义。 + + + + - **envFrom.prefix** (string) + + 要在 ConfigMap 中的每个键前面附加的可选标识符。必须是C_IDENTIFIER。 + + + + - **envFrom.secretRef** (SecretEnvSource) + + 可供选择的 Secret。 + + + **SecretEnvSource 选择一个 Secret 来填充环境变量。目标 Secret 的 data 字段的内容将键值对表示为环境变量。** + + + + - **envFrom.secretRef.name**(string) + + 被引用 ConfigMap 的名称。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/names/#names + + - **envFrom.secretRef.optional** (boolean) + + 指定是否 Secret 必须已经被定义。 + + +### 卷 + + + +- **volumeMounts** ([]VolumeMount) + + **补丁策略:基于 `mountPath` 键合并** + + 要挂载到容器文件系统中的 Pod 卷。临时容器不允许子路径挂载。无法更新。 + + **VolumeMount 描述在容器中卷的挂载。** + + + + - **volumeMounts.mountPath** (string),必需 + + 容器内应安装卷的路径。不得包含 ':'。 + + - **volumeMounts.name** (string),必需 + + 此字段必须与卷的名称匹配。 + + + + - **volumeMounts.mountPropagation** (string) + + mountPropagation 确定装载如何从主机传播到容器,及反向传播选项。 + 如果未设置,则使用 `None`。此字段在 1.10 中为 Beta 字段。 + + - **volumeMounts.readOnly** (boolean) + + 如果为 true,则挂载卷为只读,否则为读写(false 或未指定)。默认值为 false。 + + + + - **volumeMounts.subPath** (string) + + 卷中的路径名,应该从该路径挂在容器的卷。默认为 "" (卷的根)。 + + - **volumeMounts.subPathExpr** (string) + + 应安装容器卷的卷内的扩展路径。行为类似于 `subPath`,但环境变量引用 `$(VAR_NAME)` + 使用容器的环境进行扩展。默认为 ""(卷的根)。`subPathExpr` 和 `SubPath` 是互斥的。 + + +- **volumeDevices** ([]VolumeDevice) + + **补丁策略:基于 `devicePath` 键合并** + + volumeDevices 是容器要使用的块设备列表。 + + + **volumeDevice 描述容器内原始块设备的映射。** + + + + - **volumeDevices.devicePath** (string),必需 + + devicePath 是设备将被映射到的容器内的路径。 + + - **volumeDevices.name** (string),必需 + + name 必须与 Pod 中的 persistentVolumeClaim 的名称匹配。 + + +### 生命周期 + + +- **terminationMessagePath** (string) + + 可选字段。挂载到容器文件系统的路径,用于写入容器终止消息的文件。 + 写入的消息旨在成为简短的最终状态,例如断言失败消息。如果超出 4096 字节,将被节点截断。 + 所有容器的总消息长度将限制为 12 KB。 默认为 `/dev/termination-log`。无法更新。 + + +- **terminationMessagePolicy** (string) + + 指示应如何填充终止消息。字段值为 `File` 表示将使用 `terminateMessagePath` + 的内容来填充成功和失败的容器状态消息。 + 如果终止消息文件为空并且容器因错误退出,字段值 `FallbackToLogsOnError` + 表示将使用容器日志输出的最后一块。日志输出限制为 2048 字节或 80 行,以较小者为准。 + 默认为 `File`。无法更新。 + + + +### 调试 + + +- **stdin** (boolean) + + 是否应在容器运行时内为此容器 stdin 分配缓冲区。 + 如果未设置,从容器中的 stdin 读数据将始终导致 EOF。默认为 false。 + + +- **stdinOnce** (boolean) + + 容器运行时是否应在某个 attach 操作打开 stdin 通道后关闭它。 + 当 stdin 为 true 时,stdin 流将在多个 attach 会话中保持打开状态。 + 如果 stdinOnce 设置为 true,则 stdin 在容器启动时打开,在第一个客户端连接到 stdin 之前为空, + 然后保持打开并接受数据,直到客户端断开连接,此时 stdin 关闭并保持关闭直到容器重新启动。 + 如果此标志为 false,则从 stdin 读取的容器进程将永远不会收到 EOF。默认为 false。 + + +- **tty** (boolean) + + 这个容器是否应该为自己分配一个 TTY,也需要 stdin 为 true。默认为 false。 + + +### 安全上下文 + + +- **securityContext** (SecurityContext) + + 可选字段。securityContext 定义了运行临时容器的安全选项。 + 如果设置了此字段,SecurityContext 的字段将覆盖 PodSecurityContext 的等效字段。 + + SecurityContext 保存将应用于容器的安全配置。 + 一些字段在 SecurityContext 和 PodSecurityContext 中都存在。 + 当两者都设置时,SecurityContext 中的值优先。 + + + + - **securityContext.runAsUser** (int64) + + 运行容器进程入口点的 UID。如果未指定,则默认为镜像元数据中指定的用户。 + 也可以在 PodSecurityContext 中设置。如果同时在 SecurityContext 和 PodSecurityContext + 中设置,则在 SecurityContext 中指定的值优先。 + 注意,`spec.os.name` 为 "windows" 时不能设置该字段。 + + + + - **securityContext.runAsNonRoot** (boolean) + + 指示容器必须以非 root 
用户身份运行。如果为 true,Kubelet 将在运行时验证镜像, + 以确保它不会以 UID 0(root)身份运行,如果是,则无法启动容器。 + 如果未设置或为 false,则不会执行此类验证。也可以在 PodSecurityContext 中设置。 + 如果同时在 SecurityContext 和 PodSecurityContext 中设置,则在 SecurityContext + 中指定的值优先。 + + + + - **securityContext.runAsGroup** (int64) + + 运行容器进程入口点的 GID。如果未设置,则使用运行时默认值。也可以在 PodSecurityContext 中设置。 + 如果同时在 SecurityContext 和 PodSecurityContext 中设置,则在 SecurityContext + 中指定的值优先。注意,`spec.os.name` 为 "windows" 时不能设置该字段。 + + + + - **securityContext.readOnlyRootFilesystem** (boolean) + + 此容器是否具有只读根文件系统。 + 默认为 false。 注意,`spec.os.name` 为 "windows" 时不能设置该字段。 + + + + - **securityContext.procMount** (string) + + procMount 表示用于容器的 proc 挂载类型。默认值为 DefaultProcMount, + 它将容器运行时默认值用于只读路径和掩码路径。这需要启用 ProcMountType 特性门控。 + 注意,`spec.os.name` 为 "windows" 时不能设置该字段。 + + + + - **securityContext.privileged** (boolean) + + 以特权模式运行容器。特权容器中的进程本质上等同于主机上的 root。 默认为 false。 + 注意,`spec.os.name` 为 "windows" 时不能设置该字段。 + + + + - **securityContext.allowPrivilegeEscalation** (boolean) + + allowPrivilegeEscalation 控制进程是否可以获得比其父进程更多的权限。 + 此布尔值直接控制是否在容器进程上设置 `no_new_privs` 标志。 allowPrivilegeEscalation + 在容器处于以下状态时始终为 true: + + 1. 以特权身份运行 + 2. 具有 `CAP_SYS_ADMIN` 权能 + + 请注意,当 `spec.os.name` 为 "windows" 时,无法设置此字段。 + + + + - **securityContext.capabilities** (Capabilities) + + 运行容器时添加/放弃的权能。默认为容器运行时授予的默认权能集。 + 注意,`spec.os.name` 为 "windows" 时不能设置此字段。 + + **在运行中的容器中添加和放弃 POSIX 权能。** + + + + - **securityContext.capabilities.add** ([]string) + + 新增的权能。 + + - **securityContext.capabilities.drop** ([]string) + + 放弃的权能。 + + + + - **securityContext.seccompProfile** (SeccompProfile) + + 此容器使用的 seccomp 选项。如果在 Pod 和容器级别都提供了 seccomp 选项, + 则容器选项会覆盖 Pod 选项。注意,`spec.os.name` 为 "windows" 时不能设置该字段。 + + **SeccompProfile 定义 Pod 或容器的 seccomp 配置文件设置。只能设置一个配置文件源。** + + + + - **securityContext.seccompProfile.type** (string),必需 + + type 指示将应用哪种 seccomp 配置文件。有效的选项是: + + - `Localhost` - 应使用在节点上的文件中定义的配置文件。 + - `RuntimeDefault` - 应使用容器运行时默认配置文件。 + - `Unconfined` - 不应应用任何配置文件。 + + + + - **securityContext.seccompProfile.localhostProfile** (string) + + localhostProfile 指示应使用在节点上的文件中定义的配置文件。 + 该配置文件必须在节点上预先配置才能工作。 + 必须是相对于 kubelet 配置的 seccomp 配置文件位置下的子路径。 + 仅当 type 为 "Localhost" 时才必须设置。 + + + + - **securityContext.seLinuxOptions** (SELinuxOptions) + + 要应用于容器的 SELinux 上下文。如果未指定,容器运行时将为每个容器分配一个随机 + SELinux 上下文。也可以在 PodSecurityContext 中设置。 + 如果同时在 SecurityContext 和 PodSecurityContext 中设置,则在 SecurityContext + 中指定的值优先。注意,`spec.os.name` 为 "windows" 时不能设置此字段。 + + + **SELinuxOptions 是要应用于容器的标签** + + + + - **securityContext.seLinuxOptions.level** (string) + + level 是应用于容器的 SELinux 级别标签。 + + - **securityContext.seLinuxOptions.role** (string) + + role 是应用于容器的 SELinux 角色标签。 + + - **securityContext.seLinuxOptions.type** (string) + + type 是适用于容器的 SELinux 类型标签。 + + - **securityContext.seLinuxOptions.user** (string) + + user 是应用于容器的 SELinux 用户标签。 + + + + - **securityContext.windowsOptions** (WindowsSecurityContextOptions) + + 要应用到所有容器上的特定于 Windows 的设置。如果未指定,将使用 PodSecurityContext 中的选项。 + 如果同时在 SecurityContext 和 PodSecurityContext 中设置,则在 SecurityContext + 中指定的值优先。注意,`spec.os.name` 为 "linux" 时不能设置此字段。 + + + **WindowsSecurityContextOptions 包含特定于 Windows 的选项和凭据。** + + + + - **securityContext.windowsOptions.gmsaCredentialSpec** (string) + + gmsaCredentialSpec 是 [GMSA 准入 Webhook](https://github.com/kubernetes-sigs/windows-gmsa) + 内嵌由 gmsaCredentialSpecName 字段所指定的 GMSA 凭证规约内容的地方。 + + - **securityContext.windowsOptions.gmsaCredentialSpecName** (string) + + gmsaCredentialSpecName 是要使用的 GMSA 凭证规约的名称。 + + + + - **securityContext.windowsOptions.hostProcess** (boolean) + + 
hostProcess 确定容器是否应作为 "主机进程" 容器运行。此字段是 Alpha 级别的,只有启用了 + WindowsHostProcessContainers 特性门控的组件才会处理此字段。 + 设置此字段而未启用特性门控的话,在验证 Pod 时将引发错误。 + 一个 Pod 的所有容器必须具有相同的有效 hostProcess 值 + (不允许混合设置了 hostProcess 的容器和未设置 hostProcess 的容器)。 + 此外,如果 hostProcess 为 true,则 hostNetwork 也必须设置为 true。 + + + + - **securityContext.windowsOptions.runAsUserName** (string) + + Windows 中运行容器进程入口点的用户名。如果未指定,则默认为镜像元数据中指定的用户。 + 也可以在 PodSecurityContext 中设置。如果同时在 SecurityContext 和 PodSecurityContext + 中设置,则在 SecurityContext 中指定的值优先。 + + +### 不允许 + + + +- **ports**([]ContainerPort) + + **补丁策略:基于 `containerPort` 键合并** + + **映射:键 `containerPort, protocol` 组合的唯一值将在合并期间保留** + + 临时容器不允许使用端口。 + + + **ContainerPort 表示单个容器中的网络端口。** + + + + - **ports.containerPort** (int32),必需 + + 要在容器的 IP 地址上公开的端口号。这必须是有效的端口号 0 \< x \< 65536。 + + - **ports.hostIP** (string) + + 要将外部端口绑定到的主机 IP。 + + + + - **ports.hostPort** (int32) + + 要在主机上公开的端口号。如果设置了,则作为必须是一个有效的端口号,0 \< x \< 65536。 + 如果指定了 hostNetwork,此值必须与 containerPort 匹配。大多数容器不需要这个配置。 + + + + - **ports.name**(string) + + 如果指定了,则作为端口的名称。必须是 IANA_SVC_NAME 并且在 Pod 中是唯一的。 + Pod 中的每个命名端口都必须具有唯一的名称。服务可以引用的端口的名称。 + + - **ports.protocol** (string) + + 端口协议。必须是 `UDP`、`TCP` 或 `SCTP` 之一。 默认为 `TCP`。 + + +- **resources** (ResourceRequirements) + + 临时容器不允许使用资源。临时容器使用已分配给 Pod 的空闲资源。 + + **ResourceRequirements 描述计算资源的需求。** + + + + - **resources.limits** (map[string]}}">Quantity) + + limits 描述所允许的最大计算资源量。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/configuration/manage-resources-containers/ + + + + - **resources.requests** (map[string]}}">Quantity) + + requests 描述所需的最小计算资源量。如果对容器省略了 requests,则默认其资源请求值为 limits + (如果已显式指定)的值,否则为实现定义的值。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/configuration/manage-resources-containers/ + + +- **lifecycle** (Lifecycle) + + 临时容器不允许使用生命周期。 + + 生命周期描述了管理系统为响应容器生命周期事件应采取的行动。 + 对于 postStart 和 preStop 生命周期处理程序,容器的管理会阻塞,直到操作完成, + 除非容器进程失败,在这种情况下处理程序被中止。 + + + + - **lifecycle.postStart** (}}">LifecycleHandler) + + 创建容器后立即调用 postStart。如果处理程序失败,则容器将根据其重新启动策略终止并重新启动。 + 容器的其他管理阻塞直到钩子完成。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/containers/container-lifecycle-hooks/#container-hooks + + + + - **lifecycle.preStop** (}}">LifecycleHandler) + + preStop 在容器因 API 请求或管理事件(例如:存活态探针/启动探针失败、抢占、资源争用等) + 而终止之前立即调用。如果容器崩溃或退出,则不会调用处理程序。 + Pod 的终止宽限期倒计时在 preStop 钩子执行之前开始。 + 无论处理程序的结果如何,容器最终都会在 Pod 的终止宽限期内终止(除非被终结器延迟)。 + 容器的其他管理会阻塞,直到钩子完成或达到终止宽限期。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/containers/container-lifecycle-hooks/#container-hooks + + + +- **livenessProbe** (}}">Probe) + + 临时容器不允许使用探针。 + +- **readyProbe** (}}">Probe) + + 临时容器不允许使用探针。 + +- **startupProbe** (}}">Probe) + + 临时容器不允许使用探针。 + + +## LifecycleHandler {#LifecycleHandler} + + +LifecycleHandler 定义了应在生命周期挂钩中执行的特定操作。 +必须指定一个且只能指定一个字段,tcpSocket 除外。 + +
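下面是一个简化示例(仅作示意),演示 LifecycleHandler 的两种常见形式:postStart 使用 exec,preStop 使用 httpGet。其中的镜像、路径 `/shutdown` 和端口 8080 均为假设值:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo          # 假设的 Pod 名称
spec:
  containers:
  - name: app
    image: nginx                # 假设的镜像
    lifecycle:
      postStart:
        exec:                   # 容器创建后立即执行的命令
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
      preStop:
        httpGet:                # 终止前调用的 HTTP 端点(假设应用提供该端点)
          path: /shutdown
          port: 8080
          scheme: HTTP
```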
+ + +- **exec** (execAction) + + Exec 指定要执行的操作。 + + + **ExecAction 描述了 "在容器中运行" 操作。** + + + + - **exec.command** ([]string) + + command 是要在容器内执行的命令行,命令的工作目录是容器文件系统中的根目录('/')。 + 该命令只是被通过 `exec` 执行,而不会单独启动一个 Shell 来运行,因此传统的 + Shell 指令('|' 等)将不起作用。要使用某 Shell,你需要显式调用该 Shell。 + 退出状态 0 被视为活动/健康,非零表示不健康。 + + + +- **httpGet** (HTTPGetAction) + + HTTPGet 指定要执行的 HTTP 请求。 + + + **HTTPGetAction 描述基于 HTTP Get 请求的操作。** + + + + - **httpGet.port** (IntOrString),必需 + + 要在容器上访问的端口的名称或编号。数字必须在 1 到 65535 的范围内。名称必须是 IANA_SVC_NAME。 + + + **IntOrString 是一种可以包含 int32 或字符串值的类型。在 JSON 或 YAML 封组和取消编组时, + 它会生成或使用内部类型。例如,这允许你拥有一个可以接受名称或数字的 JSON 字段。** + + + + - **httpGet.host** (string) + + 要连接的主机名,默认为 Pod IP。你可能想在 `httpHeaders` 中设置 "Host"。 + + - **httpGet.httpHeaders** ([]HTTPHeader) + + 要在请求中设置的自定义标头。HTTP 允许重复的标头。 + + + **HTTPHeader 描述了在 HTTP 探针中使用的自定义标头** + + + + - **httpGet.httpHeaders.name** (string),必需 + + HTTP 头部字段名称。 + + - **httpGet.httpHeaders.value** (string),必需 + + HTTP 头部字段取值。 + + + + - **httpGet.path** (string) + + HTTP 服务器上的访问路径。 + + - **httpGet.scheme** (string) + + 用于连接到主机的方案。默认为 `HTTP`。 + + +- **tcpSocket** (TCPSocketAction) + + 已弃用。不再支持 `tcpSocket` 作为 LifecycleHandler,但为向后兼容保留之。 + 当指定 `tcp` 处理程序时,此字段不会被验证,而生命周期回调将在运行时失败。 + + + **TCPSocketAction 描述基于打开套接字的动作。** + + + + - **tcpSocket.port** (IntOrString),必需 + + 容器上要访问的端口的编号或名称。端口号必须在 1 到 65535 的范围内。 + 名称必须是 IANA_SVC_NAME。 + + + **IntOrString 是一种可以保存 int32 或字符串值的类型。在 JSON 或 YAML 编组和解组中使用时, + 会生成或使用内部类型。例如,这允许你拥有一个可以接受名称或数字的 JSON 字段。** + + + + - **tcpSocket.host** (string) + + 可选字段。要连接的主机名,默认为 Pod IP。 + +## NodeAffinity {#NodeAffinity} + + +节点亲和性是一组节点亲和性调度规则。 + +
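下面是一个示意性的 Pod 片段(标签键与取值均为假设),同时演示 requiredDuringSchedulingIgnoredDuringExecution 与 preferredDuringSchedulingIgnoredDuringExecution 两类规则:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity-demo      # 假设的 Pod 名称
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:    # 硬性要求
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone             # 假设使用的节点标签键
            operator: In
            values: ["zone-a", "zone-b"]
      preferredDuringSchedulingIgnoredDuringExecution:   # 软性偏好
      - weight: 10
        preference:
          matchExpressions:
          - key: disktype                                # 假设使用的节点标签键
            operator: In
            values: ["ssd"]
  containers:
  - name: app
    image: nginx                                         # 假设的镜像
```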
+ + + +- **preferredDuringSchedulingIgnoredDuringExecution** ([]PreferredSchedulingTerm) + + 调度程序会更倾向于将 Pod 调度到满足该字段指定的亲和性表达式的节点, + 但它可能会选择违反一个或多个表达式的节点。最优选的节点是权重总和最大的节点, + 即对于满足所有调度要求(资源请求、requiredDuringScheduling 亲和表达式等)的每个节点, + 通过迭代该字段的元素来计算总和如果节点匹配相应的 matchExpressions,则将 "权重" 添加到总和中; + 具有最高总和的节点是最优选的。 + + 空的首选调度条件匹配所有具有隐式权重 0 的对象(即它是一个 no-op 操作)。 + null 值的首选调度条件不匹配任何对象(即也是一个 no-op 操作)。 + + + + - **preferredDuringSchedulingIgnoredDuringExecution.preference** (NodeSelectorTerm),必需 + + 与相应权重相关联的节点选择条件。 + + null 值或空值的节点选择条件不会匹配任何对象。这些条件的请求按逻辑与操作组合。 + TopologySelectorTerm 类型实现了 NodeSelectorTerm 的一个子集。 + + + + - **preferredDuringSchedulingIgnoredDuringExecution.preference.matchExpressions** ([]}}">NodeSelectorRequirement) + + 按节点标签列出的节点选择条件列表。 + + - **preferredDuringSchedulingIgnoredDuringExecution.preference.matchFields** ([]}}">NodeSelectorRequirement) + + 按节点字段列出的节点选择要求列表。 + + + + - **preferredDuringSchedulingIgnoredDuringExecution.weight** (int32),必需 + + 与匹配相应的 nodeSelectorTerm 相关的权重,范围为 1-100。 + + + +- **requiredDuringSchedulingIgnoredDuringExecution** (NodeSelector) + + 如果在调度时不满足该字段指定的亲和性要求,则不会将 Pod 调度到该节点上。 + 如果在 Pod 执行期间的某个时间点不再满足此字段指定的亲和性要求(例如:由于更新), + 系统可能会或可能不会尝试最终将 Pod 从其节点中逐出。 + + + **一个节点选择器代表一个或多个标签查询结果在一组节点上的联合;换言之, + 它表示由节点选择器项表示的选择器的逻辑或组合。** + + + + - **requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms** ([]NodeSelectorTerm),必需 + + 必需的字段。节点选择条件列表。这些条件按逻辑或操作组合。 + + null 值或空值的节点选择器条件不匹配任何对象。这里的条件是按逻辑与操作组合的。 + TopologySelectorTerm 类型实现了 NodeSelectorTerm 的一个子集。 + + + + - **requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchExpressions** ([]}}">NodeSelectorRequirement) + + 按节点标签列出的节点选择器需求列表。 + + - **requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchFields** ([]}}">NodeSelectorRequirement) + + 按节点字段列出的节点选择器要求列表。 + +## PodAntiAffinity {#PodAntiAffinity} + + +Pod 反亲和性是一组 Pod 间反亲和性调度规则。 + +
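下面是一个示意性示例(标签与拓扑键取值均为假设),要求带有 `app: web` 标签的 Pod 不与同标签的 Pod 调度到同一节点,并尽量分散到不同的可用区:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: anti-affinity-demo      # 假设的 Pod 名称
  labels:
    app: web                    # 假设的标签
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - topologyKey: kubernetes.io/hostname              # 不允许为空
        labelSelector:
          matchLabels:
            app: web
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: topology.kubernetes.io/zone
          labelSelector:
            matchLabels:
              app: web
  containers:
  - name: app
    image: nginx                                         # 假设的镜像
```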
+ + +- **preferredDuringSchedulingIgnoredDuringExecution** ([]WeightedPodAffinityTerm) + + 调度器更倾向于将 Pod 调度到满足该字段指定的反亲和性表达式的节点, + 但它可能会选择违反一个或多个表达式的节点。 + 最优选的节点是权重总和最大的节点,即对于满足所有调度要求(资源请求、`requiredDuringScheduling` + 反亲和性表达式等)的每个节点,通过遍历元素来计算总和如果节点具有与相应 `podAffinityTerm` + 匹配的 Pod,则此字段并在总和中添加"权重";具有最高加和的节点是最优选的。 + + + **所有匹配的 WeightedPodAffinityTerm 字段的权重都是按节点添加的,以找到最优选的节点。** + + + + - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm** (PodAffinityTerm),必需 + + 必需的字段。 一个 Pod 亲和性条件,与相应的权重相关联。 + + + 定义一组 Pod(即那些与给定名字空间相关的标签选择算符匹配的 Pod 集合), + 当前 Pod 应该与所选 Pod 集合位于同一位置(亲和性)或不位于同一位置(反亲和性), + 其中 "在同一位置" 意味着运行在一个节点上,其键 `topologyKey` 的标签值与运行所选 Pod + 集合中的某 Pod 的任何节点上的标签值匹配。 + + + + - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.topologyKey** (string),必需 + + 此 Pod 应与指定名字空间中与标签选择算符匹配的 Pod 集合位于同一位置(亲和性) + 或不位于同一位置(反亲和性),这里的 "在同一位置" 意味着运行在一个节点上,其键名为 + `topologyKey` 的标签值与运行所选 Pod 集合中的某 Pod 的任何节点上的标签值匹配。 + 不允许使用空的 `topologyKey`。 + + + + - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector** (}}">LabelSelector) + + 对一组资源的标签查询,在这里资源为 Pod。 + + + + - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaceSelector** (}}">LabelSelector) + + 对条件所适用的名字空间集合的标签查询。 + 此条件会被应用到此字段所选择的名字空间和 namespaces 字段中列出的名字空间的组合之上。 + 选择算符为 null 和 namespaces 列表为 null 值或空表示 "此 Pod 的名字空间"。 + 空的选择算符 ({}) 可用来匹配所有名字空间。 + + + + - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaces** ([]string) + + namespaces 指定此条件所适用的名字空间,是一个静态列表。 + 此条件会被应用到 namespaces 字段中列出的名字空间和由 namespaceSelector 选中的名字空间上。 + namespaces 列表为 null 或空,以及 namespaceSelector 值为 null 均表示 "此 Pod 的名字空间"。 + + + + - **preferredDuringSchedulingIgnoredDuringExecution.weight** (int32),必需 + + weight 是匹配相应 `podAffinityTerm` 条件的权重,范围为 1-100。 + + + +- **requiredDuringSchedulingIgnoredDuringExecution** ([]PodAffinityTerm) + + 如果在调度时不满足该字段指定的反亲和性要求,则该 Pod 不会被调度到该节点上。 + 如果在 Pod 执行期间的某个时间点不再满足此字段指定的反亲和性要求(例如:由于 Pod 标签更新), + 系统可能会或可能不会尝试最终将 Pod 从其节点中逐出。 + 当有多个元素时,每个 `podAffinityTerm` 对应的节点列表是取其交集的,即必须满足所有条件。 + + + 定义一组 Pod(即那些与给定名字空间相关的标签选择算符匹配的 Pod 集合),当前 Pod 应该与该 + Pod 集合位于同一位置(亲和性)或不位于同一位置(反亲和性)。 + 这里的 "位于同一位置" 含义是运行在一个节点上。基于 `topologyKey` 字段所给的标签键名, + 检查所选 Pod 集合中各个 Pod 所在的节点上的标签值,标签值相同则认作 "位于同一位置"。 + + + + - **requiredDuringSchedulingIgnoredDuringExecution.topologyKey** (string),必需 + + 此 Pod 应与指定名字空间中与标签选择算符匹配的 Pod 集合位于同一位置(亲和性) + 或不位于同一位置(反亲和性), + 这里的 "位于同一位置" 含义是运行在一个节点上。基于 `topologyKey` 字段所给的标签键名, + 检查所选 Pod 集合中各个 Pod 所在的节点上的标签值,标签值相同则认作 "位于同一位置"。 + 不允许使用空的 `topologyKey`。 + + + + - **requiredDuringSchedulingIgnoredDuringExecution.labelSelector** (}}">LabelSelector) + + 对一组资源的标签查询,在这里资源为 Pod。 + + + + - **requiredDuringSchedulingIgnoredDuringExecution.namespaceSelector** (}}">LabelSelector) + + 对条件所适用的名字空间集合的标签查询。 + 当前条件将应用于此字段选择的名字空间和 namespaces 字段中列出的名字空间。 + 选择算符为 null 和 namespaces 列表为 null 或空值表示 “此 Pod 的名字空间”。 + 空选择算符 ({}) 能够匹配所有名字空间。 + + + + + - **requiredDuringSchedulingIgnoredDuringExecution.namespaces** ([]string) + + namespaces 指定当前条件所适用的名字空间名称的静态列表。 + 当前条件适用于此字段中列出的名字空间和由 namespaceSelector 选中的名字空间。 + namespaces 列表为 null 或空,以及 namespaceSelector 为 null 表示 “此 Pod 的名字空间”。 + + + +## 探针 {#Probe} + + +探针描述了要对容器执行的健康检查,以确定它是否处于活动状态或准备好接收流量。 + +
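下面是一个简化的探针配置示例(路径、端口与各时间参数均为假设值),同时演示基于 httpGet 的存活态探针和基于 tcpSocket 的就绪态探针:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo              # 假设的 Pod 名称
spec:
  containers:
  - name: app
    image: nginx                # 假设的镜像
    livenessProbe:
      httpGet:
        path: /healthz          # 假设的健康检查路径
        port: 8080
      initialDelaySeconds: 5    # 容器启动后等待 5 秒再开始探测
      periodSeconds: 10
      timeoutSeconds: 1
      failureThreshold: 3
    readinessProbe:
      tcpSocket:
        port: 8080
      periodSeconds: 5
```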
+ + +- **exec** (execAction) + + exec 指定要执行的操作。 + + + **ExecAction 描述了 "在容器中运行" 操作。** + + + + - **exec.command** ([]string) + + command 是要在容器内执行的命令行,命令的工作目录是容器文件系统中的根目录('/')。 + 该命令只是通过 `exec` 执行,而不会启动 Shell,因此传统的 Shell 指令('|' 等)将不起作用。 + 要使用某 Shell,你需要显式调用该 Shell。 + 退出状态 0 被视为存活/健康,非零表示不健康。 + + +- **httpGet** (HTTPGetAction) + + httpGet 指定要执行的 HTTP 请求。 + + + **HTTPGetAction 描述基于 HTTP Get 请求的操作。** + + + + - **httpGet.port** (IntOrString),必需 + + 容器上要访问的端口的名称或端口号。端口号必须在 1 到 65535 内。名称必须是 IANA_SVC_NAME。 + + + `IntOrString` 是一种可以保存 int32 或字符串值的类型。 在 JSON 或 YAML 编组和解组时, + 它会生成或使用内部类型。例如,这允许你拥有一个可以接受名称或数字的 JSON 字段。 + + + + - **httpGet.host** (string) + + 要连接的主机名,默认为 Pod IP。 你可能想在 `httpHeaders` 中设置 "Host"。 + + + + - **httpGet.httpHeaders** ([]HTTPHeader) + + 要在请求中设置的自定义 HTTP 标头。HTTP 允许重复的标头。 + + + **HTTPHeader 描述了在 HTTP 探针中使用的自定义标头。** + + + + - **httpGet.httpHeaders.name** (string),必需 + + HTTP 头部域名称。 + + - **httpGet.httpHeaders.value** (string),必需 + + HTTP 头部域值。 + + + + - **httpGet.path** (string) + + HTTP 服务器上的访问路径。 + + - **httpGet.scheme** (string) + + 用于连接到主机的方案。默认为 HTTP。 + + + +- **tcpSocket** (TCPSocketAction) + + tcpSocket 指定涉及 TCP 端口的操作。 + + + **`TCPSocketAction` 描述基于打开套接字的动作。** + + + + - **tcpSocket.port** (IntOrString),必需 + + 容器上要访问的端口的端口号或名称。端口号必须在 1 到 65535 内。名称必须是 IANA_SVC_NAME。 + + + IntOrString 是一种可以保存 int32 或字符串的类型。在 JSON 或 YAML 编组和解组时, + 它会生成或使用内部类型。例如,这允许你拥有一个可以接受名称或数字的 JSON 字段。 + + + + - **tcpSocket.host** (string) + + 可选字段。要连接的主机名,默认为 Pod IP。 + + +- **初始延迟秒** (int32) + + 容器启动后启动存活态探针之前的秒数。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/workloads/pods/pod-lifecycle#container-probes + + + +- **terminationGracePeriodSeconds** (int64) + + Pod 需要在探针失败时体面终止所需的时间长度(以秒为单位),为可选字段。 + 宽限期是 Pod 中运行的进程收到终止信号后,到进程被终止信号强制停止之前的时间长度(以秒为单位)。 + 你应该将此值设置为比你的进程的预期清理时间更长。 + 如果此值为 nil,则将使用 Pod 的 `terminateGracePeriodSeconds`。 + 否则,此值将覆盖 Pod 规约中设置的值。字段值值必须是非负整数。 + 零值表示收到终止信号立即停止(没有机会关闭)。 + 这是一个 Beta 字段,需要启用 ProbeTerminationGracePeriod 特性门控。最小值为 1。 + 如果未设置,则使用 `spec.terminationGracePeriodSeconds`。 + + +- **periodSeconds** (int32) + + 探针的执行周期(以秒为单位)。默认为 10 秒。最小值为 1。 + + +- **timeoutSeconds** (int32) + + 探针超时的秒数。默认为 1 秒。最小值为 1。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/workloads/pods/pod-lifecycle#container-probes + + +- **failureThreshold** (int32) + + 探针成功后的最小连续失败次数,超出此阈值则认为探针失败。默认为 3。最小值为 1。 + + +- **successThreshold** (int32) + + 探针失败后最小连续成功次数,超过此阈值才会被视为探针成功。默认为 1。 + 存活性探针和启动探针必须为 1。最小值为 1。 + + +- **grpc** (GRPCAction) + + GRPC 指定涉及 GRPC 端口的操作。这是一个 Beta 字段,需要启用 GRPCContainerProbe 特性门控。 + + + + + + - **grpc.port** (int32),必需 + + gRPC 服务的端口号。数字必须在 1 到 65535 的范围内。 + + - **grpc.service** (string) + + service 是要放置在 gRPC 运行状况检查请求中的服务的名称 + (请参见 https://github.com/grpc/grpc/blob/master/doc/health-checking.md)。 + + 如果未指定,则默认行为由 gRPC 定义。 + + +## PodStatus {#PodStatus} + + +PodStatus 表示有关 Pod 状态的信息。状态内容可能会滞后于系统的实际状态, +尤其是在托管 Pod 的节点无法联系控制平面的情况下。 + +
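作为示意,下面给出一个经过简化的 status 片段(所有取值均为虚构;这些字段由系统填充,用户一般不应直接设置):

```yaml
status:
  phase: Running
  hostIP: 192.168.1.5                  # 虚构的节点地址
  podIP: 10.244.1.23                   # 虚构的 Pod 地址
  podIPs:
  - ip: 10.244.1.23
  startTime: "2022-07-25T08:00:00Z"
  qosClass: Burstable
  conditions:
  - type: Ready
    status: "True"
    lastTransitionTime: "2022-07-25T08:00:10Z"
  containerStatuses:
  - name: app
    image: nginx
    imageID: ""                        # 由容器运行时填充
    containerID: ""                    # 由容器运行时填充
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2022-07-25T08:00:05Z"
```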
+ + +- **nominatedNodeName** (string) + + 仅当此 Pod 抢占节点上的其他 Pod 时才设置 `nominatedNodeName`, + 但抢占操作的受害者会有体面终止期限,因此此 Pod 无法立即被调度。 + 此字段不保证 Pod 会在该节点上调度。 + 如果其他节点更早进入可用状态,调度器可能会决定将 Pod 放置在其他地方。 + 调度器也可能决定将此节点上的资源分配给优先级更高的、在抢占操作之后创建的 Pod。 + 因此,当 Pod 被调度时,该字段可能与 Pod 规约中的 nodeName 不同。 + + +- **hostIP** (string) + + Pod 被调度到的主机的 IP 地址。如果尚未被调度,则为字段为空。 + + +- **startTime** (Time) + + kubelet 确认 Pod 对象的日期和时间,格式遵从 RFC 3339。 + 此时间点处于 kubelet 为 Pod 拉取容器镜像之前。 + + Time 是 `time.Time` 的包装器,支持正确编组为 YAML 和 JSON。 + time 包所提供的许多工厂方法都有包装器。 + + +- **phase** (string) + + Pod 的 phase 是对 Pod 在其生命周期中所处位置的简单、高级摘要。 + conditions 数组、reason 和 message 字段以及各个容器的 status 数组包含有关 Pod + 状态的进一步详细信息。phase 的取值有五种可能性: + + - `Pending`:Pod 已被 Kubernetes 系统接受,但尚未创建容器镜像。 + 这包括 Pod 被调度之前的时间以及通过网络下载镜像所花费的时间。 + - `Running`:Pod 已经被绑定到某个节点,并且所有的容器都已经创建完毕。至少有一个容器仍在运行,或者正在启动或重新启动过程中。 + - `Succeeded`:Pod 中的所有容器都已成功终止,不会重新启动。 + - `Failed`:Pod 中的所有容器都已终止,并且至少有一个容器因故障而终止。 + 容器要么以非零状态退出,要么被系统终止。 + - `Unknown`:由于某种原因无法获取 Pod 的状态,通常是由于与 Pod 的主机通信时出错。 + + 更多信息: + https://kubernetes.io/zh-cn/docs/concepts/workloads/pods/pod-lifecycle#pod-phase + + +- **message** (string) + + 一条人类可读的消息,标示有关 Pod 为何处于这种情况的详细信息。 + +- **reason** (string) + + 一条简短的驼峰式命名的消息,指示有关 Pod 为何处于此状态的详细信息。例如 'Evicted'。 + + +- **podIP** (string) + + 分配给 Pod 的 IP 地址。至少在集群内可路由。如果尚未分配则为空。 + + +- **podIPs** ([]PodIP) + + **补丁策略:基于 `ip` 键合并** + + podIPs 保存分配给 Pod 的 IP 地址。如果指定了该字段,则第 0 个条目必须与 podIP 字段值匹配。 + Pod 最多可以为 IPv4 和 IPv6 各分配 1 个值。如果尚未分配 IP,则此列表为空。 + + + podIPs 字段中每个条目的 IP 地址信息。每个条目都包含 `ip` 字段,给出分配给 Pod 的 IP 地址。 + 该 IP 地址至少在集群内可路由。 + + + + - **podIP.ip** (string) + + ip 是分配给 Pod 的 IP 地址(IPv4 或 IPv6)。 + + +- **conditions** ([]PodCondition) + + **补丁策略:基于 `ip` 键合并** + + Pod 的当前服务状态。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/workloads/pods/pod-lifecycle#pod-conditions + + + **PodCondition 包含此 Pod 当前状况的详细信息。** + + + - **conditions.status** (string),必需 + + status 是 condition 的状态。可以是 `True`、`False`、`Unknown` 之一。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/workloads/pods/pod-lifecycle#pod-conditions + + - **conditions.type** (string),必需 + + type 是 condition 的类型。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/workloads/pods/pod-lifecycle#pod-conditions + + + + - **conditions.lastProbeTime** (Time) + + 上次探测 Pod 状况的时间。 + + Time 是 `time.Time` 的包装器,支持正确编组为 YAML 和 JSON。 + time 包所提供的许多工厂方法都有包装器。 + + + + - **conditions.lastTransitionTime** (Time) + + 上次 Pod 状况从一种状态变为另一种状态的时间。 + + Time 是 `time.Time` 的包装器,支持正确编组为 YAML 和 JSON。 + time 包所提供的许多工厂方法都有包装器。 + + + + - **conditions.message** (string) + + 标示有关上次状况变化的详细信息的、人类可读的消息。 + + - **conditions.reason** (string) + + condition 最近一次变化的唯一、一个单词、驼峰式命名原因。 + + +- **qosClass** (string) + + 根据资源要求分配给 Pod 的服务质量 (QOS) 分类。有关可用的 QOS 类,请参阅 PodQOSClass 类型。 + 更多信息: https://git.k8s.io/design-proposals-archive/node/resource-qos.md + + + +- **initContainerStatuses** ([]ContainerStatus) + + 该列表在清单中的每个 Init 容器中都有一个条目。最近成功的 Init 容器会将 ready 设置为 true, + 最近启动的容器将设置 startTime。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/workloads/pods/pod-lifecycle#pod-and-container-status + + **ContainerStatus 包含此容器当前状态的详细信息。** + + + + - **initContainerStatuses.name** (string),必需 + + 此字段值必须是 DNS_LABEL。Pod 中的每个容器都必须具有唯一的名称。无法更新。 + + - **initContainerStatuses.image** (string),必需 + + 容器中正在运行的镜像。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/containers/images。 + + + + - **initContainerStatuses.imageID** (string),必需 + + 容器镜像的镜像 ID。 + + - **initContainerStatuses.containerID** (string) + + 格式为 `://` 的容器 ID。 + + + + - **initContainerStatuses.state** (ContainerState) + + 有关容器当前状况的详细信息。 + 
+ ContainerState 中保存容器的可能状态。只能设置其成员之一。如果其中所有字段都未设置, + 则默认为 ContainerStateWaiting。 + + + + - **initContainerStatuses.state.running** (ContainerStateRunning) + + 有关正在运行的容器的详细信息。 + + **ContainerStateRunning 是容器的运行状态。** + + + + - **initContainerStatuses.state.running.startedAt** (Time) + + 容器上次(重新)启动的时间。 + + Time 是 `time.Time` 的包装器,支持正确编组为 YAML 和 JSON。 + time 包所提供的许多工厂方法都有包装器。 + + + + - **initContainerStatuses.state.terminated** (ContainerStateTerminated) + + 有关已终止容器的详细信息。 + + **ContainerStateTerminated 是容器的终止状态。** + + + + - **initContainerStatuses.state.terminated.containerID** (string) + + 容器的 ID,格式为 `"<类型>://"`。 + + - **initContainerStatuses.state.terminated.exitCode** (int32),必需 + + 容器上次终止时的退出状态 + + + + - **initContainerStatuses.state.terminated.startedAt** (Time) + + 容器上次执行时的开始时间。 + + Time 是 `time.Time` 的包装器,支持正确编组为 YAML 和 JSON。 + time 包所提供的许多工厂方法都有包装器。 + + + + - **initContainerStatuses.state.terminated.finishedAt** (Time) + + 容器上次终止的时间。 + + Time 是 `time.Time` 的包装器,支持正确编组为 YAML 和 JSON。 + time 包所提供的许多工厂方法都有包装器。 + + + + - **initContainerStatuses.state.terminated.message** (string) + + 有关容器上次终止的消息。 + + - **initContainerStatuses.state.terminated.reason** (string) + + 容器最后一次终止的(简要)原因。 + + - **initContainerStatuses.state.terminated.signal** (int32) + + 容器最后一次终止的信号。 + + + + - **initContainerStatuses.state.waiting** (ContainerStateWaiting) + + 有关等待状态容器的详细信息。 + + **容器状态等待是容器的等待状态。** + + + + - **initContainerStatuses.state.waiting.message** (string) + + 有关容器尚未运行的原因的消息。 + + - **initContainerStatuses.state.waiting.reason** (string) + + 容器尚未运行的(简要)原因。 + + + + - **initContainerStatuses.lastState** (ContainerState) + + 有关容器上次终止状况的详细信息。 + + ContainerState 保存容器的可能状态。只能设置其成员之一。如果未设置任何成员, + 则默认为 ContainerStateWaiting。 + + + + - **initContainerStatuses.lastState.running** (ContainerStateRunning) + + 有关正在运行的容器的详细信息 + + **ContainerStateRunning 是容器的运行状态。** + + + + - **initContainerStatuses.lastState.running.startedAt** (Time) + + 容器上次(重新)启动的时间 + + Time 是 `time.Time` 的包装器,支持正确编组为 YAML 和 JSON。 + time 包所提供的许多工厂方法都有包装器。 + + + + - **initContainerStatuses.lastState.terminated** (ContainerStateTerminated) + + 有关已终止容器的详细信息。 + + **ContainerStateTerminated 是容器的终止状态。** + + + + - **initContainerStatuses.lastState.terminated.containerID** (string) + + 容器的 ID,格式为 `"<类型>://"`。 + + - **initContainerStatuses.lastState.terminated.exitCode** (int32),必需 + + 容器上次终止的退出状态码。 + + + + - **initContainerStatuses.lastState.terminated.startedAt** (Time) + + 容器上次执行的开始时间。 + + Time 是 `time.Time` 的包装器,支持正确编组为 YAML 和 JSON。 + time 包所提供的许多工厂方法都有包装器。 + + + + - **initContainerStatuses.lastState.terminated.finishedAt** (Time) + + 容器上次终止的时间。 + + Time 是 `time.Time` 的包装器,支持正确编组为 YAML 和 JSON。 + time 包所提供的许多工厂方法都有包装器。 + + + + - **initContainerStatuses.lastState.terminated.message** (string) + + 有关容器上次终止的消息。 + + - **initContainerStatuses.lastState.terminated.reason** (string) + + 容器最后一次终止的(简要)原因。 + + - **initContainerStatuses.lastState.terminated.signal** (int32) + + 容器最后一次终止的信号。 + + + + - **initContainerStatuses.lastState.waiting** (ContainerStateWaiting) + + 有关等待状态的容器的详细信息。 + + **ContainerStateWaiting 是容器的等待状态。** + + + + - **initContainerStatuses.lastState.waiting.message** (string) + + 关于容器尚未运行的原因的消息。 + + - **initContainerStatuses.lastState.waiting.reason** (string) + + 容器尚未运行的(简要)原因。 + + + + - **initContainerStatuses.ready** (boolean),必需 + + 指定容器是否已通过其就绪态探测。 + + - **initContainerStatuses.restartCount** (int32),必需 + + 容器重新启动的次数。 + + + + - **initContainerStatuses.started** (boolean) + + 指定容器是否已通过其启动探测。初始化为 false,在 startupProbe 成功之后变为 true。 + 
在容器重新启动时,或者如果 kubelet 暂时失去状态时重置为 false。 + 在未定义启动探测器时始终为 true。 + + +- **containerStatuses** ([]ContainerStatus) + + 该列表中针对清单中的每个容器都有一个条目。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/workloads/pods/pod-lifecycle#pod-and-container-status + + **ContainerStatus 包含此容器当前状态的详细信息。** + + + + - **containerStatuses.name**(string),必需 + + 此字段必须是一个 DNS_LABEL。Pod 中的每个容器都必须具有唯一的名称。无法更新。 + + - **containerStatuses.image** (string),必需 + + 容器正在运行的镜像。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/containers/images。 + + + + - **containerStatuses.imageID** (string),必需 + + 容器镜像的镜像 ID。 + + - **containerStatuses.containerID** (string) + + 容器的 ID,格式为 `"<类型>://"`。 + + + + - **containerStatuses.state** (ContainerState) + + 有关容器当前状况的详细信息。 + + ContainerStatuses 保存容器的可能状态。只能设置其中一个成员。如果所有成员都未设置, + 则默认为 ContainerStateWaiting。 + + + + - **containerStatuses.state.running** (ContainerStateRunning) + + 有关正在运行的容器的详细信息。 + + **ContainerStateRunning 是容器的运行状态。** + + + + - **containerStatuses.state.running.startedAt** (Time) + + 容器上次(重新)启动的时间。 + + Time 是 `time.Time` 的包装器,支持正确编组为 YAML 和 JSON。 + time 包所提供的许多工厂方法都有包装器。 + + + + - **containerStatuses.state.terminated** (ContainerStateTerminated) + + 有关已终止容器的详细信息。 + + **ContainerStateTerminated 是容器的终止状态。** + + + + - **containerStatuses.state.terminated.containerID** (string) + + 容器的 ID,格式为 `"<类型>://"`。 + + - **containerStatuses.state.terminated.exitCode** (int32),必需 + + 容器上次终止的退出状态码。 + + + + - **containerStatuses.state.terminated.startedAt** (Time) + + 容器上次执行的开始时间。 + + Time 是 `time.Time` 的包装器,支持正确编组为 YAML 和 JSON。 + time 包所提供的许多工厂方法都有包装器。 + + + + - **containerStatuses.state.terminated.message** (string) + + 有关容器上次终止的消息。 + + - **containerStatuses.state.terminated.reason** (string) + + 容器最后一次终止的(简要)原因 + + - **containerStatuses.state.terminated.signal** (int32) + + 容器最后一次终止的信号。 + + + + - **containerStatuses.state.waiting** (ContainerStateWaiting) + + 有关等待容器的详细信息。 + + **ContainerStateWaiting 是容器的等待状态。** + + + + - **containerStatuses.state.waiting.message** (string) + + 关于容器尚未运行的原因的消息。 + + - **containerStatuses.state.waiting.reason** (string) + + 容器尚未运行的(简要)原因。 + + + + - **containerStatuses.lastState** (ContainerState) + + 有关容器上次终止状况的详细信息。 + + 容器状态保存容器的可能状态。只能设置一个成员。如果所有成员都未设置, + 则默认为 ContainerStateWaiting。 + + + + - **containerStatuses.lastState.running** (ContainerStateRunning) + + 有关正在运行的容器的详细信息。 + + **ContainerStateRunning 是容器的运行状态。** + + + + - **containerStatuses.lastState.running.startedAt** (Time) + + 容器上次(重新)启动的时间。 + + Time 是 `time.Time` 的包装器,支持正确编组为 YAML 和 JSON。 + time 包所提供的许多工厂方法都有包装器。 + + + + - **containerStatuses.lastState.terminated** (ContainerStateTerminated) + + 有关已终止容器的详细信息。 + + **ContainerStateTerminated 是容器的终止状态。** + + + + - **containerStatuses.lastState.terminated.containerID** (string) + + 格式为 `://` 的容器 ID。 + + - **containerStatuses.lastState.terminated.exitCode** (int32),必需 + + 容器最后终止的退出状态码。 + + + + - **containerStatuses.lastState.terminated.startedAt** (Time) + + 容器上次执行时的开始时间。 + + Time 是 `time.Time` 的包装器,支持正确编组为 YAML 和 JSON。 + time 包所提供的许多工厂方法都有包装器。 + + + + - **containerStatuses.lastState.terminated.finishedAt** (Time) + + 容器上次终止的时间。 + + Time 是 `time.Time` 的包装器,支持正确编组为 YAML 和 JSON。 + time 包所提供的许多工厂方法都有包装器。 + + + + - **containerStatuses.lastState.terminated.message** (string) + + 关于容器上次终止的消息。 + + - **containerStatuses.lastState.terminated.reason** (string) + + 容器上次终止的(简要)原因 + + - **containerStatuses.lastState.terminated.signal** (int32) + + 容器上次终止的信号。 + + + + - **containerStatuses.lastState.waiting** (ContainerStateWaiting) + + 有关等待容器的详细信息。 + + 
**ContainerStateWaiting 是容器的等待状态。** + + + + - **containerStatuses.lastState.waiting.message** (string) + + 关于容器尚未运行的原因的消息。 + + - **containerStatuses.lastState.waiting.reason** (string) + + 容器尚未运行的(简要)原因。 + + + + - **containerStatuses.ready** (boolean),必需 + + 指定容器是否已通过其就绪态探针。 + + - **containerStatuses.restartCount** (int32),必需 + + 容器重启的次数。 + + + + - **containerStatuses.started** (boolean) + + 指定容器是否已通过其启动探针探测。初始化为 false,startupProbe 被认为成功后变为 true。 + 当容器重新启动或 kubelet 暂时丢失状态时重置为 false。 + 未定义启动探针时始终为 true。 + + +- **ephemeralContainerStatuses** ([]ContainerStatus) + + 已在此 Pod 中运行的任何临时容器的状态。 + 此字段是 Beta 级别的,可在尚未禁用 `EphemeralContainers` 特性门控的集群上使用。 + + **ContainerStatus 包含此容器当前状态的详细信息。** + + + + - **ephemeralContainerStatuses.name** (string),必需 + + 字段值必须是 DNS_LABEL。Pod 中的每个容器都必须具有唯一的名称。无法更新。 + + - **ephemeralContainerStatuses.image** (string),必需 + + 容器正在运行的镜像。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/containers/images。 + + + + - **ephemeralContainerStatuses.imageID** (string),必需 + + 容器镜像的镜像 ID。 + + - **ephemeralContainerStatuses.containerID** (string) + + 格式为 `://` 的容器 ID。 + + + - **ephemeralContainerStatuses.state** (ContainerState) + + 有关容器当前状况的详细信息。 + + ContainerState 保存容器的可能状态。只能设置其中一个成员。如果所有成员都未设置, + 则默认为 ContainerStateWaiting。 + + + + - **ephemeralContainerStatuses.state.running** (ContainerStateRunning) + + 有关正在运行的容器的详细信息 + + **ContainerStateRunning 是容器的运行状态。** + + + + - **ephemeralContainerStatuses.state.running.startedAt** (Time) + + 容器上次(重新)启动的时间。 + + Time 是 `time.Time` 的包装器,支持正确编组为 YAML 和 JSON。 + time 包所提供的许多工厂方法都有包装器。 + + + + - **ephemeralContainerStatuses.state.terminated** (ContainerStateTerminated) + + 有关已终止容器的详细信息。 + + **ContainerStateTerminated 是容器的终止状态。** + + + + - **ephemeralContainerStatuses.state.terminated.containerID** (string) + + 格式为 `://` 的容器 ID。 + + - **ephemeralContainerStatuses.state.terminated.exitCode** (int32),必需 + + 容器上次终止的退出状态码。 + + + + - **ephemeralContainerStatuses.state.terminated.startedAt** (Time) + + 容器上次执行的开始时间。 + + Time 是 `time.Time` 的包装器,支持正确编组为 YAML 和 JSON。 + time 包所提供的许多工厂方法都有包装器。 + + + + - **ephemeralContainerStatuses.state.terminated.finishat** (Time) + + 容器上次终止的时间。 + + Time 是 `time.Time` 的包装器,支持正确编组为 YAML 和 JSON。 + time 包所提供的许多工厂方法都有包装器。 + + + + - **ephemeralContainerStatuses.state.terminated.message** (string) + + 关于容器上次终止的消息。 + + - **ephemeralContainerStatuses.state.terminated.reason** (string) + + 容器上次终止的(简要)原因 + + - **ephemeralContainerStatuses.state.terminated.signal** (int32) + + 容器上次终止的信号 + + + + - **ephemeralContainerStatuses.state.waiting** (ContainerStateWaiting) + + 有关等待容器的详细信息。 + + **ContainerStateWaiting 是容器的等待状态。** + + + + - **ephemeralContainerStatuses.state.waiting.message** (string) + + 关于容器尚未运行的原因的消息。 + + - **ephemeralContainerStatuses.state.waiting.reason** (string) + + 容器尚未运行的(简要)原因。 + + + - **ephemeralContainerStatuses.lastState** (ContainerState) + + 有关容器的上次终止状况的详细信息。 + + ContainerState 保存容器的可能状态。只能设置其中一个成员。如果所有成员都未设置, + 则默认为 `ContainerStateWaiting`。 + + + + - **ephemeralContainerStatuses.lastState.running** (ContainerStateRunning) + + 有关正在运行的容器的详细信息。 + + **ContainerStateRunning 是容器的运行状态。** + + + + - **ephemeralContainerStatuses.lastState.running.startedAt** (Time) + + 容器上次(重新)启动的时间。 + + Time 是 `time.Time` 的包装器,支持正确编组为 YAML 和 JSON。 + time 包所提供的许多工厂方法都有包装器。 + + + + - **ephemeralContainerStatuses.lastState.terminated** (ContainerStateTerminated) + + 有关已终止容器的详细信息。 + + **`ContainerStateTerminated` 是容器的终止状态。** + + + + - **ephemeralContainerStatuses.lastState.terminated.containerID** (string) + + 格式为 `://` 的容器 ID。 + + - 
**ephemeralContainerStatuses.lastState.terminated.exitCode** (int32),必需 + + 容器上次终止时的退出状态码。 + + + + - **ephemeralContainerStatuses.lastState.terminated.startedAt** (Time) + + 容器上次执行的开始时间。 + + Time 是 `time.Time` 的包装器,支持正确编组为 YAML 和 JSON。 + time 包所提供的许多工厂方法都有包装器。 + + + + - **ephemeralContainerStatuses.lastState.terminated.finishedAt** (Time) + + 容器上次终止的时间。 + + Time 是 `time.Time` 的包装器,支持正确编组为 YAML 和 JSON。 + time 包所提供的许多工厂方法都有包装器。 + + + + - **ephemeralContainerStatuses.lastState.terminated.message** (string) + + 关于容器上次终止的消息。 + + - **ephemeralContainerStatuses.lastState.terminated.reason** (string) + + 容器上次终止的(简要)原因。 + + - **ephemeralContainerStatuses.lastState.terminated.signal** (int32) + + 容器上次终止的信号。 + + + + - **ephemeralContainerStatuses.lastState.waiting** (ContainerStateWaiting) + + 有关等待状态容器的详细信息。 + + **ContainerStateWaiting 是容器的等待状态。** + + + + - **ephemeralContainerStatuses.lastState.waiting.message** (string) + + 关于容器尚未运行的原因的消息。 + + - **ephemeralContainerStatuses.lastState.waiting.reason** (string) + + 容器尚未运行的(简要)原因。 + + + + - **ephemeralContainerStatuses.ready** (boolean),必需 + + 指定容器是否已通过其就绪态探测。 + + - **ephemeralContainerStatuses.restartCount** (int32),必需 + + 容器重新启动的次数。 + + + + - **ephemeralContainerStatuses.started** (boolean) + + 指定容器是否已通过其启动探测。初始化为 false,在 startProbe 成功之后变为 true。 + 在容器重新启动时或者 kubelet 暂时失去状态时重置为 false。 + 在未定义 startupProbe 时始终为 true。 + +## PodList {#PodList} + + +PodList 是 Pod 的列表。 + +
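作为示意(取值均为虚构),一个 PodList 响应的结构大致如下:

```yaml
apiVersion: v1
kind: PodList
metadata:
  resourceVersion: "123456"     # 虚构的值
items:
- apiVersion: v1
  kind: Pod
  metadata:
    name: my-app                # 虚构的 Pod
    namespace: default
  spec:
    containers:
    - name: app
      image: nginx
```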
+ + +- **items** ([]}}">Pod),必需 + + Pod 列表。更多信息: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md + + +- **apiVersion** (string) + + apiVersion 定义对象表示的版本化模式。服务器应将已识别的模式转换为最新的内部值, + 并可能拒绝无法识别的值。更多信息: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + + +- **kind**(string) + + kind 是一个字符串值,表示此对象表示的 REST 资源。服务器可以从客户端提交请求的端点推断出资源类别。 + 无法更新。采用驼峰式命名。更多信息: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + + +- **metadata** (}}">ListMeta) + + 标准的列表元数据。更多信息: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + + +## 操作 {#Operations} + +
+ + +### `get` 读取指定的 Pod + + +#### HTTP 请求 + +GET /api/v1/namespaces/{namespace}/pods/{name} + + +#### 参数 + + +- **name** (**路径参数**): string,必需 + + Pod 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">Pod): OK + +401: Unauthorized + + +### `get` 读取指定 Pod 的 ephemeralcontainers + + +#### HTTP 请求 + +GET /api/v1/namespaces/{namespace}/pods/{name}/ephemeralcontainers + + +#### 参数 + + +- **name** (**路径参数**): string,必需 + + Pod 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">Pod): OK + +401: Unauthorized + + + +### `get` 读取指定 Pod 的日志 + + +#### HTTP 请求 + +GET /api/v1/namespaces/{namespace}/pods/{name}/log + + +#### 参数 + + +- **name** (**路径参数**): string,必需 + + Pod 的名称。 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **container** (**查询参数**): string + + 为其流式传输日志的容器。如果 Pod 中有一个容器,则默认为仅容器。 + + +- **follow** (**查询参数**):boolean + + 跟踪 Pod 的日志流。默认为 false。 + +- **insecureSkipTLSVerifyBackend** (**查询参数**):boolean + + `insecureSkipTLSVerifyBackend` 表示 API 服务器不应确认它所连接的后端的服务证书的有效性。 + 这将使 API 服务器和后端之间的 HTTPS 连接不安全。 + 这意味着 API 服务器无法验证它接收到的日志数据是否来自真正的 kubelet。 + 如果 kubelet 配置为验证 API 服务器的 TLS 凭据,这并不意味着与真实 kubelet + 的连接容易受到中间人攻击(例如,攻击者无法拦截来自真实 kubelet 的实际日志数据)。 + + +- **limitBytes** (**查询参数**): integer + + 如果设置,则表示在终止日志输出之前从服务器读取的字节数。 + 设置此参数可能导致无法显示完整的最后一行日志记录,并且可能返回略多于或略小于指定限制。 + +- **pretty** (**查询参数**): string + + }}">pretty + + +- **previous** (**查询参数**):boolean + + 返回之前终止了的容器的日志。默认为 false。 + +- **sinceSeconds** (**查询参数**): integer + + 显示日志的当前时间之前的相对时间(以秒为单位)。如果此值早于 Pod 启动时间, + 则仅返回自 Pod 启动以来的日志。如果此值是将来的值,则不会返回任何日志。 + 只能指定 `sinceSeconds` 或 `sinceTime` 之一。 + + +- **tailLines** (**查询参数**): integer + + 如果设置,则从日志末尾开始显示的行数。如果未指定,则从容器创建或 `sinceSeconds` 或 + `sinceTime` 时刻显示日志。 + +- **timestamps** (**查询参数**):boolean + + 如果为 true,则在每行日志输出的开头添加 RFC3339 或 RFC3339Nano 时间戳。默认为 false。 + + +#### 响应 + +200 (string): OK + +401: Unauthorized + + +### `get` 读取指定 Pod 的状态 + + +#### HTTP 请求 + +GET /api/v1/namespaces/{namespace}/pods/{name}/status + + +#### 参数 + + +- **name** (**路径参数**): string,必需 + + Pod 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">Pod): OK + +401: Unauthorized + + +### `list` 列出或观察 Pod 种类的对象 + + +#### HTTP 请求 + +GET /api/v1/namespaces/{namespace}/pods + + +#### 参数 + + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **allowWatchBookmarks** (**查询参数**):boolean + + }}">allowWatchBookmarks + +- **continue** (**查询参数**): string + + }}">continue + + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + +- **limit** (**查询参数**): integer + + }}">limit + + +- **pretty** (**查询参数**): string + + }}">pretty + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + +- **timeoutSeconds** (**查询参数**): integer + + }}">timeoutSeconds + +- **watch** (**查询参数**):boolean + + }}">watch + + +#### 响应 + + +200 (}}">PodList): OK + +401: Unauthorized + + +### `list` 列出或观察 Pod 种类的对象 + + +#### HTTP 请求 + +GET /api/v1/pods + + +#### 参数 + + +- **allowWatchBookmarks** (**查询参数**):boolean + + }}">allowWatchBookmarks + +- **continue** (**查询参数**): string + + }}">continue + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + + +- **labelSelector** (**查询参数**): string + + 
}}">labelSelector + +- **limit** (**查询参数**): integer + + }}">limit + +- **pretty** (**查询参数**): string + + }}">pretty + + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + + + +- **timeoutSeconds** (**查询参数**):integer + + }}">timeoutSeconds + +- **watch** (**查询参数**):boolean + + }}">watch + + +#### 响应 + +200 (}}">PodList): OK + +401: Unauthorized + + +### `create` 创建一个 Pod + +#### HTTP 请求 + +POST /api/v1/namespaces/{namespace}/pods + + +#### 参数 + + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**:}}">Pod,必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + + + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">Pod): OK + +201 (}}">Pod): Created + +202 (}}">Pod): Accepted + +401: Unauthorized + + +### `update` 替换指定的 Pod + + +#### HTTP 请求 + +PUT /api/v1/namespaces/{namespace}/pods/{name} + + +#### 参数 + + +- **name** (**路径参数**): string,必需 + + Pod 的名称。 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**:}}">Pod,必需 + + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">Pod): OK + +201 (}}">Pod): Created + +401: Unauthorized + + +### `update` 替换指定 Pod 的 ephemeralcontainers + + +#### HTTP 请求 + +PUT /api/v1/namespaces/{namespace}/pods/{name}/ephemeralcontainers + + +#### 参数 + + +- **name** (**路径参数**): string,必需 + + Pod 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**:}}">Pod,必需 + + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">Pod): OK + +201 (}}">Pod): Created + +401: Unauthorized + + +### `update` 替换指定 Pod 的状态 + + +#### HTTP 请求 + +PUT /api/v1/namespaces/{namespace}/pods/{name}/status + + +#### 参数 + + +- **name** (**路径参数**): string,必需 + + Pod 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**:}}">Pod,必需 + + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">Pod): OK + +201 (}}">Pod): Created + +401: Unauthorized + + +### `patch` 部分更新指定 Pod + + +#### HTTP 请求 + +PATCH /api/v1/namespaces/{namespace}/pods/{name} + + +#### 参数 + + +- **name** (**路径参数**): string,必需 + + Pod 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**:}}">Patch,必需 + + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **force** (**查询参数**):boolean + + }}">force + +- **pretty** (**查询参数**):string + + }}">pretty + + +#### 响应 + +200 (}}">Pod): OK + +201 (}}">Pod): Created + +401: Unauthorized + + +### `patch` 部分更新指定 Pod 的 ephemeralcontainers + + +#### HTTP 请求 + +PATCH /api/v1/namespaces/{namespace}/pods/{name}/ephemeralcontainers + + +#### 参数 + + +- **name** (**路径参数**): string,必需 + + Pod 的名称。 + +- **namespace** (**路径参数**): string,必需 + + 
}}">namespace + +- **body**:}}">Patch,必需 + + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **force** (**查询参数**):boolean + + }}">force + +- **pretty** (**查询参数**):string + + }}">pretty + + +#### 响应 + +200 (}}">Pod): OK + +201 (}}">Pod): Created + +401: Unauthorized + + +### `patch` 部分更新指定 Pod 的状态 + + +#### HTTP 请求 + +PATCH /api/v1/namespaces/{namespace}/pods/{name}/status + + +#### 参数 + + +- **name** (**路径参数**): string,必需 + + Pod 的名称。 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**:}}">Patch,必需 + + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **force** (**查询参数**):boolean + + }}">force + +- **pretty** (**查询参数**):string + + }}">pretty + + +#### 响应 + +200 (}}">Pod): OK + +201 (}}">Pod): Created + +401: Unauthorized + + +### `delete` 删除一个 Pod + + +#### HTTP 请求 + +DELETE /api/v1/namespaces/{namespace}/pods/{name} + + +#### 参数 + + +- **name** (**路径参数**): string,必需 + + Pod 的名称。 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**:}}">DeleteOptions + + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **gracePeriodSeconds** (**查询参数**): integer + + }}">gracePeriodSeconds + + +- **pretty** (**查询参数**): string + + }}">pretty + +- **propagationPolicy** (**查询参数**): string + + }}">propagationPolicy + + +#### 响应 + +200 (}}">Pod): OK + +202 (}}">Pod): Accepted + +401: Unauthorize + + +### `deletecollection` 删除 Pod 的集合 + + +#### HTTP 请求 + +DELETE /api/v1/namespaces/{namespace}/pods + + +#### 参数 + + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**:}}">DeleteOptions + + +- **continue** (**查询参数**): string + + }}">continue + +- **dryRun** (**查询参数**): string + + }}">dryRun + + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + +- **gracePeriodSeconds** (**查询参数**): integer + + }}">gracePeriodSeconds + + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + +- **limit** (**查询参数**): integer + + }}">limit + + +- **pretty** (**查询参数**): string + + }}">pretty + +- **propagationPolicy** (**查询参数**): string + + }}">propagationPolicy + + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + + +- **timeoutSeconds** (**查询参数**): integer + + }}">timeoutSeconds + + +#### 响应 + +200 (}}">Status): OK + +401: Unauthorized + From 04506f1cabb6a265485c85db63b9929247daef61 Mon Sep 17 00:00:00 2001 From: liulijin <253954033@qq.com> Date: Mon, 25 Jul 2022 16:41:35 +0800 Subject: [PATCH 193/292] html label err, this cause description of namespace in PolicyRule did not appear in right place Signed-off-by: liulijin <253954033@qq.com> --- content/zh-cn/docs/reference/config-api/apiserver-audit.v1.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/zh-cn/docs/reference/config-api/apiserver-audit.v1.md b/content/zh-cn/docs/reference/config-api/apiserver-audit.v1.md index e7bc19c4f68d0..920e8e0146852 100644 --- a/content/zh-cn/docs/reference/config-api/apiserver-audit.v1.md +++ b/content/zh-cn/docs/reference/config-api/apiserver-audit.v1.md @@ -743,12 +743,12 @@ PolicyRule 包含一个映射,基于元数据将请求映射到某审计级别 -

此规则所适用的名字空间列表。 空字符串("")意味着适用于非名字空间作用域的资源。 空列表意味着适用于所有名字空间。

+
@@ -77,7 +76,7 @@ defaulting to 0.0.0.0:10256
 metricsBindAddress is the IP address and port for the metrics server to serve on,
 defaulting to 127.0.0.1:10249 (set to 0.0.0.0 for all interfaces)
 -->
-metricsBindAddress 字段是度量值服务器提供服务时所使用的的 IP 地址和端口,
+metricsBindAddress 字段是指标服务器提供服务时所使用的 IP 地址和端口,
 默认设置为 '127.0.0.1:10249'(设置为 0.0.0.0 意味着在所有接口上提供服务)。

@@ -101,7 +100,7 @@ defaulting to 127.0.0.1:10249 (set to 0.0.0.0 for all interfaces)
 Profiling handlers will be handled by metrics server.
 -->
 enableProfiling 字段通过 '/debug/pprof' 处理程序在 Web 界面上启用性能分析。
-性能分析处理程序将由度量值服务器执行。
+性能分析处理程序将由指标服务器执行。

--rootfs string

[EXPERIMENTAL] The path to the 'real' host root filesystem.

[EXPERIMENTAL] O caminho para o 'real' sistema de arquivos raiz do host.

nonResourceURLs
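结合上文 PolicyRule 中 namespaces 字段的说明以及这里的 nonResourceURLs 字段,下面给出一个最小的审计策略(Policy)配置示意,用来说明这两个字段的用法;其中的规则级别与取值仅为示例:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # 对 kube-system 名字空间中的资源请求仅记录元数据;
  # 如果列表中包含空字符串 "",则规则同样适用于非名字空间作用域的资源
  - level: Metadata
    namespaces: ["kube-system"]
  # 对 /healthz*、/version 等非资源端点的请求不做记录
  - level: None
    nonResourceURLs:
      - "/healthz*"
      - "/version"
```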
From 1d94b16b27f0bb1dd5de2edaadc4255ce4b19b6d Mon Sep 17 00:00:00 2001 From: bhangra Date: Mon, 25 Jul 2022 18:02:21 +0900 Subject: [PATCH 194/292] Update kubelet-config-file.md trying to fit it to yaml style, while making it work on my system required quite a change. I'm not sure if this is appropriate to change it so much. --- .../administer-cluster/kubelet-config-file.md | 23 +++++++++++++------ 1 file changed, 16 insertions(+), 7 deletions(-) diff --git a/content/en/docs/tasks/administer-cluster/kubelet-config-file.md b/content/en/docs/tasks/administer-cluster/kubelet-config-file.md index da1e167cccb54..fdcecf47cdb86 100644 --- a/content/en/docs/tasks/administer-cluster/kubelet-config-file.md +++ b/content/en/docs/tasks/administer-cluster/kubelet-config-file.md @@ -28,13 +28,22 @@ in this struct. Make sure the Kubelet has read permissions on the file. Here is an example of what this file might look like: ``` -apiVersion: kubelet.config.k8s.io/v1beta1 -kind: KubeletConfiguration -address: "192.168.0.8" -port: 20250 -serializeImagePulls: false -evictionHard: - memory.available: "200Mi" +apiVersion: v1 +kind: ConfigMap +metadata: + name: kubelet-config-1.20 + namespace: kube-system +data: + kubelet: | + apiVersion: kubelet.config.k8s.io/v1beta1 + kind: KubeletConfiguration + address: "192.168.0.8" + port: 20250 + serializeImagePulls: false + evictionHard: + memory.available: "200Mi" +~ + ``` In the example, the Kubelet is configured to serve on IP address 192.168.0.8 and port 20250, pull images in parallel, From 35337abc4685fe48a6bca5f49a41198e1a950748 Mon Sep 17 00:00:00 2001 From: Sean Wei Date: Mon, 25 Jul 2022 17:14:00 +0800 Subject: [PATCH 195/292] [zh-cn] Sync working-with-objects/names.md --- .../overview/working-with-objects/names.md | 20 ++++++++++++------- 1 file changed, 13 insertions(+), 7 deletions(-) diff --git a/content/zh-cn/docs/concepts/overview/working-with-objects/names.md b/content/zh-cn/docs/concepts/overview/working-with-objects/names.md index 85710cda6a46e..8b8bea11b61fe 100644 --- a/content/zh-cn/docs/concepts/overview/working-with-objects/names.md +++ b/content/zh-cn/docs/concepts/overview/working-with-objects/names.md @@ -3,6 +3,14 @@ title: 对象名称和 IDs content_type: concept weight: 20 --- + @@ -21,10 +29,10 @@ For example, you can only have one Pod named `myapp-1234` within the same [names 中有一个名为 `myapp-1234` 的 Pod,但是可以命名一个 Pod 和一个 Deployment 同为 `myapp-1234`。 对于用户提供的非唯一性的属性,Kubernetes 提供了 -[标签(Labels)](/zh-cn/docs/concepts/working-with-objects/labels)和 +[标签(Labels)](/zh-cn/docs/concepts/overview/working-with-objects/labels/)和 [注解(Annotation)](/zh-cn/docs/concepts/overview/working-with-objects/annotations/)机制。 @@ -75,7 +83,7 @@ DNS 子域名的定义可参见 [RFC 1123](https://tools.ietf.org/html/rfc1123) - 必须以字母数字结尾 下面是一个名为 `nginx-demo` 的 Pod 的配置清单: @@ -149,10 +157,10 @@ spec: - containerPort: 80 ``` +{{< note >}} -{{< note >}} 某些资源类型可能具有额外的命名约束。 {{< /note >}} @@ -175,5 +183,3 @@ UUIDs 是标准化的,见 ISO/IEC 9834-8 和 ITU-T X.667。 --> * 进一步了解 Kubernetes [标签](/zh-cn/docs/concepts/overview/working-with-objects/labels/) * 参阅 [Kubernetes 标识符和名称](https://git.k8s.io/design-proposals-archive/architecture/identifiers.md)的设计文档 - - From eb8aa3f642f71b8b11b37a47e70bebbb81e1e605 Mon Sep 17 00:00:00 2001 From: Garrit Franke <32395585+garritfra@users.noreply.github.com> Date: Mon, 25 Jul 2022 11:24:58 +0200 Subject: [PATCH 196/292] Remove periods from headings --- .../configure-pod-container/configure-service-account.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git 
a/content/en/docs/tasks/configure-pod-container/configure-service-account.md b/content/en/docs/tasks/configure-pod-container/configure-service-account.md index 70122389448dd..b241eb9d126de 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-service-account.md +++ b/content/en/docs/tasks/configure-pod-container/configure-service-account.md @@ -29,7 +29,7 @@ When they do, they are authenticated as a particular Service Account (for exampl -## Use the Default Service Account to access the API server. +## Use the Default Service Account to access the API server When you create a pod, if you do not specify a service account, it is automatically assigned the `default` service account in the same namespace. @@ -68,7 +68,7 @@ spec: The pod spec takes precedence over the service account if both specify a `automountServiceAccountToken` value. -## Use Multiple Service Accounts. +## Use Multiple Service Accounts Every namespace has a default service account resource called `default`. You can list this and any other serviceAccount resources in the namespace with this command: @@ -136,7 +136,7 @@ You can clean up the service account from this example like this: kubectl delete serviceaccount/build-robot ``` -## Manually create a service account API token. +## Manually create a service account API token Suppose we have an existing service account named "build-robot" as mentioned above, and we create a new secret manually. From 5a3fa65b941a49d6cab7d40e2a715829c3dcd05c Mon Sep 17 00:00:00 2001 From: "yanrong.shi" Date: Mon, 25 Jul 2022 17:25:09 +0800 Subject: [PATCH 197/292] Update diagram-guide.md --- content/zh-cn/docs/contribute/style/diagram-guide.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/zh-cn/docs/contribute/style/diagram-guide.md b/content/zh-cn/docs/contribute/style/diagram-guide.md index 55d0282f48d81..4f106e9af65c7 100644 --- a/content/zh-cn/docs/contribute/style/diagram-guide.md +++ b/content/zh-cn/docs/contribute/style/diagram-guide.md @@ -723,12 +723,12 @@ Note that the live editor doesn't recognize Hugo shortcodes. ### Example 1 - Pod topology spread constraints Figure 6 shows the diagram appearing in the -[Pod topology pread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/#node-labels) +[Pod topology pread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/#node-labels) page. --> ### 示例 1 - Pod 拓扑分布约束 -图 6 展示的是 [Pod 拓扑分布约束](/zh-cn/docs/concepts/workloads/pods/pod-topology-spread-constraints/#node-labels) +图 6 展示的是 [Pod 拓扑分布约束](/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/#node-labels) 页面所出现的图表。 {{< mermaid >}} From 7024a599fbbbad2a64d9a5c02868fd35307b6f91 Mon Sep 17 00:00:00 2001 From: divya-mohan0209 Date: Mon, 25 Jul 2022 14:57:28 +0530 Subject: [PATCH 198/292] Revert "Remove commas from kubelet configuration example" --- .../administer-cluster/kubelet-config-file.md | 23 ++++++------------- 1 file changed, 7 insertions(+), 16 deletions(-) diff --git a/content/en/docs/tasks/administer-cluster/kubelet-config-file.md b/content/en/docs/tasks/administer-cluster/kubelet-config-file.md index fdcecf47cdb86..668f4532a51fb 100644 --- a/content/en/docs/tasks/administer-cluster/kubelet-config-file.md +++ b/content/en/docs/tasks/administer-cluster/kubelet-config-file.md @@ -28,22 +28,13 @@ in this struct. Make sure the Kubelet has read permissions on the file. 
Here is an example of what this file might look like: ``` -apiVersion: v1 -kind: ConfigMap -metadata: - name: kubelet-config-1.20 - namespace: kube-system -data: - kubelet: | - apiVersion: kubelet.config.k8s.io/v1beta1 - kind: KubeletConfiguration - address: "192.168.0.8" - port: 20250 - serializeImagePulls: false - evictionHard: - memory.available: "200Mi" -~ - +apiVersion: kubelet.config.k8s.io/v1beta1 +kind: KubeletConfiguration +address: "192.168.0.8", +port: 20250, +serializeImagePulls: false, +evictionHard: + memory.available: "200Mi" ``` In the example, the Kubelet is configured to serve on IP address 192.168.0.8 and port 20250, pull images in parallel, From c75f5c3e920bbb20c9ad1fe61c8de6920a7208ab Mon Sep 17 00:00:00 2001 From: divya-mohan0209 Date: Mon, 25 Jul 2022 15:04:01 +0530 Subject: [PATCH 199/292] Update + add changes suggested --- .../docs/tasks/administer-cluster/kubelet-config-file.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/content/en/docs/tasks/administer-cluster/kubelet-config-file.md b/content/en/docs/tasks/administer-cluster/kubelet-config-file.md index 668f4532a51fb..2ed522628d308 100644 --- a/content/en/docs/tasks/administer-cluster/kubelet-config-file.md +++ b/content/en/docs/tasks/administer-cluster/kubelet-config-file.md @@ -27,12 +27,12 @@ The configuration file must be a JSON or YAML representation of the parameters in this struct. Make sure the Kubelet has read permissions on the file. Here is an example of what this file might look like: -``` +```yaml apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration -address: "192.168.0.8", -port: 20250, -serializeImagePulls: false, +address: "192.168.0.8" +port: 20250 +serializeImagePulls: false evictionHard: memory.available: "200Mi" ``` From 64006e38831e507cee29ffdf1127ee361a20822a Mon Sep 17 00:00:00 2001 From: "yanrong.shi" Date: Mon, 25 Jul 2022 17:57:13 +0800 Subject: [PATCH 200/292] Update config.md --- content/zh-cn/docs/reference/scheduling/config.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/zh-cn/docs/reference/scheduling/config.md b/content/zh-cn/docs/reference/scheduling/config.md index 0cd7190d71bc5..b79b23e2e9e53 100644 --- a/content/zh-cn/docs/reference/scheduling/config.md +++ b/content/zh-cn/docs/reference/scheduling/config.md @@ -229,10 +229,10 @@ extension points: 实现的扩展点:`filter`,`score`. 
-- `PodTopologySpread`:实现了 [Pod 拓扑分布](/zh-cn/docs/concepts/workloads/pods/pod-topology-spread-constraints/)。 +- `PodTopologySpread`:实现了 [Pod 拓扑分布](/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/)。 实现的扩展点:`preFilter`,`filter`,`preScore`,`score`。 如果你的集群跨了多个可用区或者地理区域,你可以使用节点标签,结合 -[Pod 拓扑分布约束](/zh-cn/docs/concepts/workloads/pods/pod-topology-spread-constraints/) +[Pod 拓扑分布约束](/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/) 来控制如何在你的集群中多个失效域之间分布 Pods。这里的失效域可以是 地理区域、可用区甚至是特定节点。 这些提示信息使得{{< glossary_tooltip text="调度器" term_id="kube-scheduler" >}} From c54c1ec4507ed1f71a5dbe5bf1ffef8d1f099733 Mon Sep 17 00:00:00 2001 From: gaiyaning Date: Mon, 25 Jul 2022 19:29:52 +0800 Subject: [PATCH 202/292] "Modify" both can ensure that "should translate to" both used together, can ensure that --- .../configure-liveness-readiness-startup-probes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/zh-cn/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/zh-cn/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md index bf86ee2d1f045..8ca6b9cd65710 100644 --- a/content/zh-cn/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md +++ b/content/zh-cn/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md @@ -570,7 +570,7 @@ for it, and that containers are restarted when they fail. HTTP 和 TCP 的就绪探测器配置也和存活探测器的配置完全相同。 就绪和存活探测可以在同一个容器上并行使用。 -两者都可以确保流量不会发给还未就绪的容器,当这些探测失败时容器会被重新启动。 +两者共同使用,可以确保流量不会发给还未就绪的容器,当这些探测失败时容器会被重新启动。 + For example, to run the webhook on any objects whose namespace is not associated with "runlevel" of "0" or "1"; you will set the selector as follows: "namespaceSelector": { + --> - namespaceSelector 根据对象的命名空间是否与 selector 匹配来决定是否在该对象上运行 Webhook。 - 如果对象本身是 Namespace,则针对 object.metadata.labels 执行匹配。 - 如果对象是其他集群作用域资源,则永远不会跳过 Webhook 的匹配动作。 - - 例如,为了针对 “runlevel” 不为 “0” 或 “1” 的名字空间中的所有对象运行 Webhook; - 你可以按如下方式设置 selector : - ``` - "namespaceSelector": { - "matchExpressions": [ - { - "key": "runlevel", - "operator": "NotIn", - "values": [ - "0", - "1" - ] - } - ] - } - ``` - - - 相反,如果你只想针对 “environment” 为 “prod” 或 “staging” 的名字空间中的对象运行 Webhook; - 你可以按如下方式设置 selector: - ``` - "namespaceSelector": { - "matchExpressions": [ - { - "key": "environment", - "operator": "In", - "values": [ - "prod", - "staging" - ] - } - ] - } - ``` - + + 相反,如果你只想针对 “environment” 为 “prod” 或 “staging” 的名字空间中的对象运行 Webhook; + 你可以按如下方式设置 selector: + ``` + "namespaceSelector": { + "matchExpressions": [ + { + "key": "environment", + "operator": "In", + "values": [ + "prod", + "staging" + ] + } + ] + } + ``` + - 有关标签选择算符的更多示例,请参阅 - https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/labels。 + Default to the empty LabelSelector, which matches everything. 
+ --> - 默认为空的 LabelSelector,匹配所有对象。 + 有关标签选择算符的更多示例,请参阅 + https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/labels。 + + 默认为空的 LabelSelector,匹配所有对象。 - reinvocationPolicy 表示这个 webhook 是否可以被多次调用,作为一次准入评估的一部分。可取值有 “Never” 和 “IfNeeded”。 - - Never: 在一次录取评估中,webhook 被调用的次数不会超过一次。 + reinvocationPolicy 表示这个 Webhook 是否可以被多次调用,作为一次准入评估的一部分。可取值有 “Never” 和 “IfNeeded”。 + + - Never: 在一次录取评估中,Webhook 被调用的次数不会超过一次。 - IfNeeded:如果被录取的对象在被最初的 Webhook 调用后又被其他录取插件修改, - 那么该 webhook 将至少被额外调用一次作为录取评估的一部分。 - 指定此选项的 webhook **必须**是幂等的,能够处理它们之前承认的对象。 + 那么该 Webhook 将至少被额外调用一次作为录取评估的一部分。 + 指定此选项的 Webhook **必须**是幂等的,能够处理它们之前承认的对象。 注意:**不保证额外调用的次数正好为1。** 如果额外的调用导致对对象的进一步修改,Webhook 不保证会再次被调用。 - **使用该选项的 webhook 可能会被重新排序,以最小化额外调用的数量。** + **使用该选项的 Webhook 可能会被重新排序,以最小化额外调用的数量。** 在保证所有的变更都完成后验证一个对象,使用验证性质的准入 Webhook 代替。 默认值为 “Never” 。 @@ -387,8 +389,8 @@ MutatingWebhookConfiguration 描述准入 Webhook 的配置,该 Webhook 可接 - **webhooks.rules.apiGroups** ([]string) - apiGroups 是资源所属的 API 组列表。'*' 是所有组。 - 如果存在 '*',则列表的长度必须为 1。必需。 + apiGroups 是资源所属的 API 组列表。`*` 是所有组。 + 如果存在 `*`,则列表的长度必须为 1。必需。 + +### kubernetes.io/psp(已弃用) {#kubernetes-io-psp} + +例如:`kubernetes.io/psp: restricted` + +这个注解只在你使用 [PodSecurityPolicies](/zh-cn/docs/concepts/security/pod-security-policy/) 时才有意义。 + +当 PodSecurityPolicy 准入控制器接受一个 Pod 时,会修改该 Pod, +并给这个 Pod 添加此注解。 +注解的值是用来对 Pod 进行验证检查的 PodSecurityPolicy 的名称。 + 1. 优先考虑允许 Pod 保持原样,不会更改 Pod 字段默认值或其他配置的 PodSecurityPolicy。 这类非更改性质的 PodSecurityPolicy 对象之间的顺序无关紧要。 2. 如果必须要为 Pod 设置默认值或者其他配置,(按名称顺序)选择第一个允许 Pod 操作的 PodSecurityPolicy 对象。 +当根据 PodSecurityPolicy 对一个 Pod 进行验证时,会为 Pod 添加 +[一个 `kubernetes.io/psp` 注释](/zh-cn/docs/reference/labels-annotations-taints/#kubernetes-io-psp)会被添加到 Pod 中, +注解的值为 PodSecurityPolicy 的名称。 + {{< note >}} ### 创建一个策略和一个 Pod {#create-a-policy-and-a-pod} -在一个文件中定义一个示例的 PodSecurityPolicy 对象。 -这里的策略只是用来禁止创建有特权要求的 Pods。 +下面是一个防止创建特权 Pod 的策略。 + PodSecurityPolicy 对象的名称必须是合法的 [DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 @@ -477,7 +484,7 @@ And create it with kubectl: 使用 kubectl 执行创建操作: ```shell -kubectl-admin create -f example-psp.yaml +kubectl-admin create -f https://k8s.io/examples/policy/example-psp.yaml ``` +输出类似于: + ``` no ``` @@ -597,11 +609,29 @@ pod "pause" created ``` 此次尝试不出所料地成功了! 
-不过任何创建特权 Pod 的尝试还是会被拒绝: +你可以验证 Pod 是根据新创建的 PodSecurityPolicy 验证的。 + +```shell +kubectl-user get pod pause -o yaml | grep kubernetes.io/psp +``` + + +输出类似于: + +``` +kubernetes.io/psp: example +``` + +但任何试图创建特权 Pod 的请求仍然会被拒绝。 ```shell kubectl-user create -f- < Date: Sat, 23 Jul 2022 15:40:05 +0800 Subject: [PATCH 207/292] Update deprecation-guide.md --- .../docs/reference/using-api/deprecation-guide.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/content/zh-cn/docs/reference/using-api/deprecation-guide.md b/content/zh-cn/docs/reference/using-api/deprecation-guide.md index 70b0277985ad8..2d2f7476c08d7 100644 --- a/content/zh-cn/docs/reference/using-api/deprecation-guide.md +++ b/content/zh-cn/docs/reference/using-api/deprecation-guide.md @@ -723,13 +723,13 @@ For example: ### 定位何处使用了已弃用的 API -使用 [client warnings, metrics, and audit information available in 1.19+](https://kubernetes.io/blog/2020/09/03/warnings/#deprecation-warnings) -来定位在何处使用了已启用的 API。 +使用 [client warnings, metrics, and audit information available in 1.19+](/blog/2020/09/03/warnings/#deprecation-warnings) +来定位在何处使用了已弃用的 API。 -例如,要将较老的 Deployment 转换为 `apps/v1` 版本,你可以运行 +例如,要将较老的 Deployment 版本转换为 `apps/v1` 版本,你可以运行 `kubectl-convert -f ./my-deployment.yaml --output-version apps/v1` @@ -763,5 +763,5 @@ For example, to convert an older Deployment to `apps/v1`, you can run: Note that this may use non-ideal default values. To learn more about a specific resource, check the Kubernetes [API reference](/docs/reference/kubernetes-api/). --> -注意这种操作生成的结果中可能使用的默认值并不理想。 +需要注意的是这种操作使用的默认值可能并不理想。 要进一步了解某个特定资源,可查阅 Kubernetes [API 参考](/zh-cn/docs/reference/kubernetes-api/)。 From db572ae969743c2d8df0207f21bb83a52ee078f1 Mon Sep 17 00:00:00 2001 From: "yanrong.shi" Date: Mon, 25 Jul 2022 21:51:41 +0800 Subject: [PATCH 208/292] Update user-guide.md --- .../zh-cn/docs/concepts/windows/user-guide.md | 20 +++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/content/zh-cn/docs/concepts/windows/user-guide.md b/content/zh-cn/docs/concepts/windows/user-guide.md index 9e85ab2b25f6e..ede5611ad5ccd 100644 --- a/content/zh-cn/docs/concepts/windows/user-guide.md +++ b/content/zh-cn/docs/concepts/windows/user-guide.md @@ -157,17 +157,17 @@ port 80 of the container directly to the Service. --> 1. 
检查部署是否成功。请验证: - * 使用 `kubectl get pods` 从 Linux 控制平面节点能够列出两个 Pod - * 跨网络的节点到 Pod 通信,从 Linux 控制平面节点上执行 `curl` 访问 - Pod IP 的 80 端口以检查 Web 服务器响应 + * 当执行 `kubectl get pods` 命令时,能够从 Linux 控制平面所在的节点上列出两个 Pod。 + * 跨网络的节点到 Pod 通信,从 Linux 控制平面所在的节点上执行 `curl` 命令来访问 + Pod IP 的 80 端口以检查 Web 服务器响应。 * Pod 间通信,使用 `docker exec` 或 `kubectl exec` - 在 Pod 之间(以及跨主机,如果你有多个 Windows 节点)互 ping - * Service 到 Pod 的通信,在 Linux 控制平面节点以及独立的 Pod 中执行 `curl` - 访问虚拟的服务 IP(在 `kubectl get services` 下查看) - * 服务发现,使用 Kubernetes [默认 DNS 后缀](/zh-cn/docs/concepts/services-networking/dns-pod-service/#services)的服务名称, - 用 `curl` 访问服务名称 - * 入站连接,在 Linux 控制平面节点或集群外的机器上执行 `curl` 来访问 NodePort 服务 - * 出站连接,使用 `kubectl exec`,从 Pod 内部执行 `curl` 访问外部 IP + 命令进入容器,并在 Pod 之间(以及跨主机,如果你有多个 Windows 节点)相互进行 ping 操作。 + * Service 到 Pod 的通信,在 Linux 控制平面所在的节点以及独立的 Pod 中执行 `curl` + 命令来访问虚拟的服务 IP(在 `kubectl get services` 命令下查看)。 + * 服务发现,执行 `curl` 命令来访问带有 Kubernetes + [默认 DNS 后缀](/zh-cn/docs/concepts/services-networking/dns-pod-service/#services)的服务名称。 + * 入站连接,在 Linux 控制平面所在的节点上或集群外的机器上执行 `curl` 命令来访问 NodePort 服务。 + * 出站连接,使用 `kubectl exec`,从 Pod 内部执行 `curl` 访问外部 IP。 {{< note >}} -如果存在两个候选节点,都满足 `requiredDuringSchedulingIgnoredDuringExecution` 规则, +如果存在两个候选节点,都满足 `preferredDuringSchedulingIgnoredDuringExecution` 规则, 其中一个节点具有标签 `label-1:key-1`,另一个节点具有标签 `label-2:key-2`, 调度器会考察各个节点的 `weight` 取值,并将该权重值添加到节点的其他得分值之上, From 600e4e892c97b7a4818618d302621fba3f725697 Mon Sep 17 00:00:00 2001 From: Kinzhi Date: Tue, 26 Jul 2022 01:26:56 +0800 Subject: [PATCH 210/292] [zh-cn]Update content/zh-cn/docs/reference/_index.md --- content/zh-cn/docs/reference/_index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/zh-cn/docs/reference/_index.md b/content/zh-cn/docs/reference/_index.md index c0ca319c1f82b..b4fe004ca6bb1 100644 --- a/content/zh-cn/docs/reference/_index.md +++ b/content/zh-cn/docs/reference/_index.md @@ -135,7 +135,7 @@ operator to use or manage a cluster. * [kube-apiserver configuration (v1alpha1)](/docs/reference/config-api/apiserver-config.v1alpha1/) * [kube-apiserver configuration (v1)](/docs/reference/config-api/apiserver-config.v1/) * [kube-apiserver encryption (v1)](/docs/reference/config-api/apiserver-encryption.v1/) -* [kube-apiserver event rate limit (v1alpha1)](/docs/reference/config-api/apiserver-eventratelimit.v1/) +* [kube-apiserver event rate limit (v1alpha1)](/docs/reference/config-api/apiserver-eventratelimit.v1alpha1/) * [kubelet configuration (v1alpha1)](/docs/reference/config-api/kubelet-config.v1alpha1/) and [kubelet configuration (v1beta1)](/docs/reference/config-api/kubelet-config.v1beta1/) * [kubelet credential providers (v1alpha1)](/docs/reference/config-api/kubelet-credentialprovider.v1alpha1/) @@ -158,7 +158,7 @@ operator to use or manage a cluster. 
* [kube-apiserver 配置 (v1alpha1)](/zh-cn/docs/reference/config-api/apiserver-config.v1alpha1/) * [kube-apiserver 配置 (v1)](/zh-cn/docs/reference/config-api/apiserver-config.v1/) * [kube-apiserver 加密 (v1)](/zh-cn/docs/reference/config-api/apiserver-encryption.v1/) -* [kube-apiserver 事件速率限制 (v1alpha1)](/zh-cn/docs/reference/config-api/apiserver-eventratelimit.v1/) +* [kube-apiserver 事件速率限制 (v1alpha1)](/zh-cn/docs/reference/config-api/apiserver-eventratelimit.v1alpha1/) * [kubelet 配置 (v1alpha1)](/zh-cn/docs/reference/config-api/kubelet-config.v1alpha1/) 和 [kubelet 配置 (v1beta1)](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/) * [kubelet 凭据驱动 (v1alpha1)](/zh-cn/docs/reference/config-api/kubelet-credentialprovider.v1alpha1/) From e8163adeafe1f799df3c203c39457ce3c212c458 Mon Sep 17 00:00:00 2001 From: Kinzhi Date: Tue, 26 Jul 2022 01:29:30 +0800 Subject: [PATCH 211/292] [zh-cn]Update content/zh-cn/examples/service/networking/dual-stack-default-svc.yaml --- .../examples/service/networking/dual-stack-default-svc.yaml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/zh-cn/examples/service/networking/dual-stack-default-svc.yaml b/content/zh-cn/examples/service/networking/dual-stack-default-svc.yaml index 86eadd5478aa9..a42c7d8a2517d 100644 --- a/content/zh-cn/examples/service/networking/dual-stack-default-svc.yaml +++ b/content/zh-cn/examples/service/networking/dual-stack-default-svc.yaml @@ -3,10 +3,10 @@ kind: Service metadata: name: my-service labels: - app: MyApp + app.kubernetes.io/name: MyApp spec: selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - protocol: TCP port: 80 From 12e85f6e28817ad92ff144a70c29c04276c5f87e Mon Sep 17 00:00:00 2001 From: Shannon Kularathna Date: Fri, 22 Jul 2022 18:26:20 +0000 Subject: [PATCH 212/292] Update the Create section intro for accuracy --- .../managing-secret-using-config-file.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md index 6fb5cdca3d188..08b1b4813c7a7 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md @@ -15,18 +15,18 @@ description: Creating Secret objects using resource configuration file. ## Create the Config file -You can create a Secret in a file first, in JSON or YAML format, and then -create that object. The +You can define the `Secret` object in a file first, in JSON or YAML format, and then create that object. The [Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core) resource contains two maps: `data` and `stringData`. The `data` field is used to store arbitrary data, encoded using base64. The `stringData` field is provided for convenience, and it allows you to provide -Secret data as unencoded strings. +the same data as unencoded strings. The keys of `data` and `stringData` must consist of alphanumeric characters, `-`, `_` or `.`. -For example, to store two strings in a Secret using the `data` field, convert -the strings to base64 as follows: +The following example stores two strings in a Secret using the `data` field. 
+ +Convert the strings to base64 as follows: ```shell echo -n 'admin' | base64 From 28a872ede5c1b2a21d3269c7937847178fb5a0e5 Mon Sep 17 00:00:00 2001 From: Shannon Kularathna Date: Fri, 22 Jul 2022 18:30:42 +0000 Subject: [PATCH 213/292] Update step intros and improve step formatting --- .../managing-secret-using-config-file.md | 36 +++++++------------ 1 file changed, 13 insertions(+), 23 deletions(-) diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md index 08b1b4813c7a7..46e470b2479a9 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md @@ -26,29 +26,30 @@ The keys of `data` and `stringData` must consist of alphanumeric characters, The following example stores two strings in a Secret using the `data` field. -Convert the strings to base64 as follows: +Convert the strings to base64: ```shell echo -n 'admin' | base64 -``` - -The output is similar to: - -``` -YWRtaW4= -``` - -```shell echo -n '1f2d1e2e67df' | base64 ``` +{{< note >}} +The serialized JSON and YAML values of Secret data are encoded as base64 +strings. Newlines are not valid within these strings and must be omitted. When +using the `base64` utility on Darwin/macOS, users should avoid using the `-b` +option to split long lines. Conversely, Linux users *should* add the option +`-w 0` to `base64` commands or the pipeline `base64 | tr -d '\n'` if the `-w` +option is not available. +{{< /note >}} + The output is similar to: ``` +YWRtaW4= MWYyZDFlMmU2N2Rm ``` -Write a Secret config file that looks like this: +Create the configuration file: ```yaml apiVersion: v1 @@ -64,15 +65,6 @@ data: Note that the name of a Secret object must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). -{{< note >}} -The serialized JSON and YAML values of Secret data are encoded as base64 -strings. Newlines are not valid within these strings and must be omitted. When -using the `base64` utility on Darwin/macOS, users should avoid using the `-b` -option to split long lines. Conversely, Linux users *should* add the option -`-w 0` to `base64` commands or the pipeline `base64 | tr -d '\n'` if the `-w` -option is not available. -{{< /note >}} - For certain scenarios, you may wish to use the `stringData` field instead. This field allows you to put a non-base64 encoded string directly into the Secret, and the string will be encoded for you when the Secret is created or updated. 
@@ -104,9 +96,7 @@ stringData: password: ``` -## Create the Secret object - -Now create the Secret using [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply): +Create the Secret using [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply): ```shell kubectl apply -f ./secret.yaml From 134eeb2282011aac2f4aedb0c4f9a2b3f5e7f103 Mon Sep 17 00:00:00 2001 From: Shannon Kularathna Date: Fri, 22 Jul 2022 18:35:00 +0000 Subject: [PATCH 214/292] Add step numbering to steps --- .../managing-secret-using-config-file.md | 69 ++++++++++--------- 1 file changed, 38 insertions(+), 31 deletions(-) diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md index 46e470b2479a9..9a98c4c79fa25 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md @@ -26,44 +26,51 @@ The keys of `data` and `stringData` must consist of alphanumeric characters, The following example stores two strings in a Secret using the `data` field. -Convert the strings to base64: +1. Convert the strings to base64: -```shell -echo -n 'admin' | base64 -echo -n '1f2d1e2e67df' | base64 -``` + ```shell + echo -n 'admin' | base64 + echo -n '1f2d1e2e67df' | base64 + ``` -{{< note >}} -The serialized JSON and YAML values of Secret data are encoded as base64 -strings. Newlines are not valid within these strings and must be omitted. When -using the `base64` utility on Darwin/macOS, users should avoid using the `-b` -option to split long lines. Conversely, Linux users *should* add the option -`-w 0` to `base64` commands or the pipeline `base64 | tr -d '\n'` if the `-w` -option is not available. -{{< /note >}} + {{< note >}} + The serialized JSON and YAML values of Secret data are encoded as base64 + strings. Newlines are not valid within these strings and must be omitted. When using the `base64` utility on Darwin/macOS, users should avoid using the `-b` option to split long lines. Conversely, Linux users *should* add the option `-w 0` to `base64` commands or the pipeline `base64 | tr -d '\n'` if the `-w` option is not available. {{< /note >}} -The output is similar to: + The output is similar to: -``` -YWRtaW4= -MWYyZDFlMmU2N2Rm -``` + ``` + YWRtaW4= + MWYyZDFlMmU2N2Rm + ``` -Create the configuration file: +1. Create the configuration file: -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: mysecret -type: Opaque -data: - username: YWRtaW4= - password: MWYyZDFlMmU2N2Rm -``` + ```yaml + apiVersion: v1 + kind: Secret + metadata: + name: mysecret + type: Opaque + data: + username: YWRtaW4= + password: MWYyZDFlMmU2N2Rm + ``` + + Note that the name of a Secret object must be a valid + [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). + +1. Create the Secret using [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply): + + ```shell + kubectl apply -f ./secret.yaml + ``` + + The output is similar to: -Note that the name of a Secret object must be a valid -[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). + ``` + secret/mysecret created + ``` For certain scenarios, you may wish to use the `stringData` field instead. 
This field allows you to put a non-base64 encoded string directly into the Secret, From 0b9899c49de8cf1a40f19ace18787d7bdb9e2155 Mon Sep 17 00:00:00 2001 From: Shannon Kularathna Date: Fri, 22 Jul 2022 18:39:58 +0000 Subject: [PATCH 215/292] Add information about returned values in stringdata and a link to verify secret --- .../configmap-secret/managing-secret-using-config-file.md | 5 +++++ .../tasks/configmap-secret/managing-secret-using-kubectl.md | 2 +- 2 files changed, 6 insertions(+), 1 deletion(-) diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md index 9a98c4c79fa25..fd7c5d5cec9ee 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md @@ -72,6 +72,9 @@ The following example stores two strings in a Secret using the `data` field. secret/mysecret created ``` +To verify that the Secret was created and to decode the Secret data, refer to +[Managing Secrets using kubectl](/docs/tasks/configmap-secret/managing-secret-using-kubectl/#verify-the-secret). + For certain scenarios, you may wish to use the `stringData` field instead. This field allows you to put a non-base64 encoded string directly into the Secret, and the string will be encoded for you when the Secret is created or updated. @@ -102,6 +105,8 @@ stringData: username: password: ``` +When you retrieve the Secret data, the command returns the encoded values, +and not the plaintext values you provided in `stringData`. Create the Secret using [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply): diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md index 7e607b9b799a4..086d44eed8b91 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md @@ -111,7 +111,7 @@ username: 5 bytes The commands `kubectl get` and `kubectl describe` avoid showing the contents of a `Secret` by default. This is to protect the `Secret` from being exposed -accidentally, or from being stored in a terminal log. +accidentally, or from being stored in a terminal log. To check the actual content of the encoded data, please refer to [Decoding the Secret](#decoding-secret). ## Decoding the Secret {#decoding-secret} From a3dc78ac3a4ac5a52c4d95012779297c7ec323ff Mon Sep 17 00:00:00 2001 From: Shannon Kularathna Date: Fri, 22 Jul 2022 18:43:48 +0000 Subject: [PATCH 216/292] Remove the Check the secret section and merge into the existing sections --- .../managing-secret-using-config-file.md | 30 ++++--------------- 1 file changed, 5 insertions(+), 25 deletions(-) diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md index fd7c5d5cec9ee..46e259878a704 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md @@ -108,22 +108,7 @@ stringData: When you retrieve the Secret data, the command returns the encoded values, and not the plaintext values you provided in `stringData`. 
-Create the Secret using [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply): - -```shell -kubectl apply -f ./secret.yaml -``` - -The output is similar to: - -``` -secret/mysecret created -``` - -## Check the Secret - -The `stringData` field is a write-only convenience field. It is never output when -retrieving Secrets. For example, if you run the following command: +For example, if you run the following command: ```shell kubectl get secret mysecret -o yaml @@ -145,14 +130,9 @@ metadata: type: Opaque ``` -The commands `kubectl get` and `kubectl describe` avoid showing the contents of a `Secret` by -default. This is to protect the `Secret` from being exposed accidentally to an onlooker, -or from being stored in a terminal log. -To check the actual content of the encoded data, please refer to -[decoding secret](/docs/tasks/configmap-secret/managing-secret-using-kubectl/#decoding-secret). +### Specifying both `data` and `stringData` -If a field, such as `username`, is specified in both `data` and `stringData`, -the value from `stringData` is used. For example, the following Secret definition: +If you specify a field in both `data` and `stringData`, the value from `stringData` is used. For example, if you define the following Secret: ```yaml apiVersion: v1 @@ -166,7 +146,7 @@ stringData: username: administrator ``` -Results in the following Secret: +The `Secret` object is created as follows: ```yaml apiVersion: v1 @@ -182,7 +162,7 @@ metadata: type: Opaque ``` -Where `YWRtaW5pc3RyYXRvcg==` decodes to `administrator`. +`YWRtaW5pc3RyYXRvcg==` decodes to `administrator`. ## Clean Up From ced82c8e9696c636b554fd3a5ac30ea57429b6d2 Mon Sep 17 00:00:00 2001 From: Shannon Kularathna Date: Fri, 22 Jul 2022 18:32:19 +0000 Subject: [PATCH 217/292] Move stringdata info to its own section to avoid breaking up task flow --- .../tasks/configmap-secret/managing-secret-using-config-file.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md index 46e259878a704..a8d0db6c87e63 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md @@ -75,6 +75,8 @@ The following example stores two strings in a Secret using the `data` field. To verify that the Secret was created and to decode the Secret data, refer to [Managing Secrets using kubectl](/docs/tasks/configmap-secret/managing-secret-using-kubectl/#verify-the-secret). +### Specify unencoded data when creating a Secret + For certain scenarios, you may wish to use the `stringData` field instead. This field allows you to put a non-base64 encoded string directly into the Secret, and the string will be encoded for you when the Secret is created or updated. From 829411fce6feb260e1b17a129d8a7b126792ba9d Mon Sep 17 00:00:00 2001 From: Michael Date: Sat, 23 Jul 2022 08:44:01 +0800 Subject: [PATCH 218/292] [es] updated the home page --- content/es/_index.html | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/es/_index.html b/content/es/_index.html index af07c0764bbc1..1a5d8d06cdf5c 100644 --- a/content/es/_index.html +++ b/content/es/_index.html @@ -41,12 +41,12 @@

El desafío de migrar más de 150 microservicios a Kubernetes



- Asista a la KubeCon en San Diego del 18 al 21 de Nov. 2019 + Asista a la KubeCon en Norte América del 24 al 28 de Octubre 2022



- Asista a la KubeCon en Amsterdam del 30 Marzo al 2 Abril + Asista a la KubeCon en Europa del 17 al 21 de Abril 2023
From 5d384a6df8cf69a8c9fff7eca502254bf25c3b93 Mon Sep 17 00:00:00 2001 From: jacky Date: Tue, 26 Jul 2022 10:38:31 +0800 Subject: [PATCH 219/292] update punctuation,spaces and parentheses Signed-off-by: jacky --- .../zh-cn/case-studies/nordstrom/index.html | 30 +++++++++---------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/content/zh-cn/case-studies/nordstrom/index.html b/content/zh-cn/case-studies/nordstrom/index.html index f44f35997d4e1..c76be9019e9bc 100644 --- a/content/zh-cn/case-studies/nordstrom/index.html +++ b/content/zh-cn/case-studies/nordstrom/index.html @@ -31,7 +31,7 @@

解决方案

-

在四年前采用 DevOps 转型并启动持续集成/部署 (CI/CD)项目后,该公司将部署时间从 3 个月缩短到 30 分钟。但是他们想在部署环境上走得更快,所以他们开始他们的云原生之旅,采用与 Kubernetes 协调的 Docker 容器。

+

在四年前采用 DevOps 转型并启动持续集成/部署 (CI/CD)项目后,该公司将部署时间从 3 个月缩短到 30 分钟。但是他们想在部署环境上走得更快,所以他们开始他们的云原生之旅,采用与 Kubernetes 协调的 Docker 容器。

-

为 Nordstrom 构建 Kubernetes 企业平台的团队高级工程师 Dhawal Patel 说,“使用 Kubernetes 的 Nordstrom 技术开发人员现在项目部署得更快,并且能够只专注于编写应用程序。”此外,该团队还提高了运营效率,根据工作负载将 CPU 利用率从 5 倍提高到 12 倍。Patel 说:“我们运行了数千台虚拟机 (VM),但无法有效地使用所有这些资源。借助 Kubernetes,我们甚至不需要尝试去提高集群的效率,就能使运营效率增长 10 倍。”

+

为 Nordstrom 构建 Kubernetes 企业平台的团队高级工程师 Dhawal Patel 说,“使用 Kubernetes 的 Nordstrom 技术开发人员现在项目部署得更快,并且能够只专注于编写应用程序。”此外,该团队还提高了运营效率,根据工作负载将 CPU 利用率从 5 倍提高到 12 倍。Patel 说:“我们运行了数千台虚拟机(VM),但无法有效地使用所有这些资源。借助 Kubernetes,我们甚至不需要尝试去提高集群的效率,就能使运营效率增长 10 倍。”

-{{< case-studies/quote author="Dhawal Patel, Nordstrom 高级工程师" >}} -“我们一直在寻找通过技术进行优化和提供更多价值的方法。通过 Kubernetes ,我们在开发效率和运营效率这两方面取得了示范性的提升。这是一个双赢。” +{{< case-studies/quote author="Dhawal Patel, Nordstrom 高级工程师" >}} +“我们一直在寻找通过技术进行优化和提供更多价值的方法。通过 Kubernetes,我们在开发效率和运营效率这两方面取得了示范性的提升。这是一个双赢。” {{< /case-studies/quote >}} -

当 Dhawal Patel 五年前加入 Nordstrom ,担任该零售商网站的应用程序开发人员时,他意识到有机会帮助加快开发周期。

+

当 Dhawal Patel 五年前加入 Nordstrom,担任该零售商网站的应用程序开发人员时,他意识到有机会帮助加快开发周期。

-

在早期的 DevOps 时代,,Nordstrom 技术仍然遵循传统的孤岛团队和功能模型。Patel 说:“作为开发人员,我花在维护环境上的时间比编写代码和为业务增加价值的时间要多。我对此充满热情,因此我有机会参与帮助修复它。”

+

在早期的 DevOps 时代,Nordstrom 技术仍然遵循传统的孤岛团队和功能模型。Patel 说:“作为开发人员,我花在维护环境上的时间比编写代码和为业务增加价值的时间要多。我对此充满热情,因此我有机会参与帮助修复它。”

-

公司也渴望加快步伐,并在 2013 年启动了首个持续集成/部署 (CI/CD)项目。该项目是 Nordstrom 云原生之旅的第一步。

+

公司也渴望加快步伐,并在 2013 年启动了首个持续集成/部署(CI/CD)项目。该项目是 Nordstrom 云原生之旅的第一步。

-

开发人员和运营团队成员构建了一个 CI/CD 管道,在内部使用公司的服务器。团队选择了 Chef ,并编写了自动虚拟 IP 创建、服务器和负载均衡的指导手册。Patel 说:“项目完成后,部署时间从 3 个月减少到 30 分钟。我们仍有开发、测试、暂存、然后生产等多个环境需要重新部署。之后,每个运行 Chef 说明书的环境部署都只花 30 分钟。在那个时候,这是一个巨大的成就。”

+

开发人员和运营团队成员构建了一个 CI/CD 管道,在内部使用公司的服务器。团队选择了 Chef,并编写了自动虚拟 IP 创建、服务器和负载均衡的指导手册。Patel 说:“项目完成后,部署时间从 3 个月减少到 30 分钟。我们仍有开发、测试、暂存、然后生产等多个环境需要重新部署。之后,每个运行 Chef 说明书的环境部署都只花 30 分钟。在那个时候,这是一个巨大的成就。”

@@ -87,7 +87,7 @@

影响

-

Patel 说:“云提供了对资源的更快访问,因为我们在内部需要花数周时间才能部署一个虚拟机 (VM)来提供服务。但现在我们可以做同样的事情,只需五分钟。”

+

Patel 说:“云提供了对资源的更快访问,因为我们在内部需要花数周时间才能部署一个虚拟机(VM)来提供服务。但现在我们可以做同样的事情,只需五分钟。”

-

对于加入的团队来说,这些好处是立竿见影的。Grigoriu 说:“在我们的 Kubernetes 集群中运行的团队喜欢这样一个事实,即他们担心的问题更少,他们不需要管理基础设施或操作系统。早期使用者喜欢 Kubernetes 的声明特性,让他们不得不处理的面积减少。

+

对于加入的团队来说,这些好处是立竿见影的。Grigoriu 说:“在我们的 Kubernetes 集群中运行的团队喜欢这样一个事实,即他们担心的问题更少,他们不需要管理基础设施或操作系统。早期使用者喜欢 Kubernetes 的声明特性,让他们不得不处理的面积减少。

-

为了支持这些早期使用者,Patel 的团队开始发展集群并构建生产级服务。“我们与 Prometheus 集成了监控功能,并配有 Grafana 前端;我们使用 Fluentd 将日志推送到 Elasticsearch ,从而提供日志聚合”Patel 说。该团队还增加了数十个开源组件,包括 CNCF 项目,而且把这些成果都贡献给了 Kubernetes 、Terraform 和 kube2iam 。

+

为了支持这些早期使用者,Patel 的团队开始发展集群并构建生产级服务。“我们与 Prometheus 集成了监控功能,并配有 Grafana 前端;我们使用 Fluentd 将日志推送到 Elasticsearch,从而提供日志聚合”Patel 说。该团队还增加了数十个开源组件,包括 CNCF 项目,而且把这些成果都贡献给了 Kubernetes、Terraform 和 kube2iam。

-

现在有 60 多个开发团队在 Nordstrom 上运行 Kubernetes ,随着成功案例的涌现,更多的团队加入进来。Patel 说:“我们最初的客户群,那些愿意尝试这些的客户群,现在已经开始向后续用户宣传。一个早期使用者拥有 Docker 容器,他不知道如何在生产中运行它。我们和他坐在一起,在 15 分钟内,我们将其部署到生产中。他认为这是惊人的,他所在的组织更多的人开始加入进来。”

+

现在有 60 多个开发团队在 Nordstrom 上运行 Kubernetes,随着成功案例的涌现,更多的团队加入进来。Patel 说:“我们最初的客户群,那些愿意尝试这些的客户群,现在已经开始向后续用户宣传。一个早期使用者拥有 Docker 容器,他不知道如何在生产中运行它。我们和他一起协作,在 15 分钟内,我们将其部署到生产中。他认为这是惊人的,他所在的组织更多的人开始加入进来。”

{{< case-studies/quote >}} -“借助 Kubernetes ,我们甚至不需要尝试去提高集群的效率,目前 CPU 利用率为 40%,较之前增长了 10 倍。我们正在运行 2600 多个客户 Pod,如果它们直接进入云,这些 Pod 将是 2600 多个 VM。我们现在在 40 台 VM 上运行它们,因此这大大降低了运营开销。 +“借助 Kubernetes,我们甚至不需要尝试去提高集群的效率,目前 CPU 利用率为 40%,较之前增长了 10 倍。我们正在运行 2600 多个客户 Pod,如果它们直接进入云,这些 Pod 将是 2600 多个 VM。我们现在在 40 台 VM 上运行它们,因此这大大降低了运营开销。 {{< /case-studies/quote >}} -

速度很重要,并且很容易证明,但也许更大的影响在于运营效率。Patel 说:“我们在 AWS 上运行了数千个 VM ,它们的总体平均 CPU 利用率约为 4%。借助 Kubernetes ,我们甚至不需要尝试去提高集群的效率,目前 CPU 利用率为 40%,较之前增长了 10 倍。我们正在运行 2600 多个客户 pod ,如果它们直接上云,这些 Pod 将是 2600 多个 VM。我们现在在 40 台 VM 上运行它们,因此这大大降低了运营开销。

+

速度很重要,并且很容易证明,但也许更大的影响在于运营效率。Patel 说:“我们在 AWS 上运行了数千个 VM,它们的总体平均 CPU 利用率约为 4%。借助 Kubernetes,我们甚至不需要尝试去提高集群的效率,目前 CPU 利用率为 40%,较之前增长了 10 倍。我们正在运行 2600 多个客户 Pod,如果它们直接上云,这些 Pod 将是 2600 多个 VM。我们现在在 40 台 VM 上运行它们,因此这大大降低了运营开销。

-

因此,Patel 热切关注 Kubernetes 多集群能力的发展。他说:“有了集群联合,我们可以将内部部署作为主集群,将云作为辅助可突发集群。因此,当有周年销售或黑色星期五销售并且我们需要更多的容器时,我们可以上云。”

+

因此,Patel 热切关注 Kubernetes 多集群能力的发展。他说:“有了集群联合,我们可以将内部部署作为主集群,将云作为辅助可突发集群。因此,当有周年销售或黑色星期五销售并且我们需要更多的容器时,我们可以上云。”

+ @@ -24,7 +44,7 @@

挑战

-

其游戏业务是世界上最大的游戏业务之一,但这不是 NetEase 为中国消费者提供的所有。公司还经营电子商务、广告、音乐流媒体、在线教育和电子邮件平台;其中最后一个服务有近10亿用户通过网站使用免费的电子邮件服务,如 163.com。在2015 年,为所有这些系统提供基础设施的 NetEase Cloud 团队意识到,他们的研发流程正在减缓开发人员的速度。NetEase Cloud 和容器服务架构师冯长健表示:“我们的用户需要自己准备所有基础设施。”“我们希望通过无服务器容器服务自动为用户提供基础设施和工具。”

+

其游戏业务是世界上最大的游戏业务之一,但这不是 NetEase 为中国消费者提供的所有。公司还经营电子商务、广告、音乐流媒体、在线教育和电子邮件平台;其中最后一个服务有近10亿用户通过网站使用免费的电子邮件服务,如 163.com。在2015 年,为所有这些系统提供基础设施的 NetEase Cloud 团队意识到,他们的研发流程正在减缓开发人员的速度。NetEase Cloud 和容器服务架构师 Feng Changjian 表示:“我们的用户需要自己准备所有基础设施。”“我们希望通过无服务器容器服务自动为用户提供基础设施和工具。”

-

在考虑构建自己的业务流程解决方案后,NetEase 决定将其私有云平台建立在 Kubernetes 的基础上。这项技术来自 Google,这一事实让团队有信心,它能够跟上 NetEase 的规模。“经过2到3个月的评估,我们相信它能满足我们的需求,”冯长健说。该团队于 2015 年开始与 Kubernetes 合作,那会它甚至还不是 1.0 版本。如今,NetEase 内部云平台还使用了 CNCF 项目 PrometheusEnvoyHarborgRPC Helm, 在生产集群中运行 10000 个节点,并可支持集群多达 30000 个节点。基于对内部平台的学习,公司向外部客户推出了基于 Kubernetes 的云和微服务型 PaaS 产品,NetEase 轻舟微服务。

+

在考虑构建自己的业务流程解决方案后,NetEase 决定将其私有云平台建立在 Kubernetes 的基础上。这项技术来自 Google,这一事实让团队有信心,它能够跟上 NetEase 的规模。“经过2到3个月的评估,我们相信它能满足我们的需求,”Feng Changjian 说。该团队于 2015 年开始与 Kubernetes 合作,那会它甚至还不是 1.0 版本。如今,NetEase 内部云平台还使用了 CNCF 项目 PrometheusEnvoyHarborgRPCHelm, 在生产集群中运行 10000 个节点,并可支持集群多达 30000 个节点。基于对内部平台的学习,公司向外部客户推出了基于 Kubernetes 的云和微服务型 PaaS 产品,NetEase 轻舟微服务。

-

NetEase 团队报告说,Kubernetes 已经提高了研发效率一倍多,部署效率提高了 2.8 倍。“过去,如果我们想要进行升级,我们需要与其他团队合作,甚至加入其他部门,”冯长健说。“我们需要专人来准备一切,需要花费约半个小时。现在我们只需 5 分钟即可完成。”新平台还允许使用 GPU 和 CPU 资源进行混合部署。“以前,如果我们将所有资源都用于 GPU,则 CPU 的备用资源将没有。但是现在,由于混合部署,我们有了很大的改进,”他说。这些改进也提高了资源的利用率。

+

NetEase 团队报告说,Kubernetes 已经提高了研发效率一倍多,部署效率提高了 2.8 倍。“过去,如果我们想要进行升级,我们需要与其他团队合作,甚至加入其他部门,”Feng Changjian 说。“我们需要专人来准备一切,需要花费约半个小时。现在我们只需 5 分钟即可完成。”新平台还允许使用 GPU 和 CPU 资源进行混合部署。“以前,如果我们将所有资源都用于 GPU,则 CPU 的备用资源将没有。但是现在,由于混合部署,我们有了很大的改进,”他说。这些改进也提高了资源的利用率。

-{{< case-studies/quote author="曾宇兴,NetEase 架构师" >}} +{{< case-studies/quote author="Zeng Yuxing,NetEase 架构师" >}} “系统可以在单个集群中支持 30000 个节点。在生产中,我们在单个集群中获取到了 10000 个节点的数据。整个内部系统都在使用该系统进行开发、测试和生产。” {{< /case-studies/quote >}} @@ -61,18 +81,18 @@

影响

{{< /case-studies/lead >}} --> {{< case-studies/lead >}} -其游戏业务是世界第五大游戏业务,但这不是 NetEase 为消费者提供的所有业务。 +其游戏业务是世界第五大游戏业务,但这不是 NetEase 为消费者提供的所有业务。 {{< /case-studies/lead >}} -

公司还在中国经营电子商务、广告、音乐流媒体、在线教育和电子邮件平台;其中最后一个服务是有近 10 亿用户使用的网站,如 163.com 126.com 免费电子邮件服务。有了这样的规模,为所有这些系统提供基础设施的 NetEase Cloud 团队在 2015 年就意识到,他们的研发流程使得开发人员难以跟上需求。NetEase Cloud 和容器服务架构师冯长健表示:“我们的用户需要自己准备所有基础设施。”“我们渴望通过无服务器容器服务自动为用户提供基础设施和工具。”

+

公司还在中国经营电子商务、广告、音乐流媒体、在线教育和电子邮件平台;其中最后一个服务是有近 10 亿用户使用的网站,如 163.com126.com 免费电子邮件服务。有了这样的规模,为所有这些系统提供基础设施的 NetEase Cloud 团队在 2015 年就意识到,他们的研发流程使得开发人员难以跟上需求。NetEase Cloud 和容器服务架构师 Feng Changjian 表示:“我们的用户需要自己准备所有基础设施。”“我们渴望通过无服务器容器服务自动为用户提供基础设施和工具。”

-

在考虑构建自己的业务流程解决方案后,NetEase 决定将其私有云平台建立在 Kubernetes 的基础上。这项技术来自谷歌,这一事实让团队有信心,它能够跟上 NetEase 的规模。“经过 2 到 3 个月的评估,我们相信它能满足我们的需求,”冯长健说。

+

在考虑构建自己的业务流程解决方案后,NetEase 决定将其私有云平台建立在 Kubernetes 的基础上。这项技术来自 Google,这一事实让团队有信心,它能够跟上 NetEase 的规模。“经过 2 到 3 个月的评估,我们相信它能满足我们的需求,”Feng Changjian 说。

{{< case-studies/quote image="/images/case-studies/netease/banner3.jpg" - author="冯长健,NetEase Cloud 和容器托管平台架构师" + author="Feng Changjian,NetEase Cloud 和容器托管平台架构师" >}} “我们利用 Kubernetes 的可编程性,构建一个平台,以满足内部客户对升级和部署的需求。” {{< /case-studies/quote >}} @@ -92,12 +112,12 @@

影响

-

该团队于 2015 年开始采用 Kubernetes,那会它甚至还不是 1.0 版本,因为它相对易于使用,并且使 DevOps 在公司中得以实现。“我们放弃了 Kubernetes 的一些概念;我们只想使用标准化框架,”冯长健说。“我们利用 Kubernetes 的可编程性,构建一个平台,以满足内部客户对升级和部署的需求。”

+

该团队于 2015 年开始采用 Kubernetes,那会它甚至还不是 1.0 版本,因为它相对易于使用,并且使 DevOps 在公司中得以实现。“我们放弃了 Kubernetes 的一些概念;我们只想使用标准化框架,”Feng Changjian 说。“我们利用 Kubernetes 的可编程性,构建一个平台,以满足内部客户对升级和部署的需求。”

-

团队首先专注于构建容器平台以更好地管理资源,然后通过添加内部系统(如监视)来改进对微服务的支持。这意味着整合了 CNCF 项目 Prometheus EnvoyHarborgRPC Helm。“我们正在努力提供简化和标准化的流程,以便我们的用户和客户能够利用我们的最佳实践,”冯长健说。

+

团队首先专注于构建容器平台以更好地管理资源,然后通过添加内部系统(如监视)来改进对微服务的支持。这意味着整合了 CNCF 项目 PrometheusEnvoyHarborgRPCHelm。“我们正在努力提供简化和标准化的流程,以便我们的用户和客户能够利用我们的最佳实践,”Feng Changjian 说。

{{< case-studies/quote image="/images/case-studies/netease/banner4.jpg" - author="李兰青,NetEase Kubernetes 开发人员" + author="Li Lanqing,NetEase Kubernetes 开发人员" >}} “只要公司拥有成熟的团队和足够的开发人员,我认为 Kubernetes 是一个很好的有所助力的技术。” {{< /case-studies/quote >}} @@ -122,33 +142,33 @@

影响

-

“系统可以在单个群集中支持 30000 个节点。在生产中,我们在单个群集中获取到了 10000 个节点的数据。整个内部系统都在使用该系统进行开发、测试和生产。”

+

“如今,系统可以在单个集群中支持 30000 个节点,“架构师 Zeng Yuxing 说。“在生产中,我们在单个集群中获取到了 10000 个节点的数据。整个内部系统都在使用该系统进行开发、测试和生产。”

-

NetEase 团队报告说,Kubernetes 已经提高了研发效率一倍多。部署效率提高了 2.8 倍。“过去,如果我们想要进行升级,我们需要与其他团队合作,甚至加入其他部门,”冯长健说。“我们需要专人来准备一切,需要花费约半个小时。现在我们只需 5 分钟即可完成。”新平台还允许使用 GPU 和 CPU 资源进行混合部署。“以前,如果我们将所有资源都用于 GPU,则 CPU 的备用资源将没有。但是现在,由于混合部署,我们有了很大的改进,”他说。这些改进也提高了资源的利用率。

+

NetEase 团队报告说,Kubernetes 已经提高了研发效率一倍多。部署效率提高了 2.8 倍。“过去,如果我们想要进行升级,我们需要与其他团队合作,甚至加入其他部门,”Feng Changjian 说。“我们需要专人来准备一切,需要花费约半个小时。现在我们只需 5 分钟即可完成。”新平台还允许使用 GPU 和 CPU 资源进行混合部署。“以前,如果我们将所有资源都用于 GPU,则 CPU 的备用资源将没有。但是现在,由于混合部署,我们有了很大的改进,”他说。这些改进也提高了资源的利用率。

-{{< case-studies/quote author="李兰青,NetEase Kubernetes 开发人员">}} +{{< case-studies/quote author="Li Lanqing,NetEase Kubernetes 开发人员">}} “通过与这个社区接触,我们可以从中获得一些经验,我们也可以从中获益。我们可以看到社区所关心的问题和挑战,以便我们参与其中。” {{< /case-studies/quote >}} -

基于使用内部平台的成果和学习,公司向外部客户推出了基于 Kubernetes 的云和微服务型 PaaS 产品, NetEase 轻舟微服务。“我们的想法是,我们可以找到我们的游戏和电子商务以及云音乐提供商遇到的问题,所以我们可以整合他们的体验,并提供一个平台,以满足所有用户的需求,”曾宇兴说。

+

基于使用内部平台的成果和学习,公司向外部客户推出了基于 Kubernetes 的云和微服务型 PaaS 产品,NetEase 轻舟微服务。“我们的想法是,我们可以找到我们的游戏和电子商务以及云音乐提供商遇到的问题,所以我们可以整合他们的体验,并提供一个平台,以满足所有用户的需求,”Zeng Yuxing 说。

-

无论是否使用 NetEase 产品,该团队鼓励其他公司尝试 Kubernetes。Kubernetes 开发者李兰青表示:“只要公司拥有成熟的团队和足够的开发人员,我认为 Kubernetes 是一个很好的技术,可以帮助他们。”

+

无论是否使用 NetEase 产品,该团队鼓励其他公司尝试 Kubernetes。Kubernetes 开发者 Li Lanqing 表示:“只要公司拥有成熟的团队和足够的开发人员,我认为 Kubernetes 是一个很好的技术,可以帮助他们。”

-

作为最终用户和供应商,NetEase 已经更多地参与社区,向其他公司学习,分享他们所做的工作。该团队一直在为 Harbor 和 Envoy 项目做出贡献,在 NetEase 进行规模测试技术时提供反馈。“我们是一个团队,专注于应对微服务架构的挑战,”冯长健说。“通过与这个社区接触,我们可以从中获得一些经验,我们也可以从中获益。我们可以看到社区所关心的问题和挑战,以便我们参与其中。”

+

作为最终用户和供应商,NetEase 已经更多地参与社区,向其他公司学习,分享他们所做的工作。该团队一直在为 Harbor 和 Envoy 项目做出贡献,在 NetEase 进行规模测试技术时提供反馈。“我们是一个团队,专注于应对微服务架构的挑战,”Feng Changjian 说。“通过与这个社区接触,我们可以从中获得一些经验,我们也可以从中获益。我们可以看到社区所关心的问题和挑战,以便我们参与其中。”

From b396da8a947fe6fb4f8346ee771773cefcea9337 Mon Sep 17 00:00:00 2001 From: Will Vesey Date: Mon, 25 Jul 2022 23:45:55 -0400 Subject: [PATCH 221/292] Fix minor typo (#35296) * Fix minor typo Removes an extraneous `"` character * Additional minor grammatical changes --- .../migrating-from-dockershim/change-runtime-containerd.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd.md b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd.md index 8c5c8b3a72431..5b6afe04e5abe 100644 --- a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd.md +++ b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd.md @@ -5,8 +5,8 @@ content_type: task --- This task outlines the steps needed to update your container runtime to containerd from Docker. It -is applicable for cluster operators running Kubernetes 1.23 or earlier. Also this covers an -example scenario for migrating from dockershim to containerd and alternative container runtimes +is applicable for cluster operators running Kubernetes 1.23 or earlier. This also covers an +example scenario for migrating from dockershim to containerd. Alternative container runtimes can be picked from this [page](/docs/setup/production-environment/container-runtimes/). ## {{% heading "prerequisites" %}} @@ -100,7 +100,7 @@ then run the following commands: Edit the file `/var/lib/kubelet/kubeadm-flags.env` and add the containerd runtime to the flags. `--container-runtime=remote` and -`--container-runtime-endpoint=unix:///run/containerd/containerd.sock"`. +`--container-runtime-endpoint=unix:///run/containerd/containerd.sock`. Users using kubeadm should be aware that the `kubeadm` tool stores the CRI socket for each host as an annotation in the Node object for that host. To change it you can execute the following command From 9cf150f4c2e5cd239b9a31138bddf92bc5332f91 Mon Sep 17 00:00:00 2001 From: Kartik Sharma Date: Tue, 26 Jul 2022 09:24:45 +0530 Subject: [PATCH 222/292] Update content/en/docs/concepts/storage/dynamic-provisioning.md Co-authored-by: divya-mohan0209 --- content/en/docs/concepts/storage/dynamic-provisioning.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/concepts/storage/dynamic-provisioning.md b/content/en/docs/concepts/storage/dynamic-provisioning.md index c8bdf8840976d..6da1850adaf8f 100644 --- a/content/en/docs/concepts/storage/dynamic-provisioning.md +++ b/content/en/docs/concepts/storage/dynamic-provisioning.md @@ -116,7 +116,7 @@ can enable this behavior by: is enabled on the API server. An administrator can mark a specific `StorageClass` as default by adding the -[`storageclass.kubernetes.io/is-default-class`](/docs/reference/labels-annotations-taints/#storageclass-kubernetes-io-is-default-class) annotation to it. +`storageclass.kubernetes.io/is-default-class` [annotation](/docs/reference/labels-annotations-taints/#storageclass-kubernetes-io-is-default-class) to it. 
When a default `StorageClass` exists in a cluster and a user creates a `PersistentVolumeClaim` with `storageClassName` unspecified, the `DefaultStorageClass` admission controller automatically adds the From 95f53722078788bf10e458bd8812252433487afc Mon Sep 17 00:00:00 2001 From: Arhell Date: Tue, 26 Jul 2022 10:42:19 +0300 Subject: [PATCH 223/292] [pl] update KubeCon date --- content/pl/_index.html | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/pl/_index.html b/content/pl/_index.html index 6d61651f02788..0bb06fe15fe07 100644 --- a/content/pl/_index.html +++ b/content/pl/_index.html @@ -44,12 +44,12 @@

The Challenges of Migrating 150+ Microservices to Kubernetes



- Weź udział w KubeCon Europe 17-20.06.2022 + Weź udział w KubeCon North America 24-28.10.2022



- Weź udział w KubeCon North America 24-28.10.2022 + Weź udział w KubeCon Europe 17-21.04.2023
From c3f86b7fc081ed84e972315c5352c421a60b0689 Mon Sep 17 00:00:00 2001 From: Tom Kivlin <52716470+tomkivlin@users.noreply.github.com> Date: Tue, 26 Jul 2022 09:19:15 +0100 Subject: [PATCH 224/292] Update content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md Co-authored-by: Shannon Kularathna --- .../tasks/configure-pod-container/configure-pod-configmap.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md index b6679980ad8c4..dd2f2b1c6733d 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -746,7 +746,7 @@ If you run this pod, and there is no ConfigMap named `no-config`, the mounted vo When a mounted ConfigMap is updated, the projected content is eventually updated too. This applies in the case where an optionally referenced ConfigMap comes into existence after a pod has started. -Kubelet checks whether the mounted ConfigMap is fresh on every periodic sync. However, it uses its local TTL-based cache for getting the current value of the +The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync. However, it uses its local TTL-based cache for getting the current value of the ConfigMap. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the pod can be as long as kubelet sync period (1 minute by default) + TTL of ConfigMaps cache (1 minute by default) in kubelet. From 748d4a998d89bde6c78b3eb301f11deb7372be57 Mon Sep 17 00:00:00 2001 From: Tom Kivlin <52716470+tomkivlin@users.noreply.github.com> Date: Tue, 26 Jul 2022 09:20:11 +0100 Subject: [PATCH 225/292] Update content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md Co-authored-by: Shannon Kularathna --- .../configure-pod-container/configure-pod-configmap.md | 10 ++-------- 1 file changed, 2 insertions(+), 8 deletions(-) diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md index dd2f2b1c6733d..44b0dfcbd6731 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -712,11 +712,8 @@ If you run this pod, and there is a ConfigMap named `a-config` but that ConfigMa a key named `akey`, the output is also empty. If you do set a value for `akey` in the `a-config` ConfigMap, this pod prints that value and then terminates. -#### Optional ConfigMap via volume plugin - -Volumes and files provided by a ConfigMap can be also be marked as optional. -The ConfigMap or the key specified does not have to exist. -The mount path for such items will always be created. +You can also mark the volumes and files provided by a ConfigMap as optional. Kubernetes always creates the mount paths for the volume, even if the referenced ConfigMap or key doesn't exist. For example, the following +Pod specification marks a volume that references a ConfigMap as optional: ```yaml apiVersion: v1 @@ -737,9 +734,6 @@ spec: name: no-config optional: true # mark the source ConfigMap as optional restartPolicy: Never -``` - -If you run this pod, and there is no ConfigMap named `no-config`, the mounted volume will be empty. 
### Mounted ConfigMaps are updated automatically From ad33b0c10757978ffaddfa66a691dc9ea302bec1 Mon Sep 17 00:00:00 2001 From: Tom Kivlin Date: Tue, 26 Jul 2022 09:48:28 +0100 Subject: [PATCH 226/292] updates from Shannon feedback --- .../configure-pod-configmap.md | 13 +++++-------- 1 file changed, 5 insertions(+), 8 deletions(-) diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md index 44b0dfcbd6731..02329f931ca4d 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -657,7 +657,7 @@ data: ### Restrictions -- You must create a ConfigMap before referencing it in a Pod specification, or mark the ConfigMap as "optional" (see [Optional ConfigMaps](#optional-configmaps)). If you reference a ConfigMap that doesn't exist, or hasn't been marked as "optional" the Pod won't start. Likewise, references to keys that don't exist in the ConfigMap will prevent the pod from starting. +- You must create the `ConfigMap` object before you reference it in a Pod specification. Alternatively, mark the ConfigMap reference as `optional` in the Pod spec (see [Optional ConfigMaps](#optional-configmaps)). If you reference a ConfigMap that doesn't exist and you don't mark the reference as `optional`, the Pod won't start. Similarly, references to keys that don't exist in the ConfigMap will also prevent the Pod from starting, unless you mark the key references as `optional`. - If you use `envFrom` to define environment variables from ConfigMaps, keys that are considered invalid will be skipped. The pod will be allowed to start, but the invalid names will be recorded in the event log (`InvalidVariableNames`). The log message lists each skipped key. For example: @@ -677,15 +677,11 @@ data: ### Optional ConfigMaps -In a Pod, or pod template, you can mark a reference to a ConfigMap as _optional_. -If the ConfigMap is non-existent, the configuration for which it provides data in the Pod (e.g. environment variable, mounted volume) will be empty. +You can mark a reference to a ConfigMap as _optional_ in a Pod specification. +If the ConfigMap doesn't exist, the configuration for which it provides data in the Pod (e.g. environment variable, mounted volume) will be empty. If the ConfigMap exists, but the referenced key is non-existent the data is also empty. -#### Optional ConfigMap in environment variables - -There might be situations where environment variables are not always required. 
-You can mark an environment variables for a container as optional, -like this: +For example, the following Pod specification marks an environment variable from a ConfigMap as optional: ```yaml apiVersion: v1 @@ -734,6 +730,7 @@ spec: name: no-config optional: true # mark the source ConfigMap as optional restartPolicy: Never +``` ### Mounted ConfigMaps are updated automatically From 08ec8543dba613c288b1072d443c4bb925e9cd09 Mon Sep 17 00:00:00 2001 From: Michael Date: Tue, 26 Jul 2022 18:33:58 +0800 Subject: [PATCH 227/292] [zh-cn] updated /concepts/storage/dynamic-provisioning.md --- .../concepts/storage/dynamic-provisioning.md | 46 ++++++++++--------- 1 file changed, 24 insertions(+), 22 deletions(-) diff --git a/content/zh-cn/docs/concepts/storage/dynamic-provisioning.md b/content/zh-cn/docs/concepts/storage/dynamic-provisioning.md index f256992599b8f..01fd7ed29cc92 100644 --- a/content/zh-cn/docs/concepts/storage/dynamic-provisioning.md +++ b/content/zh-cn/docs/concepts/storage/dynamic-provisioning.md @@ -1,5 +1,5 @@ --- -title: 动态卷供应 +title: 动态卷制备 content_type: concept weight: 40 --- @@ -20,11 +20,11 @@ to represent them in Kubernetes. The dynamic provisioning feature eliminates the need for cluster administrators to pre-provision storage. Instead, it automatically provisions storage when it is requested by users. --> -动态卷供应允许按需创建存储卷。 -如果没有动态供应,集群管理员必须手动地联系他们的云或存储提供商来创建新的存储卷, +动态卷制备允许按需创建存储卷。 +如果没有动态制备,集群管理员必须手动地联系他们的云或存储提供商来创建新的存储卷, 然后在 Kubernetes 集群创建 [`PersistentVolume` 对象](/zh-cn/docs/concepts/storage/persistent-volumes/)来表示这些卷。 -动态供应功能消除了集群管理员预先配置存储的需要。 相反,它在用户请求时自动供应存储。 +动态制备功能消除了集群管理员预先配置存储的需要。相反,它在用户请求时自动制备存储。 @@ -40,9 +40,9 @@ from the API group `storage.k8s.io`. A cluster administrator can define as many *provisioner*) that provisions a volume and the set of parameters to pass to that provisioner when provisioning. --> -动态卷供应的实现基于 `storage.k8s.io` API 组中的 `StorageClass` API 对象。 -集群管理员可以根据需要定义多个 `StorageClass` 对象,每个对象指定一个*卷插件*(又名 *provisioner*), -卷插件向卷供应商提供在创建卷时需要的数据卷信息及相关参数。 +动态卷制备的实现基于 `storage.k8s.io` API 组中的 `StorageClass` API 对象。 +集群管理员可以根据需要定义多个 `StorageClass` 对象,每个对象指定一个**卷插件**(又名 **provisioner**), +卷插件向卷制备商提供在创建卷时需要的数据卷信息及相关参数。 集群管理员可以在集群中定义和公开多种存储(来自相同或不同的存储系统),每种都具有自定义参数集。 -该设计也确保终端用户不必担心存储供应的复杂性和细微差别,但仍然能够从多个存储选项中进行选择。 +该设计也确保终端用户不必担心存储制备的复杂性和细微差别,但仍然能够从多个存储选项中进行选择。 -## 启用动态卷供应 {#enabling-dynamic-provisioning} +## 启用动态卷制备 {#enabling-dynamic-provisioning} -要启用动态供应功能,集群管理员需要为用户预先创建一个或多个 `StorageClass` 对象。 -`StorageClass` 对象定义当动态供应被调用时,哪一个驱动将被使用和哪些参数将被传递给驱动。 +要启用动态制备功能,集群管理员需要为用户预先创建一个或多个 `StorageClass` 对象。 +`StorageClass` 对象定义当动态制备被调用时,哪一个驱动将被使用和哪些参数将被传递给驱动。 StorageClass 对象的名字必须是一个合法的 [DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 以下清单创建了一个 `StorageClass` 存储类 "slow",它提供类似标准磁盘的永久磁盘。 @@ -110,7 +110,7 @@ parameters: -## 使用动态卷供应 +## 使用动态卷制备 {#using-dynamic-provisioning} -用户通过在 `PersistentVolumeClaim` 中包含存储类来请求动态供应的存储。 +用户通过在 `PersistentVolumeClaim` 中包含存储类来请求动态制备的存储。 在 Kubernetes v1.9 之前,这通过 `volume.beta.kubernetes.io/storage-class` 注解实现。然而,这个注解自 v1.6 起就不被推荐使用了。 用户现在能够而且应该使用 `PersistentVolumeClaim` 对象的 `storageClassName` 字段。 这个字段的值必须能够匹配到集群管理员配置的 `StorageClass` 名称(见[下面](#enabling-dynamic-provisioning))。 @@ -150,7 +150,7 @@ spec: This claim results in an SSD-like Persistent Disk being automatically provisioned. When the claim is deleted, the volume is destroyed. 
--> -该声明会自动供应一块类似 SSD 的永久磁盘。 +该声明会自动制备一块类似 SSD 的永久磁盘。 在删除该声明后,这个卷也会被销毁。 -可以在集群上启用动态卷供应,以便在未指定存储类的情况下动态设置所有声明。 +可以在集群上启用动态卷制备,以便在未指定存储类的情况下动态设置所有声明。 集群管理员可以通过以下方式启用此行为: -- 标记一个 `StorageClass` 为 *默认*; +- 标记一个 `StorageClass` 为 **默认**; - 确保 [`DefaultStorageClass` 准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)在 API 服务端被启用。 -管理员可以通过向其添加 `storageclass.kubernetes.io/is-default-class` 注解来将特定的 `StorageClass` 标记为默认。 +管理员可以通过向其添加 `storageclass.kubernetes.io/is-default-class` +[annotation](/zh-cn/docs/reference/labels-annotations-taints/#storageclass-kubernetes-io-is-default-class) +来将特定的 `StorageClass` 标记为默认。 当集群中存在默认的 `StorageClass` 并且用户创建了一个未指定 `storageClassName` 的 `PersistentVolumeClaim` 时, `DefaultStorageClass` 准入控制器会自动向其中添加指向默认存储类的 `storageClassName` 字段。 @@ -191,13 +193,13 @@ Note that there can be at most one *default* storage class on a cluster, or a `PersistentVolumeClaim` without `storageClassName` explicitly specified cannot be created. --> -请注意,集群上最多只能有一个 *默认* 存储类,否则无法创建没有明确指定 +请注意,集群上最多只能有一个 **默认** 存储类,否则无法创建没有明确指定 `storageClassName` 的 `PersistentVolumeClaim`。 -## 拓扑感知 +## 拓扑感知 {#topology-awareness} -在[多区域](/zh-cn/docs/setup/best-practices/multiple-zones/)集群中,Pod 可以被分散到多个区域。 -单区域存储后端应该被供应到 Pod 被调度到的区域。 +在[多可用区](/zh-cn/docs/setup/best-practices/multiple-zones/)集群中,Pod 可以被分散到某个区域的多个可用区。 +单可用区存储后端应该被制备到 Pod 被调度到的可用区。 这可以通过设置[卷绑定模式](/zh-cn/docs/concepts/storage/storage-classes/#volume-binding-mode)来实现。 From 38020e8b48b3ae5dbae8dff6405d896fbe1edd52 Mon Sep 17 00:00:00 2001 From: Michael Date: Tue, 26 Jul 2022 19:13:51 +0800 Subject: [PATCH 228/292] [zh-cn] updated /concepts/storage/volume-snapshots.md --- .../docs/concepts/storage/volume-snapshots.md | 90 +++++++++++-------- 1 file changed, 51 insertions(+), 39 deletions(-) diff --git a/content/zh-cn/docs/concepts/storage/volume-snapshots.md b/content/zh-cn/docs/concepts/storage/volume-snapshots.md index 719bcfa2a323a..c523fb6c4dbc2 100644 --- a/content/zh-cn/docs/concepts/storage/volume-snapshots.md +++ b/content/zh-cn/docs/concepts/storage/volume-snapshots.md @@ -12,13 +12,11 @@ weight: 40 -{{< feature-state for_k8s_version="1.17" state="beta" >}} - -在 Kubernetes 中,卷快照是一个存储系统上卷的快照,本文假设你已经熟悉了 Kubernetes -的 [持久卷](/zh-cn/docs/concepts/storage/persistent-volumes/)。 +在 Kubernetes 中,**卷快照** 是一个存储系统上卷的快照,本文假设你已经熟悉了 Kubernetes +的[持久卷](/zh-cn/docs/concepts/storage/persistent-volumes/)。 @@ -31,29 +29,31 @@ In Kubernetes, a _VolumeSnapshot_ represents a snapshot of a volume on a storage -与 `PersistentVolume` 和 `PersistentVolumeClaim` 两个 API 资源用于给用户和管理员提供卷类似,`VolumeSnapshotContent` 和 `VolumeSnapshot` 两个 API 资源用于给用户和管理员创建卷快照。 +与 `PersistentVolume` 和 `PersistentVolumeClaim` 这两个 API 资源用于给用户和管理员制备卷类似, +`VolumeSnapshotContent` 和 `VolumeSnapshot` 这两个 API 资源用于给用户和管理员创建卷快照。 -`VolumeSnapshotContent` 是一种快照,从管理员已提供的集群中的卷获取。就像持久卷是集群的资源一样,它也是集群中的资源。 +`VolumeSnapshotContent` 是从一个卷获取的一种快照,该卷由管理员在集群中进行制备。 +就像持久卷(PersistentVolume)是集群的资源一样,它也是集群中的资源。 -`VolumeSnapshot` 是用户对于卷的快照的请求。它类似于持久卷声明。 +`VolumeSnapshot` 是用户对于卷的快照的请求。它类似于持久卷声明(PersistentVolumeClaim)。 -`VolumeSnapshotClass` 允许指定属于 `VolumeSnapshot` 的不同属性。在从存储系统的相同卷上获取的快照之间,这些属性可能有所不同,因此不能通过使用与 `PersistentVolumeClaim` 相同的 `StorageClass` 来表示。 +`VolumeSnapshotClass` 允许指定属于 `VolumeSnapshot` 的不同属性。在从存储系统的相同卷上获取的快照之间, +这些属性可能有所不同,因此不能通过使用与 `PersistentVolumeClaim` 相同的 `StorageClass` 来表示。 -卷快照能力为 Kubernetes 用户提供了一种标准的方式来在指定时间点 -复制卷的内容,并且不需要创建全新的卷。例如,这一功能使得数据库管理员 -能够在执行编辑或删除之类的修改之前对数据库执行备份。 +卷快照能力为 Kubernetes 用户提供了一种标准的方式来在指定时间点复制卷的内容,并且不需要创建全新的卷。 
+例如,这一功能使得数据库管理员能够在执行编辑或删除之类的修改之前对数据库执行备份。 -* API 对象 `VolumeSnapshot`,`VolumeSnapshotContent` 和 `VolumeSnapshotClass` - 是 {{< glossary_tooltip term_id="CustomResourceDefinition" text="CRDs" >}}, +* API 对象 `VolumeSnapshot`,`VolumeSnapshotContent` 和 `VolumeSnapshotClass` + 是 {{< glossary_tooltip term_id="CustomResourceDefinition" text="CRD" >}}, 不属于核心 API。 * `VolumeSnapshot` 支持仅可用于 CSI 驱动。 * 作为 `VolumeSnapshot` 部署过程的一部分,Kubernetes 团队提供了一个部署于控制平面的快照控制器, @@ -78,12 +78,12 @@ Users need to be aware of the following when using this feature: 并且负责创建和删除 `VolumeSnapshotContent` 对象。 边车 csi-snapshotter 监视 `VolumeSnapshotContent` 对象, 并且触发针对 CSI 端点的 `CreateSnapshot` 和 `DeleteSnapshot` 的操作。 -* 还有一个验证性质的 Webhook 服务器,可以对快照对象进行更严格的验证。 - Kubernetes 发行版应将其与快照控制器和 CRD(而非 CSI 驱动程序)一起安装。 +* 还有一个验证性质的 Webhook 服务器,可以对快照对象进行更严格的验证。 + Kubernetes 发行版应将其与快照控制器和 CRD(而非 CSI 驱动程序)一起安装。 此服务器应该安装在所有启用了快照功能的 Kubernetes 集群中。 -* CSI 驱动可能实现,也可能没有实现卷快照功能。CSI 驱动可能会使用 csi-snapshotter +* CSI 驱动可能实现,也可能没有实现卷快照功能。CSI 驱动可能会使用 csi-snapshotter 来提供对卷快照的支持。详见 [CSI 驱动程序文档](https://kubernetes-csi.github.io/docs/) -* Kubernetes 负责 CRDs 和快照控制器的安装。 +* Kubernetes 负责 CRD 和快照控制器的安装。 ## 卷快照和卷快照内容的生命周期 {#lifecycle-of-a-volume-snapshot-and-volume-snapshot-content} -`VolumeSnapshotContents` 是集群中的资源。`VolumeSnapshots` 是对于这些资源的请求。`VolumeSnapshotContents` 和 `VolumeSnapshots` 之间的交互遵循以下生命周期: +`VolumeSnapshotContents` 是集群中的资源。`VolumeSnapshots` 是对于这些资源的请求。 +`VolumeSnapshotContents` 和 `VolumeSnapshots` 之间的交互遵循以下生命周期: -### 供应卷快照 {#provisioning-volume-snapshot} +### 制备卷快照 {#provisioning-volume-snapshot} -快照可以通过两种方式进行配置:预配置或动态配置。 +快照可以通过两种方式进行制备:预制备或动态制备。 -#### 预配置 {#static} -集群管理员创建多个 `VolumeSnapshotContents`。它们带有存储系统上实际卷快照的详细信息,可以供集群用户使用。它们存在于 Kubernetes API 中,并且能够被使用。 +#### 预制备 {#static} + +集群管理员创建多个 `VolumeSnapshotContents`。它们带有存储系统上实际卷快照的详细信息,可以供集群用户使用。 +它们存在于 Kubernetes API 中,并且能够被使用。 -#### 动态的 {#dynamic} +#### 动态制备 {#dynamic} 可以从 `PersistentVolumeClaim` 中动态获取快照,而不用使用已经存在的快照。 在获取快照时,[卷快照类](/zh-cn/docs/concepts/storage/volume-snapshot-classes/) @@ -128,12 +131,13 @@ The snapshot controller handles the binding of a `VolumeSnapshot` object with an --> ### 绑定 {#binding} -在预配置和动态配置场景下,快照控制器处理绑定 `VolumeSnapshot` 对象和其合适的 `VolumeSnapshotContent` 对象。绑定关系是一对一的。 +在预制备和动态制备场景下,快照控制器处理绑定 `VolumeSnapshot` 对象和其合适的 `VolumeSnapshotContent` 对象。 +绑定关系是一对一的。 -在预配置快照绑定场景下,`VolumeSnapshotContent` 对象创建之后,才会和 `VolumeSnapshot` 进行绑定。 +在预制备快照绑定场景下,`VolumeSnapshotContent` 对象创建之后,才会和 `VolumeSnapshot` 进行绑定。 -如果一个 PVC 正在被快照用来作为源进行快照创建,则该 PVC 是使用中的。如果用户删除正作为快照源的 PVC API 对象,则 PVC 对象不会立即被删除掉。相反,PVC 对象的删除将推迟到任何快照不在主动使用它为止。当快照的 `Status` 中的 `ReadyToUse`值为 `true` 时,PVC 将不再用作快照源。 +如果一个 PVC 正在被快照用来作为源进行快照创建,则该 PVC 是使用中的。如果用户删除正作为快照源的 PVC API 对象, +则 PVC 对象不会立即被删除掉。相反,PVC 对象的删除将推迟到任何快照不在主动使用它为止。 +当快照的 `Status` 中的 `ReadyToUse`值为 `true` 时,PVC 将不再用作快照源。 -当从 `PersistentVolumeClaim` 中生成快照时,`PersistentVolumeClaim` 就在被使用了。如果删除一个作为快照源的 `PersistentVolumeClaim` 对象,这个 `PersistentVolumeClaim` 对象不会立即被删除的。相反,删除 `PersistentVolumeClaim` 对象的动作会被放弃,或者推迟到快照的 Status 为 ReadyToUse时再执行。 +当从 `PersistentVolumeClaim` 中生成快照时,`PersistentVolumeClaim` 就在被使用了。 +如果删除一个作为快照源的 `PersistentVolumeClaim` 对象,这个 `PersistentVolumeClaim` 对象不会立即被删除的。 +相反,删除 `PersistentVolumeClaim` 对象的动作会被放弃,或者推迟到快照的 Status 为 ReadyToUse 时再执行。 ### 删除 {#delete} -删除 `VolumeSnapshot` 对象触发删除 `VolumeSnapshotContent` 操作,并且 `DeletionPolicy` 会紧跟着执行。如果 `DeletionPolicy` 是 `Delete`,那么底层存储快照会和 `VolumeSnapshotContent` 一起被删除。如果 `DeletionPolicy` 是 `Retain`,那么底层快照和 `VolumeSnapshotContent` 都会被保留。 +删除 `VolumeSnapshot` 对象触发删除 `VolumeSnapshotContent` 操作,并且 `DeletionPolicy` 
会紧跟着执行。 +如果 `DeletionPolicy` 是 `Delete`,那么底层存储快照会和 `VolumeSnapshotContent` 一起被删除。 +如果 `DeletionPolicy` 是 `Retain`,那么底层快照和 `VolumeSnapshotContent` 都会被保留。 ## 卷快照 {#volume-snapshots} -每个 `VolumeSnapshot` 包含一个 spec 和一个状态。 +每个 `VolumeSnapshot` 包含一个 spec 和一个 status。 ```yaml apiVersion: snapshot.storage.k8s.io/v1 @@ -194,7 +204,7 @@ A volume snapshot can request a particular class by specifying the name of a using the attribute `volumeSnapshotClassName`. If nothing is set, then the default class is used if available. --> `persistentVolumeClaimName` 是 `PersistentVolumeClaim` 数据源对快照的名称。 -这个字段是动态配置快照中的必填字段。 +这个字段是动态制备快照中的必填字段。 卷快照可以通过指定 [VolumeSnapshotClass](/zh-cn/docs/concepts/storage/volume-snapshot-classes/) 使用 `volumeSnapshotClassName` 属性来请求特定类。如果没有设置,那么使用默认类(如果有)。 @@ -202,8 +212,8 @@ using the attribute `volumeSnapshotClassName`. If nothing is set, then the defau -如下面例子所示,对于预配置的快照,需要给快照指定 `volumeSnapshotContentName` 来作为源。 -对于预配置的快照 `source` 中的`volumeSnapshotContentName` 字段是必填的。 +如下面例子所示,对于预制备的快照,需要给快照指定 `volumeSnapshotContentName` 作为来源。 +对于预制备的快照 `source` 中的`volumeSnapshotContentName` 字段是必填的。 ```yaml apiVersion: snapshot.storage.k8s.io/v1 @@ -221,7 +231,8 @@ spec: Each VolumeSnapshot contains a spec and a status, which is the specification and status of the volume snapshot. Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates `VolumeSnapshotContent` objects. Here is an example: --> -每个 VolumeSnapshotContent 对象包含 spec 和 status。在动态配置时,快照通用控制器创建 `VolumeSnapshotContent` 对象。下面是例子: +每个 VolumeSnapshotContent 对象包含 spec 和 status。 +在动态制备时,快照通用控制器创建 `VolumeSnapshotContent` 对象。下面是例子: ```yaml apiVersion: snapshot.storage.k8s.io/v1 @@ -248,7 +259,7 @@ For pre-provisioned snapshots, you (as cluster administrator) are responsible fo --> `volumeHandle` 是存储后端创建卷的唯一标识符,在卷创建期间由 CSI 驱动程序返回。动态设置快照需要此字段。它指出了快照的卷源。 -对于预配置快照,你(作为集群管理员)要按如下命令来创建 `VolumeSnapshotContent` 对象。 +对于预制备快照,你(作为集群管理员)要按如下命令来创建 `VolumeSnapshotContent` 对象。 ```yaml apiVersion: snapshot.storage.k8s.io/v1 @@ -268,7 +279,8 @@ spec: -`snapshotHandle` 是存储后端创建卷的唯一标识符。对于预设置快照,这个字段是必须的。它指定此 `VolumeSnapshotContent` 表示的存储系统上的 CSI 快照 id。 +`snapshotHandle` 是存储后端创建卷的唯一标识符。对于预设置快照,这个字段是必须的。 +它指定此 `VolumeSnapshotContent` 表示的存储系统上的 CSI 快照 ID。 -对于预配置的快照,`Spec.SourceVolumeMode` 需要由集群管理员填充。 +对于预制备的快照,`Spec.SourceVolumeMode` 需要由集群管理员填充。 启用此特性的 `VolumeSnapshotContent` 资源示例如下所示: @@ -340,13 +352,13 @@ spec: -## 从快照供应卷 +## 从快照制备卷 {#provisioning-volumes-from-snapshots} -你可以配置一个新卷,该卷预填充了快照中的数据,在 `持久卷声明` 对象中使用 *dataSource* 字段。 +你可以制备一个新卷,该卷预填充了快照中的数据,在 `持久卷声明` 对象中使用 **dataSource** 字段。 ## 创建配置文件 -[`KubeletConfiguration`](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/) 结构体定义了可以通过文件配置的 Kubelet 配置子集, +[`KubeletConfiguration`](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/) +结构体定义了可以通过文件配置的 Kubelet 配置子集, -在这个示例中, Kubelet 被设置为在地址 192.168.0.8 端口 20250 上提供服务,以并行方式拖拽镜像, -当可用内存低于 200Mi 时, kubelet 将会开始驱逐 Pods。 -没有声明的其余配置项都将使用默认值,除非使用命令行参数来重载。 +在这个示例中, Kubelet 被设置为在地址 192.168.0.8 端口 20250 上提供服务,以并行方式拉取镜像, +当可用内存低于 200Mi 时, kubelet 将会开始驱逐 Pod。 +没有声明的其余配置项都将使用默认值,除非使用命令行参数来重载。 命令行中的参数将会覆盖配置文件中的对应值。 本任务给出将容器运行时从 Docker 改为 containerd 所需的步骤。 此任务适用于运行 1.23 或更早版本 Kubernetes 的集群操作人员。 -同时,此任务也涉及从 dockershim 迁移到 containerd 的示例场景, -以及可以从[此页面](/zh-cn/docs/setup/production-environment/container-runtimes/) -获得的其他容器运行时列表。 +同时,此任务也涉及从 dockershim 迁移到 containerd 的示例场景。 +有关其他备选的容器运行时,可查阅 +[此页面](/zh-cn/docs/setup/production-environment/container-runtimes/)进行拣选。 ## {{% heading "prerequisites" %}} @@ -48,7 +48,7 @@ kubectl 
drain --ignore-daemonsets -将 `` 替换为你所要腾空的节点的名称 +将 `` 替换为你所要腾空的节点的名称。 ## 配置 kubelet 使用 containerd 作为其容器运行时 编辑文件 `/var/lib/kubelet/kubeadm-flags.env`,将 containerd 运行时添加到标志中: `--container-runtime=remote` 和 -`--container-runtime-endpoint=unix:///run/containerd/containerd.sock"`。 +`--container-runtime-endpoint=unix:///run/containerd/containerd.sock`。 最后,在一切顺利时删除 Docker。 -{{< tabs name="tab-remove-docker-enigine" >}} +{{< tabs name="tab-remove-docker-engine" >}} {{% tab name="CentOS" %}} ```shell From 655ae211b9d5e2aa06081af559004fb26763176c Mon Sep 17 00:00:00 2001 From: Maciej Filocha Date: Tue, 26 Jul 2022 14:48:12 +0200 Subject: [PATCH 231/292] Change KubeCon/CloudNativeCon Europe link --- content/en/_index.html | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/_index.html b/content/en/_index.html index 831e27b27fc60..fbdd5d5720859 100644 --- a/content/en/_index.html +++ b/content/en/_index.html @@ -48,7 +48,7 @@

The Challenges of Migrating 150+ Microservices to Kubernetes




- Attend KubeCon Europe on April 17-21, 2023
+ Attend KubeCon Europe on April 17-21, 2023
From c9b6674c65a6bc44c3721df808f0d020ee1d4a71 Mon Sep 17 00:00:00 2001 From: Michael Date: Tue, 26 Jul 2022 20:45:06 +0800 Subject: [PATCH 232/292] [zh-cn] resync /tasks/tools/install-kubectl-linux.md --- .../docs/tasks/tools/install-kubectl-linux.md | 101 ++++++++++-------- 1 file changed, 58 insertions(+), 43 deletions(-) diff --git a/content/zh-cn/docs/tasks/tools/install-kubectl-linux.md b/content/zh-cn/docs/tasks/tools/install-kubectl-linux.md index 45705fb314379..7ecf0233d6ef5 100644 --- a/content/zh-cn/docs/tasks/tools/install-kubectl-linux.md +++ b/content/zh-cn/docs/tasks/tools/install-kubectl-linux.md @@ -171,49 +171,65 @@ Or use this for detailed view of version: ### 用原生包管理工具安装 {#install-using-native-package-management} {{< tabs name="kubectl_install" >}} -{{% tab name="Ubuntu、Debian 或 HypriotOS" %}} - - - 1. 更新 `apt` 包索引,并安装使用 Kubernetes `apt` 仓库所需要的包: - - ```shell - sudo apt-get update - sudo apt-get install -y apt-transport-https ca-certificates curl - ``` - - 2. 下载 Google Cloud 公开签名秘钥: - - ```shell - sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg - ``` - - - 3. 添加 Kubernetes `apt` 仓库: - - ```shell - echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list - ``` - - - 4. 更新 `apt` 包索引,使之包含新的仓库并安装 kubectl: - - ```shell - sudo apt-get update - sudo apt-get install -y kubectl - ``` +{{% tab name="基于 Debian 的发行版" %}} + + +1. 更新 `apt` 包索引,并安装使用 Kubernetes `apt` 仓库所需要的包: + + ```shell + sudo apt-get update + sudo apt-get install -y ca-certificates curl + ``` + + {{< note >}} + + 如果你使用 Debian 9(stretch)或更早版本,则你还需要安装 `apt-transport-https`: + + ```shell + sudo apt-get install -y apt-transport-https + ``` + + {{< /note >}} + + + +2. 下载 Google Cloud 公开签名秘钥: + + ```shell + sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg + ``` + + + +3. 添加 Kubernetes `apt` 仓库: + + ```shell + echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list + ``` + + + +4. 更新 `apt` 包索引,使之包含新的仓库并安装 kubectl: + + ```shell + sudo apt-get update + sudo apt-get install -y kubectl + ``` + {{% /tab %}} -{{% tab name="基于 Red Hat 的发行版" %}} +{{< tab name="基于 Red Hat 的发行版" codelang="bash" >}} -```shell cat <}} {{< /tabs >}} + +`apiVersion: v1` + +`import "k8s.io/api/core/v1"` + +## PodTemplate {#PodTemplate} + + +PodTemplate 描述一种模板,用来为预定义的 Pod 生成副本。 + +
+ +- **apiVersion**: v1 + +- **kind**: PodTemplate + +- **metadata** (}}">ObjectMeta) + + + 标准的对象元数据。更多信息: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata + +- **template** (}}">PodTemplateSpec) + + + template 定义将基于此 Pod 模板所创建的 Pod。 + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + +## PodTemplateSpec {#PodTemplateSpec} + + +PodTemplateSpec 描述基于某模板所创建的 Pod 所应具有的数据。 + +
+ +- **metadata** (}}">ObjectMeta) + + + 标准的对象元数据。更多信息: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata + +- **spec** (}}">PodSpec) + + + Pod 预期行为的规约。更多信息: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + +## PodTemplateList {#PodTemplateList} + + +PodTemplateList 是 PodTemplate 对象的列表。 + +
+ +- **apiVersion**: v1 + +- **kind**: PodTemplateList + +- **metadata** (}}">ListMeta) + + + 标准的列表元数据。更多信息: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + + +- **items** ([]}}">PodTemplate),必需 + + + PodTemplate 对象列表。 + + +## 操作 {#Operations} + +
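For orientation, the operations listed below map to plain Kubernetes API calls; as a sketch (the `default` namespace is assumed), you can reach them with kubectl:

```shell
# List PodTemplate objects in a namespace
kubectl get podtemplates -n default

# The same list operation against the raw API path documented below
kubectl get --raw /api/v1/namespaces/default/podtemplates
```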
+ + +### `get` 读取指定的 PodTemplate + +#### HTTP 请求 + +GET /api/v1/namespaces/{namespace}/podtemplates/{name} + + +#### 参数 + + +- **name** (**路径参数**):string,必需 + + PodTemplate 的名称 + + +- **namespace** (**路径参数**):string,必需 + + }}">namespace + + +- **pretty** (**查询参数**):string + + }}">pretty + + +#### 响应 + +200 (}}">PodTemplate): OK + +401: Unauthorized + + +### `list` 列出或监视 PodTemplate 类型的对象 + +#### HTTP 请求 + +GET /api/v1/namespaces/{namespace}/podtemplates + + +#### 参数 + + +- **namespace** (**路径参数**):string,必需 + + }}">namespace + + +- **allowWatchBookmarks** (**查询参数**):boolean + + }}">allowWatchBookmarks + + +- **continue** (**查询参数**):string + + }}">continue + + +- **fieldSelector** (**查询参数**):string + + }}">fieldSelector + + +- **labelSelector** (**查询参数**):string + + }}">labelSelector + + +- **limit** (**查询参数**):integer + + }}">limit + + +- **pretty** (**查询参数**):string + + }}">pretty + + +- **resourceVersion** (**查询参数**):string + + }}">resourceVersion + + +- **resourceVersion** (**查询参数**):string + + }}">resourceVersionMatch + + +- **timeoutSeconds** (**查询参数**):integer + + }}">timeoutSeconds + + +- **watch** (**查询参数**):boolean + + }}">watch + + +#### 响应 + +200 (}}">PodTemplateList): OK + +401: Unauthorized + + +### `list` 列出或监视 PodTemplate 类型的对象 + +#### HTTP 请求 + +GET /api/v1/podtemplates + + +#### 参数 + + +- **allowWatchBookmarks** (**查询参数**):boolean + + }}">allowWatchBookmarks + + +- **continue** (**查询参数**):string + + }}">continue + + +- **fieldSelector** (**查询参数**):string + + }}">fieldSelector + + +- **labelSelector** (**查询参数**):string + + }}">labelSelector + + +- **limit** (**查询参数**):integer + + }}">limit + + +- **pretty** (**查询参数**):string + + }}">pretty + + +- **resourceVersion** (**查询参数**):string + + }}">resourceVersion + + +- **resourceVersionMatch** (**查询参数**):string + + }}">resourceVersionMatch + + +- **timeoutSeconds** (**查询参数**):integer + + }}">timeoutSeconds + + +- **watch** (**查询参数**):boolean + + }}">watch + + +#### 响应 + +200 (}}">PodTemplateList): OK + +401: Unauthorized + + +### `create` 创建一个 PodTemplate + +#### HTTP 请求 + +POST /api/v1/namespaces/{namespace}/podtemplates + + +#### 参数 + + +- **namespace** (**路径参数**):string,必需 + + }}">namespace + + +- **body**: }}">PodTemplate,必需 + + +- **dryRun** (**查询参数**):string + + }}">dryRun + + +- **fieldManager** (**查询参数**):string + + }}">fieldManager + + +- **fieldValidation** (**查询参数**):string + + }}">fieldValidation + + +- **pretty** (**查询参数**):string + + }}">pretty + + +#### 响应 + +200 (}}">PodTemplate): OK + +201 (}}">PodTemplate): Created + +202 (}}">PodTemplate): Accepted + +401: Unauthorized + + +### `update` 替换指定的 PodTemplate + +#### HTTP 请求 + +PUT /api/v1/namespaces/{namespace}/podtemplates/{name} + + +#### 参数 + + +- **name** (**路径参数**):string,必需 + + PodTemplate 的名称 + + +- **namespace** (**路径参数**):string,必需 + + }}">namespace + + +- **body**: }}">PodTemplate,必需 + + +- **dryRun** (**查询参数**):string + + }}">dryRun + + +- **fieldManager** (**查询参数**):string + + }}">fieldManager + + +- **fieldValidation** (**查询参数**):string + + }}">fieldValidation + + +- **pretty** (**查询参数**):string + + }}">pretty + + +#### 响应 + +200 (}}">PodTemplate): OK + +201 (}}">PodTemplate): Created + +401: Unauthorized + + +### `patch` 部分更新指定的 PodTemplate + +#### HTTP 请求 + +PATCH /api/v1/namespaces/{namespace}/podtemplates/{name} + + +#### 参数 + + +- **name** (**路径参数**):string,必需 + + PodTemplate 的名称 + + +- **namespace** (**路径参数**):string,必需 + + }}">namespace + + +- **body**: }}">Patch,必需 + + +- **dryRun** (**查询参数**):string + + }}">dryRun + + +- **fieldManager** 
(**查询参数**):string + + }}">fieldManager + + +- **fieldValidation** (**查询参数**):string + + }}">fieldValidation + + +- **force** (**查询参数**):boolean + + }}">force + + +- **pretty** (**查询参数**):string + + }}">pretty + + +#### 响应 + +200 (}}">PodTemplate): OK + +201 (}}">PodTemplate): Created + +401: Unauthorized + + +### `delete` 删除一个 PodTemplate + +#### HTTP 请求 + +DELETE /api/v1/namespaces/{namespace}/podtemplates/{name} + + +#### 参数 + + +- **name** (**路径参数**):string,必需 + + PodTemplate 的名称 + + +- **namespace** (**路径参数**):string,必需 + + }}">namespace + + +- **body**: }}">DeleteOptions + + +- **dryRun** (**查询参数**):string + + }}">dryRun + + +- **gracePeriodSeconds** (**查询参数**):integer + + }}">gracePeriodSeconds + + +- **pretty** (**查询参数**):string + + }}">pretty + + +- **propagationPolicy** (**查询参数**):string + + }}">propagationPolicy + + +#### 响应 + +200 (}}">PodTemplate): OK + +202 (}}">PodTemplate): Accepted + +401: Unauthorized + + +### `deletecollection` 删除 PodTemplate 的集合 + +#### HTTP 请求 + +DELETE /api/v1/namespaces/{namespace}/podtemplates + + +#### 参数 + + +- **namespace** (**路径参数**):string,必需 + + }}">namespace + +- **body**: }}">DeleteOptions + + +- **continue** (**查询参数**):string + + }}">continue + + +- **dryRun** (**查询参数**):string + + }}">dryRun + + +- **fieldSelector** (**查询参数**):string + + }}">fieldSelector + + +- **gracePeriodSeconds** (**查询参数**):integer + + }}">gracePeriodSeconds + + +- **labelSelector** (**查询参数**):string + + }}">labelSelector + + +- **limit** (**查询参数**):integer + + }}">limit + + +- **pretty** (**查询参数**):string + + }}">pretty + + +- **propagationPolicy** (**查询参数**):string + + }}">propagationPolicy + + +- **resourceVersion** (**查询参数**):string + + }}">resourceVersion + + +- **resourceVersionMatch** (**查询参数**):string + + }}">resourceVersionMatch + + +- **timeoutSeconds** (**查询参数**):integer + + }}">timeoutSeconds + + +#### 响应 + +200 (}}">Status): OK + +401: Unauthorized + From 2a16b951a960ba4cb07c3153965aeaf2a307f959 Mon Sep 17 00:00:00 2001 From: kadtendulkar Date: Tue, 26 Jul 2022 22:03:50 +0530 Subject: [PATCH 234/292] Update content/en/docs/tasks/administer-cluster/access-cluster-api.md --- content/en/docs/tasks/administer-cluster/access-cluster-api.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/tasks/administer-cluster/access-cluster-api.md b/content/en/docs/tasks/administer-cluster/access-cluster-api.md index 5732aed3af8b4..3d620e4b63f7a 100644 --- a/content/en/docs/tasks/administer-cluster/access-cluster-api.md +++ b/content/en/docs/tasks/administer-cluster/access-cluster-api.md @@ -226,7 +226,7 @@ mvn install See [https://github.com/kubernetes-client/java/releases](https://github.com/kubernetes-client/java/releases) to see which versions are supported. The Java client can use the same [kubeconfig file](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) -as the kubectl CLI does to locate and authenticate to the API server. See this [example](https://github.com/kubernetes-client/java/blob/master/examples/src/main/java/io/kubernetes/client/examples/KubeConfigFileClientExample.java): +as the kubectl CLI does to locate and authenticate to the API server. 
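As an illustrative sketch of the object described above (the name, labels, and image are hypothetical), a minimal PodTemplate manifest looks like this:

```yaml
apiVersion: v1
kind: PodTemplate
metadata:
  name: example-podtemplate   # hypothetical name
template:
  metadata:
    labels:
      app.kubernetes.io/name: example
  spec:
    containers:
    - name: app
      image: nginx:1.22       # hypothetical image
      ports:
      - containerPort: 80
```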
See this [example](https://github.com/kubernetes-client/java/blob/master/examples/examples-release-15/src/main/java/io/kubernetes/client/examples/KubeConfigFileClientExample.java): ```java package io.kubernetes.client.examples; From 683604129d629e9a5c7b04ec8a73be1be34cbd90 Mon Sep 17 00:00:00 2001 From: Kinzhi Date: Wed, 27 Jul 2022 01:56:20 +0800 Subject: [PATCH 235/292] [zh-cn]Replace sample label --- .../service/networking/dual-stack-ipfamilies-ipv6.yaml | 4 ++-- .../examples/service/networking/dual-stack-ipv6-svc.yaml | 2 +- .../service/networking/dual-stack-prefer-ipv6-lb-svc.yaml | 4 ++-- .../networking/dual-stack-preferred-ipfamilies-svc.yaml | 4 ++-- 4 files changed, 7 insertions(+), 7 deletions(-) diff --git a/content/zh-cn/examples/service/networking/dual-stack-ipfamilies-ipv6.yaml b/content/zh-cn/examples/service/networking/dual-stack-ipfamilies-ipv6.yaml index 7c7239cae6c72..77949c883f095 100644 --- a/content/zh-cn/examples/service/networking/dual-stack-ipfamilies-ipv6.yaml +++ b/content/zh-cn/examples/service/networking/dual-stack-ipfamilies-ipv6.yaml @@ -3,12 +3,12 @@ kind: Service metadata: name: my-service labels: - app: MyApp + app.kubernetes.io/name: MyApp spec: ipFamilies: - IPv6 selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - protocol: TCP port: 80 diff --git a/content/zh-cn/examples/service/networking/dual-stack-ipv6-svc.yaml b/content/zh-cn/examples/service/networking/dual-stack-ipv6-svc.yaml index 2aa0725059bbc..85c699506c6d2 100644 --- a/content/zh-cn/examples/service/networking/dual-stack-ipv6-svc.yaml +++ b/content/zh-cn/examples/service/networking/dual-stack-ipv6-svc.yaml @@ -5,7 +5,7 @@ metadata: spec: ipFamily: IPv6 selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - protocol: TCP port: 80 diff --git a/content/zh-cn/examples/service/networking/dual-stack-prefer-ipv6-lb-svc.yaml b/content/zh-cn/examples/service/networking/dual-stack-prefer-ipv6-lb-svc.yaml index 0949a7542818b..5a4a99a45cae1 100644 --- a/content/zh-cn/examples/service/networking/dual-stack-prefer-ipv6-lb-svc.yaml +++ b/content/zh-cn/examples/service/networking/dual-stack-prefer-ipv6-lb-svc.yaml @@ -3,14 +3,14 @@ kind: Service metadata: name: my-service labels: - app: MyApp + app.kubernetes.io/name: MyApp spec: ipFamilyPolicy: PreferDualStack ipFamilies: - IPv6 type: LoadBalancer selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - protocol: TCP port: 80 diff --git a/content/zh-cn/examples/service/networking/dual-stack-preferred-ipfamilies-svc.yaml b/content/zh-cn/examples/service/networking/dual-stack-preferred-ipfamilies-svc.yaml index c31acfec581ed..79a4f34a7f749 100644 --- a/content/zh-cn/examples/service/networking/dual-stack-preferred-ipfamilies-svc.yaml +++ b/content/zh-cn/examples/service/networking/dual-stack-preferred-ipfamilies-svc.yaml @@ -3,14 +3,14 @@ kind: Service metadata: name: my-service labels: - app: MyApp + app.kubernetes.io/name: MyApp spec: ipFamilyPolicy: PreferDualStack ipFamilies: - IPv6 - IPv4 selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - protocol: TCP port: 80 From 1955505a1d179026a6c4f070a9b45f6b700ba9c1 Mon Sep 17 00:00:00 2001 From: Shannon Kularathna Date: Tue, 26 Jul 2022 18:17:50 +0000 Subject: [PATCH 236/292] Fix minor text wrapping nits. 
Co-authored-by: Divya Mohan --- .../configmap-secret/managing-secret-using-config-file.md | 8 +++++--- .../configmap-secret/managing-secret-using-kubectl.md | 4 +++- 2 files changed, 8 insertions(+), 4 deletions(-) diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md index a8d0db6c87e63..5ec87d0827a2f 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md @@ -34,8 +34,8 @@ The following example stores two strings in a Secret using the `data` field. ``` {{< note >}} - The serialized JSON and YAML values of Secret data are encoded as base64 - strings. Newlines are not valid within these strings and must be omitted. When using the `base64` utility on Darwin/macOS, users should avoid using the `-b` option to split long lines. Conversely, Linux users *should* add the option `-w 0` to `base64` commands or the pipeline `base64 | tr -d '\n'` if the `-w` option is not available. {{< /note >}} + The serialized JSON and YAML values of Secret data are encoded as base64 strings. Newlines are not valid within these strings and must be omitted. When using the `base64` utility on Darwin/macOS, users should avoid using the `-b` option to split long lines. Conversely, Linux users *should* add the option `-w 0` to `base64` commands or the pipeline `base64 | tr -d '\n'` if the `-w` option is not available. + {{< /note >}} The output is similar to: @@ -134,7 +134,9 @@ type: Opaque ### Specifying both `data` and `stringData` -If you specify a field in both `data` and `stringData`, the value from `stringData` is used. For example, if you define the following Secret: +If you specify a field in both `data` and `stringData`, the value from `stringData` is used. + +For example, if you define the following Secret: ```yaml apiVersion: v1 diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md index 086d44eed8b91..191523042c980 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md @@ -111,7 +111,9 @@ username: 5 bytes The commands `kubectl get` and `kubectl describe` avoid showing the contents of a `Secret` by default. This is to protect the `Secret` from being exposed -accidentally, or from being stored in a terminal log. To check the actual content of the encoded data, please refer to [Decoding the Secret](#decoding-secret). +accidentally, or from being stored in a terminal log. + +To check the actual content of the encoded data, please refer to [Decoding the Secret](#decoding-secret). 
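As a sketch of what that decoding looks like (assuming the `db-user-pass` Secret and `password` key used elsewhere on that page):

```shell
# Print the base64-encoded value of one key, then decode it
kubectl get secret db-user-pass -o jsonpath='{.data.password}' | base64 --decode
```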
## Decoding the Secret {#decoding-secret} From 037bc3394ebad9f4a46340400a577e659fa34e63 Mon Sep 17 00:00:00 2001 From: Michael Date: Tue, 26 Jul 2022 16:35:07 +0800 Subject: [PATCH 237/292] [zh-cn] relocate topology-spread-constraints.md --- .../topology-spread-constraints.md | 867 ++++++++++++++++++ .../docs/concepts/workloads/pods/_index.md | 4 +- .../pods/pod-topology-spread-constraints.md | 688 -------------- 3 files changed, 869 insertions(+), 690 deletions(-) create mode 100644 content/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints.md delete mode 100644 content/zh-cn/docs/concepts/workloads/pods/pod-topology-spread-constraints.md diff --git a/content/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints.md b/content/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints.md new file mode 100644 index 0000000000000..abf8566ce1ed2 --- /dev/null +++ b/content/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints.md @@ -0,0 +1,867 @@ +--- +title: Pod 拓扑分布约束 +content_type: concept +weight: 40 +--- + + + + + + +你可以使用 **拓扑分布约束(Topology Spread Constraints)** 来控制 +{{< glossary_tooltip text="Pod" term_id="Pod" >}} 在集群内故障域之间的分布, +例如区域(Region)、可用区(Zone)、节点和其他用户自定义拓扑域。 +这样做有助于实现高可用并提升资源利用率。 + +你可以将[集群级约束](#cluster-level-default-constraints)设为默认值,或为个别工作负载配置拓扑分布约束。 + + + + +## 动机 {#motivation} + +假设你有一个最多包含二十个节点的集群,你想要运行一个自动扩缩的 +{{< glossary_tooltip text="工作负载" term_id="workload" >}},请问要使用多少个副本? +答案可能是最少 2 个 Pod,最多 15 个 Pod。 +当只有 2 个 Pod 时,你倾向于这 2 个 Pod 不要同时在同一个节点上运行: +你所遭遇的风险是如果放在同一个节点上且单节点出现故障,可能会让你的工作负载下线。 + +除了这个基本的用法之外,还有一些高级的使用案例,能够让你的工作负载受益于高可用性并提高集群利用率。 + + +随着你的工作负载扩容,运行的 Pod 变多,将需要考虑另一个重要问题。 +假设你有 3 个节点,每个节点运行 5 个 Pod。这些节点有足够的容量能够运行许多副本; +但与这个工作负载互动的客户端分散在三个不同的数据中心(或基础设施可用区)。 +现在你可能不太关注单节点故障问题,但你会注意到延迟高于自己的预期, +在不同的可用区之间发送网络流量会产生一些网络成本。 + +你决定在正常运营时倾向于将类似数量的副本[调度](/zh-cn/docs/concepts/scheduling-eviction/) +到每个基础设施可用区,且你想要该集群在遇到问题时能够自愈。 + +Pod 拓扑分布约束使你能够以声明的方式进行配置。 + + +## `topologySpreadConstraints` 字段 + +Pod API 包括一个 `spec.topologySpreadConstraints` 字段。这里有一个示例: + +```yaml +--- +apiVersion: v1 +kind: Pod +metadata: + name: example-pod +spec: + # 配置一个拓扑分布约束 + topologySpreadConstraints: + - maxSkew: + minDomains: # 可选;自从 v1.24 开始成为 Alpha + topologyKey: + whenUnsatisfiable: + labelSelector: + ### 其他 Pod 字段置于此处 +``` + + +你可以运行 `kubectl explain Pod.spec.topologySpreadConstraints` 阅读有关此字段的更多信息。 + + +### 分布约束定义 + +你可以定义一个或多个 `topologySpreadConstraints` 条目以指导 kube-scheduler +如何将每个新来的 Pod 与跨集群的现有 Pod 相关联。这些字段包括: + + +- **maxSkew** 描述这些 Pod 可能被均匀分布的程度。你必须指定此字段且该数值必须大于零。 + 其语义将随着 `whenUnsatisfiable` 的值发生变化: + + - 如果你选择 `whenUnsatisfiable: DoNotSchedule`,则 `maxSkew` 定义目标拓扑中匹配 Pod 的数量与 + **全局最小值**(与拓扑域中标签选择算符匹配的最小 Pod 数量)之间的最大允许差值。 + 例如,如果你有 3 个可用区,分别有 2、4 和 5 个匹配的 Pod,则全局最小值为 2, + 而 `maxSkew` 相对于该数字进行比较。 + - 如果你选择 `whenUnsatisfiable: ScheduleAnyway`,则该调度器会更为偏向能够降低偏差值的拓扑域。 + + +- **minDomains** 表示符合条件的域的最小数量。此字段是可选的。域是拓扑的一个特定实例。 + 符合条件的域是其节点与节点选择器匹配的域。 + + {{< note >}} + `minDomains` 字段是 1.24 中添加的一个 Alpha 字段。 + 你必须启用 `MinDomainsInPodToplogySpread` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/),才能使用该字段。 + {{< /note >}} + + + + - 指定的 `minDomains` 值必须大于 0。你可以结合 `whenUnsatisfiable: DoNotSchedule` 仅指定 `minDomains`。 + - 当符合条件的、拓扑键匹配的域的数量小于 `minDomains` 时,拓扑分布将“全局最小值”(global minimum)设为 0, + 然后进行 `skew` 计算。“全局最小值” 是一个符合条件的域中匹配 Pod 的最小数量, + 如果符合条件的域的数量小于 `minDomains`,则全局最小值为零。 + - 当符合条件的拓扑键匹配域的个数等于或大于 `minDomains` 时,该值对调度没有影响。 + - 如果你未指定 `minDomains`,则约束行为类似于 `minDomains` 等于 1。 + + +- **topologyKey** 
是[节点标签](#node-labels)的键。如果两个节点使用此键标记并且具有相同的标签值, + 则调度器会将这两个节点视为处于同一拓扑域中。该调度器尝试在每个拓扑域中放置数量均衡的 Pod。 + +- **whenUnsatisfiable** 指示如果 Pod 不满足分布约束时如何处理: + - `DoNotSchedule`(默认)告诉调度器不要调度。 + - `ScheduleAnyway` 告诉调度器仍然继续调度,只是根据如何能将偏差最小化来对节点进行排序。 + +- **labelSelector** 用于查找匹配的 Pod。匹配此标签的 Pod 将被统计,以确定相应拓扑域中 Pod 的数量。 + 有关详细信息,请参考[标签选择算符](/zh-cn/docs/concepts/overview/working-with-objects/labels/#label-selectors)。 + + +当 Pod 定义了不止一个 `topologySpreadConstraint`,这些约束之间是逻辑与的关系。 +kube-scheduler 会为新的 Pod 寻找一个能够满足所有约束的节点。 + + +### 节点标签 {#node-labels} + +拓扑分布约束依赖于节点标签来标识每个{{< glossary_tooltip text="节点" term_id="node" >}}所在的拓扑域。例如,某节点可能具有标签: + +```yaml + region: us-east-1 + zone: us-east-1a +``` + + +{{< note >}} +为了简便,此示例未使用[众所周知](/zh-cn/docs/reference/labels-annotations-taints/)的标签键 +`topology.kubernetes.io/zone` 和 `topology.kubernetes.io/region`。 +但是,建议使用那些已注册的标签键,而不是此处使用的私有(不合格)标签键 `region` 和 `zone`。 + +你无法对不同上下文之间的私有标签键的含义做出可靠的假设。 +{{< /note >}} + + +假设你有一个 4 节点的集群且带有以下标签: + +``` +NAME STATUS ROLES AGE VERSION LABELS +node1 Ready 4m26s v1.16.0 node=node1,zone=zoneA +node2 Ready 3m58s v1.16.0 node=node2,zone=zoneA +node3 Ready 3m17s v1.16.0 node=node3,zone=zoneB +node4 Ready 2m43s v1.16.0 node=node4,zone=zoneB +``` + + +那么,从逻辑上看集群如下: + +{{}} +graph TB + subgraph "zoneB" + n3(Node3) + n4(Node4) + end + subgraph "zoneA" + n1(Node1) + n2(Node2) + end + + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class n1,n2,n3,n4 k8s; + class zoneA,zoneB cluster; +{{< /mermaid >}} + + +## 一致性 {#Consistency} + +你应该为一个组中的所有 Pod 设置相同的 Pod 拓扑分布约束。 + +通常,如果你正使用一个工作负载控制器,例如 Deployment,则 Pod 模板会你你解决这个问题。 +如果你混合不同的分布约束,则 Kubernetes 会遵循该字段的 API 定义; +但是,该行为可能更令人困惑,并且故障排除也没那么简单。 + +你需要一种机制来确保拓扑域(例如云提供商区域)中的所有节点具有一致的标签。 +为了避免你需要手动为节点打标签,大多数集群会自动填充知名的标签, +例如 `topology.kubernetes.io/hostname`。检查你的集群是否支持此功能。 + + +## 拓扑分布约束示例 {#topology-spread-constraint-examples} + +### 示例:一个拓扑分布约束 {#example-one-topologyspreadconstraint} + +假设你拥有一个 4 节点集群,其中标记为 `foo: bar` 的 3 个 Pod 分别位于 node1、node2 和 node3 中: + +{{}} +graph BT + subgraph "zoneB" + p3(Pod) --> n3(Node3) + n4(Node4) + end + subgraph "zoneA" + p1(Pod) --> n1(Node1) + p2(Pod) --> n2(Node2) + end + + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class n1,n2,n3,n4,p1,p2,p3 k8s; + class zoneA,zoneB cluster; +{{< /mermaid >}} + + +如果你希望新来的 Pod 均匀分布在现有的可用区域,则可以按如下设置其清单: + +{{< codenew file="pods/topology-spread-constraints/one-constraint.yaml" >}} + + +从此清单看,`topologyKey: zone` 意味着均匀分布将只应用于存在标签键值对为 `zone: ` 的节点 +(没有 `zone` 标签的节点将被跳过)。如果调度器找不到一种方式来满足此约束, +则 `whenUnsatisfiable: DoNotSchedule` 字段告诉该调度器将新来的 Pod 保持在 pending 状态。 + +如果该调度器将这个新来的 Pod 放到可用区 `A`,则 Pod 的分布将成为 `[3, 1]`。 +这意味着实际偏差是 2(计算公式为 `3 - 1`),这违反了 `maxSkew: 1` 的约定。 +为了满足这个示例的约束和上下文,新来的 Pod 只能放到可用区 `B` 中的一个节点上: + +{{}} +graph BT + subgraph "zoneB" + p3(Pod) --> n3(Node3) + p4(mypod) --> n4(Node4) + end + subgraph "zoneA" + p1(Pod) --> n1(Node1) + p2(Pod) --> n2(Node2) + end + + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class n1,n2,n3,n4,p1,p2,p3 k8s; + class p4 plain; + class zoneA,zoneB cluster; +{{< /mermaid >}} + +或者 + +{{}} +graph BT + subgraph 
"zoneB" + p3(Pod) --> n3(Node3) + p4(mypod) --> n3 + n4(Node4) + end + subgraph "zoneA" + p1(Pod) --> n1(Node1) + p2(Pod) --> n2(Node2) + end + + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class n1,n2,n3,n4,p1,p2,p3 k8s; + class p4 plain; + class zoneA,zoneB cluster; +{{< /mermaid >}} + + +你可以调整 Pod 规约以满足各种要求: + +- 将 `maxSkew` 更改为更大的值,例如 `2`,这样新来的 Pod 也可以放在可用区 `A` 中。 +- 将 `topologyKey` 更改为 `node`,以便将 Pod 均匀分布在节点上而不是可用区中。 + 在上面的例子中,如果 `maxSkew` 保持为 `1`,则新来的 Pod 只能放到 `node4` 节点上。 +- 将 `whenUnsatisfiable: DoNotSchedule` 更改为 `whenUnsatisfiable: ScheduleAnyway`, + 以确保新来的 Pod 始终可以被调度(假设满足其他的调度 API)。但是,最好将其放置在匹配 Pod 数量较少的拓扑域中。 + 请注意,这一优先判定会与其他内部调度优先级(如资源使用率等)排序准则一起进行标准化。 + + +### 示例:多个拓扑分布约束 {#example-multiple-topologyspreadconstraints} + +下面的例子建立在前面例子的基础上。假设你拥有一个 4 节点集群, +其中 3 个标记为 `foo: bar` 的 Pod 分别位于 node1、node2 和 node3 上: + +{{}} +graph BT + subgraph "zoneB" + p3(Pod) --> n3(Node3) + n4(Node4) + end + subgraph "zoneA" + p1(Pod) --> n1(Node1) + p2(Pod) --> n2(Node2) + end + + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class n1,n2,n3,n4,p1,p2,p3 k8s; + class p4 plain; + class zoneA,zoneB cluster; +{{< /mermaid >}} + + +可以组合使用 2 个拓扑分布约束来控制 Pod 在节点和可用区两个维度上的分布: + +{{< codenew file="pods/topology-spread-constraints/two-constraints.yaml" >}} + + +在这种情况下,为了匹配第一个约束,新的 Pod 只能放置在可用区 `B` 中; +而在第二个约束中,新来的 Pod 只能调度到节点 `node4` 上。 +该调度器仅考虑满足所有已定义约束的选项,因此唯一可行的选择是放置在节点 `node4` 上。 + + +### 示例:有冲突的拓扑分布约束 {#example-conflicting-topologyspreadconstraints} + +多个约束可能导致冲突。假设有一个跨 2 个可用区的 3 节点集群: + +{{}} +graph BT + subgraph "zoneB" + p4(Pod) --> n3(Node3) + p5(Pod) --> n3 + end + subgraph "zoneA" + p1(Pod) --> n1(Node1) + p2(Pod) --> n1 + p3(Pod) --> n2(Node2) + end + + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class n1,n2,n3,n4,p1,p2,p3,p4,p5 k8s; + class zoneA,zoneB cluster; +{{< /mermaid >}} + + +如果你将 [`two-constraints.yaml`](https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/topology-spread-constraints/two-constraints.yaml) +(来自上一个示例的清单)应用到**这个**集群,你将看到 Pod `mypod` 保持在 `Pending` 状态。 +出现这种情况的原因为:为了满足第一个约束,Pod `mypod` 只能放置在可用区 `B` 中; +而在第二个约束中,Pod `mypod` 只能调度到节点 `node2` 上。 +两个约束的交集将返回一个空集,且调度器无法放置该 Pod。 + +为了应对这种情形,你可以提高 `maxSkew` 的值或修改其中一个约束才能使用 `whenUnsatisfiable: ScheduleAnyway`。 +根据实际情形,例如若你在故障排查时发现某个漏洞修复工作毫无进展,你还可能决定手动删除一个现有的 Pod。 + + +#### 与节点亲和性和节点选择算符的相互作用 {#interaction-with-node-affinity-and-node-selectors} + +如果 Pod 定义了 `spec.nodeSelector` 或 `spec.affinity.nodeAffinity`, +调度器将在偏差计算中跳过不匹配的节点。 + + +### 示例:带节点亲和性的拓扑分布约束 {#example-topologyspreadconstraints-with-nodeaffinity} + +假设你有一个跨可用区 A 到 C 的 5 节点集群: + +{{}} +graph BT + subgraph "zoneB" + p3(Pod) --> n3(Node3) + n4(Node4) + end + subgraph "zoneA" + p1(Pod) --> n1(Node1) + p2(Pod) --> n2(Node2) + end + +classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; +classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; +classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; +class n1,n2,n3,n4,p1,p2,p3 k8s; +class p4 plain; +class zoneA,zoneB cluster; +{{< /mermaid >}} + +{{}} +graph BT + subgraph 
"zoneC" + n5(Node5) + end + +classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; +classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; +classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; +class n5 k8s; +class zoneC cluster; +{{< /mermaid >}} + + +而且你知道可用区 `C` 必须被排除在外。在这种情况下,可以按如下方式编写清单, +以便将 Pod `mypod` 放置在可用区 `B` 上,而不是可用区 `C` 上。 +同样,Kubernetes 也会一样处理 `spec.nodeSelector`。 + +{{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}} + + +## 隐式约定 {#implicit-conventions} + +这里有一些值得注意的隐式约定: + +- 只有与新来的 Pod 具有相同命名空间的 Pod 才能作为匹配候选者。 + +- 调度器会忽略没有任何 `topologySpreadConstraints[*].topologyKey` 的节点。这意味着: + + 1. 位于这些节点上的 Pod 不影响 `maxSkew` 计算,在上面的例子中,假设节点 `node1` 没有标签 "zone", + 则 2 个 Pod 将被忽略,因此新来的 Pod 将被调度到可用区 `A` 中。 + 2. 新的 Pod 没有机会被调度到这类节点上。在上面的例子中, + 假设节点 `node5` 带有 **拼写错误的** 标签 `zone-typo: zoneC`(且没有设置 `zone` 标签)。 + 节点 `node5` 接入集群之后,该节点将被忽略且针对该工作负载的 Pod 不会被调度到那里。 + + +- 注意,如果新 Pod 的 `topologySpreadConstraints[*].labelSelector` 与自身的标签不匹配,将会发生什么。 + 在上面的例子中,如果移除新 Pod 的标签,则 Pod 仍然可以放置到可用区 `B` 中的节点上,因为这些约束仍然满足。 + 然而,在放置之后,集群的不平衡程度保持不变。可用区 `A` 仍然有 2 个 Pod 带有标签 `foo: bar`, + 而可用区 `B` 有 1 个 Pod 带有标签 `foo: bar`。如果这不是你所期望的, + 更新工作负载的 `topologySpreadConstraints[*].labelSelector` 以匹配 Pod 模板中的标签。 + + +## 集群级别的默认约束 {#cluster-level-default-constraints} + +为集群设置默认的拓扑分布约束也是可能的。默认拓扑分布约束在且仅在以下条件满足时才会被应用到 Pod 上: + +- Pod 没有在其 `.spec.topologySpreadConstraints` 中定义任何约束。 +- Pod 隶属于某个 Service、ReplicaSet、StatefulSet 或 ReplicationController。 + +默认约束可以设置为[调度方案](/zh-cn/docs/reference/scheduling/config/#profiles)中 +`PodTopologySpread` 插件参数的一部分。约束的设置采用[如前所述的 API](#api), +只是 `labelSelector` 必须为空。 +选择算符是根据 Pod 所属的 Service、ReplicaSet、StatefulSet 或 ReplicationController 来设置的。 + +配置的示例可能看起来像下面这个样子: + +```yaml +apiVersion: kubescheduler.config.k8s.io/v1beta3 +kind: KubeSchedulerConfiguration + +profiles: + - schedulerName: default-scheduler + pluginConfig: + - name: PodTopologySpread + args: + defaultConstraints: + - maxSkew: 1 + topologyKey: topology.kubernetes.io/zone + whenUnsatisfiable: ScheduleAnyway + defaultingType: List +``` + + +{{< note >}} +默认配置下,[`SelectorSpread` 插件](/zh-cn/docs/reference/scheduling/config/#scheduling-plugins)是被禁用的。 +Kubernetes 项目建议使用 `PodTopologySpread` 以执行类似行为。 +{{< /note >}} + + +### 内置默认约束 {#internal-default-constraints} + +{{< feature-state for_k8s_version="v1.24" state="stable" >}} + + +如果你没有为 Pod 拓扑分布配置任何集群级别的默认约束, +kube-scheduler 的行为就像你指定了以下默认拓扑约束一样: + +```yaml +defaultConstraints: + - maxSkew: 3 + topologyKey: "kubernetes.io/hostname" + whenUnsatisfiable: ScheduleAnyway + - maxSkew: 5 + topologyKey: "topology.kubernetes.io/zone" + whenUnsatisfiable: ScheduleAnyway +``` + + +此外,原来用于提供等同行为的 `SelectorSpread` 插件默认被禁用。 + + +{{< note >}} +对于分布约束中所指定的拓扑键而言,`PodTopologySpread` 插件不会为不包含这些拓扑键的节点评分。 +这可能导致在使用默认拓扑约束时,其行为与原来的 `SelectorSpread` 插件的默认行为不同。 + +如果你的节点不会 **同时** 设置 `kubernetes.io/hostname` 和 `topology.kubernetes.io/zone` 标签, +你应该定义自己的约束而不是使用 Kubernetes 的默认约束。 +{{< /note >}} + + +如果你不想为集群使用默认的 Pod 分布约束,你可以通过设置 `defaultingType` 参数为 `List`, +并将 `PodTopologySpread` 插件配置中的 `defaultConstraints` 参数置空来禁用默认 Pod 分布约束: + +```yaml +apiVersion: kubescheduler.config.k8s.io/v1beta3 +kind: KubeSchedulerConfiguration + +profiles: + - schedulerName: default-scheduler + pluginConfig: + - name: PodTopologySpread + args: + defaultConstraints: [] + defaultingType: List +``` + + +## 比较 podAffinity 和 podAntiAffinity {#comparison-with-podaffinity-podantiaffinity} + +在 Kubernetes 中,Pod 间亲和性和反亲和性控制 Pod 彼此的调度方式(更密集或更分散)。 + +对于 
`podAffinity`:吸引 Pod;你可以尝试将任意数量的 Pod 集中到符合条件的拓扑域中。 +对于 `podAntiAffinity`:驱逐 Pod。如果将此设为 `requiredDuringSchedulingIgnoredDuringExecution` 模式, +则只有单个 Pod 可以调度到单个拓扑域;如果你选择 `preferredDuringSchedulingIgnoredDuringExecution`, +则你将丢失强制执行此约束的能力。 + + +要实现更细粒度的控制,你可以设置拓扑分布约束来将 Pod 分布到不同的拓扑域下,从而实现高可用性或节省成本。 +这也有助于工作负载的滚动更新和平稳地扩展副本规模。 + +有关详细信息,请参阅有关 Pod 拓扑分布约束的增强倡议的 +[动机](https://github.com/kubernetes/enhancements/tree/master/keps/sig-scheduling/895-pod-topology-spread#motivation)一节。 + + +## 已知局限性 {#known-limitations} + +- 当 Pod 被移除时,无法保证约束仍被满足。例如,缩减某 Deployment 的规模时,Pod 的分布可能不再均衡。 + + 你可以使用 [Descheduler](https://github.com/kubernetes-sigs/descheduler) 来重新实现 Pod 分布的均衡。 + +- 具有污点的节点上匹配的 Pod 也会被统计。 + 参考 [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921)。 + + +- 该调度器不会预先知道集群拥有的所有可用区和其他拓扑域。 + 拓扑域由集群中存在的节点确定。在自动扩缩的集群中,如果一个节点池(或节点组)的节点数量缩减为零, + 而用户正期望其扩容时,可能会导致调度出现问题。 + 因为在这种情况下,调度器不会考虑这些拓扑域,因为其中至少有一个节点。 + 你可以通过使用感知 Pod 拓扑分布约束并感知整个拓扑域集的集群自动扩缩工具来解决此问题。 + +## {{% heading "whatsnext" %}} + + +- 博客:[PodTopologySpread 介绍](/blog/2020/05/introducing-podtopologyspread/)详细解释了 `maxSkew`, + 并给出了一些进阶的使用示例。 +- 阅读针对 Pod 的 API 参考的 + [调度](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling)一节。 diff --git a/content/zh-cn/docs/concepts/workloads/pods/_index.md b/content/zh-cn/docs/concepts/workloads/pods/_index.md index 818dcda9bf6aa..afd9ecdef408c 100644 --- a/content/zh-cn/docs/concepts/workloads/pods/_index.md +++ b/content/zh-cn/docs/concepts/workloads/pods/_index.md @@ -609,7 +609,7 @@ in the Pod Lifecycle documentation. The {{< api-reference page="workload-resources/pod-v1" >}} object definition describes the object in detail. * [The Distributed System Toolkit: Patterns for Composite Containers](/blog/2015/06/the-distributed-system-toolkit-patterns/) explains common layouts for Pods with more than one container. -* Read about [Pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/). +* Read about [Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints//). --> * 了解 [Pod 生命周期](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/)。 * 了解 [RuntimeClass](/zh-cn/docs/concepts/containers/runtime-class/),以及如何使用它 @@ -621,7 +621,7 @@ in the Pod Lifecycle documentation. 
对象的定义中包含了更多的细节信息。 * 博客 [分布式系统工具箱:复合容器模式](/blog/2015/06/the-distributed-system-toolkit-patterns/) 中解释了在同一 Pod 中包含多个容器时的几种常见布局。 -* 了解 [Pod 拓扑分布约束](/zh-cn/docs/concepts/workloads/pods/pod-topology-spread-constraints/)。 +* 了解 [Pod 拓扑分布约束](/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints//)。 - - - - -你可以使用 _拓扑分布约束(Topology Spread Constraints)_ 来控制 -{{< glossary_tooltip text="Pod" term_id="Pod" >}} 在集群内故障域之间的分布, -例如区域(Region)、可用区(Zone)、节点和其他用户自定义拓扑域。 -这样做有助于实现高可用并提升资源利用率。 - - - - -## 先决条件 {#prerequisites} - -### 节点标签 {#node-labels} - - -拓扑分布约束依赖于节点标签来标识每个节点所在的拓扑域。 -例如,某节点可能具有标签:`node=node1,zone=us-east-1a,region=us-east-1` - - -假设你拥有具有以下标签的一个 4 节点集群: - -``` -NAME STATUS ROLES AGE VERSION LABELS -node1 Ready 4m26s v1.16.0 node=node1,zone=zoneA -node2 Ready 3m58s v1.16.0 node=node2,zone=zoneA -node3 Ready 3m17s v1.16.0 node=node3,zone=zoneB -node4 Ready 2m43s v1.16.0 node=node4,zone=zoneB -``` - - -那么,从逻辑上看集群如下: - -{{}} -graph TB - subgraph "zoneB" - n3(Node3) - n4(Node4) - end - subgraph "zoneA" - n1(Node1) - n2(Node2) - end - - classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; - classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; - classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; - class n1,n2,n3,n4 k8s; - class zoneA,zoneB cluster; -{{< /mermaid >}} - - -你可以复用在大多数集群上自动创建和填充的[常用标签](/zh-cn/docs/reference/labels-annotations-taints/), -而不是手动添加标签。 - - -## Pod 的分布约束 {#spread-constraints-for-pods} - -### API - - -`pod.spec.topologySpreadConstraints` 字段定义如下所示: - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: mypod -spec: - topologySpreadConstraints: - - maxSkew: - topologyKey: - whenUnsatisfiable: - labelSelector: -``` - - -你可以定义一个或多个 `topologySpreadConstraint` 来指示 kube-scheduler -如何根据与现有的 Pod 的关联关系将每个传入的 Pod 部署到集群中。字段包括: - - - -- **maxSkew** 描述 Pod 分布不均的程度。这是给定拓扑类型中任意两个拓扑域中匹配的 - Pod 之间的最大允许差值。它必须大于零。取决于 `whenUnsatisfiable` 的取值, - 其语义会有不同。 - - 当 `whenUnsatisfiable` 等于 "DoNotSchedule" 时,`maxSkew` 是目标拓扑域中匹配的 - Pod 数与全局最小值(一个拓扑域中与标签选择器匹配的 Pod 的最小数量。例如,如果你有 - 3 个区域,分别具有 0 个、2 个 和 3 个匹配的 Pod,则全局最小值为 0。)之间可存在的差异。 - - 当 `whenUnsatisfiable` 等于 "ScheduleAnyway" 时,调度器会更为偏向能够降低偏差值的拓扑域。 - - -- **minDomains** 表示符合条件的域的最小数量。域是拓扑的一个特定实例。 - 符合条件的域是其节点与节点选择器匹配的域。 - - - 指定的 `minDomains` 的值必须大于 0。 - - 当符合条件的、拓扑键匹配的域的数量小于 `minDomains` 时,Pod 拓扑分布将“全局最小值” - (global minimum)设为 0,然后进行 `skew` 计算。“全局最小值”是一个符合条件的域中匹配 - Pod 的最小数量,如果符合条件的域的数量小于 `minDomains`,则全局最小值为零。 - - 当符合条件的拓扑键匹配域的个数等于或大于 `minDomains` 时,该值对调度没有影响。 - - 当 `minDomains` 为 nil 时,约束的行为等于 `minDomains` 为 1。 - - 当 `minDomains` 不为 nil 时,`whenUnsatisfiable` 的值必须为 "`DoNotSchedule`" 。 - - {{< note >}} - - `minDomains` 字段是在 1.24 版本中新增的 alpha 字段。你必须启用 - `MinDomainsInPodToplogySpread` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)才能使用它。 - {{< /note >}} - - -- **topologyKey** 是节点标签的键。如果两个节点使用此键标记并且具有相同的标签值, - 则调度器会将这两个节点视为处于同一拓扑域中。调度器试图在每个拓扑域中放置数量均衡的 Pod。 - -- **whenUnsatisfiable** 指示如果 Pod 不满足分布约束时如何处理: - - `DoNotSchedule`(默认)告诉调度器不要调度。 - - `ScheduleAnyway` 告诉调度器仍然继续调度,只是根据如何能将偏差最小化来对节点进行排序。 - -- **labelSelector** 用于查找匹配的 Pod。匹配此标签的 Pod 将被统计, - 以确定相应拓扑域中 Pod 的数量。 - 有关详细信息,请参考[标签选择算符](/zh-cn/docs/concepts/overview/working-with-objects/labels/#label-selectors)。 - - -当 Pod 定义了不止一个 `topologySpreadConstraint`,这些约束之间是逻辑与的关系。 -kube-scheduler 会为新的 Pod 寻找一个能够满足所有约束的节点。 - - -你可以执行 `kubectl explain Pod.spec.topologySpreadConstraints` -命令以了解关于 topologySpreadConstraints 的更多信息。 - - -### 例子:单个 TopologySpreadConstraint - -假设你拥有一个 4 节点集群,其中标记为 `foo:bar` 的 3 个 Pod 分别位于 -node1、node2 和 
node3 中: - -{{}} -graph BT - subgraph "zoneB" - p3(Pod) --> n3(Node3) - n4(Node4) - end - subgraph "zoneA" - p1(Pod) --> n1(Node1) - p2(Pod) --> n2(Node2) - end - - classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; - classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; - classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; - class n1,n2,n3,n4,p1,p2,p3 k8s; - class zoneA,zoneB cluster; -{{< /mermaid >}} - - -如果希望新来的 Pod 均匀分布在现有的可用区域,则可以按如下设置其规约: - -{{< codenew file="pods/topology-spread-constraints/one-constraint.yaml" >}} - - -`topologyKey: zone` 意味着均匀分布将只应用于存在标签键值对为 -"zone:<任何值>" 的节点。 -`whenUnsatisfiable: DoNotSchedule` 告诉调度器如果新的 Pod 不满足约束, -则让它保持悬决状态。 - - -如果调度器将新的 Pod 放入 "zoneA",Pods 分布将变为 [3, 1],因此实际的偏差为 -2(3 - 1)。这违反了 `maxSkew: 1` 的约定。此示例中,新 Pod 只能放置在 -"zoneB" 上: - -{{}} -graph BT - subgraph "zoneB" - p3(Pod) --> n3(Node3) - p4(mypod) --> n4(Node4) - end - subgraph "zoneA" - p1(Pod) --> n1(Node1) - p2(Pod) --> n2(Node2) - end - - classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; - classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; - classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; - class n1,n2,n3,n4,p1,p2,p3 k8s; - class p4 plain; - class zoneA,zoneB cluster; -{{< /mermaid >}} - -或者 - -{{}} -graph BT - subgraph "zoneB" - p3(Pod) --> n3(Node3) - p4(mypod) --> n3 - n4(Node4) - end - subgraph "zoneA" - p1(Pod) --> n1(Node1) - p2(Pod) --> n2(Node2) - end - - classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; - classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; - classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; - class n1,n2,n3,n4,p1,p2,p3 k8s; - class p4 plain; - class zoneA,zoneB cluster; -{{< /mermaid >}} - - -你可以调整 Pod 规约以满足各种要求: - - -- 将 `maxSkew` 更改为更大的值,比如 "2",这样新的 Pod 也可以放在 "zoneA" 上。 -- 将 `topologyKey` 更改为 "node",以便将 Pod 均匀分布在节点上而不是区域中。 - 在上面的例子中,如果 `maxSkew` 保持为 "1",那么传入的 Pod 只能放在 "node4" 上。 -- 将 `whenUnsatisfiable: DoNotSchedule` 更改为 `whenUnsatisfiable: ScheduleAnyway`, - 以确保新的 Pod 始终可以被调度(假设满足其他的调度 API)。 - 但是,最好将其放置在匹配 Pod 数量较少的拓扑域中。 - (请注意,这一优先判定会与其他内部调度优先级(如资源使用率等)排序准则一起进行标准化。) - - -### 例子:多个 TopologySpreadConstraints - - -下面的例子建立在前面例子的基础上。假设你拥有一个 4 节点集群,其中 3 个标记为 `foo:bar` 的 -Pod 分别位于 node1、node2 和 node3 上: - -{{}} -graph BT - subgraph "zoneB" - p3(Pod) --> n3(Node3) - n4(Node4) - end - subgraph "zoneA" - p1(Pod) --> n1(Node1) - p2(Pod) --> n2(Node2) - end - - classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; - classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; - classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; - class n1,n2,n3,n4,p1,p2,p3 k8s; - class p4 plain; - class zoneA,zoneB cluster; -{{< /mermaid >}} - - -可以使用 2 个 TopologySpreadConstraint 来控制 Pod 在 区域和节点两个维度上的分布: - -{{< codenew file="pods/topology-spread-constraints/two-constraints.yaml" >}} - - -在这种情况下,为了匹配第一个约束,新的 Pod 只能放置在 "zoneB" 中;而在第二个约束中, -新的 Pod 只能放置在 "node4" 上。最后两个约束的结果加在一起,唯一可行的选择是放置在 -"node4" 上。 - - -多个约束之间可能存在冲突。假设有一个跨越 2 个区域的 3 节点集群: - -{{}} -graph BT - subgraph "zoneB" - p4(Pod) --> n3(Node3) - p5(Pod) --> n3 - end - subgraph "zoneA" - p1(Pod) --> n1(Node1) - p2(Pod) --> n1 - p3(Pod) --> n2(Node2) - end - - classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; - classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; - classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; - class n1,n2,n3,n4,p1,p2,p3,p4,p5 k8s; - class zoneA,zoneB cluster; -{{< /mermaid >}} 
- - -如果对集群应用 "two-constraints.yaml",会发现 "mypod" 处于 `Pending` 状态。 -这是因为:为了满足第一个约束,"mypod" 只能放在 "zoneB" 中,而第二个约束要求 -"mypod" 只能放在 "node2" 上。Pod 调度无法满足两种约束。 - - -为了克服这种情况,你可以增加 `maxSkew` 或修改其中一个约束,让其使用 -`whenUnsatisfiable: ScheduleAnyway`。 - - -### 节点亲和性与节点选择器的相互作用 {#interaction-with-node-affinity-and-node-selectors} - -如果 Pod 定义了 `spec.nodeSelector` 或 `spec.affinity.nodeAffinity`, -调度器将在偏差计算中跳过不匹配的节点。 - - -### 示例:TopologySpreadConstraints 与 NodeAffinity - -假设你有一个跨越 zoneA 到 zoneC 的 5 节点集群: - -{{}} -graph BT - subgraph "zoneB" - p3(Pod) --> n3(Node3) - n4(Node4) - end - subgraph "zoneA" - p1(Pod) --> n1(Node1) - p2(Pod) --> n2(Node2) - end - -classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; -classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; -classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; -class n1,n2,n3,n4,p1,p2,p3 k8s; -class p4 plain; -class zoneA,zoneB cluster; -{{< /mermaid >}} - -{{}} -graph BT - subgraph "zoneC" - n5(Node5) - end - -classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; -classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; -classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; -class n5 k8s; -class zoneC cluster; -{{< /mermaid >}} - - - -而且你知道 "zoneC" 必须被排除在外。在这种情况下,可以按如下方式编写 YAML, -以便将 "mypod" 放置在 "zoneB" 上,而不是 "zoneC" 上。同样,`spec.nodeSelector` -也要一样处理。 - -{{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}} - - -调度器不会预先知道集群拥有的所有区域和其他拓扑域。拓扑域由集群中存在的节点确定。 -在自动伸缩的集群中,如果一个节点池(或节点组)的节点数量为零, -而用户正期望其扩容时,可能会导致调度出现问题。 -因为在这种情况下,调度器不会考虑这些拓扑域信息,因为它们是空的,没有节点。 - - -### 其他值得注意的语义 {#other-noticeable-semantics} - -这里有一些值得注意的隐式约定: - - -- 只有与新的 Pod 具有相同命名空间的 Pod 才能作为匹配候选者。 -- 调度器会忽略没有 `topologySpreadConstraints[*].topologyKey` 的节点。这意味着: - 1. 位于这些节点上的 Pod 不影响 `maxSkew` 的计算。 - 在上面的例子中,假设 "node1" 没有标签 "zone",那么 2 个 Pod 将被忽略, - 因此传入的 Pod 将被调度到 "zoneA" 中。 - - 2. 
新的 Pod 没有机会被调度到这类节点上。 - 在上面的例子中,假设一个带有标签 `{zone-typo: zoneC}` 的 "node5" 加入到集群, - 它将由于没有标签键 "zone" 而被忽略。 - - -- 注意,如果新 Pod 的 `topologySpreadConstraints[*].labelSelector` - 与自身的标签不匹配,将会发生什么。 - 在上面的例子中,如果移除新 Pod 上的标签,Pod 仍然可以调度到 "zoneB",因为约束仍然满足。 - 然而,在调度之后,集群的不平衡程度保持不变。zoneA 仍然有 2 个带有 {foo:bar} 标签的 Pod, - zoneB 有 1 个带有 {foo:bar} 标签的 Pod。 - 因此,如果这不是你所期望的,建议工作负载的 `topologySpreadConstraints[*].labelSelector` - 与其自身的标签匹配。 - - -### 集群级别的默认约束 {#cluster-level-default-constraints} - -为集群设置默认的拓扑分布约束也是可能的。 -默认拓扑分布约束在且仅在以下条件满足时才会被应用到 Pod 上: - -- Pod 没有在其 `.spec.topologySpreadConstraints` 设置任何约束; -- Pod 隶属于某个服务、副本控制器、ReplicaSet 或 StatefulSet。 - - -你可以在 [调度方案(Scheduling Profile)](/zh-cn/docs/reference/scheduling/config/#profiles) -中将默认约束作为 `PodTopologySpread` 插件参数的一部分来设置。 -约束的设置采用[如前所述的 API](#api),只是 `labelSelector` 必须为空。 -选择算符是根据 Pod 所属的服务、副本控制器、ReplicaSet 或 StatefulSet 来设置的。 - -配置的示例可能看起来像下面这个样子: - -```yaml -apiVersion: kubescheduler.config.k8s.io/v1beta3 -kind: KubeSchedulerConfiguration - -profiles: - - schedulerName: default-scheduler - pluginConfig: - - name: PodTopologySpread - args: - defaultConstraints: - - maxSkew: 1 - topologyKey: topology.kubernetes.io/zone - whenUnsatisfiable: ScheduleAnyway - defaultingType: List -``` - -{{< note >}} - -[`SelectorSpread` 插件](/zh-cn/docs/reference/scheduling/config/#scheduling-plugins)默认是被禁用的。 -建议使用 `PodTopologySpread` 来实现类似的行为。 -{{< /note >}} - - -#### 内部默认约束 {#internal-default-constraints} - -{{< feature-state for_k8s_version="v1.24" state="stable" >}} - - -如果你没有为 Pod 拓扑分布配置任何集群级别的默认约束, -kube-scheduler 的行为就像你指定了以下默认拓扑约束一样: - -```yaml -defaultConstraints: - - maxSkew: 3 - topologyKey: "kubernetes.io/hostname" - whenUnsatisfiable: ScheduleAnyway - - maxSkew: 5 - topologyKey: "topology.kubernetes.io/zone" - whenUnsatisfiable: ScheduleAnyway -``` - - -此外,原来用于提供等同行为的 `SelectorSpread` 插件默认被禁用。 - -{{< note >}} - -对于分布约束中所指定的拓扑键而言,`PodTopologySpread` 插件不会为不包含这些主键的节点评分。 -这可能导致在使用默认拓扑约束时,其行为与原来的 `SelectorSpread` 插件的默认行为不同, - - -如果你的节点不会 **同时** 设置 `kubernetes.io/hostname` 和 -`topology.kubernetes.io/zone` 标签,你应该定义自己的约束而不是使用 -Kubernetes 的默认约束。 -{{< /note >}} - - -如果你不想为集群使用默认的 Pod 分布约束,你可以通过设置 `defaultingType` 参数为 `List` -并将 `PodTopologySpread` 插件配置中的 `defaultConstraints` 参数置空来禁用默认 Pod 分布约束。 - -```yaml -apiVersion: kubescheduler.config.k8s.io/v1beta3 -kind: KubeSchedulerConfiguration - -profiles: - - schedulerName: default-scheduler - pluginConfig: - - name: PodTopologySpread - args: - defaultConstraints: [] - defaultingType: List -``` - - -## 与 PodAffinity/PodAntiAffinity 相比较 - -在 Kubernetes 中,与“亲和性”相关的指令控制 Pod 的调度方式(更密集或更分散)。 - - -- 对于 `PodAffinity`,你可以尝试将任意数量的 Pod 集中到符合条件的拓扑域中。 -- 对于 `PodAntiAffinity`,只能将一个 Pod 调度到某个拓扑域中。 - - -要实现更细粒度的控制,你可以设置拓扑分布约束来将 Pod 分布到不同的拓扑域下, -从而实现高可用性或节省成本。这也有助于工作负载的滚动更新和平稳地扩展副本规模。 -有关详细信息,请参考 -[动机](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20190221-pod-topology-spread.md#motivation)文档。 - - -## 已知局限性 - -- 当 Pod 被移除时,无法保证约束仍被满足。例如,缩减某 Deployment 的规模时, - Pod 的分布可能不再均衡。 - 你可以使用 [Descheduler](https://github.com/kubernetes-sigs/descheduler) - 来重新实现 Pod 分布的均衡。 - -- 具有污点的节点上匹配的 Pods 也会被统计。 - 参考 [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921)。 - -## {{% heading "whatsnext" %}} - - -- [博客: PodTopologySpread介绍](https://kubernetes.io/blog/2020/05/introducing-podtopologyspread/) - 详细解释了 `maxSkew`,并给出了一些高级的使用示例。 From da83961cd07eb11e5008b1e792cd6a5bf539a1fe Mon Sep 17 00:00:00 2001 From: Michael Date: Sat, 9 Jul 2022 14:10:36 +0800 Subject: [PATCH 238/292] [zh-cn] resync 
workload-resources/deployment-v1.md --- .../workload-resources/deployment-v1.md | 1107 +++++++++++++++++ 1 file changed, 1107 insertions(+) create mode 100644 content/zh-cn/docs/reference/kubernetes-api/workload-resources/deployment-v1.md diff --git a/content/zh-cn/docs/reference/kubernetes-api/workload-resources/deployment-v1.md b/content/zh-cn/docs/reference/kubernetes-api/workload-resources/deployment-v1.md new file mode 100644 index 0000000000000..746ede656f8fc --- /dev/null +++ b/content/zh-cn/docs/reference/kubernetes-api/workload-resources/deployment-v1.md @@ -0,0 +1,1107 @@ +--- +api_metadata: + apiVersion: "apps/v1" + import: "k8s.io/api/apps/v1" + kind: "Deployment" +content_type: "api_reference" +description: "Deployment 使得 Pod 和 ReplicaSet 能够进行声明式更新。" +title: "Deployment" +weight: 5 +--- + + +`apiVersion: apps/v1` + +`import "k8s.io/api/apps/v1"` + +## Deployment {#Deployment} + + +Deployment 使得 Pod 和 ReplicaSet 能够进行声明式更新。 + +
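为便于理解下述各个字段,这里先给出一个最小的 Deployment 清单示意(仅作演示:名称 `nginx-deployment`、标签 `app: nginx` 和镜像 `nginx:1.14.2` 均为假设值,并非本 API 定义的一部分):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment    # 假设的名称
  labels:
    app: nginx
spec:
  replicas: 3                # 预期副本数
  selector:
    matchLabels:
      app: nginx             # 必须与下面 Pod 模板的标签匹配
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2  # 假设的镜像
        ports:
        - containerPort: 80
```

其中 `spec.selector` 必须与 `spec.template.metadata.labels` 匹配,这一点与下文 DeploymentSpec 中对 `selector` 字段的说明一致。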
+ +- **apiVersion**: apps/v1 + +- **kind**: Deployment + + +- **metadata** (}}">ObjectMeta) + + 标准的对象元数据。更多信息: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata + +- **spec** (}}">DeploymentSpec) + + Deployment 预期行为的规约。 + +- **status** (}}">DeploymentStatus) + + 最近观测到的 Deployment 状态。 + +## DeploymentSpec {#DeploymentSpec} + + +DeploymentSpec 定义 Deployment 预期行为的规约。 + +
+ + +- **selector** (}}">LabelSelector),必需 + + 供 Pod 所用的标签选择算符。通过此字段选择现有 ReplicaSet 的 Pod 集合, + 被选中的 ReplicaSet 将受到这个 Deployment 的影响。此字段必须与 Pod 模板的标签匹配。 + +- **template** (}}">PodTemplateSpec),必需 + + template 描述将要创建的 Pod。 + + +- **replicas** (int32) + + 预期 Pod 的数量。这是一个指针,用于辨别显式零和未指定的值。默认为 1。 + +- **minReadySeconds** (int32) + + 新建的 Pod 在没有任何容器崩溃的情况下就绪并被系统视为可用的最短秒数。 + 默认为 0(Pod 就绪后即被视为可用)。 + + +- **strategy** (DeploymentStrategy) + + **补丁策略:retainKeys** + + 将现有 Pod 替换为新 Pod 时所用的部署策略。 + + + **DeploymentStrategy 描述如何将现有 Pod 替换为新 Pod。** + + + + - **strategy.type** (string) + + 部署的类型。取值可以是 “Recreate” 或 “RollingUpdate”。默认为 RollingUpdate。 + + - **strategy.rollingUpdate** (RollingUpdateDeployment) + + 滚动更新这些配置参数。仅当 type = RollingUpdate 时才出现。 + + + **控制滚动更新预期行为的规约。** + + + + - **strategy.rollingUpdate.maxSurge** (IntOrString) + + 超出预期的 Pod 数量之后可以调度的最大 Pod 数量。该值可以是一个绝对数(例如: + 5)或一个预期 Pod 的百分比(例如:10%)。如果 MaxUnavailable 为 0,则此字段不能为 0。 + 通过向上取整计算得出一个百分比绝对数。默认为 25%。例如:当此值设为 30% 时, + 如果滚动更新启动,则可以立即对 ReplicaSet 扩容,从而使得新旧 Pod 总数不超过预期 Pod 数量的 130%。 + 一旦旧 Pod 被杀死,则可以再次对新的 ReplicaSet 扩容, + 确保更新期间任何时间运行的 Pod 总数最多为预期 Pod 数量的 130%。 + + + **IntOrString 是可以保存 int32 或字符串的一个类型。 + 当用于 JSON 或 YAML 编组和取消编组时,它会产生或消费内部类型。 + 例如,这允许你拥有一个可以接受名称或数值的 JSON 字段。** + + + + - **strategy.rollingUpdate.maxUnavailable** (IntOrString) + + 更新期间可能不可用的最大 Pod 数量。该值可以是一个绝对数(例如: + 5)或一个预期 Pod 的百分比(例如:10%)。通过向下取整计算得出一个百分比绝对数。 + 如果 MaxSurge 为 0,则此字段不能为 0。默认为 25%。 + 例如:当此字段设为 30%,则在滚动更新启动时 ReplicaSet 可以立即缩容为预期 Pod 数量的 70%。 + 一旦新的 Pod 就绪,ReplicaSet 可以再次缩容,接下来对新的 ReplicaSet 扩容, + 确保更新期间任何时间可用的 Pod 总数至少是预期 Pod 数量的 70%。 + + + **IntOrString 是可以保存 int32 或字符串的一个类型。 + 当用于 JSON 或 YAML 编组和取消编组时,它会产生或消费内部类型。 + 例如,这允许你拥有一个可以接受名称或数值的 JSON 字段。** + + +- **revisionHistoryLimit** (int32) + + 保留允许回滚的旧 ReplicaSet 的数量。这是一个指针,用于辨别显式零和未指定的值。默认为 10。 + +- **progressDeadlineSeconds** (int32) + + Deployment 在被视为失败之前取得进展的最大秒数。Deployment 控制器将继续处理失败的部署, + 原因为 ProgressDeadlineExceeded 的状况将被显示在 Deployment 状态中。 + 请注意,在 Deployment 暂停期间将不会估算进度。默认为 600s。 + +- **paused** (boolean) + + 指示部署被暂停。 + +## DeploymentStatus {#DeploymentStatus} + + +DeploymentStatus 是最近观测到的 Deployment 状态。 + +
+ + +- **replicas** (int32) + + 此部署所针对的(其标签与选择算符匹配)未终止 Pod 的总数。 + +- **availableReplicas** (int32) + + 此部署针对的可用(至少 minReadySeconds 才能就绪)的 Pod 总数。 + +- **readyReplicas** (int32) + + readyReplicas 是此 Deployment 在就绪状况下处理的目标 Pod 数量。 + + +- **unavailableReplicas** (int32) + + 此部署针对的不可用 Pod 总数。这是 Deployment 具有 100% 可用容量时仍然必需的 Pod 总数。 + 它们可能是正在运行但还不可用的 Pod,也可能是尚未创建的 Pod。 + + +- **updatedReplicas** (int32) + + 此 Deployment 所针对的未终止 Pod 的总数,这些 Pod 采用了预期的模板规约。 + +- **collisionCount** (int32) + + 供 Deployment 所用的哈希冲突计数。 + Deployment 控制器在需要为最新的 ReplicaSet 创建名称时将此字段用作冲突预防机制。 + + +- **conditions** ([]DeploymentCondition) + + **补丁策略:按照键 `type` 合并** + + 表示 Deployment 当前状态的最新可用观测值。 + + + **DeploymentCondition 描述某个点的部署状态。** + + + + - **conditions.status** (string),必需 + + 状况的状态,取值为 True、False 或 Unknown 之一。 + + - **conditions.type** (string),必需 + + Deployment 状况的类型。 + + + + - **conditions.lastTransitionTime** (Time) + + 状况上次从一个状态转换为另一个状态的时间。 + + + **Time 是对 time.Time 的封装。Time 支持对 YAML 和 JSON 进行正确封包。 + 为 time 包的许多函数方法提供了封装器。** + + + + - **conditions.lastUpdateTime** (Time) + + 上次更新此状况的时间。 + + + **Time 是对 time.Time 的封装。Time 支持对 YAML 和 JSON 进行正确封包。 + 为 time 包的许多函数方法提供了封装器。** + + + + - **conditions.message** (string) + + 这是一条人类可读的消息,指示有关上次转换的详细信息。 + + - **conditions.reason** (string) + + 状况上次转换的原因。 + + +- **observedGeneration** (int64) + + Deployment 控制器观测到的代数(Generation)。 + +## DeploymentList {#DeploymentList} + + +DeploymentList 是 Deployment 的列表。 + +
+ +- **apiVersion**: apps/v1 + +- **kind**: DeploymentList + + +- **metadata** (}}">ListMeta) + + 标准的列表元数据。 + +- **items** ([]}}">Deployment),必需 + + items 是 Deployment 的列表。 + + +## 操作 {#Operations} + +
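在逐一查看下面的各个操作之前,这里给出一个示意性的访问方式(仅作演示:假设已在本地 8001 端口运行 `kubectl proxy`,且 `default` 名字空间中存在名为 `my-deployment` 的 Deployment;这些名称均为假设值):

```shell
# 在本地启动 API 服务器代理(8001 为默认端口)
kubectl proxy --port=8001 &

# 对应下面的 `get` 操作:读取指定的 Deployment
curl http://localhost:8001/apis/apps/v1/namespaces/default/deployments/my-deployment

# 对应下面的 `list` 操作:列出 default 名字空间中的所有 Deployment
curl http://localhost:8001/apis/apps/v1/namespaces/default/deployments
```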
+ +### `get` 读取指定的 Deployment + +#### HTTP 请求 + +GET /apis/apps/v1/namespaces/{namespace}/deployments/{name} + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + Deployment 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">Deployment): OK + +401: Unauthorized + + +### `get` 读取指定的 Deployment 的状态 + +#### HTTP 请求 + +GET /apis/apps/v1/namespaces/{namespace}/deployments/{name}/status + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + Deployment 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">Deployment): OK + +401: Unauthorized + + +### `list` 列出或监视 Deployment 类别的对象 + +#### HTTP 请求 + +GET /apis/apps/v1/namespaces/{namespace}/deployments + + +#### 参数 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **allowWatchBookmarks** (**查询参数**): boolean + + }}">allowWatchBookmarks + +- **continue** (**查询参数**): string + + }}">continue + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + +- **limit** (**查询参数**): integer + + }}">limit + +- **pretty** (**查询参数**): string + + }}">pretty + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + +- **timeoutSeconds** (**查询参数**): integer + + }}">timeoutSeconds + +- **watch** (**查询参数**): boolean + + }}">watch + + +#### 响应 + +200 (}}">DeploymentList): OK + +401: Unauthorized + + +### `list` 列出或监视 Deployment 类别的对象 + +#### HTTP 请求 + +GET /apis/apps/v1/deployments + + +#### 参数 + +- **allowWatchBookmarks** (**查询参数**): boolean + + }}">allowWatchBookmarks + +- **continue** (**查询参数**): string + + }}">continue + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + +- **limit** (**查询参数**): integer + + }}">limit + +- **pretty** (**查询参数**): string + + }}">pretty + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + +- **timeoutSeconds** (**查询参数**): integer + + }}">timeoutSeconds + +- **watch** (**查询参数**): boolean + + }}">watch + + +#### 响应 + +200 (}}">DeploymentList): OK + +401: Unauthorized + + +### `create` 创建 Deployment + +#### HTTP 请求 + +POST /apis/apps/v1/namespaces/{namespace}/deployments + + +#### 参数 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">Deployment,必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">Deployment): OK + +201 (}}">Deployment): Created + +202 (}}">Deployment): Accepted + +401: Unauthorized + + +### `update` 替换指定的 Deployment + +#### HTTP 请求 + +PUT /apis/apps/v1/namespaces/{namespace}/deployments/{name} + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + Deployment 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">Deployment,必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">Deployment): OK + +201 (}}">Deployment): Created + +401: Unauthorized + + +### `update` 
替换指定的 Deployment 的状态 + +#### HTTP 请求 + +PUT /apis/apps/v1/namespaces/{namespace}/deployments/{name}/status + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + Deployment 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">Deployment,必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">Deployment): OK + +201 (}}">Deployment): Created + +401: Unauthorized + + +### `patch` 部分更新指定的 Deployment + +#### HTTP 请求 + +PATCH /apis/apps/v1/namespaces/{namespace}/deployments/{name} + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + Deployment 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">Patch,必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **force** (**查询参数**): boolean + + }}">force + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">Deployment): OK + +201 (}}">Deployment): Created + +401: Unauthorized + + +### `patch` 部分更新指定的 Deployment 的状态 + +#### HTTP 请求 + +PATCH /apis/apps/v1/namespaces/{namespace}/deployments/{name}/status + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + Deployment 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">Patch,必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **force** (**查询参数**): boolean + + }}">force + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">Deployment): OK + +201 (}}">Deployment): Created + +401: Unauthorized + + +### `delete` 删除 Deployment + +#### HTTP 请求 + +DELETE /apis/apps/v1/namespaces/{namespace}/deployments/{name} + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + Deployment 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">DeleteOptions + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **gracePeriodSeconds** (**查询参数**): integer + + }}">gracePeriodSeconds + +- **pretty** (**查询参数**): string + + }}">pretty + +- **propagationPolicy** (**查询参数**): string + + }}">propagationPolicy + + +#### 响应 + +200 (}}">Status): OK + +202 (}}">Status): Accepted + +401: Unauthorized + + +### `deletecollection` 删除 Deployment 的集合 + +#### HTTP 请求 + +DELETE /apis/apps/v1/namespaces/{namespace}/deployments + + +#### 参数 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">DeleteOptions + +- **continue** (**查询参数**): string + + }}">continue + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + +- **gracePeriodSeconds** (**查询参数**): integer + + }}">gracePeriodSeconds + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + +- **limit** (**查询参数**): integer + + }}">limit + +- **pretty** (**查询参数**): string + + }}">pretty + +- **propagationPolicy** (**查询参数**): string + + }}">propagationPolicy + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + +- **timeoutSeconds** (**查询参数**): integer + + }}">timeoutSeconds + + +#### 响应 + +200 (}}">Status): OK + +401: Unauthorized From f5e9117013f2086dee68f9b6a11668afa5b44e41 Mon Sep 17 00:00:00 2001 
From: Michael Date: Sat, 9 Jul 2022 11:55:52 +0800 Subject: [PATCH 239/292] [zh-cn] resync workload-resources/replication-controller-v1.md --- .../replication-controller-v1.md | 969 ++++++++++++++++++ 1 file changed, 969 insertions(+) create mode 100644 content/zh-cn/docs/reference/kubernetes-api/workload-resources/replication-controller-v1.md diff --git a/content/zh-cn/docs/reference/kubernetes-api/workload-resources/replication-controller-v1.md b/content/zh-cn/docs/reference/kubernetes-api/workload-resources/replication-controller-v1.md new file mode 100644 index 0000000000000..ba4baaccb9498 --- /dev/null +++ b/content/zh-cn/docs/reference/kubernetes-api/workload-resources/replication-controller-v1.md @@ -0,0 +1,969 @@ +--- +api_metadata: + apiVersion: "v1" + import: "k8s.io/api/core/v1" + kind: "ReplicationController" +content_type: "api_reference" +description: "ReplicationController 表示一个副本控制器的配置。" +title: "ReplicationController" +weight: 3 +--- + + +`apiVersion: v1` + +`import "k8s.io/api/core/v1"` + +## ReplicationController {#ReplicationController} + + +ReplicationController 表示一个副本控制器的配置。 + +
+ +- **apiVersion**: v1 + +- **kind**: ReplicationController + + +- **metadata** (}}">ObjectMeta) + + 如果 ReplicationController 的标签为空,则这些标签默认为与副本控制器管理的 Pod 相同。 + 标准的对象元数据。更多信息: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata + +- **spec** (}}">ReplicationControllerSpec) + + spec 定义副本控制器预期行为的规约。更多信息: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + + +- **status** (}}">ReplicationControllerStatus) + + status 是最近观测到的副本控制器的状态。此数据可能在某个时间窗之后过期。 + 该值由系统填充,只读。更多信息: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + +## ReplicationControllerSpec {#ReplicationControllerSpec} + + +ReplicationControllerSpec 表示一个副本控制器的规约。 + +
+ + +- **selector** (map[string]string) + + selector 是针对 Pod 的标签查询,符合条件的 Pod 个数应与 replicas 匹配。 + 如果 selector 为空,则默认为出现在 Pod 模板中的标签。 + 如果置空以表示默认使用 Pod 模板中的标签,则标签的主键和取值必须匹配,以便由这个副本控制器进行控制。 + 更多信息: + https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/labels/#label-selectors + +- **template** (}}">PodTemplateSpec) + + template 是描述 Pod 的一个对象,将在检测到副本不足时创建此对象。 + 此字段优先于 templateRef。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/replicationcontroller#pod-template + + +- **replicas** (int32) + + replicas 是预期副本的数量。这是一个指针,用于辨别显式零和未指定的值。默认为 1。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/replicationcontroller#what-is-a-replicationcontroller + +- **minReadySeconds** (int32) + + 新建的 Pod 在没有任何容器崩溃的情况下就绪并被系统视为可用的最短秒数。 + 默认为 0(Pod 就绪后即被视为可用)。 + +## ReplicationControllerStatus {#ReplicationControllerStatus} + + +ReplicationControllerStatus 表示一个副本控制器的当前状态。 + +
+ + +- **replicas** (int32),必需 + + replicas 是最近观测到的副本数量。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/replicationcontroller#what-is-a-replicationcontroller + +- **availableReplicas** (int32) + + 这个副本控制器可用副本(至少 minReadySeconds 才能就绪)的数量。 + + +- **readyReplicas** (int32) + + 此副本控制器所用的就绪副本的数量。 + +- **fullyLabeledReplicas** (int32) + + 标签与副本控制器的 Pod 模板标签匹配的 Pod 数量。 + + +- **conditions** ([]ReplicationControllerCondition) + + **补丁策略:按照键 `type` 合并** + + 表示副本控制器当前状态的最新可用观测值。 + + + **ReplicationControllerCondition 描述某个点的副本控制器的状态。** + + + + - **conditions.status** (string),必需 + + 状况的状态,取值为 True、False 或 Unknown 之一。 + + - **conditions.type** (string),必需 + + 副本控制器状况的类型。 + + + + - **conditions.lastTransitionTime** (Time) + + 状况上次从一个状态转换为另一个状态的时间。 + + + **Time 是对 time.Time 的封装。Time 支持对 YAML 和 JSON 进行正确封包。 + 为 time 包的许多函数方法提供了封装器。** + + + - **conditions.message** (string) + + 这是一条人类可读的消息,指示有关上次转换的详细信息。 + + - **conditions.reason** (string) + + 状况上次转换的原因。 + + +- **observedGeneration** (int64) + + observedGeneration 反映了最近观测到的副本控制器的生成情况。 + +## ReplicationControllerList {#ReplicationControllerList} + +ReplicationControllerList 是副本控制器的集合。 + +
+ +- **apiVersion**: v1 + +- **kind**: ReplicationControllerList + + +- **metadata** (}}">ListMeta) + + 标准的列表元数据。更多信息: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + +- **items** ([]}}">ReplicationController),必需 + + 副本控制器的列表。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/replicationcontroller + + +## 操作 {#Operations} + +
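下面的示意性示例展示了文中列出的部分查询参数的用法(仅作演示:同样假设已通过 `kubectl proxy` 在本地 8001 端口代理 API 服务器,名称 `my-rc` 与标签 `app=demo` 均为假设值):

```shell
# 对应下面的 `delete` 操作:删除指定的 ReplicationController,
# propagationPolicy=Foreground 表示先删除其管理的 Pod,再删除控制器本身
curl -X DELETE \
  "http://localhost:8001/api/v1/namespaces/default/replicationcontrollers/my-rc?propagationPolicy=Foreground"

# 对应下面的 `deletecollection` 操作:按标签选择算符批量删除
curl -X DELETE \
  "http://localhost:8001/api/v1/namespaces/default/replicationcontrollers?labelSelector=app%3Ddemo"
```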
+ +### `get` 读取指定的 ReplicationController + +#### HTTP 请求 + +GET /api/v1/namespaces/{namespace}/replicationcontrollers/{name} + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + ReplicationController 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">ReplicationController): OK + +401: Unauthorized + + +### `get` 读取指定的 ReplicationController 的状态 + +#### HTTP 请求 + +GET /api/v1/namespaces/{namespace}/replicationcontrollers/{name}/status + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + ReplicationController 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">ReplicationController): OK + +401: Unauthorized + + +### `list` 列出或监视 ReplicationController 类别的对象 + +#### HTTP 请求 + +GET /api/v1/namespaces/{namespace}/replicationcontrollers + + +#### 参数 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **allowWatchBookmarks** (**查询参数**): boolean + + }}">allowWatchBookmarks + +- **continue** (**查询参数**): string + + }}">continue + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + +- **limit** (**查询参数**): integer + + }}">limit + +- **pretty** (**查询参数**): string + + }}">pretty + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + +- **timeoutSeconds** (**查询参数**): integer + + }}">timeoutSeconds + +- **watch** (**查询参数**): boolean + + }}">watch + + +#### 响应 + +200 (}}">ReplicationControllerList): OK + +401: Unauthorized + + +### `list` 列出或监视 ReplicationController 类别的对象 + +#### HTTP 请求 + +GET /api/v1/replicationcontrollers + + +#### 参数 + +- **allowWatchBookmarks** (**查询参数**): boolean + + }}">allowWatchBookmarks + +- **continue** (**查询参数**): string + + }}">continue + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + +- **limit** (**查询参数**): integer + + }}">limit + +- **pretty** (**查询参数**): string + + }}">pretty + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + +- **timeoutSeconds** (**查询参数**): integer + + }}">timeoutSeconds + +- **watch** (**查询参数**): boolean + + }}">watch + + +#### 响应 + +200 (}}">ReplicationControllerList): OK + +401: Unauthorized + + +### `create` 创建 ReplicationController + +#### HTTP 请求 + +POST /api/v1/namespaces/{namespace}/replicationcontrollers + + +#### 参数 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">ReplicationController,必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">ReplicationController): OK + +201 (}}">ReplicationController): Created + +202 (}}">ReplicationController): Accepted + +401: Unauthorized + + +### `update` 替换指定的 ReplicationController + +#### HTTP 请求 + +PUT /api/v1/namespaces/{namespace}/replicationcontrollers/{name} + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + ReplicationController 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">ReplicationController,必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- 
**fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">ReplicationController): OK + +201 (}}">ReplicationController): Created + +401: Unauthorized + + +### `update` 替换指定的 ReplicationController 的状态 + +#### HTTP 请求 + +PUT /api/v1/namespaces/{namespace}/replicationcontrollers/{name}/status + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + ReplicationController 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">ReplicationController,必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">ReplicationController): OK + +201 (}}">ReplicationController): Created + +401: Unauthorized + + +### `patch` 部分更新指定的 ReplicationController + +#### HTTP 请求 + +PATCH /api/v1/namespaces/{namespace}/replicationcontrollers/{name} + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + ReplicationController 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">Patch,必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **force** (**查询参数**): boolean + + }}">force + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">ReplicationController): OK + +201 (}}">ReplicationController): Created + +401: Unauthorized + + +### `patch` 部分更新指定的 ReplicationController 的状态 + +#### HTTP 请求 + +PATCH /api/v1/namespaces/{namespace}/replicationcontrollers/{name}/status + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + ReplicationController 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">Patch,必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **force** (**查询参数**): boolean + + }}">force + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">ReplicationController): OK + +201 (}}">ReplicationController): Created + +401: Unauthorized + + +### `delete` 删除 ReplicationController + +#### HTTP 请求 + +DELETE /api/v1/namespaces/{namespace}/replicationcontrollers/{name} + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + ReplicationController 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">DeleteOptions + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **gracePeriodSeconds** (**查询参数**): integer + + }}">gracePeriodSeconds + +- **pretty** (**查询参数**): string + + }}">pretty + +- **propagationPolicy** (**查询参数**): string + + }}">propagationPolicy + + +#### 响应 + +200 (}}">Status): OK + +202 (}}">Status): Accepted + +401: Unauthorized + + +### `deletecollection` 删除 ReplicationController 的集合 + +#### HTTP 请求 + +DELETE /api/v1/namespaces/{namespace}/replicationcontrollers + + +#### 参数 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">DeleteOptions + +- **continue** (**查询参数**): string + + }}">continue + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + +- **gracePeriodSeconds** (**查询参数**): integer + + }}">gracePeriodSeconds + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + +- **limit** (**查询参数**): integer + + }}">limit + +- **pretty** 
(**查询参数**): string + + }}">pretty + +- **propagationPolicy** (**查询参数**): string + + }}">propagationPolicy + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + +- **timeoutSeconds** (**查询参数**): integer + + }}">timeoutSeconds + + +#### 响应 + +200 (}}">Status): OK + +401: Unauthorized From 81859e44b4d766a945eae08666b14524cc3bb054 Mon Sep 17 00:00:00 2001 From: Michael Date: Sat, 9 Jul 2022 10:08:07 +0800 Subject: [PATCH 240/292] [zh-cn] resync workload-resources/replica-set-v1.md --- .../workload-resources/replica-set-v1.md | 979 ++++++++++++++++++ 1 file changed, 979 insertions(+) create mode 100644 content/zh-cn/docs/reference/kubernetes-api/workload-resources/replica-set-v1.md diff --git a/content/zh-cn/docs/reference/kubernetes-api/workload-resources/replica-set-v1.md b/content/zh-cn/docs/reference/kubernetes-api/workload-resources/replica-set-v1.md new file mode 100644 index 0000000000000..8a07eaac44150 --- /dev/null +++ b/content/zh-cn/docs/reference/kubernetes-api/workload-resources/replica-set-v1.md @@ -0,0 +1,979 @@ +--- +api_metadata: + apiVersion: "apps/v1" + import: "k8s.io/api/apps/v1" + kind: "ReplicaSet" +content_type: "api_reference" +description: "ReplicaSet 确保在任何给定的时刻都在运行指定数量的 Pod 副本。" +title: "ReplicaSet" +weight: 4 +--- + + +`apiVersion: apps/v1` + +`import "k8s.io/api/apps/v1"` + +## ReplicaSet {#ReplicaSet} + + +ReplicaSet 确保在任何给定的时刻都在运行指定数量的 Pod 副本。 + +
+ +- **apiVersion**: apps/v1 + +- **kind**: ReplicaSet + + +- **metadata** (}}">ObjectMeta) + + 如果 ReplicaSet 的标签为空,则这些标签默认为与 ReplicaSet 管理的 Pod 相同。 + 标准的对象元数据。更多信息: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata + +- **spec** (}}">ReplicaSetSpec) + + spec 定义 ReplicaSet 预期行为的规约。更多信息: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + + +- **status** (}}">ReplicaSetStatus) + + status 是最近观测到的 ReplicaSet 状态。此数据可能在某个时间窗之后过期。 + 该值由系统填充,只读。更多信息: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + +## ReplicaSetSpec {#ReplicaSetSpec} + + +ReplicaSetSpec 是 ReplicaSet 的规约。 + +
+ + +- **selector** (}}">LabelSelector),必需 + + selector 是针对 Pod 的标签查询,应与副本计数匹配。标签的主键和取值必须匹配, + 以便由这个 ReplicaSet 进行控制。它必须与 Pod 模板的标签匹配。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/labels/#label-selectors + +- **template** (}}">PodTemplateSpec) + + template 是描述 Pod 的一个对象,将在检测到副本不足时创建此对象。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/replicationcontroller#pod-template + + +- **replicas** (int32) + + replicas 是预期副本的数量。这是一个指针,用于辨别显式零和未指定的值。默认为 1。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/replicationcontroller/#what-is-a-replicationcontroller + +- **minReadySeconds** (int32) + + 新建的 Pod 在没有任何容器崩溃的情况下就绪并被系统视为可用的最短秒数。 + 默认为 0(Pod 就绪后即被视为可用)。 + +## ReplicaSetStatus {#ReplicaSetStatus} + + +ReplicaSetStatus 表示 ReplicaSet 的当前状态。 + +
+ + +- **replicas** (int32),必需 + + replicas 是最近观测到的副本数量。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/replicationcontroller/#what-is-a-replicationcontroller + +- **availableReplicas** (int32) + + 此副本集可用副本(至少 minReadySeconds 才能就绪)的数量。 + + +- **readyReplicas** (int32) + + readyReplicas 是此 ReplicaSet 在就绪状况下处理的目标 Pod 数量。 + +- **fullyLabeledReplicas** (int32) + + 标签与 ReplicaSet 的 Pod 模板标签匹配的 Pod 数量。 + + +- **conditions** ([]ReplicaSetCondition) + + **补丁策略:按照键 `type` 合并** + + 表示副本集当前状态的最新可用观测值。 + + + **ReplicaSetCondition 描述某个点的副本集状态。** + + + + - **conditions.status** (string),必需 + + 状况的状态,取值为 True、False 或 Unknown 之一。 + + - **conditions.type** (string),必需 + + 副本集状况的类型。 + + + + - **conditions.lastTransitionTime** (Time) + + 状况上次从一个状态转换为另一个状态的时间。 + + + **Time 是对 time.Time 的封装。Time 支持对 YAML 和 JSON 进行正确封包。 + 为 time 包的许多函数方法提供了封装器。** + + + + - **conditions.message** (string) + + 这是一条人类可读的消息,指示有关上次转换的详细信息。 + + - **conditions.reason** (string) + + 状况上次转换的原因。 + + +- **observedGeneration** (int64) + + observedGeneration 反映了最近观测到的 ReplicaSet 生成情况。 + +## ReplicaSetList {#ReplicaSetList} + + +ReplicaSetList 是多个 ReplicaSet 的集合。 + +
+ +- **apiVersion**: apps/v1 + +- **kind**: ReplicaSetList + + +- **metadata** (}}">ListMeta) + + 标准的列表元数据。更多信息: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + +- **items** ([]}}">ReplicaSet),必需 + + ReplicaSet 的列表。更多信息: + https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/replicationcontroller + + +## 操作 {#Operations} + +
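实际使用中,这些 HTTP 操作通常通过 `kubectl` 间接完成。下面给出几个与之对应的示意性命令(仅作演示:`my-rs` 为假设的 ReplicaSet 名称;若该 ReplicaSet 由 Deployment 管理,直接修改会被其属主调和回原状):

```shell
# 对应 `get` 操作:读取指定的 ReplicaSet
kubectl get replicaset my-rs --namespace default -o yaml

# 对应 `patch` 操作:以 merge patch 方式修改副本数
kubectl patch replicaset my-rs --namespace default --type merge -p '{"spec":{"replicas":3}}'

# 对应 `delete` 操作:删除指定的 ReplicaSet
kubectl delete replicaset my-rs --namespace default
```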
+ +### `get` 读取指定的 ReplicaSet + +#### HTTP 请求 + +GET /apis/apps/v1/namespaces/{namespace}/replicasets/{name} + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + ReplicaSet 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">ReplicaSet): OK + +401: Unauthorized + + +### `get` 读取指定的 ReplicaSet 的状态 + +#### HTTP 请求 + +GET /apis/apps/v1/namespaces/{namespace}/replicasets/{name}/status + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + ReplicaSet 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">ReplicaSet): OK + +401: Unauthorized + + +### `list` 列出或监视 ReplicaSet 类别的对象 + +#### HTTP 请求 + +GET /apis/apps/v1/namespaces/{namespace}/replicasets + + +#### 参数 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **allowWatchBookmarks** (**查询参数**): boolean + + }}">allowWatchBookmarks + +- **continue** (**查询参数**): string + + }}">continue + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + +- **limit** (**查询参数**): integer + + }}">limit + +- **pretty** (**查询参数**): string + + }}">pretty + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + +- **timeoutSeconds** (**查询参数**): integer + + }}">timeoutSeconds + +- **watch** (**查询参数**): boolean + + }}">watch + + +#### 响应 + +200 (}}">ReplicaSetList): OK + +401: Unauthorized + + +### `list` 列出或监视 ReplicaSet 类别的对象 + +#### HTTP 请求 + +GET /apis/apps/v1/replicasets + + +#### 参数 + +- **allowWatchBookmarks** (**查询参数**): boolean + + }}">allowWatchBookmarks + +- **continue** (**查询参数**): string + + }}">continue + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + +- **limit** (**查询参数**): integer + + }}">limit + +- **pretty** (**查询参数**): string + + }}">pretty + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + +- **timeoutSeconds** (**查询参数**): integer + + }}">timeoutSeconds + +- **watch** (**查询参数**): boolean + + }}">watch + + +#### 响应 + +200 (}}">ReplicaSetList): OK + +401: Unauthorized + + +### `create` 创建 ReplicaSet + +#### HTTP 请求 + +POST /apis/apps/v1/namespaces/{namespace}/replicasets + + +#### 参数 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">ReplicaSet,必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">ReplicaSet): OK + +201 (}}">ReplicaSet): Created + +202 (}}">ReplicaSet): Accepted + +401: Unauthorized + + +### `update` 替换指定的 ReplicaSet + +#### HTTP 请求 + +PUT /apis/apps/v1/namespaces/{namespace}/replicasets/{name} + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + ReplicaSet 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">ReplicaSet,必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">ReplicaSet): OK + +201 (}}">ReplicaSet): Created + +401: Unauthorized + + +### `update` 
替换指定的 ReplicaSet 的状态 + +#### HTTP 请求 + +PUT /apis/apps/v1/namespaces/{namespace}/replicasets/{name}/status + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + ReplicaSet 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">ReplicaSet,必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">ReplicaSet): OK + +201 (}}">ReplicaSet): Created + +401: Unauthorized + + +### `patch` 部分更新指定的 ReplicaSet + +#### HTTP 请求 + +PATCH /apis/apps/v1/namespaces/{namespace}/replicasets/{name} + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + ReplicaSet 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">Patch,必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **force** (**查询参数**): boolean + + }}">force + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">ReplicaSet): OK + +201 (}}">ReplicaSet): Created + +401: Unauthorized + + +### `patch` 部分更新指定的 ReplicaSet 的状态 + +#### HTTP 请求 + +PATCH /apis/apps/v1/namespaces/{namespace}/replicasets/{name}/status + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + ReplicaSet 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">Patch,必需 + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + +- **force** (**查询参数**): boolean + + }}">force + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">ReplicaSet): OK + +201 (}}">ReplicaSet): Created + +401: Unauthorized + + +### `delete` 删除 ReplicaSet + +#### HTTP 请求 + +DELETE /apis/apps/v1/namespaces/{namespace}/replicasets/{name} + + +#### 参数 + +- **name** (**路径参数**): string,必需 + + ReplicaSet 的名称 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">DeleteOptions + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **gracePeriodSeconds** (**查询参数**): integer + + }}">gracePeriodSeconds + +- **pretty** (**查询参数**): string + + }}">pretty + +- **propagationPolicy** (**查询参数**): string + + }}">propagationPolicy + + +#### 响应 + +200 (}}">Status): OK + +202 (}}">Status): Accepted + +401: Unauthorized + + +### `deletecollection` 删除 ReplicaSet 的集合 + +#### HTTP 请求 + +DELETE /apis/apps/v1/namespaces/{namespace}/replicasets + + +#### 参数 + +- **namespace** (**路径参数**): string,必需 + + }}">namespace + +- **body**: }}">DeleteOptions + +- **continue** (**查询参数**): string + + }}">continue + +- **dryRun** (**查询参数**): string + + }}">dryRun + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + +- **gracePeriodSeconds** (**查询参数**): integer + + }}">gracePeriodSeconds + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + +- **limit** (**查询参数**): integer + + }}">limit + +- **pretty** (**查询参数**): string + + }}">pretty + +- **propagationPolicy** (**查询参数**): string + + }}">propagationPolicy + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + +- **timeoutSeconds** (**查询参数**): integer + + }}">timeoutSeconds + + +#### 响应 + +200 (}}">Status): OK + +401: Unauthorized From da42fb67d1ad0a1dd4b05c1dd6f5391e975f7be9 Mon Sep 17 00:00:00 2001 
From: XuzhengChang Date: Wed, 27 Jul 2022 15:21:37 +0800 Subject: [PATCH 241/292] [zh-cn]Fix wrong trans of ingress-minikube --- .../docs/tasks/access-application-cluster/ingress-minikube.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/zh-cn/docs/tasks/access-application-cluster/ingress-minikube.md b/content/zh-cn/docs/tasks/access-application-cluster/ingress-minikube.md index 906b0f5c96c6d..e220d760f42af 100644 --- a/content/zh-cn/docs/tasks/access-application-cluster/ingress-minikube.md +++ b/content/zh-cn/docs/tasks/access-application-cluster/ingress-minikube.md @@ -152,7 +152,7 @@ storage-provisioner 1/1 Running 0 2m -3. 验证 Service 已经创建,并且可能从节点端口访问: +3. 验证 Service 已经创建,并且可以从节点端口访问: ```shell kubectl get service web From 157f8a34c6929446cd8acb501b3f3159d94a762f Mon Sep 17 00:00:00 2001 From: ydFu Date: Wed, 27 Jul 2022 16:13:43 +0800 Subject: [PATCH 242/292] [zh-cn] Resync tasks/configure-pod-container/assign-cpu-resource.md * Resyn with the en version. content\docs\tasks\configure-pod-container\assign-cpu-resource.md Signed-off-by: ydFu --- .../assign-cpu-resource.md | 37 ++++++++++--------- 1 file changed, 19 insertions(+), 18 deletions(-) diff --git a/content/zh-cn/docs/tasks/configure-pod-container/assign-cpu-resource.md b/content/zh-cn/docs/tasks/configure-pod-container/assign-cpu-resource.md index e574f993c28f5..e06146a158ca4 100644 --- a/content/zh-cn/docs/tasks/configure-pod-container/assign-cpu-resource.md +++ b/content/zh-cn/docs/tasks/configure-pod-container/assign-cpu-resource.md @@ -18,7 +18,7 @@ a container. Containers cannot use more CPU than the configured limit. Provided the system has CPU time free, a container is guaranteed to be allocated as much CPU as it requests. --> -本页面展示如何为容器设置 CPU *request(请求)* 和 CPU *limit(限制)*。 +本页面展示如何为容器设置 CPU **request(请求)** 和 CPU **limit(限制)**。 容器使用的 CPU 不能超过所配置的限制。 如果系统有空闲的 CPU 时间,则可以保证给容器分配其所请求数量的 CPU 资源。 @@ -43,7 +43,7 @@ following command to enable metrics-server: [metrics-server](https://github.com/kubernetes-sigs/metrics-server) 服务。如果你的集群中已经有正在运行的 metrics-server 服务,可以跳过这些步骤。 -如果你正在运行{{< glossary_tooltip term_id="minikube" >}},请运行以下命令启用 metrics-server: +如果你正在运行 {{< glossary_tooltip term_id="minikube" >}},请运行以下命令启用 metrics-server: ```shell minikube addons enable metrics-server @@ -53,7 +53,7 @@ minikube addons enable metrics-server To see whether metrics-server (or another provider of the resource metrics API, `metrics.k8s.io`) is running, type the following command: --> -查看 metrics-server(或者其他资源度量 API `metrics.k8s.io` 服务提供者)是否正在运行, +查看 metrics-server(或者其他资源指标 API `metrics.k8s.io` 服务提供者)是否正在运行, 请键入以下命令: ```shell @@ -79,7 +79,7 @@ v1beta1.metrics.k8s.io Create a {{< glossary_tooltip term_id="namespace" >}} so that the resources you create in this exercise are isolated from the rest of your cluster. --> -## 创建一个名字空间 +## 创建一个名字空间 {#create-a-namespace} 创建一个{{< glossary_tooltip text="名字空间" term_id="namespace" >}},以便将 本练习中创建的资源与集群的其余部分资源隔离。 @@ -104,7 +104,7 @@ The `-cpus "2"` argument tells the Container to attempt to use 2 CPUs. Create the Pod: --> -## 指定 CPU 请求和 CPU 限制 +## 指定 CPU 请求和 CPU 限制 {#specify-a-CPU-request-and-a-CPU-limit} 要为容器指定 CPU 请求,请在容器资源清单中包含 `resources: requests` 字段。 要指定 CPU 限制,请包含 `resources:limits`。 @@ -145,7 +145,7 @@ kubectl get pod cpu-demo --output=yaml --namespace=cpu-example The output shows that the one container in the Pod has a CPU request of 500 milliCPU and a CPU limit of 1 CPU. 
--> -输出显示 Pod 中的一个容器的 CPU 请求为 500 milli CPU,并且 CPU 限制为 1 个 CPU。 +输出显示 Pod 中的一个容器的 CPU 请求为 500 milliCPU,并且 CPU 限制为 1 个 CPU。 ```yaml resources: @@ -158,7 +158,7 @@ resources: -使用 `kubectl top` 命令来获取该 Pod 的度量值数据: +使用 `kubectl top` 命令来获取该 Pod 的指标: ```shell kubectl top pod cpu-demo --namespace=cpu-example @@ -207,7 +207,7 @@ The CPU resource is measured in *CPU* units. One CPU, in Kubernetes, is equivale --> ## CPU 单位 {#cpu-units} -CPU 资源以 *CPU* 单位度量。Kubernetes 中的一个 CPU 等同于: +CPU 资源以 **CPU** 单位度量。Kubernetes 中的一个 CPU 等同于: * 1 个 AWS vCPU * 1 个 GCP核心 @@ -256,7 +256,7 @@ capacity of any Node in your cluster. Create the Pod: --> -## 设置超过节点能力的 CPU 请求 +## 设置超过节点能力的 CPU 请求 {#specify-a-CPU-request-that-is-too-big-for-your-nodes} CPU 请求和限制与都与容器相关,但是我们可以考虑一下 Pod 具有对应的 CPU 请求和限制这样的场景。 Pod 对 CPU 用量的请求等于 Pod 中所有容器的请求数量之和。 @@ -341,7 +341,7 @@ Container is automatically assigned the default limit. Cluster administrators ca [LimitRange](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#limitrange-v1-core/) to specify a default value for the CPU limit. --> -## 如果不指定 CPU 限制 +## 如果不指定 CPU 限制 {#if-you-do-not-specify-a-cpu-limit} 如果你没有为容器指定 CPU 限制,则会发生以下情况之一: @@ -360,7 +360,7 @@ assigns a CPU request that matches the limit. Similarly, if a Container specifie but does not specify a memory request, Kubernetes automatically assigns a memory request that matches the limit. --> -## 如果你设置了 CPU 限制但未设置 CPU 请求 +## 如果你设置了 CPU 限制但未设置 CPU 请求 {#if-you-specify-a-CPU-limit-but-do-not-specify-a-CPU-request} 如果你为容器指定了 CPU 限制值但未为其设置 CPU 请求,Kubernetes 会自动为其 设置与 CPU 限制相同的 CPU 请求值。类似的,如果容器设置了内存限制值但未设置 @@ -377,13 +377,14 @@ scheduled. By having a CPU limit that is greater than the CPU request, you accom * The Pod can have bursts of activity where it makes use of CPU resources that happen to be available. * The amount of CPU resources a Pod can use during a burst is limited to some reasonable amount. 
--> -## CPU 请求和限制的初衷 +## CPU 请求和限制的初衷 {#motivation-for-CPU-requests-and-limits} 通过配置你的集群中运行的容器的 CPU 请求和限制,你可以有效利用集群上可用的 CPU 资源。 通过将 Pod CPU 请求保持在较低水平,可以使 Pod 更有机会被调度。 通过使 CPU 限制大于 CPU 请求,你可以完成两件事: * Pod 可能会有突发性的活动,它可以利用碰巧可用的 CPU 资源。 + * Pod 在突发负载期间可以使用的 CPU 资源数量仍被限制为合理的数量。 -## 清理 +## 清理 {#clean-up} -删除名称空间: +删除名字空间: ```shell kubectl delete namespace cpu-example @@ -410,9 +411,10 @@ kubectl delete namespace cpu-example * [Configure Quality of Service for Pods](/docs/tasks/configure-pod-container/quality-service-pod/) --> -### 针对应用开发者 +### 针对应用开发者 {#for-app-developers} * [将内存资源分配给容器和 Pod](/zh-cn/docs/tasks/configure-pod-container/assign-memory-resource/) + * [配置 Pod 服务质量](/zh-cn/docs/tasks/configure-pod-container/quality-service-pod/) -### 针对集群管理员 +### 针对集群管理员 {for-cluster-administrators} -* [配置名称空间的默认内存请求和限制](/zh-cn/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/) +* [配置名字空间的默认内存请求和限制](/zh-cn/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/) * [为名字空间配置默认 CPU 请求和限制](/zh-cn/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/) * [为名字空间配置最小和最大内存限制](/zh-cn/docs/tasks/administer-cluster//manage-resources/memory-constraint-namespace/) * [为名字空间配置最小和最大 CPU 约束](/zh-cn/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/) * [为名字空间配置内存和 CPU 配额](/zh-cn/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/) * [为名字空间配置 Pod 配额](/zh-cn/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/) * [配置 API 对象的配额](/zh-cn/docs/tasks/administer-cluster/quota-api-object/) - From 2aa7ec7e34e875d18d9b8b7afa32b67bd74614dc Mon Sep 17 00:00:00 2001 From: yy <1827641139@qq.com> Date: Wed, 6 Jul 2022 23:16:04 +0800 Subject: [PATCH 243/292] [zh-cn]Update content/zh-cn/docs/reference/kubernetes-api/workload-resources/stateful-set-v1.md --- .../workload-resources/stateful-set-v1.md | 1432 +++++++++++++++++ 1 file changed, 1432 insertions(+) create mode 100644 content/zh-cn/docs/reference/kubernetes-api/workload-resources/stateful-set-v1.md diff --git a/content/zh-cn/docs/reference/kubernetes-api/workload-resources/stateful-set-v1.md b/content/zh-cn/docs/reference/kubernetes-api/workload-resources/stateful-set-v1.md new file mode 100644 index 0000000000000..fd168657d8c74 --- /dev/null +++ b/content/zh-cn/docs/reference/kubernetes-api/workload-resources/stateful-set-v1.md @@ -0,0 +1,1432 @@ +--- +api_metadata: + apiVersion: "apps/v1" + import: "k8s.io/api/apps/v1" + kind: "StatefulSet" +content_type: "api_reference" +description: "StatefulSet 表示一组具有一致身份的 Pod" +title: "StatefulSet" +weight: 6 +auto_generated: true +--- + + + +`apiVersion: apps/v1` + +`import "k8s.io/api/apps/v1"` + +## StatefulSet {#StatefulSet} + +StatefulSet 表示一组具有一致身份的 Pod。身份定义为: + - 网络:一个稳定的 DNS 和主机名。 + - 存储:根据要求提供尽可能多的 VolumeClaims。 +StatefulSet 保证给定的网络身份将始终映射到相同的存储身份。 +
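为便于理解下述各个字段,这里给出一个最小的 StatefulSet 清单示意(仅作演示:名称 `web`、无头服务 `nginx`、镜像 `nginx` 以及 1Gi 的存储请求均为假设值;`serviceName` 所引用的 Service 必须事先存在):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                  # 假设的名称
spec:
  serviceName: "nginx"       # 假设的无头服务,负责网络身份,必须已存在
  replicas: 2
  selector:
    matchLabels:
      app: nginx             # 必须与下面 Pod 模板的标签匹配
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx         # 假设的镜像
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:      # 每个 Pod 都会获得一个名为 www-<pod 名> 的 PVC
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi       # 假设的存储请求
```

该示例体现了上文所述的两类身份:稳定的网络身份(由 `serviceName` 指向的无头服务提供)与稳定的存储身份(由 `volumeClaimTemplates` 为每个 Pod 生成的 PVC 提供)。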
+ +- **apiVersion**: apps/v1 + +- **kind**: StatefulSet + +- **metadata** (}}">ObjectMeta) + + + 标准的对象元数据。更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata。 + +- **spec** (}}">StatefulSetSpec) + + + spec 定义集合中 Pod 的预期身份。 + +- **status** (}}">StatefulSetStatus) + + + status 是 StatefulSet 中 Pod 的当前状态,此数据可能会在某个时间窗口内过时。 + +## StatefulSetSpec {#StatefulSetSpec} + + +StatefulSetSpec 是 StatefulSet 的规约。 + +
+ + +- **serviceName** (string), 必需 + + serviceName 是管理此 StatefulSet 服务的名称。 + 该服务必须在 StatefulSet 之前即已存在,并负责该集合的网络标识。 + Pod 会获得符合以下模式的 DNS/主机名: pod-specific-string.serviceName.default.svc.cluster.local。 + 其中 “pod-specific-string” 由 StatefulSet 控制器管理。 + + +- **selector** (}}">LabelSelector), 必需 + + selector 是对 Pod 的标签查询,查询结果应该匹配副本个数。 + 此选择算符必须与 Pod 模板中的 labels 匹配。 + 更多信息: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors + + +- **template** (}}">PodTemplateSpec), 必需 + + template 是用来描述 Pod 的对象,检测到副本不足时将创建所描述的 Pod。 + 经由 StatefulSet 创建的每个 Pod 都将满足这个模板,但与 StatefulSet 的其余 Pod 相比,每个 Pod 具有唯一的标识。 + + +- **replicas** (int32) + + replicas 是给定模板的所需的副本数。之所以称作副本,是因为它们是相同模板的实例, + 不过各个副本也具有一致的身份。如果未指定,则默认为 1。 + + +- **updateStrategy** (StatefulSetUpdateStrategy) + + updateStrategy 是一个 StatefulSetUpdateStrategy,表示当对 template 进行修订时,用何种策略更新 StatefulSet 中的 Pod 集合。 + + + + + + **StatefulSetUpdateStrategy 表示 StatefulSet 控制器将用于执行更新的策略。其中包括为指定策略执行更新所需的额外参数。** + + - **updateStrategy.type** (string) + + - **updateStrategy.rollingUpdate** (RollingUpdateStatefulSetStrategy) + + + + 当 type 为 RollingUpdate 时,使用 rollingUpdate 来传递参数。 + + + + + + **RollingUpdateStatefulSetStrategy 用于为 rollingUpdate 类型的更新传递参数。** + + - **updateStrategy.rollingUpdate.maxUnavailable** (IntOrString) + + + + 更新期间不可用的 Pod 个数上限。取值可以是绝对数量(例如:5)或所需 Pod 的百分比(例如:10%)。 + 绝对数是通过四舍五入的百分比计算得出的。不能为 0,默认为 1。 + 此字段为 Alpha 级别,仅被启用 MaxUnavailableStatefulSet 特性的服务器支持。 + 此字段适用于 0 到 replicas-1 范围内的所有 Pod。这意味着如果在 0 到 replicas-1 范围内有任何不可用的 Pod, + 这些 Pod 将被计入 maxUnavailable 中。 + + + + + + **IntOrString 是一种可以包含 int32 或字符串数值的类型。在 JSON 或 YAML 编组和解组时,** + **会生成或使用内部类型。例如,此类型允许你定义一个可以接受名称或数字的 JSON 字段。** + + - **updateStrategy.rollingUpdate.partition** (int32) + + + + partition 表示 StatefulSet 应该被分区进行更新时的序数。 + 在滚动更新期间,序数在 replicas-1 和 partition 之间的所有 Pod 都会被更新。 + 序数在 partition-1 和 0 之间的所有 Pod 保持不变。 + 这一属性有助于进行金丝雀部署。默认值为 0。 + +- **podManagementPolicy** (string) + + + + podManagementPolicy 控制在初始规模扩展期间、替换节点上的 Pod 或缩减集合规模时如何创建 Pod。 + 默认策略是 “OrderedReady”,各个 Pod 按升序创建的(pod-0,然后是pod-1 等), + 控制器将等到每个 Pod 都准备就绪后再继续。缩小集合规模时,Pod 会以相反的顺序移除。 + 另一种策略是 “Parallel”,意味着并行创建 Pod 以达到预期的规模而无需等待,并且在缩小规模时将立即删除所有 Pod。 + +- **revisionHistoryLimit** (int32) + + + + revisionHistoryLimit 是在 StatefulSet 的修订历史中维护的修订个数上限。 + 修订历史中包含并非由当前所应用的 StatefulSetSpec 版本未表示的所有修订版本。默认值为 10。 + +- **volumeClaimTemplates** ([]}}">PersistentVolumeClaim) + + + + volumeClaimTemplates 是允许 Pod 引用的申领列表。 + StatefulSet controller 负责以维持 Pod 身份不变的方式将网络身份映射到申领之上。 + 此列表中的每个申领至少必须在模板的某个容器中存在匹配的(按 name 匹配)volumeMount。 + 此列表中的申领优先于模板中具有相同名称的所有卷。 + +- **minReadySeconds** (int32) + + + + 新创建的 Pod 应准备就绪(其任何容器都未崩溃)的最小秒数,以使其被视为可用。 + 默认为 0(Pod 准备就绪后将被视为可用)。 + 这是一个 Alpha 字段,需要启用 StatefulSetMinReadySeconds 特性门控。 + +- **persistentVolumeClaimRetentionPolicy** (StatefulSetPersistentVolumeClaimRetentionPolicy) + + + + persistentVolumeClaimRetentionPolicy 描述从 VolumeClaimTemplates 创建的持久卷申领的生命周期。 + 默认情况下,所有持久卷申领都根据需要创建并被保留到手动删除。 + 此策略允许更改申领的生命周期,例如在 StatefulSet 被删除或其中 Pod 集合被缩容时删除持久卷申领。 + 此属性需要启用 StatefulSetAutoDeletePVC 特性门控。特性处于 Alpha 阶段。可选。 + + + + + + **StatefulSetPersistentVolumeClaimRetentionPolicy 描述了用于从 StatefulSet VolumeClaimTemplate 创建的 PVC 的策略** + + - **persistentVolumeClaimRetentionPolicy.whenDeleted** (string) + + + + whenDeleted 指定当 StatefulSet 被删除时,基于 StatefulSet VolumeClaimTemplates 所创建的 PVC 会发生什么。 + 默认策略 `Retain` 使 PVC 不受 StatefulSet 被删除的影响。`Delete` 策略会导致这些 PVC 也被删除。 + + - **persistentVolumeClaimRetentionPolicy.whenScaled** (string) + + + + whenScaled 指定当 StatefulSet 缩容时,基于 StatefulSet 
volumeClaimTemplates 创建的 PVC 会发生什么。 + 默认策略 `Retain` 使 PVC 不受缩容影响。 `Delete` 策略会导致超出副本个数的所有的多余 Pod 所关联的 PVC 被删除。 + +## StatefulSetStatus {#StatefulSetStatus} + + +StatefulSetStatus 表示 StatefulSet 的当前状态。 + +
+ + +- **replicas** (int32), 必需 + + replicas 是 StatefulSet 控制器创建的 Pod 个数。 + +- **readyReplicas** (int32) + + + readyReplicas 是为此 StatefulSet 创建的、状况为 Ready 的 Pod 个数。 + +- **currentReplicas** (int32) + + + currentReplicas 是 StatefulSet 控制器根据 currentReplicas 所指的 StatefulSet 版本创建的 Pod 个数。 + +- **updatedReplicas** (int32) + + + updatedReplicas 是 StatefulSet 控制器根据 updateRevision 所指的 StatefulSet 版本创建的 Pod 个数。 + +- **availableReplicas** (int32) + + + 此 StatefulSet 所对应的可用 Pod 总数(就绪时长至少为 minReadySeconds)。 + 这是一个 Beta 字段,由 StatefulSetMinReadySeconds 特性门控启用/禁用。 + +- **collisionCount** (int32) + + + collisionCount 是 StatefulSet 的哈希冲突计数。 + StatefulSet controller 在需要为最新的 controllerRevision 创建名称时使用此字段作为避免冲突的机制。 + +- **conditions** ([]StatefulSetCondition) + + + **补丁策略:根据 `type` 键执行合并操作** + + + 表示 StatefulSet 当前状态的最新可用观察结果。 + + + + **StatefulSetCondition 描述了 StatefulSet 在某个点的状态。** + + + + - **conditions.status** (string), 必需 + + 状况的状态为 True、False、Unknown 之一。 + + + + - **conditions.type** (string), 必需 + + StatefulSet 状况的类型。 + + - **conditions.lastTransitionTime** (Time) + + + + 最近一次状况从一种状态转换到另一种状态的时间。 + + + + + **Time 是 time.Time 的包装器,它支持对 YAML 和 JSON 的正确编组。** + **time 包的许多工厂方法提供了包装器。** + + - **conditions.message** (string) + + + + 一条人类可读的消息,指示有关转换的详细信息。 + + - **conditions.reason** (string) + + + + 状况最后一次转换的原因。 + +- **currentRevision** (string) + + + + currentRevision,如果不为空,表示用于在序列 [0,currentReplicas) 之间生成 Pod 的 StatefulSet 的版本。 + +- **updateRevision** (string) + + + + updateRevision,如果不为空,表示用于在序列 [replicas-updatedReplicas,replicas) 之间生成 Pod 的 StatefulSet 的版本。 + +- **observedGeneration** (int64) + + + + observedGeneration 是 StatefulSet 的最新一代。它对应于 StatefulSet 的代数,由 API 服务器在变更时更新。 + +## StatefulSetList {#StatefulSetList} + + + +StatefulSetList 是 StatefulSet 的集合。 + +
+ +- **apiVersion**: apps/v1 + +- **kind**: StatefulSetList + +- **metadata** (}}">ListMeta) + + + + 标准的对象元数据。更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata + + + +- **items** ([]}}">StatefulSet), 必需 + + items 是 StatefulSet 的列表。 + + +## 操作 {#operations} + +
+ + +### `get` 读取指定的 StatefulSet +#### HTTP 请求 + +GET /apis/apps/v1/namespaces/{namespace}/statefulsets/{name} + + +#### 参数 + + +- **name** (**路径参数**): string, 必需 + + StatefulSet 的名称。 + + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">StatefulSet): OK + +401: Unauthorized + + +### `get` 读取指定 StatefulSet 的状态 +#### HTTP 请求 + +GET /apis/apps/v1/namespaces/{namespace}/statefulsets/{name}/status + + +#### 参数 + + +- **name** (**路径参数**): string, 必需 + + StatefulSet 的名称。 + + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">StatefulSet): OK + +401: Unauthorized + + +### `list` 列出或监视 StatefulSet 类型的对象 +#### HTTP 请求 + +GET /apis/apps/v1/namespaces/{namespace}/statefulsets + + +#### 参数 + + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + + +- **allowWatchBookmarks** (**查询参数**): boolean + + }}">allowWatchBookmarks + + +- **continue** (**查询参数**): string + + }}">continue + + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + + +- **limit** (**查询参数**): integer + + }}">limit + + +- **pretty** (**查询参数**): string + + }}">pretty + + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + + +- **timeoutSeconds** (**查询参数**): integer + + }}">timeoutSeconds + + +- **watch** (**查询参数**): boolean + + }}">watch + + +#### 响应 + +200 (}}">StatefulSetList): OK + +401: Unauthorized + + +### `list` 列出或监视 StatefulSet 类型的对象 +#### HTTP 请求 + +GET /apis/apps/v1/statefulsets + + +#### 参数 + + +- **allowWatchBookmarks** (**查询参数**): boolean + + }}">allowWatchBookmarks + + +- **continue** (**查询参数**): string + + }}">continue + + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + + +- **limit** (**查询参数**): integer + + }}">limit + + +- **pretty** (**查询参数**): string + + }}">pretty + + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + + +- **resourceVersionMatch** (**查询参数**): string + + }}">resourceVersionMatch + + +- **timeoutSeconds** (**查询参数**): integer + + }}">timeoutSeconds + + +- **watch** (**查询参数**): boolean + + }}">watch + + +#### 响应 + +200 (}}">StatefulSetList): OK + +401: Unauthorized + + +### `create` 创建一个 StatefulSet +#### HTTP 请求 + +POST /apis/apps/v1/namespaces/{namespace}/statefulsets + + +#### 参数 + + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + + +- **body**: }}">StatefulSet, 必需 + + +- **dryRun** (**查询参数**): string + + }}">dryRun + + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + + + - **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">StatefulSet): OK + +201 (}}">StatefulSet): Created + +202 (}}">StatefulSet): Accepted + +401: Unauthorized + + +### `update` 替换指定的 StatefulSet +#### HTTP 请求 + +PUT /apis/apps/v1/namespaces/{namespace}/statefulsets/{name} + + +#### 参数 + + +- **name** (**路径参数**): string, 必需 + + StatefulSet 的名称 。 + + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + + +- **body**: }}">StatefulSet, 必需 + + +- **dryRun** (**查询参数**): string + + }}">dryRun + + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + + +- **pretty** (**查询参数**): string + + }}">pretty + + 
+#### 响应 + +200 (}}">StatefulSet): OK + +201 (}}">StatefulSet): Created + +401: Unauthorized + + +### `update` 替换指定 StatefulSet 的状态 +#### HTTP 请求 + +PUT /apis/apps/v1/namespaces/{namespace}/statefulsets/{name}/status + + +#### 参数 + + +- **name** (**路径参数**): string, 必需 + + StatefulSet 的名称。 + + +- **namespace** (**路径参数**): string, required + + }}">namespace + + +- **body**: }}">StatefulSet, 必需 + + +- **dryRun** (**查询参数**): string + + }}">dryRun + + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">StatefulSet): OK + +201 (}}">StatefulSet): Created + +401: Unauthorized + + +### `patch` 部分更新指定的 StatefulSet +#### HTTP 请求 + +PATCH /apis/apps/v1/namespaces/{namespace}/statefulsets/{name} + + +#### 参数 + + +- **name** (**路径参数**): string, 必需 + + StatefulSet 的名称。 + + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + + +- **body**: }}">Patch, 必需 + + +- **dryRun** (**查询参数**): string + + }}">dryRun + + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + + +- **force** (**查询参数**): boolean + + }}">force + + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">StatefulSet): OK + +201 (}}">StatefulSet): Created + +401: Unauthorized + + +### `patch` 部分更新指定 StatefulSet 的状态 +#### HTTP 请求 + +PATCH /apis/apps/v1/namespaces/{namespace}/statefulsets/{name}/status + + +#### 参数 + + +- **name** (**路径参数**): string, 必需 + + StatefulSet 的名称。 + + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + + +- **body**: }}">Patch, 必需 + + +- **dryRun** (**查询参数**): string + + }}">dryRun + + +- **fieldManager** (**查询参数**): string + + }}">fieldManager + + +- **fieldValidation** (**查询参数**): string + + }}">fieldValidation + + +- **force** (**查询参数**): boolean + + }}">force + + +- **pretty** (**查询参数**): string + + }}">pretty + + +#### 响应 + +200 (}}">StatefulSet): OK + +201 (}}">StatefulSet): Created + +401: Unauthorized + + +### `delete` 删除一个 StatefulSet +#### HTTP 请求 + +DELETE /apis/apps/v1/namespaces/{namespace}/statefulsets/{name} + + +#### 参数 + + +- **name** (**路径参数**): string, 必需 + + StatefulSet 的名称。 + + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + +- **body**: }}">DeleteOptions + + +- **dryRun** (**查询参数**): string + + }}">dryRun + + +- **gracePeriodSeconds** (**查询参数**): integer + + }}">gracePeriodSeconds + + +- **pretty** (**查询参数**): string + + }}">pretty + + +- **propagationPolicy** (**查询参数**): string + + }}">propagationPolicy + + +#### 响应 + +200 (}}">Status): OK + +202 (}}">Status): Accepted + +401: Unauthorized + + +### `deletecollection` 删除 StatefulSet 的集合 +#### HTTP 请求 + +DELETE /apis/apps/v1/namespaces/{namespace}/statefulsets + + +#### 参数 + + +- **namespace** (**路径参数**): string, 必需 + + }}">namespace + +- **body**: }}">DeleteOptions + + +- **continue** (**查询参数**): string + + }}">continue + + +- **dryRun** (**查询参数**): string + + }}">dryRun + + +- **fieldSelector** (**查询参数**): string + + }}">fieldSelector + + +- **gracePeriodSeconds** (**查询参数**): integer + + }}">gracePeriodSeconds + + +- **labelSelector** (**查询参数**): string + + }}">labelSelector + + +- **limit** (**查询参数**): integer + + }}">limit + + +- **pretty** (**查询参数**): string + + }}">pretty + + +- **propagationPolicy** (**查询参数**): string + + }}">propagationPolicy + + +- **resourceVersion** (**查询参数**): string + + }}">resourceVersion + + +- **resourceVersionMatch** (**查询参数**): string + + 
}}">resourceVersionMatch + + +- **timeoutSeconds** (**查询参数**): integer + + }}">timeoutSeconds + + +#### 响应 + +200 (}}">Status): OK + +401: Unauthorized + From c5f98c66beb81cfb4c5973ae374235ee18497cd2 Mon Sep 17 00:00:00 2001 From: "yanrong.shi" Date: Wed, 27 Jul 2022 00:55:54 +0800 Subject: [PATCH 244/292] update --- .../configure-persistent-volume-storage.md | 78 ++++++++++++------- 1 file changed, 49 insertions(+), 29 deletions(-) diff --git a/content/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md b/content/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md index a3ba703b09443..5f58596b59f0b 100644 --- a/content/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md +++ b/content/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md @@ -13,7 +13,7 @@ weight: 60 -本文介绍如何配置 Pod 使用 +本文将向你介绍如何配置 Pod 使用 {{< glossary_tooltip text="PersistentVolumeClaim" term_id="persistent-volume-claim" >}} 作为存储。 以下是该过程的总结: @@ -42,7 +42,8 @@ PersistentVolume. ## {{% heading "prerequisites" %}} -* 你需要一个包含单个节点的 Kubernetes 集群,并且必须配置 kubectl 命令行工具以便与集群交互。 +* 你需要一个包含单个节点的 Kubernetes 集群,并且必须配置 + {{< glossary_tooltip text="kubectl" term_id="kubectl" >}} 命令行工具以便与集群交互。 如果还没有单节点集群,可以使用 [Minikube](https://minikube.sigs.k8s.io/docs/) 创建一个。 . @@ -62,19 +64,19 @@ do not already have a single-node cluster, you can create one by using ## 在你的节点上创建一个 index.html 文件 -打开集群中节点的一个 Shell。 +打开集群中的某个节点的 Shell。 如何打开 Shell 取决于集群的设置。 例如,如果你正在使用 Minikube,那么可以通过输入 `minikube ssh` 来打开节点的 Shell。 -在 Shell 中,创建一个 `/mnt/data` 目录: +在该节点的 Shell 中,创建一个 `/mnt/data` 目录: -``` +```shell # 这里再次假定你的节点使用 "sudo" 来以超级用户角色执行命令 sudo sh -c "echo 'Hello from Kubernetes storage' > /mnt/data/index.html" ``` @@ -139,7 +141,7 @@ PersistentVolume uses a file or directory on the Node to emulate network-attache --> ## 创建 PersistentVolume -在本练习中,你将创建一个 *hostPath* 类型的 PersistentVolume。 +在本练习中,你将创建一个 **hostPath** 类型的 PersistentVolume。 Kubernetes 支持用于在单节点集群上开发和测试的 hostPath 类型的 PersistentVolume。 hostPath 类型的 PersistentVolume 使用节点上的文件或目录来模拟网络附加存储。 @@ -149,18 +151,36 @@ would provision a network resource like a Google Compute Engine persistent disk, an NFS share, or an Amazon Elastic Block Store volume. Cluster administrators can also use [StorageClasses](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#storageclass-v1-storage) to set up -[dynamic provisioning](https://kubernetes.io/blog/2016/10/dynamic-provisioning-and-storage-in-kubernetes). +[dynamic provisioning](/blog/2016/10/dynamic-provisioning-and-storage-in-kubernetes). 
Here is the configuration file for the hostPath PersistentVolume: --> 在生产集群中,你不会使用 hostPath。 集群管理员会提供网络存储资源,比如 Google Compute Engine 持久盘卷、NFS 共享卷或 Amazon Elastic Block Store 卷。 -集群管理员还可以使用 [StorageClasses](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#storageclass-v1-storage) 来设置[动态提供存储](https://kubernetes.io/blog/2016/10/dynamic-provisioning-and-storage-in-kubernetes)。 +集群管理员还可以使用 +[StorageClasses](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#storageclass-v1-storage) +来设置[动态提供存储](/blog/2016/10/dynamic-provisioning-and-storage-in-kubernetes)。 下面是 hostPath PersistentVolume 的配置文件: {{< codenew file="pods/storage/pv-volume.yaml" >}} + +配置文件指定卷位于集群节点上的 `/mnt/data` 路径。 +配置还指定了卷的容量大小为 10 GB, +访问模式为 `ReadWriteOnce`, +这意味着该卷可以被单个节点以读写方式安装。 +配置文件还在 PersistentVolume 中定义了 +[StorageClass 的名称](/zh-cn/docs/concepts/storage/persistent-volumes/#class) +为 `manual`。它将用于将 PersistentVolumeClaim 的请求绑定到此 PersistentVolume。 + @@ -216,7 +236,7 @@ Create the PersistentVolumeClaim: 创建 PersistentVolumeClaim: ```shell -kubectl create -f https://k8s.io/examples/pods/storage/pv-claim.yaml +kubectl apply -f https://k8s.io/examples/pods/storage/pv-claim.yaml ``` 查看 PersistentVolumeClaim: -``` +```shell kubectl get pvc task-pv-claim ``` @@ -299,7 +319,7 @@ kubectl apply -f https://k8s.io/examples/pods/storage/pv-pod.yaml ``` 检查 Pod 中的容器是否运行正常: @@ -308,7 +328,7 @@ kubectl get pod task-pv-pod ``` 打开一个 Shell 访问 Pod 中的容器: @@ -326,11 +346,11 @@ hostPath volume: # Be sure to run these 3 commands inside the root shell that comes from # running "kubectl exec" in the previous step --> -``` +```shell # 一定要在上一步 "kubectl exec" 所返回的 Shell 中执行下面三个命令 -root@task-pv-pod:/# apt-get update -root@task-pv-pod:/# apt-get install curl -root@task-pv-pod:/# curl localhost +apt update +apt install curl +curl http://localhost/ ``` ```shell # 这里假定你使用 "sudo" 来以超级用户的角色执行命令 @@ -390,7 +413,6 @@ You can now close the shell to your Node. - ## 在两个地方挂载相同的 persistentVolume {{< codenew file="pods/storage/pv-duplicate.yaml" >}} @@ -427,8 +449,8 @@ GID 不匹配或缺失将会导致无权访问错误。 这样 GID 就能自动添加到使用 PersistentVolume 的任何 Pod 中。 使用 `pv.beta.kubernetes.io/gid` 注解的方法如下所示: - ```yaml +apiVersion: v1 kind: PersistentVolume apiVersion: v1 metadata: @@ -439,10 +461,10 @@ metadata: 当 Pod 使用带有 GID 注解的 PersistentVolume 时,注解的 GID 会被应用于 Pod 中的所有容器, 应用的方法与 Pod 的安全上下文中指定的 GID 相同。 @@ -476,5 +498,3 @@ PersistentVolume are not present on the Pod resource itself. 
* [PersistentVolumeSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumespec-v1-core) * [PersistentVolumeClaim](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaim-v1-core) * [PersistentVolumeClaimSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaimspec-v1-core) - - From bb0d7afc64e8bbf3f5cacd7e92e30ed3bce64d14 Mon Sep 17 00:00:00 2001 From: "yanrong.shi" Date: Wed, 27 Jul 2022 02:57:57 +0800 Subject: [PATCH 245/292] Update distribute-credentials-secure.md --- .../distribute-credentials-secure.md | 55 ++++++++++++------- 1 file changed, 34 insertions(+), 21 deletions(-) diff --git a/content/zh-cn/docs/tasks/inject-data-application/distribute-credentials-secure.md b/content/zh-cn/docs/tasks/inject-data-application/distribute-credentials-secure.md index 5ede095df1049..0a07cf1094b05 100644 --- a/content/zh-cn/docs/tasks/inject-data-application/distribute-credentials-secure.md +++ b/content/zh-cn/docs/tasks/inject-data-application/distribute-credentials-secure.md @@ -12,8 +12,10 @@ encryption keys, into Pods. --> 本文展示如何安全地将敏感数据(如密码和加密密钥)注入到 Pods 中。 + ## {{% heading "prerequisites" %}} + {{< include "task-tutorial-prereqs.md" >}} @@ -69,8 +71,10 @@ username and password: kubectl apply -f https://k8s.io/examples/pods/inject/secret.yaml ``` -1. - 查看 Secret 相关信息: + +2. 查看 Secret 相关信息: ```shell kubectl get secret test-secret @@ -79,12 +83,12 @@ username and password: 输出: - ```shell + ``` NAME TYPE DATA AGE test-secret Opaque 2 1m ``` -1. +1. 查看 Secret 相关的更多详细信息: ```shell @@ -94,7 +98,7 @@ username and password: 输出: - ```shell + ``` Name: test-secret Namespace: default Labels: @@ -105,7 +109,7 @@ username and password: Data ==== password: 13 bytes - username: 7 bytes + username: 7 bytes ``` @@ -155,9 +160,9 @@ Here is a configuration file you can use to create a Pod: kubectl get pod secret-test-pod ``` + 输出: - - ```shell + ``` NAME READY STATUS RESTARTS AGE secret-test-pod 1/1 Running 0 42m ``` @@ -166,7 +171,7 @@ Here is a configuration file you can use to create a Pod: 获取一个 shell 进入 Pod 中运行的容器: ```shell - kubectl exec -it secret-test-pod -- /bin/bash + kubectl exec -i -t secret-test-pod -- /bin/bash ``` 1. 在 Shell 中,显示 `username` 和 `password` 文件的内容: - ```shell # 在容器中 Shell 运行下面命令 - echo "$(cat /etc/secret-volume/username)" - echo "$(cat /etc/secret-volume/password)" + echo "$( cat /etc/secret-volume/username )" + echo "$( cat /etc/secret-volume/password )" ``` 输出为用户名和密码: - ```shell + ``` my-app 39528$vdg7Jb ``` @@ -256,11 +261,14 @@ Here is a configuration file you can use to create a Pod: kubectl exec -i -t env-single-secret -- /bin/sh -c 'echo $SECRET_USERNAME' ``` + 输出为: - ``` backend-admin ``` + @@ -300,13 +308,16 @@ Here is a configuration file you can use to create a Pod: ```shell kubectl exec -i -t envvars-multiple-secrets -- /bin/sh -c 'env | grep _USERNAME' ``` - + 输出: ``` DB_USERNAME=db-admin BACKEND_USERNAME=backend-admin ``` + @@ -353,7 +364,10 @@ This functionality is available in Kubernetes v1.6 and later. ```shell kubectl exec -i -t envfrom-secret -- /bin/sh -c 'echo "username: $username\npassword: $password\n"' ``` - + + 输出为: ``` @@ -364,10 +378,9 @@ This functionality is available in Kubernetes v1.6 and later. 
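<!--
As an illustrative sketch, the manifest below shows a minimal Pod that loads
every key of the `test-secret` Secret as environment variables through `envFrom`.
The Pod name `envfrom-sketch-pod` and the busybox image are assumptions used
only for this example.
-->
作为一个补充示意,下面的清单展示了一个最小化的 Pod,
它通过 `envFrom` 将 `test-secret` 中的全部键值加载为环境变量;
其中 Pod 名称 `envfrom-sketch-pod` 和 busybox 镜像仅是本示例所做的假设。

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: envfrom-sketch-pod
spec:
  restartPolicy: Never
  containers:
    - name: test-container
      image: busybox
      # 打印由 Secret 注入的环境变量后退出
      command: ["sh", "-c", "echo username=$username; echo password=$password"]
      envFrom:
        # test-secret 中的每个键都会成为一个同名的环境变量
        - secretRef:
            name: test-secret
```

创建该 Pod 之后,可以通过 `kubectl logs envfrom-sketch-pod` 查看注入的结果。
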
### 参考 -* [Secret](/docs/api-reference/{{< param "version" >}}/#secret-v1-core) -* [Volume](/docs/api-reference/{{< param "version" >}}/#volume-v1-core) -* [Pod](/docs/api-reference/{{< param "version" >}}/#pod-v1-core) - +* [Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core) +* [Volume](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volume-v1-core) +* [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core) ## {{% heading "whatsnext" %}} From a0926ca18caa1294c130d8b708aff1e6efd88a9f Mon Sep 17 00:00:00 2001 From: Jinny Park Date: Tue, 26 Jul 2022 21:44:16 +0900 Subject: [PATCH 246/292] Add .hugo_build.lock to .gitignore --- .gitignore | 1 + 1 file changed, 1 insertion(+) diff --git a/.gitignore b/.gitignore index 6a629010d0423..a1bead7d2556d 100644 --- a/.gitignore +++ b/.gitignore @@ -29,6 +29,7 @@ nohup.out # Hugo output public/ resources/ +.hugo_build.lock # Netlify Functions build output package-lock.json From 35410def2c13ccd9a58e68436e3857a07f02162b Mon Sep 17 00:00:00 2001 From: Sean Wei Date: Wed, 27 Jul 2022 19:32:00 +0800 Subject: [PATCH 247/292] [zh-cn] Update kubelet-authn-authz.md --- .../access-authn-authz/kubelet-authn-authz.md | 80 +++++++++---------- 1 file changed, 40 insertions(+), 40 deletions(-) diff --git a/content/zh-cn/docs/reference/access-authn-authz/kubelet-authn-authz.md b/content/zh-cn/docs/reference/access-authn-authz/kubelet-authn-authz.md index 4cd4dfb7c2f5c..625bb0bebd6f2 100644 --- a/content/zh-cn/docs/reference/access-authn-authz/kubelet-authn-authz.md +++ b/content/zh-cn/docs/reference/access-authn-authz/kubelet-authn-authz.md @@ -1,18 +1,18 @@ --- title: Kubelet 认证/鉴权 --- - - -## 概述 +## 概述 {#overview} - @@ -20,17 +20,17 @@ kubelet 的 HTTPS 端点公开了 API, 这些 API 可以访问敏感度不同的数据, 并允许你在节点上和容器内以不同级别的权限执行操作。 - 本文档介绍了如何对 kubelet 的 HTTPS 端点的访问进行认证和鉴权。 - -## Kubelet 身份认证 +## Kubelet 身份认证 {#kubelet-authentication} - 要禁用匿名访问并向未经身份认证的请求发送 `401 Unauthorized` 响应,请执行以下操作: - * 带 `--anonymous-auth=false` 标志启动 kubelet - 要对 kubelet 的 HTTPS 端点启用 X509 客户端证书认证: - * 带 `--client-ca-file` 标志启动 kubelet,提供一个 CA 证书包以供验证客户端证书 -* 带 `--kubelet-client-certificate` 和 `--kubelet-client-key` 标志启动 apiserver +* 带 `--kubelet-client-certificate` 和 `--kubelet-client-key` 标志启动 API 服务器 * 有关更多详细信息,请参见 - [apiserver 身份验证文档](/zh/docs/reference/access-authn-authz/authentication/#x509-client-certs) + [API 服务器身份验证文档](/zh-cn/docs/reference/access-authn-authz/authentication/#x509-client-certs) - 要启用 API 持有者令牌(包括服务帐户令牌)以对 kubelet 的 HTTPS 端点进行身份验证,请执行以下操作: - * 确保在 API 服务器中启用了 `authentication.k8s.io/v1beta1` API 组 * 带 `--authentication-token-webhook` 和 `--kubeconfig` 标志启动 kubelet * kubelet 调用已配置的 API 服务器上的 `TokenReview` API,以根据持有者令牌确定用户信息 - -## Kubelet 鉴权 +## Kubelet 鉴权 {#kubelet-authorization} - -任何成功通过身份验证的请求(包括匿名请求)之后都会被鉴权。 +任何成功通过身份验证的请求(包括匿名请求)之后都会被鉴权。 默认的鉴权模式为 `AlwaysAllow`,它允许所有请求。 - 细分对 kubelet API 的访问权限可能有多种原因: - 要细分对 kubelet API 的访问权限,请将鉴权委派给 API 服务器: - -kubelet 使用与 apiserver 相同的 -[请求属性](/zh/docs/reference/access-authn-authz/authorization/#review-your-request-attributes) +kubelet 使用与 API 服务器相同的 +[请求属性](/zh-cn/docs/reference/access-authn-authz/authorization/#review-your-request-attributes) 方法对 API 请求执行鉴权。 - 请求的动词根据传入请求的 HTTP 动词确定: - HTTP 动词 | 请求动词 @@ -140,13 +140,13 @@ PUT | update PATCH | patch DELETE | delete - 资源和子资源是根据传入请求的路径确定的: Kubelet API | 资源 | 子资源 -------------|----------|------------ @@ -154,20 +154,20 @@ Kubelet API | 资源 | 子资源 /metrics/\* | nodes | metrics /logs/\* | nodes | log /spec/\* | nodes | spec 
-*其它所有* | nodes | proxy +**其它所有** | nodes | proxy - 名字空间和 API 组属性始终是空字符串, 资源名称始终是 kubelet 的 `Node` API 对象的名称。 - -在此模式下运行时,请确保传递给 apiserver 的由 `--kubelet-client-certificate` 和 +在此模式下运行时,请确保传递给 API 服务器的由 `--kubelet-client-certificate` 和 `--kubelet-client-key` 标志标识的用户具有以下属性的鉴权: * verb=\*, resource=nodes, subresource=proxy From 30eb2cc0cfabc19db9640559eceaa6d39f52dd60 Mon Sep 17 00:00:00 2001 From: Paszymaja <36695377+Paszymaja@users.noreply.github.com> Date: Wed, 27 Jul 2022 14:12:15 +0200 Subject: [PATCH 248/292] Update content/en/docs/concepts/security/rbac-good-practices.md Co-authored-by: divya-mohan0209 --- content/en/docs/concepts/security/rbac-good-practices.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/content/en/docs/concepts/security/rbac-good-practices.md b/content/en/docs/concepts/security/rbac-good-practices.md index 7699a0da45f5a..fe858bba72b42 100644 --- a/content/en/docs/concepts/security/rbac-good-practices.md +++ b/content/en/docs/concepts/security/rbac-good-practices.md @@ -135,7 +135,8 @@ granting rights to this resource. ### Escalate verb -Generally, the RBAC system prevents users from creating clusterroles with more rights than the user possesses. The exception to this is the `escalate` verb. As noted in the [RBAC documentation](/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update), +Generally, the RBAC system prevents users from creating clusterroles with more rights than the user possesses. +The exception to this is the `escalate` verb. As noted in the [RBAC documentation](/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update), users with this right can effectively escalate their privileges. ### Bind verb From 7d51c607a85247b17a809df5d9d4aaf888c7549b Mon Sep 17 00:00:00 2001 From: "yanrong.shi" Date: Sat, 23 Jul 2022 16:08:13 +0800 Subject: [PATCH 249/292] Update _index.md --- .../setup/production-environment/_index.md | 20 +++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/content/zh-cn/docs/setup/production-environment/_index.md b/content/zh-cn/docs/setup/production-environment/_index.md index 0bcee800d786a..1210b0d816b57 100644 --- a/content/zh-cn/docs/setup/production-environment/_index.md +++ b/content/zh-cn/docs/setup/production-environment/_index.md @@ -101,12 +101,12 @@ by managing [policies](/docs/concepts/policy/) and Before building a Kubernetes production environment on your own, consider handing off some or all of this job to [Turnkey Cloud Solutions](/docs/setup/production-environment/turnkey-solutions/) -providers or other [Kubernetes Partners](https://kubernetes.io/partners/). +providers or other [Kubernetes Partners](/partners/). Options include: --> -在自行构造 Kubernetes 生产环境之前,请考虑将这一任务的部分或者全部交给 +在自行构建 Kubernetes 生产环境之前,请考虑将这一任务的部分或者全部交给 [云方案承包服务](/zh-cn/docs/setup/production-environment/turnkey-solutions) -提供商或者其他 [Kubernetes 合作伙伴](https://kubernetes.io/partners/)。 +提供商或者其他 [Kubernetes 合作伙伴](/zh-cn/partners/)。 选项有: 无论你是自行构造一个生产用 Kubernetes 集群还是与合作伙伴一起协作,请审阅 -下面章节以评估你的需求,因为这关系到你的集群的 *控制面*、*工作节点*、 -*用户访问* 以及 *负载资源*。 +下面章节以评估你的需求,因为这关系到你的集群的**控制面**、**工作节点**、 +**用户访问**以及**负载资源**。 如果你需要一个更为持久的、高可用的集群,那么就需要考虑扩展控制面的方式。 根据设计,运行在一台机器上的单机控制面服务不是高可用的。 -如果保持集群处于运行状态并且需要确保在出现问题时能够被修复这点很重要, +如果你认为保持集群的正常运行的并需要确保它在出错时可以被修复是很重要的, 可以考虑以下步骤: - - 在安装节点时要通过配置适当的内存、CPU 和磁盘速度、存储容量来满足 + - 在安装节点时要通过配置适当的内存、CPU 和磁盘读写速率、存储容量来满足 你的负载的需求。 - 是否通用的计算机系统即足够,还是你有负载需要使用 GPU 处理器、Windows 节点 或者 VM 隔离。 @@ -583,7 +583,7 @@ for information on creating a new service account. 
For example, you might want t - 决定你是想自行构造自己的生产用 Kubernetes 还是从某可用的 [云服务外包厂商](/zh-cn/docs/setup/production-environment/turnkey-solutions/) - 或 [Kubernetes 合作伙伴](https://kubernetes.io/partners/)获得集群。 + 或 [Kubernetes 合作伙伴](/zh-cn/partners/)获得集群。 - 如果你决定自行构造集群,则需要规划如何处理 [证书](/zh-cn/docs/setup/best-practices/certificates/) 并为类似 From c12f94d2a3002b307db7138dba216d005f58fe68 Mon Sep 17 00:00:00 2001 From: "yanrong.shi" Date: Wed, 27 Jul 2022 01:05:13 +0800 Subject: [PATCH 250/292] Update access-cluster-services.md --- .../access-cluster-services.md | 30 +++++++++---------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/content/zh-cn/docs/tasks/access-application-cluster/access-cluster-services.md b/content/zh-cn/docs/tasks/access-application-cluster/access-cluster-services.md index 42d5ab139711f..336b6d67c1905 100644 --- a/content/zh-cn/docs/tasks/access-application-cluster/access-cluster-services.md +++ b/content/zh-cn/docs/tasks/access-application-cluster/access-cluster-services.md @@ -25,7 +25,7 @@ their own IPs. In many cases, the node IPs, pod IPs, and some service IPs on a routable, so they will not be reachable from a machine outside the cluster, such as your desktop machine. --> -## 访问集群上运行的服务 +## 访问集群上运行的服务 {#accessing-services-running-on-the-cluster} 在 Kubernetes 里,[节点](/zh-cn/docs/concepts/architecture/nodes/)、 [Pod](/zh-cn/docs/concepts/workloads/pods/) 和 @@ -186,22 +186,22 @@ URL 的 `` 段支持的格式为: * To access the Elasticsearch service endpoint `_search?q=user:kimchy`, you would use: --> -##### 示例 +##### 示例 {#examples} -* 如要访问 Elasticsearch 服务末端 `_search?q=user:kimchy`,你可以使用: +* 如要访问 Elasticsearch 服务末端 `_search?q=user:kimchy`,你可以使用以下地址: - ``` - http://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_search?q=user:kimchy - ``` + ``` + http://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_search?q=user:kimchy + ``` -* 如要访问 Elasticsearch 集群健康信息`_cluster/health?pretty=true`,你会使用: +* 如要访问 Elasticsearch 集群健康信息`_cluster/health?pretty=true`,你可以使用以下地址: - ``` - https://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cluster/health?pretty=true - ``` + ``` + https://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cluster/health?pretty=true + ``` -* 要访问 *https* Elasticsearch 服务健康信息 `_cluster/health?pretty=true`,你会使用: +* 如要访问 **https** Elasticsearch 服务健康信息 `_cluster/health?pretty=true`,你可以使用以下地址: - ``` - https://192.0.2.1/api/v1/namespaces/kube-system/services/https:elasticsearch-logging:/proxy/_cluster/health?pretty=true - ``` + ``` + https://192.0.2.1/api/v1/namespaces/kube-system/services/https:elasticsearch-logging:/proxy/_cluster/health?pretty=true + ``` -#### 通过 Web 浏览器访问集群中运行的服务 +#### 通过 Web 浏览器访问集群中运行的服务 {#uusing-web-browsers-to-access-services-running-on-the-cluster} 你或许能够将 API 服务器代理的 URL 放入浏览器的地址栏,然而: From b9494a23fef9c8db8dd0ed9649c8ac63238b5165 Mon Sep 17 00:00:00 2001 From: windsonsea Date: Wed, 27 Jul 2022 22:09:50 +0800 Subject: [PATCH 251/292] [zh-cn] resync /tasks/administer-cluster/access-cluster-api.md --- .../administer-cluster/access-cluster-api.md | 23 ++++++++++--------- 1 file changed, 12 insertions(+), 11 deletions(-) diff --git a/content/zh-cn/docs/tasks/administer-cluster/access-cluster-api.md b/content/zh-cn/docs/tasks/administer-cluster/access-cluster-api.md index 19d2b1db472d0..12c3d68f5745d 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/access-cluster-api.md +++ b/content/zh-cn/docs/tasks/administer-cluster/access-cluster-api.md 
@@ -69,7 +69,8 @@ kubectl handles locating and authenticating to the API server. If you want to di --> ### 直接访问 REST API -kubectl 处理对 API 服务器的定位和身份验证。如果你想通过 http 客户端(如 `curl` 或 `wget`,或浏览器)直接访问 REST API,你可以通过多种方式对 API 服务器进行定位和身份验证: +kubectl 处理对 API 服务器的定位和身份验证。如果你想通过 http 客户端(如 `curl` 或 `wget`, +或浏览器)直接访问 REST API,你可以通过多种方式对 API 服务器进行定位和身份验证: @@ -280,12 +281,12 @@ import ( ) func main() { - // uses the current context in kubeconfig - // path-to-kubeconfig -- for example, /root/.kube/config + // 在 kubeconfig 中使用当前上下文 + // path-to-kubeconfig -- 例如 /root/.kube/config config, _ := clientcmd.BuildConfigFromFlags("", "") - // creates the clientset + // 创建 clientset clientset, _ := kubernetes.NewForConfig(config) - // access the API to list pods + // 访问 API 以列出 Pod pods, _ := clientset.CoreV1().Pods("").List(context.TODO(), v1.ListOptions{}) fmt.Printf("There are %d pods in the cluster\n", len(pods.Items)) } @@ -305,7 +306,7 @@ To use [Python client](https://github.com/kubernetes-client/python), run the fol --> 要使用 [Python 客户端](https://github.com/kubernetes-client/python),运行下列命令: `pip install kubernetes`。 -参见 [Python 客户端库主页](https://github.com/kubernetes-client/python) 了解更多安装选项。 +参见 [Python 客户端库主页](https://github.com/kubernetes-client/python)了解更多安装选项。 参阅[https://github.com/kubernetes-client/java/releases](https://github.com/kubernetes-client/java/releases) 了解当前支持的版本。 @@ -357,7 +358,7 @@ as the kubectl CLI does to locate and authenticate to the API server. See this [ Java 客户端可以使用 kubectl 命令行所使用的 [kubeconfig 文件](/zh-cn/docs/concepts/configuration/organize-cluster-access-kubeconfig/) 以定位 API 服务器并向其认证身份。 -参看此[示例](https://github.com/kubernetes-client/java/blob/master/examples/src/main/java/io/kubernetes/client/examples/KubeConfigFileClientExample.java): +参看此[示例](https://github.com/kubernetes-client/java/blob/master/examples/examples-release-15/src/main/java/io/kubernetes/client/examples/KubeConfigFileClientExample.java): ```java package io.kubernetes.client.examples; @@ -519,4 +520,4 @@ exampleWithKubeConfig = do -* [从 Pod 中访问 API](/zh-cn/docs/tasks/run-application/access-api-from-pod/) +* [从 Pod 中访问 Kubernetes API](/zh-cn/docs/tasks/run-application/access-api-from-pod/) From b6d9c1bbf283b0f64a9bbc8f8f5ae37b7643f2bf Mon Sep 17 00:00:00 2001 From: windsonsea Date: Wed, 27 Jul 2022 22:30:42 +0800 Subject: [PATCH 252/292] [zh-cn] updated /docs/reference/glossary/event.md --- content/zh-cn/docs/reference/glossary/event.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/content/zh-cn/docs/reference/glossary/event.md b/content/zh-cn/docs/reference/glossary/event.md index 6c74215300c41..107854cf5305c 100644 --- a/content/zh-cn/docs/reference/glossary/event.md +++ b/content/zh-cn/docs/reference/glossary/event.md @@ -2,9 +2,9 @@ title: 事件(Event) id: event date: 2022-01-16 -full_link: /docs/reference/kubernetes-api/cluster-resources/event-v1/ +full_link: /zh-cn/docs/reference/kubernetes-api/cluster-resources/event-v1/ short_description: > - 对集群中周处发生的事件的报告。通常用来表述系统中某种状态变更。 + 对集群中某处所发生事件的报告。通常用来表述系统中某种状态变更。 aka: tags: - core-object @@ -25,11 +25,11 @@ tags: --> -每个 Event 是{{< glossary_tooltip text="集群" term_id="cluster" >}}中某处发生的事件的报告。 -它通常用来表述系统中的某种状态变化。 +每个 Event 是{{< glossary_tooltip text="集群" term_id="cluster" >}}中某处所发生事件的报告。 +它通常用来表述系统中的某种状态变更。 @@ -40,7 +40,7 @@ or the continued existence of events with that reason. 
--> 事件的保留时间有限,随着时间推进,其触发方式和消息都可能发生变化。 事件用户不应该对带有给定原因(反映下层触发源)的时间特征有任何依赖, -也不要寄希望于对应该原因的事件会一直存在。 +也不要寄希望于该原因所造成的事件会一直存在。 在 Kubernetes 中,[审计](/zh-cn/docs/tasks/debug/debug-cluster/audit/) -机制会生成一种不同种类的 Event 记录(API 组为 `audit.k8s.io`)。 +机制会生成一种不同类别的 Event 记录(API 组为 `audit.k8s.io`)。 From ee4e170689e577ba1353a10c106ecf957a33baaa Mon Sep 17 00:00:00 2001 From: windsonsea Date: Wed, 27 Jul 2022 22:53:26 +0800 Subject: [PATCH 253/292] [zh-cn] updated /glossary/istio.md and kops.md --- content/zh-cn/docs/reference/glossary/istio.md | 8 +++----- content/zh-cn/docs/reference/glossary/kops.md | 9 ++++----- 2 files changed, 7 insertions(+), 10 deletions(-) diff --git a/content/zh-cn/docs/reference/glossary/istio.md b/content/zh-cn/docs/reference/glossary/istio.md index 4a3300f4f6b18..fd550efc9f0a1 100644 --- a/content/zh-cn/docs/reference/glossary/istio.md +++ b/content/zh-cn/docs/reference/glossary/istio.md @@ -2,9 +2,9 @@ title: Istio id: istio date: 2018-04-12 -full_link: https://istio.io/docs/concepts/what-is-istio/ +full_link: https://istio.io/zh/docs/concepts/what-is-istio/ short_description: > - Istio 是个开放平台(非 Kubernetes 特有),提供了一种统一的方式来集成微服务、管理流量、实施策略和汇总度量数据。 + Istio 是一个(非 Kubernetes 特有的)开放平台,提供了一种统一的方式来集成微服务、管理流量、实施策略和汇总度量数据。 aka: tags: - networking @@ -13,7 +13,6 @@ tags: --- -Istio 是个开放平台(非 Kubernetes 特有),提供了一种统一的方式来集成微服务、管理流量、实施策略和汇总度量数据。 +Istio 是一个(非 Kubernetes 特有的)开放平台,提供了一种统一的方式来集成微服务、管理流量、实施策略和汇总度量数据。 diff --git a/content/zh-cn/docs/reference/glossary/kops.md b/content/zh-cn/docs/reference/glossary/kops.md index 1719e2a8a67ce..22146cf6eb06d 100644 --- a/content/zh-cn/docs/reference/glossary/kops.md +++ b/content/zh-cn/docs/reference/glossary/kops.md @@ -37,13 +37,12 @@ kops 是一个命令行工具,可以帮助您创建、销毁、升级和维护 +{{< note >}} +注意:官方仅支持 AWS。对 GCE 和 VMware vSphere 的支持还处于 Alpha 阶段。 {{< /note >}} ---> -注意:官方仅支持 AWS,GCE 和 VMware vSphere 的支持还处于 alpha* 阶段。 - Kubernetes {{< glossary_tooltip text="RBAC" term_id="rbac" >}} @@ -47,8 +48,8 @@ Kubernetes {{< glossary_tooltip text="RBAC" term_id="rbac" >}} ### 最小特权 {#least-privilege} 理想情况下,分配给用户和服务帐户的 RBAC 权限应该是最小的。 @@ -59,14 +60,15 @@ some general rules that can be applied are : ClusterRoleBindings to give users rights only within a specific namespace. - Avoid providing wildcard permissions when possible, especially to all resources. As Kubernetes is an extensible system, providing wildcard access gives rights - not just to all object types presently in the cluster, but also to all future object types + not just to all object types that currently exist in the cluster, but also to all future object types which are created in the future. - Administrators should not use `cluster-admin` accounts except where specifically needed. - Providing a low privileged account with [impersonation rights](/docs/reference/access-authn-authz/authentication/#user-impersonation) + Providing a low privileged account with + [impersonation rights](/docs/reference/access-authn-authz/authentication/#user-impersonation) can avoid accidental modification of cluster resources. - Avoid adding users to the `system:masters` group. Any user who is a member of this group bypasses all RBAC rights checks and will always have unrestricted superuser access, which cannot be - revoked by removing Role Bindings or Cluster Role Bindings. As an aside, if a cluster is + revoked by removing RoleBindings or ClusterRoleBindings. 
As an aside, if a cluster is using an authorization webhook, membership of this group also bypasses that webhook (requests from users who are members of that group are never sent to the webhook) --> @@ -89,14 +91,17 @@ some general rules that can be applied are : ### 最大限度地减少特权令牌的分发 {#minimize-distribution-of-privileged-tokens} 理想情况下,不应为 Pod 分配具有强大权限(例如,在[特权提级的风险](#privilege-escalation-risks)中列出的任一权限)的服务帐户。 @@ -109,6 +114,7 @@ In cases where a workload requires powerful permissions, consider the following [Pod 反亲和性](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)确保 Pod 不会与不可信或不太受信任的 Pod 一起运行。 特别注意可信度不高的 Pod 不符合 **Restricted** Pod 安全标准的情况。 + @@ -232,7 +238,9 @@ Secrets they would not have through RBAC directly. ### 持久卷的创建 {#persistent-volume-creation} @@ -246,7 +254,7 @@ PersistentVolumes, and constrained users should use PersistentVolumeClaims to ac ### Access to `proxy` subresource of Nodes Users with access to the proxy sub-resource of node objects have rights to the Kubelet API, -which allows for command execution on every pod on the node(s) which they have rights to. +which allows for command execution on every pod on the node(s) to which they have rights. This access bypasses audit logging and admission control, so care should be taken before granting rights to this resource. --> @@ -259,8 +267,8 @@ granting rights to this resource. ### esclate 动词 {#escalate-verb} @@ -272,7 +280,7 @@ users with this right can effectively escalate their privileges. @@ -344,7 +352,8 @@ objects to create a denial of service condition either based on the size or numb specifically relevant in multi-tenant clusters if semi-trusted or untrusted users are allowed limited access to a system. -One option for mitigation of this issue would be to use [resource quotas](/docs/concepts/policy/resource-quotas/#object-count-quota) +One option for mitigation of this issue would be to use +[resource quotas](/docs/concepts/policy/resource-quotas/#object-count-quota) to limit the quantity of objects which can be created. --> ## Kubernetes RBAC - 拒绝服务攻击的风险 {#denial-of-service-risks} @@ -354,4 +363,11 @@ to limit the quantity of objects which can be created. 
产生拒绝服务状况,如 [Kubernetes 使用的 etcd 容易受到 OOM 攻击](https://github.com/kubernetes/kubernetes/issues/107325)中的讨论。 允许太不受信任或者不受信任的用户对系统进行有限的访问在多租户集群中是特别重要的。 -缓解此问题的一种选择是使用[资源配额](/zh-cn/docs/concepts/policy/resource-quotas/#object-count-quota)以限制可以创建的对象数量。 \ No newline at end of file +缓解此问题的一种选择是使用[资源配额](/zh-cn/docs/concepts/policy/resource-quotas/#object-count-quota)以限制可以创建的对象数量。 + +## {{% heading "whatsnext" %}} + + +* 了解有关 RBAC 的更多信息,请参阅 [RBAC 文档](/zh-cn/docs/reference/access-authn-authz/rbac/)。 From 4c897e1cc152c833cf917d8e4dbf2243c6b772e7 Mon Sep 17 00:00:00 2001 From: Shannon Kularathna Date: Wed, 27 Jul 2022 17:54:31 -0400 Subject: [PATCH 255/292] Call config file a manifest and remove a 'please' Co-authored-by: Tim Bannister --- .../configmap-secret/managing-secret-using-config-file.md | 7 ++++--- .../configmap-secret/managing-secret-using-kubectl.md | 2 +- 2 files changed, 5 insertions(+), 4 deletions(-) diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md index 5ec87d0827a2f..e3299ec843490 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md @@ -13,9 +13,10 @@ description: Creating Secret objects using resource configuration file. -## Create the Config file +## Create the Secret {#create-the-config-file} -You can define the `Secret` object in a file first, in JSON or YAML format, and then create that object. The +You can define the `Secret` object in a manifest first, in JSON or YAML format, +and then create that object. The [Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core) resource contains two maps: `data` and `stringData`. The `data` field is used to store arbitrary data, encoded using base64. The @@ -44,7 +45,7 @@ The following example stores two strings in a Secret using the `data` field. MWYyZDFlMmU2N2Rm ``` -1. Create the configuration file: +1. Create the manifest: ```yaml apiVersion: v1 diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md index 191523042c980..72ec2a7bc3002 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md @@ -113,7 +113,7 @@ The commands `kubectl get` and `kubectl describe` avoid showing the contents of a `Secret` by default. This is to protect the `Secret` from being exposed accidentally, or from being stored in a terminal log. -To check the actual content of the encoded data, please refer to [Decoding the Secret](#decoding-secret). +To check the actual content of the encoded data, refer to [Decoding the Secret](#decoding-secret). 
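<!--
As an illustrative sketch, the manifest below defines the same credentials shown
above (`admin` / `1f2d1e2e67df`) using the `stringData` field instead of `data`,
so the base64 encoding is performed by the API server rather than by hand.
The Secret name `example-stringdata-secret` is an assumption used only for this
example.
-->
作为一个补充示意,除了先手动进行 base64 编码再填入 `data` 字段之外,
下面的清单使用 `stringData` 字段来定义前文给出的同一组凭据(`admin` 与 `1f2d1e2e67df`),
这样 base64 编码将由 API 服务器自动完成;
其中 Secret 名称 `example-stringdata-secret` 仅是本示例所做的假设。

```yaml
apiVersion: v1
kind: Secret
metadata:
  # 仅为示例假设的名称
  name: example-stringdata-secret
type: Opaque
stringData:
  # stringData 中的取值以明文书写,创建时由 API 服务器负责编码
  username: admin
  password: "1f2d1e2e67df"
```
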
## Decoding the Secret {#decoding-secret} From 35462abc89e52209425d1637aa10db4faebc2d52 Mon Sep 17 00:00:00 2001 From: Sean Wei Date: Thu, 28 Jul 2022 09:09:00 +0800 Subject: [PATCH 256/292] [zh-cn] Update /zh/ to /zh-cn/ for some pages --- .../certificate-signing-requests.md | 15 ++++++++------- .../kubelet-tls-bootstrapping.md | 18 +++++++++--------- .../reference/access-authn-authz/webhook.md | 17 ++++++++++++----- .../kube-scheduler.md | 4 ++-- .../zh-cn/docs/reference/glossary/affinity.md | 2 +- 5 files changed, 32 insertions(+), 24 deletions(-) diff --git a/content/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests.md b/content/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests.md index 503818b5bad6f..2af972ff800fa 100644 --- a/content/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests.md +++ b/content/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests.md @@ -447,7 +447,7 @@ O is the group that this user will belong to. You can refer to 下面的脚本展示了如何生成 PKI 私钥和 CSR。 设置 CSR 的 CN 和 O 属性很重要。CN 是用户名,O 是该用户归属的组。 -你可以参考 [RBAC](/zh/docs/reference/access-authn-authz/rbac/) 了解标准组的信息。 +你可以参考 [RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/) 了解标准组的信息。 ```shell openssl genrsa -out myuser.key 2048 @@ -524,7 +524,7 @@ kubectl certificate approve myuser ### 取得证书 {#get-the-certificate} @@ -535,7 +535,7 @@ kubectl get csr/myuser -o yaml ``` @@ -744,7 +745,7 @@ were marked as approved. ### 控制平面签名者 {#signer-control-plane} Kubernetes 控制平面实现了每一个 -[Kubernetes 签名者](/zh/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers), +[Kubernetes 签名者](/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers), 每个签名者的实现都是 kube-controller-manager 的一部分。 {{< note >}} @@ -780,7 +781,7 @@ Example certificate content: REST API 的用户可以通过向待签名的 CSR 的 `status` 子资源提交更新请求来对 CSR 进行签名。 -作为这个请求的一部分, `status.certificate` 字段应设置为已签名的证书。 +作为这个请求的一部分,`status.certificate` 字段应设置为已签名的证书。 此字段可包含一个或多个 PEM 编码的证书。 所有的 PEM 块必须具备 "CERTIFICATE" 标签,且不包含文件头,且编码的数据必须是 @@ -841,7 +842,7 @@ status: * For details of X.509 itself, refer to [RFC 5280](https://tools.ietf.org/html/rfc5280#section-3.1) section 3.1 * For information on the syntax of PKCS#10 certificate signing requests, refer to [RFC 2986](https://tools.ietf.org/html/rfc2986) --> -* 参阅 [管理集群中的 TLS 认证](/zh/docs/tasks/tls/managing-tls-in-a-cluster/) +* 参阅 [管理集群中的 TLS 认证](/zh-cn/docs/tasks/tls/managing-tls-in-a-cluster/) * 查看 kube-controller-manager 中[签名者](https://github.com/kubernetes/kubernetes/blob/32ec6c212ec9415f604ffc1f4c1f29b782968ff1/pkg/controller/certificates/signer/cfssl_signer.go)部分的源代码 * 查看 kube-controller-manager 中[批准者](https://github.com/kubernetes/kubernetes/blob/32ec6c212ec9415f604ffc1f4c1f29b782968ff1/pkg/controller/certificates/approver/sarapprove.go)部分的源代码 * 有关 X.509 本身的详细信息,请参阅 [RFC 5280](https://tools.ietf.org/html/rfc5280#section-3.1) 第3.1节 diff --git a/content/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md b/content/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md index 05a3a8d81ba21..81f08541f0099 100644 --- a/content/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md +++ b/content/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md @@ -35,14 +35,14 @@ This in turn, can make it challenging to initialize or scale a cluster. 
这也使得初始化或者扩缩一个集群的操作变得具有挑战性。 为了简化这一过程,从 1.4 版本开始,Kubernetes 引入了一个证书请求和签名 -API 以便简化此过程。该提案可在 +API。该提案可在 [这里](https://github.com/kubernetes/kubernetes/pull/20439)看到。 本文档描述节点初始化的过程,如何为 kubelet 配置 TLS 客户端证书启动引导, @@ -255,7 +255,7 @@ You can use any [authenticator](/docs/reference/access-authn-authz/authenticatio 为了让启动引导的 kubelet 能够连接到 kube-apiserver 并请求证书, 它必须首先在服务器上认证自身身份。你可以使用任何一种能够对 kubelet 执行身份认证的 -[身份认证组件](/zh/docs/reference/access-authn-authz/authentication/)。 +[身份认证组件](/zh-cn/docs/reference/access-authn-authz/authentication/)。 随着这个功能特性的逐渐成熟,你需要确保令牌绑定到某基于角色的访问控制(RBAC) -策略上,从而严格限制请求(使用[启动引导令牌](/zh/docs/reference/access-authn-authz/bootstrap-tokens/)) +策略上,从而严格限制请求(使用[启动引导令牌](/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/)) 仅限于客户端申请提供证书。当 RBAC 被配置启用时,可以将令牌限制到某个组, 从而提高灵活性。例如,你可以在准备节点期间禁止某特定启动引导组的访问。 @@ -315,7 +315,7 @@ and then issued to the individual kubelet. You can use a single token for an ent --> #### 启动引导令牌 {#bootstrap-tokens} -启动引导令牌的细节在[这里](/zh/docs/reference/access-authn-authz/bootstrap-tokens/) +启动引导令牌的细节在[这里](/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/) 详述。启动引导令牌在 Kubernetes 集群中存储为 Secret 对象,被发放给各个 kubelet。 你可以在整个集群中使用同一个令牌,也可以为每个节点发放单独的令牌。 @@ -347,7 +347,7 @@ The details for creating the secret are available [here](/docs/reference/access- If you want to use bootstrap tokens, you must enable it on kube-apiserver with the flag: --> -关于创建 Secret 的进一步细节可访问[这里](/zh/docs/reference/access-authn-authz/bootstrap-tokens/)。 +关于创建 Secret 的进一步细节可访问[这里](/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/)。 如果你希望使用启动引导令牌,你必须在 kube-apiserver 上使用下面的标志启用之: @@ -397,7 +397,7 @@ further details. --> 向 kube-apiserver 添加 `--token-auth-file=FILENAME` 标志(或许这要对 systemd 的单元文件作修改)以启用令牌文件。参见 -[这里](/zh/docs/reference/access-authn-authz/authentication/#static-token-file) +[这里](/zh-cn/docs/reference/access-authn-authz/authentication/#static-token-file) 的文档以了解进一步的细节。 例如: @@ -603,7 +603,7 @@ collection. 
--> 作为 [kube-controller-manager](/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager/) 的一部分的 `csrapproving` 控制器是自动被启用的。 -该控制器使用 [`SubjectAccessReview` API](/zh/docs/reference/access-authn-authz/authorization/#checking-api-access) +该控制器使用 [`SubjectAccessReview` API](/zh-cn/docs/reference/access-authn-authz/authorization/#checking-api-access) 来确定给定用户是否被授权请求 CSR,之后基于鉴权结果执行批复操作。 为了避免与其它批复组件发生冲突,内置的批复组件不会显式地拒绝任何 CSRs。 该组件仅是忽略未被授权的请求。 diff --git a/content/zh-cn/docs/reference/access-authn-authz/webhook.md b/content/zh-cn/docs/reference/access-authn-authz/webhook.md index 29032a7353b72..564eee29780e8 100644 --- a/content/zh-cn/docs/reference/access-authn-authz/webhook.md +++ b/content/zh-cn/docs/reference/access-authn-authz/webhook.md @@ -15,13 +15,16 @@ weight: 95 --> + -WebHook 是一种 HTTP 回调:某些条件下触发的 HTTP POST 请求;通过 HTTP POST 发送的简单事件通知。一个基于 web 应用实现的 WebHook 会在特定事件发生时把消息发送给特定的 URL。 +WebHook 是一种 HTTP 回调:某些条件下触发的 HTTP POST 请求;通过 HTTP POST +发送的简单事件通知。一个基于 web 应用实现的 WebHook 会在特定事件发生时把消息发送给特定的 URL。 + -`Webhook` 模式需要一个 HTTP 配置文件,通过 `--authorization-webhook-config-file=SOME_FILENAME` 的参数声明。 +`Webhook` 模式需要一个 HTTP 配置文件,通过 +`--authorization-webhook-config-file=SOME_FILENAME` 的参数声明。 -配置文件的格式使用 [kubeconfig](/zh/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)。 +配置文件的格式使用 +[kubeconfig](/zh-cn/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)。 在该文件中,“users” 代表着 API 服务器的 webhook,而 “cluster” 代表着远程服务。 -需要注意的是 webhook API 对象与其他 Kubernetes API 对象一样都同样都遵从[版本兼容规则](/zh/docs/concepts/overview/kubernetes-api/)。 +需要注意的是 webhook API 对象与其他 Kubernetes API +对象一样都同样都遵从[版本兼容规则](/zh-cn/docs/concepts/overview/kubernetes-api/)。 实施人员应该了解 beta 对象的更宽松的兼容性承诺,同时确认请求的 "apiVersion" 字段能被正确地反序列化。 此外,API 服务器还必须启用 `authorization.k8s.io/v1beta1` API 扩展组 (`--runtime-config=authorization.k8s.io/v1beta1=true`)。 @@ -267,5 +273,6 @@ to the REST api. For further documentation refer to the authorization.v1beta1 API objects and [webhook.go](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/plugin/pkg/authorizer/webhook/webhook.go). --> -更多信息可以参考 authorization.v1beta1 API 对象和 [webhook.go](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/plugin/pkg/authorizer/webhook/webhook.go)。 +更多信息可以参考 authorization.v1beta1 API 对象和 +[webhook.go](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/plugin/pkg/authorizer/webhook/webhook.go)。 diff --git a/content/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler.md b/content/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler.md index 28fb4f2a61bc0..530cf647ff1b4 100644 --- a/content/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler.md +++ b/content/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler.md @@ -19,14 +19,14 @@ each Pod in the scheduling queue according to constraints and available resources. The scheduler then ranks each valid Node and binds the Pod to a suitable Node. Multiple different schedulers may be used within a cluster; kube-scheduler is the reference implementation. -See [scheduling](https://kubernetes.io/docs/concepts/scheduling-eviction/) +See [scheduling](/docs/concepts/scheduling-eviction/) for more information about scheduling and the kube-scheduler component. 
--> Kubernetes 调度器是一个控制面进程,负责将 Pods 指派到节点上。 调度器基于约束和可用资源为调度队列中每个 Pod 确定其可合法放置的节点。 调度器之后对所有合法的节点进行排序,将 Pod 绑定到一个合适的节点。 在同一个集群中可以使用多个不同的调度器;kube-scheduler 是其参考实现。 -参阅[调度](/zh/docs/concepts/scheduling-eviction/)以获得关于调度和 +参阅[调度](/zh-cn/docs/concepts/scheduling-eviction/)以获得关于调度和 kube-scheduler 组件的更多信息。 ``` diff --git a/content/zh-cn/docs/reference/glossary/affinity.md b/content/zh-cn/docs/reference/glossary/affinity.md index 44aa0ab6cac43..67da7e05e9684 100644 --- a/content/zh-cn/docs/reference/glossary/affinity.md +++ b/content/zh-cn/docs/reference/glossary/affinity.md @@ -2,7 +2,7 @@ title: 亲和性(Affinity) id: affinity date: 2019-01-11 -full_link: zh/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity +full_link: /zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity short_description: > 调度程序用于确定在何处放置 Pods(亲和性)的规则 aka: From a9ce4b86334172b17cc3715966f115e0823ca3f3 Mon Sep 17 00:00:00 2001 From: "yanrong.shi" Date: Wed, 27 Jul 2022 02:06:41 +0800 Subject: [PATCH 257/292] Update configure-service-account.md --- .../configure-service-account.md | 77 +++++++++---------- 1 file changed, 35 insertions(+), 42 deletions(-) diff --git a/content/zh-cn/docs/tasks/configure-pod-container/configure-service-account.md b/content/zh-cn/docs/tasks/configure-pod-container/configure-service-account.md index 63f29d00e5e50..8adda39fdb739 100644 --- a/content/zh-cn/docs/tasks/configure-pod-container/configure-service-account.md +++ b/content/zh-cn/docs/tasks/configure-pod-container/configure-service-account.md @@ -33,10 +33,8 @@ not apply. 当你(自然人)访问集群时(例如,使用 `kubectl`),API 服务器将你的身份验证为 特定的用户帐户(当前这通常是 `admin`,除非你的集群管理员已经定制了你的集群配置)。 @@ -50,29 +48,30 @@ Pod 内的容器中的进程也可以与 api 服务器接触。 ## 使用默认的服务账户访问 API 服务器 当你创建 Pod 时,如果没有指定服务账户,Pod 会被指定给命名空间中的 `default` 服务账户。 -如果你查看 Pod 的原始 JSON 或 YAML(例如:`kubectl get pods/podname -o yaml`), -你可以看到 `spec.serviceAccountName` 字段已经被自动设置了。 +如果你查看 Pod 的原始 JSON 或 YAML(例如:`kubectl get pods/ -o yaml`), +你可以看到 `spec.serviceAccountName` 字段已经被[自动设置](/zh-cn/docs/concepts/overview/working-with-objects/object-management/)了。 你可以使用自动挂载给 Pod 的服务账户凭据访问 API, -[访问集群](/zh-cn/docs/tasks/access-application-cluster/access-cluster/)页面中有相关描述。 +[访问集群](/zh-cn/docs/tasks/access-application-cluster/access-cluster)页面中有相关描述。 服务账户的 API 许可取决于你所使用的 [鉴权插件和策略](/zh-cn/docs/reference/access-authn-authz/authorization/#authorization-modules)。 @@ -111,7 +110,7 @@ The pod spec takes precedence over the service account if both specify a `automo 如果 Pod 和服务账户都指定了 `automountServiceAccountToken` 值,则 Pod 的 spec 优先于服务帐户。 输出类似于: -```none +```yaml apiVersion: v1 kind: ServiceAccount metadata: @@ -215,7 +214,7 @@ kubectl delete serviceaccount/build-robot ``` ## 为服务账户添加 ImagePullSecrets {#add-imagepullsecrets-to-a-service-account} ### 创建 ImagePullSecret -- 创建一个 ImagePullSecret,如[为 Pod 设置 ImagePullSecret](/zh-cn/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod)所述。 +- 创建一个 ImagePullSecret,如[为 Pod 设置 ImagePullSecret](/zh-cn/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) 所述。 ```shell kubectl create secret docker-registry myregistrykey --docker-server=DUMMY_SERVER \ @@ -382,7 +381,7 @@ imagePullSecrets: -最后,用新的更新的 `sa.yaml` 文件替换服务账户。 +最后,使用新更新的 `sa.yaml` 文件替换服务账户。 ```shell kubectl replace serviceaccount default -f ./sa.yaml @@ -427,11 +426,8 @@ command line arguments to `kube-apiserver`: * `--service-account-issuer` @@ -440,13 +436,10 @@ command line arguments to `kube-apiserver`: 这样做是有用的。如果这个参数被多次指定,则第一个参数值会被用来生成令牌, 而所有参数值都会被用来确定哪些发放者是可接受的。你所运行的 
Kubernetes 集群必须是 v1.22 或更高版本,才能多次指定 `--service-account-issuer`。 - * `--service-account-key-file` @@ -454,26 +447,21 @@ command line arguments to `kube-apiserver`: 的令牌。所指定的文件中可以包含多个秘钥,并且你可以多次使用此参数, 每次参数值为不同的文件。多次使用此参数时,由所给的秘钥之一签名的令牌会被 Kubernetes API 服务器认为是合法令牌。 - * `--service-account-signing-key-file` 指向包含当前服务账户令牌发放者的私钥的文件路径。 此发放者使用此私钥来签署所发放的 ID 令牌。 - -* `--api-audiences` (can be omitted) +* `--api-audiences` (可以省略) 服务账户令牌身份检查组件会检查针对 API 访问所使用的令牌, 确认令牌至少是被绑定到这里所给的受众(audiences)之一。 @@ -555,6 +543,7 @@ provider configuration at `{service-account-issuer}/.well-known/openid-configura If the URL does not comply, the `ServiceAccountIssuerDiscovery` endpoints will not be registered, even if the feature is enabled. --> +{{< note >}} 分发者的 URL 必须遵从 [OIDC 发现规范](https://openid.net/specs/openid-connect-discovery-1_0.html)。 这意味着 URL 必须使用 `https` 模式,并且必须在 @@ -563,6 +552,7 @@ not be registered, even if the feature is enabled. 如果 URL 没有遵从这一规范,`ServiceAccountIssuerDiscovery` 末端就不会被注册, 即使该特性已经被启用。 +{{< /note >}} +{{< note >}} 对 `/.well-known/openid-configuration` 和 `/openid/v1/jwks` 路径请求的响应被设计为与 OIDC 兼容,但不是与其完全一致。 返回的文档仅包含对 Kubernetes 服务账户令牌进行验证所必须的参数。 +{{< /note >}} -为 Pod 设置 Init 容器需要在 [Pod 规约](/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec) +为 Pod 设置 Init 容器需要在 [Pod 规约](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec) 中添加 `initContainers` 字段, 该字段以 [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core) 类型对象数组的形式组织,和应用的 `containers` 数组同级相邻。 -参阅 API 参考的[容器](/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container)章节了解详情。 +参阅 API 参考的[容器](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container)章节了解详情。 Init 容器的状态在 `status.initContainerStatuses` 字段中以容器状态数组的格式返回 (类似 `status.containerStatuses` 字段)。 @@ -215,7 +215,7 @@ kind: Pod metadata: name: myapp-pod labels: - app: myapp + app.kubernetes.io/name: MyApp spec: containers: - name: myapp-container @@ -284,7 +284,7 @@ The output is similar to this: Name: myapp-pod Namespace: default [...] -Labels: app=myapp +Labels: app.kubernetes.io/name=MyApp Status: Pending [...] Init Containers: @@ -545,7 +545,7 @@ Pod 重启会导致 Init 容器重新执行,主要有如下几个原因: * Pod 的基础设施容器 (译者注:如 `pause` 容器) 被重启。这种情况不多见, 必须由具备 root 权限访问节点的人员来完成。 -* 当 `restartPolicy` 设置为 "`Always`",Pod 中所有容器会终止而强制重启。 +* 当 `restartPolicy` 设置为 `Always`,Pod 中所有容器会终止而强制重启。 由于垃圾收集机制的原因,Init 容器的完成记录将会丢失。 -本文档描述 Kubernetes 中的*投射卷(Projected Volumes)*。 +本文档描述 Kubernetes 中的**投射卷(Projected Volumes)**。 建议先熟悉[卷](/zh-cn/docs/concepts/storage/volumes/)概念。 @@ -49,10 +49,10 @@ Currently, the following types of volume sources can be projected: 所有的卷源都要求处于 Pod 所在的同一个名字空间内。进一步的详细信息,可参考 -[一体化卷](https://github.com/kubernetes/design-proposals-archive/blob/main/node/all-in-one-volume.md)设计文档。 +[一体化卷](https://git.k8s.io/design-proposals-archive/node/all-in-one-volume.md)设计文档。 ## serviceAccountToken 投射卷 {#serviceaccounttoken} + 当 `TokenRequestProjection` 特性被启用时,你可以将当前 [服务账号](/zh-cn/docs/reference/access-authn-authz/authentication/#service-account-tokens) 的令牌注入到 Pod 中特定路径下。例如: @@ -107,8 +108,8 @@ in the audience of the token, and otherwise should reject the token. This field is optional and it defaults to the identifier of the API server. 
--> 示例 Pod 中包含一个投射卷,其中包含注入的服务账号令牌。 -此 Pod 中的容器可以使用该令牌访问 Kubernetes API 服务器, 使用 -[pod 的 ServiceAccount](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/) +此 Pod 中的容器可以使用该令牌访问 Kubernetes API 服务器, 使用 +[Pod 的 ServiceAccount](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/) 进行身份验证。`audience` 字段包含令牌所针对的受众。 收到令牌的主体必须使用令牌受众中所指定的某个标识符来标识自身,否则应该拒绝该令牌。 此字段是可选的,默认值为 API 服务器的标识。 @@ -122,8 +123,8 @@ of the projected volume. --> 字段 `expirationSeconds` 是服务账号令牌预期的生命期长度。默认值为 1 小时, 必须至少为 10 分钟(600 秒)。管理员也可以通过设置 API 服务器的命令行参数 -`--service-account-max-token-expiration` 来为其设置最大值上限。`path` 字段给出 -与投射卷挂载点之间的相对路径。 +`--service-account-max-token-expiration` 来为其设置最大值上限。 +`path` 字段给出与投射卷挂载点之间的相对路径。 {{< note >}} 关于在投射的服务账号卷中处理文件访问权限的[提案](https://git.k8s.io/enhancements/keps/sig-storage/2451-service-account-token-volumes#proposal) 介绍了如何使得所投射的文件具有合适的属主访问权限。 @@ -154,7 +155,7 @@ the projected files have the correct ownership set including container user ownership. --> 在包含了投射卷并在 -[`SecurityContext`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context) +[`SecurityContext`](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context) 中设置了 `RunAsUser` 属性的 Linux Pod 中,投射文件具有正确的属主属性设置, 其中包含了容器用户属主。 @@ -181,8 +182,7 @@ Windows 在名为安全账号管理器(Security Account Manager,SAM) 宿主系统无法窥视到容器运行期间数据库内容。Windows 容器被设计用来运行操作系统的用户态部分, 与宿主系统之间隔离,因此维护了一个虚拟的 SAM 数据库。 所以,在宿主系统上运行的 kubelet 无法动态为虚拟的容器账号配置宿主文件的属主。 -如果需要将宿主机器上的文件与容器共享,建议将它们放到挂载于 `C:\` 之外 -的独立卷中。 +如果需要将宿主机器上的文件与容器共享,建议将它们放到挂载于 `C:\` 之外的独立卷中。 总体而言,为容器授予访问宿主系统的权限这种做法是不推荐的,因为这样做可能会打开潜在的安全性攻击之门。 -在创建 Windows Pod 时,如过在其 `SecurityContext` 中设置了 `RunAsUser`, +在创建 Windows Pod 时,如果在其 `SecurityContext` 中设置了 `RunAsUser`, Pod 会一直阻塞在 `ContainerCreating` 状态。因此,建议不要在 Windows 节点上使用仅针对 Linux 的 `RunAsUser` 选项。 {{< /note >}} From f65f5fe5135cc2a223d2396828b2a3d6aea9c37a Mon Sep 17 00:00:00 2001 From: "yanrong.shi" Date: Thu, 28 Jul 2022 14:41:02 +0800 Subject: [PATCH 260/292] sync --- .../overview/working-with-objects/names.md | 6 ++-- .../working-with-objects/namespaces.md | 30 ++++++++++--------- 2 files changed, 19 insertions(+), 17 deletions(-) diff --git a/content/zh-cn/docs/concepts/overview/working-with-objects/names.md b/content/zh-cn/docs/concepts/overview/working-with-objects/names.md index 8b8bea11b61fe..a6f27bd406ac3 100644 --- a/content/zh-cn/docs/concepts/overview/working-with-objects/names.md +++ b/content/zh-cn/docs/concepts/overview/working-with-objects/names.md @@ -21,9 +21,9 @@ Every Kubernetes object also has a [_UID_](#uids) that is unique across your who For example, you can only have one Pod named `myapp-1234` within the same [namespace](/docs/concepts/overview/working-with-objects/namespaces/), but you can have one Pod and one Deployment that are each named `myapp-1234`. --> -集群中的每一个对象都有一个[**名称**](#names)来标识在同类资源中的唯一性。 +集群中的每一个对象都有一个[**名称**](#names)来标识在同类资源中的唯一性。 -每个 Kubernetes 对象也有一个 [**UID**](#uids) 来标识在整个集群中的唯一性。 +每个 Kubernetes 对象也有一个 [**UID**](#uids)来标识在整个集群中的唯一性。 比如,在同一个[名字空间](/zh-cn/docs/concepts/overview/working-with-objects/namespaces/) 中有一个名为 `myapp-1234` 的 Pod,但是可以命名一个 Pod 和一个 Deployment 同为 `myapp-1234`。 @@ -172,7 +172,7 @@ Some resource types have additional restrictions on their names. Kubernetes UIDs are universally unique identifiers (also known as UUIDs). UUIDs are standardized as ISO/IEC 9834-8 and as ITU-T X.667. 
--> -Kubernetes UIDs 是全局唯一标识符(也叫 UUIDs)。 +Kubernetes UID 是全局唯一标识符(也叫 UUIDs)。 UUIDs 是标准化的,见 ISO/IEC 9834-8 和 ITU-T X.667。 ## {{% heading "whatsnext" %}} diff --git a/content/zh-cn/docs/concepts/overview/working-with-objects/namespaces.md b/content/zh-cn/docs/concepts/overview/working-with-objects/namespaces.md index 46c3f35ac9207..83cc8329abe10 100644 --- a/content/zh-cn/docs/concepts/overview/working-with-objects/namespaces.md +++ b/content/zh-cn/docs/concepts/overview/working-with-objects/namespaces.md @@ -18,7 +18,7 @@ weight: 30 -在 Kubernetes 中,“名字空间(Namespace)”提供一种机制,将同一集群中的资源划分为相互隔离的组。 +在 Kubernetes 中,**名字空间(Namespace)**提供一种机制,将同一集群中的资源划分为相互隔离的组。 同一名字空间内的资源名称要唯一,但跨名字空间时没有这个要求。 名字空间作用域仅针对带有名字空间的对象,例如 Deployment、Service 等, 这种作用域对集群访问的对象不适用,例如 StorageClass、Node、PersistentVolume 等。 @@ -28,7 +28,7 @@ In Kubernetes, _namespaces_ provides a mechanism for isolating groups of resourc -## 何时使用多个名字空间 +## 何时使用多个名字空间 {#when-to-use-multiple-namespaces} -## 使用名字空间 +## 使用名字空间 {#working-with-namespaces} 名字空间的创建和删除在[名字空间的管理指南文档](/zh-cn/docs/tasks/administer-cluster/namespaces/)描述。 @@ -83,7 +83,7 @@ Avoid creating namespaces with the prefix `kube-`, since it is reserved for Kube You can list the current namespaces in a cluster using: --> -### 查看名字空间 +### 查看名字空间 {#viewing-namespaces} 你可以使用以下命令列出集群中现存的名字空间: @@ -123,11 +123,11 @@ Kubernetes 会创建四个初始名字空间: -### 为请求设置名字空间 +### 为请求设置名字空间 {#setting-the-namespace-for-a-request} 要为当前请求设置名字空间,请使用 `--namespace` 参数。 @@ -144,22 +144,23 @@ kubectl get pods --namespace=<名字空间名称> You can permanently save the namespace for all subsequent kubectl commands in that context. --> -### 设置名字空间偏好 +### 设置名字空间偏好 {#setting-the-namespace-preference} 你可以永久保存名字空间,以用于对应上下文中所有后续 kubectl 命令。 ```shell kubectl config set-context --current --namespace=<名字空间名称> # 验证 -kubectl config view | grep namespace: +kubectl config view --minify | grep namespace: ``` -## 名字空间和 DNS +## 名字空间和 DNS {#namespaces-and-dns} 当你创建一个[服务](/zh-cn/docs/concepts/services-networking/service/)时, Kubernetes 会创建一个相应的 [DNS 条目](/zh-cn/docs/concepts/services-networking/dns-pod-service/)。 @@ -214,12 +215,13 @@ TLDs](https://data.iana.org/TLD/tlds-alpha-by-domain.txt). -## 并非所有对象都在名字空间中 +## 并非所有对象都在名字空间中 {#not-all-objects-are-in-a-namespace} 大多数 kubernetes 资源(例如 Pod、Service、副本控制器等)都位于某些名字空间中。 From efa26546c842ad5224cf112b7e985e08f903323c Mon Sep 17 00:00:00 2001 From: Michael Date: Thu, 28 Jul 2022 14:55:57 +0800 Subject: [PATCH 261/292] [zh-cn] resync /concepts/storage/persistent-volumes.md --- .../concepts/storage/persistent-volumes.md | 42 +++++++++---------- 1 file changed, 20 insertions(+), 22 deletions(-) diff --git a/content/zh-cn/docs/concepts/storage/persistent-volumes.md b/content/zh-cn/docs/concepts/storage/persistent-volumes.md index 1d2b50735bd22..4bca21f6c7625 100644 --- a/content/zh-cn/docs/concepts/storage/persistent-volumes.md +++ b/content/zh-cn/docs/concepts/storage/persistent-volumes.md @@ -818,12 +818,12 @@ The following types of PersistentVolume are deprecated. 
This means that support 以下的持久卷已被弃用。这意味着当前仍是支持的,但是 Kubernetes 将来的发行版会将其移除。 -* [`cinder`](/docs/concepts/storage/volumes/#cinder) - Cinder(OpenStack 块存储)(于 v1.18 **弃用**) +* [`cinder`](/zh-cn/docs/concepts/storage/volumes/#cinder) - Cinder(OpenStack 块存储)(于 v1.18 **弃用**) * [`flexVolume`](/zh-cn/docs/concepts/storage/volumes/#flexVolume) - FlexVolume (于 v1.23 **弃用**) -* [`flocker`](/docs/concepts/storage/volumes/#flocker) - Flocker 存储(于 v1.22 **弃用**) -* [`quobyte`](/docs/concepts/storage/volumes/#quobyte) - Quobyte 卷 +* [`flocker`](/zh-cn/docs/concepts/storage/volumes/#flocker) - Flocker 存储(于 v1.22 **弃用**) +* [`quobyte`](/zh-cn/docs/concepts/storage/volumes/#quobyte) - Quobyte 卷 (于 v1.22 **弃用**) -* [`storageos`](/docs/concepts/storage/volumes/#storageos) - StorageOS 卷(于 v1.22 **弃用**) +* [`storageos`](/zh-cn/docs/concepts/storage/volumes/#storageos) - StorageOS 卷(于 v1.22 **弃用**) 你可以将 `volumeMode` 设置为 `Block`,以便将卷作为原始块设备来使用。 这类卷以块设备的方式交给 Pod 使用,其上没有任何文件系统。 -这种模式对于为 Pod 提供一种使用最快可能方式来访问卷而言很有帮助,Pod 和 -卷之间不存在文件系统层。另外,Pod 中运行的应用必须知道如何处理原始块设备。 -关于如何在 Pod 中使用 `volumeMode: Block` 的卷,可参阅 -[原始块卷支持](#raw-block-volume-support)。 +这种模式对于为 Pod 提供一种使用最快可能方式来访问卷而言很有帮助, +Pod 和卷之间不存在文件系统层。另外,Pod 中运行的应用必须知道如何处理原始块设备。 +关于如何在 Pod 中使用 `volumeMode: Block` 的卷, +可参阅[原始块卷支持](#raw-block-volume-support)。 访问模式有: -`ReadWriteOnce` +`ReadWriteOnce` : 卷可以被一个节点以读写方式挂载。 -ReadWriteOnce 访问模式也允许运行在同一节点上的多个 Pod 访问卷。 +ReadWriteOnce 访问模式也允许运行在同一节点上的多个 Pod 访问卷。 `ReadOnlyMany` : 卷可以被多个节点以只读方式挂载。 @@ -1039,7 +1038,7 @@ Kubernetes 使用卷访问模式来匹配 PersistentVolumeClaim 和 PersistentVo | AzureFile | ✓ | ✓ | ✓ | - | | AzureDisk | ✓ | - | - | - | | CephFS | ✓ | ✓ | ✓ | - | -| Cinder | ✓ | - | - | - | +| Cinder | ✓ | - | ([如果多次挂接卷可用](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/features.md#multi-attach-volumes)) | - | | CSI | 取决于驱动 | 取决于驱动 | 取决于驱动 | 取决于驱动 | | FC | ✓ | ✓ | - | - | | FlexVolume | ✓ | ✓ | 取决于驱动 | - | @@ -1081,7 +1080,6 @@ it will become fully deprecated in a future Kubernetes release. `storageClassName` 属性。这一注解目前仍然起作用,不过在将来的 Kubernetes 发布版本中该注解会被彻底废弃。 - 每个 PV 卷可以通过设置节点亲和性来定义一些约束,进而限制从哪些节点上可以访问此卷。 使用这些卷的 Pod 只会被调度到节点亲和性规则所选择的节点上执行。 -要设置节点亲和性,配置 PV 卷 `.spec` 中的 `nodeAffinity`。 -[持久卷](/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-v1/#PersistentVolumeSpec) +要设置节点亲和性,配置 PV 卷 `.spec` 中的 `nodeAffinity`。 +[持久卷](/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-v1/#PersistentVolumeSpec) API 参考关于该字段的更多细节。 ### 卷模式 {#volume-modes} -申领使用[与卷相同的约定](#access-modes)来表明是将卷作为文件系统还是块设备来使用。 +申领使用[与卷相同的约定](#volume-mode)来表明是将卷作为文件系统还是块设备来使用。 ## 理解 Kubernetes 对象 {#kubernetes-objects} -在 Kubernetes 系统中,*Kubernetes 对象* 是持久化的实体。 +在 Kubernetes 系统中,**Kubernetes 对象**是持久化的实体。 Kubernetes 使用这些实体去表示整个集群的状态。 比較特别地是,它们描述了如下信息: @@ -83,7 +81,7 @@ its _desired state_. 
几乎每个 Kubernetes 对象包含两个嵌套的对象字段,它们负责管理对象的配置: 对象 **`spec`(规约)** 和 对象 **`status`(状态)**。 对于具有 `spec` 的对象,你必须在创建对象时设置其内容,描述你希望对象所具有的特征: -*期望状态(Desired State)*。 +**期望状态(Desired State)**。 @@ -187,9 +187,8 @@ In the `.yaml` file for the Kubernetes object you want to create, you'll need to * `spec` - 你所期望的该对象的状态 对每个 Kubernetes 对象而言,其 `spec` 之精确格式都是不同的,包含了特定于该对象的嵌套字段。 diff --git a/content/zh-cn/docs/tasks/configure-pod-container/assign-memory-resource.md b/content/zh-cn/docs/tasks/configure-pod-container/assign-memory-resource.md index 6aa46f560ea1d..71ac184837ddc 100644 --- a/content/zh-cn/docs/tasks/configure-pod-container/assign-memory-resource.md +++ b/content/zh-cn/docs/tasks/configure-pod-container/assign-memory-resource.md @@ -17,7 +17,7 @@ This page shows how to assign a memory *request* and a memory *limit* to a Container. A Container is guaranteed to have as much memory as it requests, but is not allowed to use more memory than its limit. --> -此页面展示如何将内存 *请求* (request)和内存 *限制* (limit)分配给一个容器。 +此页面展示如何将内存**请求**(request)和内存**限制**(limit)分配给一个容器。 我们保障容器拥有它请求数量的内存,但不允许使用超过限制数量的内存。 ## {{% heading "prerequisites" %}} From 8d10e62c82afa5248301dbac525a90bba72081e8 Mon Sep 17 00:00:00 2001 From: Michael Date: Thu, 28 Jul 2022 15:44:37 +0800 Subject: [PATCH 263/292] [zh-cn] updated /concepts/architecture/nodes.md --- .../zh-cn/docs/concepts/architecture/nodes.md | 88 +++++++++---------- 1 file changed, 44 insertions(+), 44 deletions(-) diff --git a/content/zh-cn/docs/concepts/architecture/nodes.md b/content/zh-cn/docs/concepts/architecture/nodes.md index b50ded16816cd..90634a7d9bbd9 100644 --- a/content/zh-cn/docs/concepts/architecture/nodes.md +++ b/content/zh-cn/docs/concepts/architecture/nodes.md @@ -17,13 +17,13 @@ weight: 10 Kubernetes 通过将容器放入在节点(Node)上运行的 Pod 中来执行你的工作负载。 节点可以是一个虚拟机或者物理机器,取决于所在的集群配置。 -每个节点包含运行 {{< glossary_tooltip text="Pods" term_id="pod" >}} 所需的服务; +每个节点包含运行 {{< glossary_tooltip text="Pod" term_id="pod" >}} 所需的服务; 这些节点由 {{< glossary_tooltip text="控制面" term_id="control-plane" >}} 负责管理。 -通常集群中会有若干个节点;而在一个学习用或者资源受限的环境中,你的集群中也可能 -只有一个节点。 +通常集群中会有若干个节点;而在一个学习所用或者资源受限的环境中,你的集群中也可能只有一个节点。 节点上的[组件](/zh-cn/docs/concepts/overview/components/#node-components)包括 {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}、 @@ -50,7 +49,7 @@ Kubernetes 通过将容器放入在节点(Node)上运行的 Pod 中来执行 There are two main ways to have Nodes added to the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}: 1. The kubelet on a node self-registers to the control plane -2. You, or another human user, manually add a Node object +2. You (or another human user) manually add a Node object After you create a Node {{< glossary_tooltip text="object" term_id="object" >}}, or the kubelet on a node self-registers, the control plane checks whether the new Node object is @@ -61,7 +60,7 @@ valid. For example, if you try to create a Node from the following JSON manifest 向 {{< glossary_tooltip text="API 服务器" term_id="kube-apiserver" >}}添加节点的方式主要有两种: 1. 节点上的 `kubelet` 向控制面执行自注册; -2. 你,或者别的什么人,手动添加一个 Node 对象。 +2. 你(或者别的什么人)手动添加一个 Node 对象。 在你创建了 Node {{< glossary_tooltip text="对象" term_id="object" >}}或者节点上的 `kubelet` 执行了自注册操作之后,控制面会检查新的 Node 对象是否合法。 @@ -83,14 +82,14 @@ valid. 
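Returning to the memory request and limit mentioned in the assign-memory-resource change above, a container-level sketch with arbitrary values might look like this; the Pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo           # arbitrary example name
spec:
  containers:
  - name: app
    image: nginx              # any image serves for illustration
    resources:
      requests:
        memory: "100Mi"       # the container is guaranteed at least this much memory
      limits:
        memory: "200Mi"       # the container may not use more than this
```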
For example, if you try to create a Node from the following JSON manifest Kubernetes 会在内部创建一个 Node 对象作为节点的表示。Kubernetes 检查 `kubelet` 向 API 服务器注册节点时使用的 `metadata.name` 字段是否匹配。 如果节点是健康的(即所有必要的服务都在运行中),则该节点可以用来运行 Pod。 -否则,直到该节点变为健康之前,所有的集群活动都会忽略该节点。 +否则,直到该节点变为健康之前,所有的集群活动都会忽略该节点。 {{< note >}} -启用[Node 鉴权模式](/zh-cn/docs/reference/access-authn-authz/node/)和 +启用 [Node 鉴权模式](/zh-cn/docs/reference/access-authn-authz/node/)和 [NodeRestriction 准入插件](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#noderestriction)时, 仅授权 `kubelet` 创建或修改其自己的节点资源。 @@ -216,7 +215,7 @@ You can create and modify Node objects using When you want to create Node objects manually, set the kubelet flag `--register-node=false`. You can modify Node objects regardless of the setting of `--register-node`. -For example, you can set labels on an existing Node, or mark it unschedulable. +For example, you can set labels on an existing Node or mark it unschedulable. --> ### 手动节点管理 {#manual-node-administration} @@ -226,15 +225,15 @@ For example, you can set labels on an existing Node, or mark it unschedulable. 如果你希望手动创建节点对象时,请设置 kubelet 标志 `--register-node=false`。 你可以修改 Node 对象(忽略 `--register-node` 设置)。 -例如,修改节点上的标签或标记其为不可调度。 +例如,你可以修改节点上的标签或并标记其为不可调度。 {{< table caption = "节点状况及每种状况适用场景的描述" >}} @@ -364,7 +363,7 @@ Condition,被保护起来的节点在其规约中被标记为不可调度(Un In the Kubernetes API, a node's condition is represented as part of the `.status` of the Node resource. For example, the following JSON structure describes a healthy node: --> -在 Kubernetes API 中,节点的状况表示节点资源中`.status` 的一部分。 +在 Kubernetes API 中,节点的状况表示节点资源中`.status` 的一部分。 例如,以下 JSON 结构描述了一个健康节点: ```json @@ -393,7 +392,7 @@ for all Pods assigned to that node. The default eviction timeout duration is `pod-eviction-timeout` 值(一个传递给 {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}} 的参数),[节点控制器](#node-controller)会对节点上的所有 Pod 触发 -{{< glossary_tooltip text="API-发起的驱逐" term_id="api-eviction" >}}。 +{{< glossary_tooltip text="API 发起的驱逐" term_id="api-eviction" >}}。 默认的逐出超时时长为 **5 分钟**。 节点控制器在确认 Pod 在集群中已经停止运行前,不会强制删除它们。 @@ -461,7 +460,8 @@ Node that is available to be consumed by normal Pods. 可以在学习如何在节点上[预留计算资源](/zh-cn/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable) 的时候了解有关容量和可分配资源的更多信息。 @@ -505,7 +505,7 @@ Kubernetes 节点发送的心跳帮助你的集群确定每个节点的可用性 --> * 更新节点的 `.status` * `kube-node-lease` {{}}中的 - [Lease(租约)](/docs/reference/kubernetes-api/cluster-resources/lease-v1/)对象。 + [Lease(租约)](/zh-cn/docs/reference/kubernetes-api/cluster-resources/lease-v1/)对象。 每个节点都有一个关联的 Lease 对象。 @@ -586,7 +586,7 @@ This period can be configured using the `--node-monitor-period` flag on the 第三个是监控节点的健康状况。节点控制器负责: - 在节点不可达的情况下,在 Node 的 `.status` 中更新 `Ready` 状况。 - 在这种情况下,节点控制器将 NodeReady 状况更新为 `Unknown` 。 + 在这种情况下,节点控制器将 NodeReady 状况更新为 `Unknown`。 - 如果节点仍然无法访问:对于不可达节点上的所有 Pod 触发 [API 发起的逐出](/zh-cn/docs/concepts/scheduling-eviction/api-eviction/)操作。 默认情况下,节点控制器在将节点标记为 `Unknown` 后等待 5 分钟提交第一个驱逐请求。 @@ -598,7 +598,7 @@ This period can be configured using the `--node-monitor-period` flag on the ### Rate limits on eviction In most cases, the node controller limits the eviction rate to -`-node-eviction-rate` (default 0.1) per second, meaning it won't evict pods +`--node-eviction-rate` (default 0.1) per second, meaning it won't evict pods from more than 1 node per 10 seconds. 
--> ### 逐出速率限制 {#rate-limits-on-eviction} @@ -627,7 +627,7 @@ the same time: - 如果不健康节点的比例超过 `--unhealthy-zone-threshold` (默认为 0.55), 驱逐速率将会降低。 - 如果集群较小(意即小于等于 `--large-cluster-size-threshold` 个节点 - 默认为 50), - 驱逐操作将会停止。 + 驱逐操作将会停止。 - 否则驱逐速率将降为每秒 `--secondary-node-eviction-rate` 个(默认为 0.01)。 @@ -743,7 +743,7 @@ Kubelet ensures that pods follow the normal [pod termination process](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) during the node shutdown. --> -kubelet 会尝试检测节点系统关闭事件并终止在节点上运行的 Pods。 +kubelet 会尝试检测节点系统关闭事件并终止在节点上运行的所有 Pod。 在节点终止期间,kubelet 保证 Pod 遵从常规的 [Pod 终止流程](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)。 @@ -763,7 +763,7 @@ Graceful node shutdown is controlled with the `GracefulNodeShutdown` enabled by default in 1.21. --> 节点体面关闭特性受 `GracefulNodeShutdown` -[特性门控](/docs/reference/command-line-tools-reference/feature-gates/)控制, +[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)控制, 在 1.21 版本中是默认启用的。 为了缓解上述情况,用户可以手动将具有 `NoExecute` 或 `NoSchedule` 效果的 `node kubernetes.io/out-of-service` 污点添加到节点上,标记其无法提供服务。 -如果在 `kube-controller-manager` 上启用了 `NodeOutOfServiceVolumeDetach` +如果在 `kube-controller-manager` 上启用了 `NodeOutOfServiceVolumeDetach` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/), 并且节点被通过污点标记为无法提供服务,如果节点 Pod 上没有设置对应的容忍度, 那么这样的 Pod 将被强制删除,并且该在节点上被终止的 Pod 将立即进行卷分离操作。 @@ -1058,7 +1058,7 @@ their respective shutdown periods. --> 如果此功能特性被启用,但没有提供配置数据,则不会出现排序操作。 -使用此功能特性需要启用 `GracefulNodeShutdownBasedOnPodPriority` +使用此功能特性需要启用 `GracefulNodeShutdownBasedOnPodPriority` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/), 并将 [kubelet 配置](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/) 中的 `shutdownGracePeriodByPodPriority` 设置为期望的配置, @@ -1074,7 +1074,7 @@ the feature is Beta and is enabled by default. {{< note >}} 在节点体面关闭期间考虑 Pod 优先级的能力是作为 Kubernetes v1.23 中的 Alpha 功能引入的。 在 Kubernetes {{< skew currentVersion >}} 中该功能是 Beta 版,默认启用。 -{{< /note >}} +{{< /note >}} -**容忍度(Toleration)** 是应用于 Pod 上的。容忍度允许调度器调度带有对应污点的节点。 +**容忍度(Toleration)** 是应用于 Pod 上的。容忍度允许调度器调度带有对应污点的 Pod。 容忍度允许调度但并不保证调度:作为其功能的一部分, 调度器也会[评估其他参数](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/)。 @@ -89,7 +90,7 @@ tolerations: ``` 这里是一个使用了容忍度的 Pod: @@ -104,7 +105,7 @@ The default value for `operator` is `Equal`. A toleration "matches" a taint if the keys are the same and the effects are the same, and: * the `operator` is `Exists` (in which case no `value` should be specified), or -* the `operator` is `Equal` and the `value`s are equal +* the `operator` is `Equal` and the `value`s are equal. --> 一个容忍度和一个污点相“匹配”是指它们有一样的键名和效果,并且: @@ -112,6 +113,7 @@ A toleration "matches" a taint if the keys are the same and the effects are the * 如果 `operator` 是 `Equal` ,则它们的 `value` 应该相等 {{< note >}} + @@ -147,7 +149,7 @@ remaining un-ignored taints have the indicated effects on the pod. In particular 你可以给一个节点添加多个污点,也可以给一个 Pod 添加多个容忍度设置。 Kubernetes 处理多个污点和容忍度的过程就像一个过滤器:从一个节点的所有污点开始遍历, 过滤掉那些 Pod 中存在与之相匹配的容忍度的污点。余下未被过滤的污点的 effect 值决定了 -Pod 是否会被分配到该节点,特别是以下情况: +Pod 是否会被分配到该节点。需要注意以下情况: -假定有一个 Pod,它有两个容忍度: +假定某个 Pod 有两个容忍度: ```yaml tolerations: @@ -207,7 +209,7 @@ one of the three that is not tolerated by the pod. 
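A Pod that tolerates a `NoSchedule` taint under the matching rules above might be declared like this; the key and effect are placeholders that must mirror whatever taint was actually applied to the node:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: "example-key"        # must equal the taint's key
    operator: "Exists"        # with Exists, no value is given
    effect: "NoSchedule"      # must equal the taint's effect, or be omitted to match any effect
```

With `operator: "Equal"`, a `value` field matching the taint's value would also be required.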
-* **专用节点**:如果你想将某些节点专门分配给特定的一组用户使用,你可以给这些节点添加一个污点(即, +* **专用节点**:如果想将某些节点专门分配给特定的一组用户使用,你可以给这些节点添加一个污点(即, `kubectl taint nodes nodename dedicated=groupName:NoSchedule`), 然后给这组用户的 Pod 添加一个相对应的容忍度 (通过编写一个自定义的[准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/), @@ -282,7 +284,7 @@ toleration to pods that use the special hardware. As in the dedicated nodes use it is probably easiest to apply the tolerations using a custom [admission controller](/docs/reference/access-authn-authz/admission-controllers/). For example, it is recommended to use [Extended -Resources](/docs/concepts/configuration/manage-compute-resources-container/#extended-resources) +Resources](/docs/concepts/configuration/manage-resources-containers/#extended-resources) to represent the special hardware, taint your special hardware nodes with the extended resource name and run the [ExtendedResourceToleration](/docs/reference/access-authn-authz/admission-controllers/#extendedresourcetoleration) @@ -326,7 +328,7 @@ when there are node problems, which is described in the next section. {{< feature-state for_k8s_version="v1.18" state="stable" >}} 前文提到过污点的效果值 `NoExecute` 会影响已经在节点上运行的 Pod,如下 -* 如果 Pod 不能忍受这类污点,Pod 会马上被驱逐 +* 如果 Pod 不能忍受这类污点,Pod 会马上被驱逐。 * 如果 Pod 能够忍受这类污点,但是在容忍度定义中没有指定 `tolerationSeconds`, 则 Pod 还会一直在这个节点上运行。 * 如果 Pod 能够忍受这类污点,而且指定了 `tolerationSeconds`, 则 Pod 还能在这个节点上继续运行这个指定的时间长度。 @@ -457,7 +459,8 @@ This ensures that DaemonSet pods are never evicted due to these problems. ## Taint Nodes by Condition The control plane, using the node {{}}, -automatically creates taints with a `NoSchedule` effect for [node conditions](/docs/concepts/scheduling-eviction/node-pressure-eviction/#node-conditions). +automatically creates taints with a `NoSchedule` effect for +[node conditions](/docs/concepts/scheduling-eviction/node-pressure-eviction/#node-conditions). --> ## 基于节点状态添加污点 {#taint-nodes-by-condition} @@ -496,9 +499,8 @@ onto the affected node. 视为能够应对内存压力,而新创建的 `BestEffort` Pod 不会被调度到受影响的节点上。 -DaemonSet 控制器自动为所有守护进程添加如下 `NoSchedule` 容忍度以防 DaemonSet 崩溃: +DaemonSet 控制器自动为所有守护进程添加如下 `NoSchedule` 容忍度,以防 DaemonSet 崩溃: * `node.kubernetes.io/memory-pressure` * `node.kubernetes.io/disk-pressure` @@ -525,7 +527,8 @@ arbitrary tolerations to DaemonSets. ## {{% heading "whatsnext" %}} * 阅读[节点压力驱逐](/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction/), From ef4f51348f1499f175ef534574ba0b3a1916911f Mon Sep 17 00:00:00 2001 From: Michael Date: Thu, 28 Jul 2022 16:26:41 +0800 Subject: [PATCH 265/292] [zh-cn] updated /overview/working-with-objects/owners-dependents.md --- .../working-with-objects/owners-dependents.md | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/content/zh-cn/docs/concepts/overview/working-with-objects/owners-dependents.md b/content/zh-cn/docs/concepts/overview/working-with-objects/owners-dependents.md index a02810877f3e5..5ae4b8dddc293 100644 --- a/content/zh-cn/docs/concepts/overview/working-with-objects/owners-dependents.md +++ b/content/zh-cn/docs/concepts/overview/working-with-objects/owners-dependents.md @@ -11,7 +11,6 @@ weight: 60 - ## 属主关系与 Finalizer {#ownership-and-finalizers} -当你告诉 Kubernetes 删除一个资源,API 服务器允许管理控制器处理该资源的任何 +当你告诉 Kubernetes 删除一个资源,API 服务器允许管理控制器处理该资源的任何 [Finalizer 规则](/zh-cn/docs/concepts/overview/working-with-objects/finalizers/)。 -{{}} +{{}} 防止意外删除你的集群所依赖的、用于正常运作的资源。 例如,如果你试图删除一个仍被 Pod 使用的 `PersistentVolume`,该资源不会被立即删除, 因为 `PersistentVolume` 有 `kubernetes.io/pv-protection` Finalizer。 @@ -152,7 +154,7 @@ object. 
--> 当你使用[前台或孤立级联删除](/zh-cn/docs/concepts/architecture/garbage-collection/#cascading-deletion)时, Kubernetes 也会向属主资源添加 Finalizer。 -在前台删除中,会添加 `foreground` Finalizer,这样控制器必须在删除了拥有 +在前台删除中,会添加 `foreground` Finalizer,这样控制器必须在删除了拥有 `ownerReferences.blockOwnerDeletion=true` 的附属资源后,才能删除属主对象。 如果你指定了孤立删除策略,Kubernetes 会添加 `orphan` Finalizer, 这样控制器在删除属主对象后,会忽略附属资源。 @@ -166,4 +168,4 @@ Kubernetes 也会向属主资源添加 Finalizer。 --> * 了解更多关于 [Kubernetes Finalizer](/zh-cn/docs/concepts/overview/working-with-objects/finalizers/)。 * 了解关于[垃圾收集](/zh-cn/docs/concepts/architecture/garbage-collection)。 -* 阅读[对象元数据](/docs/reference/kubernetes-api/common-definitions/object-meta/#System)的 API 参考文档。 +* 阅读[对象元数据](/zh-cn/docs/reference/kubernetes-api/common-definitions/object-meta/#System)的 API 参考文档。 From 9163cfc19c643c2fbd50f5f8030bb440f584415e Mon Sep 17 00:00:00 2001 From: Michael Date: Thu, 28 Jul 2022 14:03:13 +0800 Subject: [PATCH 266/292] [zh-cn] updated /overview/working-with-objects/kubernetes-objects.md --- .../kubernetes-objects.md | 23 ++++++++++--------- 1 file changed, 12 insertions(+), 11 deletions(-) diff --git a/content/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects.md b/content/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects.md index 6a2729b88cfc3..bf52f1b340e1a 100644 --- a/content/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects.md +++ b/content/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects.md @@ -36,13 +36,13 @@ entities to represent the state of your cluster. Specifically, they can describe --> ## 理解 Kubernetes 对象 {#kubernetes-objects} -在 Kubernetes 系统中,**Kubernetes 对象**是持久化的实体。 +在 Kubernetes 系统中,**Kubernetes 对象** 是持久化的实体。 Kubernetes 使用这些实体去表示整个集群的状态。 -比較特别地是,它们描述了如下信息: +比较特别地是,它们描述了如下信息: * 哪些容器化应用正在运行(以及在哪些节点上运行) * 可以被应用使用的资源 -* 关于应用运行时表现的策略,比如重启策略、升级策略,以及容错策略 +* 关于应用运行时表现的策略,比如重启策略、升级策略以及容错策略 -Kubernetes 对象是“目标性记录”——一旦创建对象,Kubernetes 系统将不断工作以确保对象存在。 +Kubernetes 对象是“目标性记录” —— 一旦创建该对象,Kubernetes 系统将不断工作以确保该对象存在。 通过创建对象,你就是在告知 Kubernetes 系统,你想要的集群工作负载状态看起来应是什么样子的, 这就是 Kubernetes 集群所谓的 **期望状态(Desired State)**。 -操作 Kubernetes 对象 —— 无论是创建、修改,或者删除 —— 需要使用 +操作 Kubernetes 对象 —— 无论是创建、修改或者删除 —— 需要使用 [Kubernetes API](/zh-cn/docs/concepts/overview/kubernetes-api)。 比如,当使用 `kubectl` 命令行接口(CLI)时,CLI 会调用必要的 Kubernetes API; 也可以在程序中使用[客户端库](/zh-cn/docs/reference/using-api/client-libraries/), @@ -109,8 +109,8 @@ a replacement instance. 
当创建 Deployment 时,可能会去设置 Deployment 的 `spec`,以指定该应用要有 3 个副本运行。 Kubernetes 系统读取 Deployment 的 `spec`, 并启动我们所期望的应用的 3 个实例 —— 更新状态以与规约相匹配。 -如果这些实例中有的失败了(一种状态变更),Kubernetes 系统会通过执行修正操作 -来响应 `spec` 和状态间的不一致 —— 意味着它会启动一个新的实例来替换。 +如果这些实例中有的失败了(一种状态变更),Kubernetes 系统会通过执行修正操作来响应 +`spec` 和状态间的不一致 —— 意味着它会启动一个新的实例来替换。 例如,参阅 Pod API 参考文档中 -[`spec` 字段](/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec)。 +[`spec` 字段](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec)。 对于每个 Pod,其 `.spec` 字段设置了 Pod 及其期望状态(例如 Pod 中每个容器的容器镜像名称)。 另一个对象规约的例子是 StatefulSet API 中的 -[`spec` 字段](/docs/reference/kubernetes-api/workload-resources/stateful-set-v1/#StatefulSetSpec)。 +[`spec` 字段](/zh-cn/docs/reference/kubernetes-api/workload-resources/stateful-set-v1/#StatefulSetSpec)。 对于 StatefulSet 而言,其 `.spec` 字段设置了 StatefulSet 及其期望状态。 -在 StatefulSet 的 `.spec` 内,有一个为 Pod 对象提供的[模板](/zh-cn/docs/concepts/workloads/pods/#pod-templates)。该模板描述了 StatefulSet 控制器为了满足 StatefulSet 规约而要创建的 Pod。 +在 StatefulSet 的 `.spec` 内,有一个为 Pod 对象提供的[模板](/zh-cn/docs/concepts/workloads/pods/#pod-templates)。 +该模板描述了 StatefulSet 控制器为了满足 StatefulSet 规约而要创建的 Pod。 不同类型的对象可以由不同的 `.status` 信息。API 参考页面给出了 `.status` 字段的详细结构, 以及针对不同类型 API 对象的具体内容。 From 70c2e3004cdc62f950b564a08b5d1dc1dc1711fc Mon Sep 17 00:00:00 2001 From: Michael Date: Thu, 28 Jul 2022 17:18:23 +0800 Subject: [PATCH 267/292] [zh-cn] updated /extend-kubernetes/api-extension/apiserver-aggregation.md --- .../api-extension/apiserver-aggregation.md | 37 +++++++++---------- 1 file changed, 17 insertions(+), 20 deletions(-) diff --git a/content/zh-cn/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md b/content/zh-cn/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md index 52a0c64e7e39e..08659e89a4395 100644 --- a/content/zh-cn/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md +++ b/content/zh-cn/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md @@ -1,11 +1,11 @@ --- -title: 通过聚合层扩展 Kubernetes API +title: Kubernetes API 聚合层 content_type: concept weight: 20 --- -使用聚合层(Aggregation Layer),用户可以通过额外的 API 扩展 Kubernetes, +使用聚合层(Aggregation Layer),用户可以通过附加的 API 扩展 Kubernetes, 而不局限于 Kubernetes 核心 API 提供的功能。 +这里的附加 API 可以是现成的解决方案,比如 +[metrics server](https://github.com/kubernetes-sigs/metrics-server), +或者你自己开发的 API。 -这里的附加 API 可以是现成的解决方案比如 -[metrics server](https://github.com/kubernetes-sigs/metrics-server), -或者你自己开发的 API。 - 聚合层不同于 [定制资源(Custom Resources)](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/)。 后者的目的是让 {{< glossary_tooltip term_id="kube-apiserver" text="kube-apiserver" >}} -能够认识新的对象类别(Kind)。 +能够识别新的对象类别(Kind)。 ## 聚合层 {#aggregation-layer} 聚合层在 kube-apiserver 进程内运行。在扩展资源注册之前,聚合层不做任何事情。 -要注册 API,用户必须添加一个 APIService 对象,用它来“申领” Kubernetes API 中的 URL 路径。 -自此以后,聚合层将会把发给该 API 路径的所有内容(例如 `/apis/myextension.mycompany.io/v1/…`) +要注册 API,你可以添加一个 **APIService** 对象,用它来 “申领” Kubernetes API 中的 URL 路径。 +自此以后,聚合层将把发给该 API 路径的所有内容(例如 `/apis/myextension.mycompany.io/v1/…`) 转发到已注册的 APIService。 -APIService 的最常见实现方式是在集群中某 Pod 内运行 *扩展 API 服务器*。 +APIService 的最常见实现方式是在集群中某 Pod 内运行 **扩展 API 服务器**。 如果你在使用扩展 API 服务器来管理集群中的资源,该扩展 API 服务器(也被写成“extension-apiserver”) 一般需要和一个或多个{{< glossary_tooltip text="控制器" term_id="controller" >}}一起使用。 apiserver-builder 库同时提供构造扩展 API 服务器和控制器框架代码。 - -### 反应延迟 {#response-latency} +### 响应延迟 {#response-latency} 扩展 API 服务器与 kube-apiserver 之间需要存在低延迟的网络连接。 发现请求需要在五秒钟或更短的时间内完成到 kube-apiserver 的往返。 @@ -77,8 +74,8 @@ If your extension API server cannot achieve that latency 
requirement, consider m ## {{% heading "whatsnext" %}} + 输出类似于: + ```none INFO Kubernetes file "frontend-service.yaml" created INFO Kubernetes file "frontend-service.yaml" created @@ -205,7 +210,7 @@ you need is an existing `docker-compose.yml` file. ``` ```bash - kubectl apply -f frontend-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml, + kubectl apply -f frontend-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml,redis-master-deployment.yaml,redis-slave-deployment.yaml ``` {{< note >}} -如果使用 ``oc create -f`` 手动推送 Openshift 工件,则需要确保在构建配置工件之前推送 -imagestream 工件,以解决 Openshift 的这个问题: https://github.com/openshift/origin/issues/4518 。 +如果使用 ``oc create -f`` 手动推送 OpenShift 工件,则需要确保在构建配置工件之前推送 +imagestream 工件,以解决 OpenShift 的这个问题: https://github.com/openshift/origin/issues/4518 。 {{< /note >}} `*-replicationcontroller.yaml` 文件包含 Replication Controller 对象。 如果你想指定副本数(默认为 1),可以使用 `--replicas` 参数: @@ -503,7 +508,7 @@ INFO Kubernetes file "web-daemonset.yaml" created `*-daemonset.yaml` 文件包含 DaemonSet 对象。 From a1c0b729e00f30e50604fb01c881248810e50016 Mon Sep 17 00:00:00 2001 From: Michael Date: Thu, 28 Jul 2022 18:47:56 +0800 Subject: [PATCH 269/292] [zh-cn] updated /config-api/kube-proxy-config.v1alpha1.md --- .../config-api/kube-proxy-config.v1alpha1.md | 19 +++++++++---------- 1 file changed, 9 insertions(+), 10 deletions(-) diff --git a/content/zh-cn/docs/reference/config-api/kube-proxy-config.v1alpha1.md b/content/zh-cn/docs/reference/config-api/kube-proxy-config.v1alpha1.md index d0c388f054b77..be58a4d51fe97 100644 --- a/content/zh-cn/docs/reference/config-api/kube-proxy-config.v1alpha1.md +++ b/content/zh-cn/docs/reference/config-api/kube-proxy-config.v1alpha1.md @@ -2,7 +2,6 @@ title: kube-proxy 配置 (v1alpha1) content_type: tool-reference package: kubeproxy.config.k8s.io/v1alpha1 -auto_generated: true --- -

healthzBindAddress 字段是健康状态检查服务器提供服务时所使用的的 IP 地址和端口, +

healthzBindAddress 字段是健康状态检查服务器提供服务时所使用的 IP 地址和端口, 默认设置为 '0.0.0.0:10256'。

clusterCIDR [必需]
@@ -192,7 +191,7 @@ the range [-1000, 1000] in order to proxy service traffic. If unspecified (0-0) then ports will be randomly chosen. -->

portRange 字段是主机端口的范围,形式为 ‘beginPort-endPort’(包含边界), - 用来设置代理服务所使用的端口。如果未指定(即‘0-0’),则代理服务会随机选择端口号。

+ 用来设置代理服务所使用的端口。如果未指定(即 ‘0-0’),则代理服务会随机选择端口号。

udpIdleTimeout [必需]
@@ -244,8 +243,8 @@ An empty string slice is meant to select all network interfaces.

nodePortAddresses 字段是 kube-proxy 进程的 --nodeport-addresses 命令行参数设置。 此值必须是合法的 IP 段。所给的 IP 段会作为参数来选择 NodePort 类型服务所使用的接口。 - 如果有人希望将本地主机(Localhost)上的服务暴露给本地访问,同时暴露在某些其他网络接口上 - 以实现某种目标,可以使用 IP 段的列表。 + 如果有人希望将本地主机(Localhost)上的服务暴露给本地访问, + 同时暴露在某些其他网络接口上以实现某种目标,可以使用 IP 段的列表。 如果此值被设置为 "127.0.0.0/8",则 kube-proxy 将仅为 NodePort 服务选择本地回路(loopback)接口。 如果此值被设置为非零的 IP 段,则 kube-proxy 会对 IP 作过滤,仅使用适用于当前节点的 IP 地址。 @@ -270,7 +269,7 @@ An empty string slice is meant to select all network interfaces. ShowHiddenMetricsForVersion is the version for which you want to show hidden metrics. -->

showHiddenMetricsForVersion 字段给出的是一个 Kubernetes 版本号字符串, - 用来设置你希望显示隐藏度量值的版本。

+ 用来设置你希望显示隐藏指标的版本。

detectLocalMode [必需]
@@ -1088,10 +1087,10 @@ during leader election cycles. LoggingConfiguration 包含日志选项。 -参考 [Logs Options](https://github.com/kubernetes/component-base/blob/master/logs/options.go) 以了解更多信息。 +参考 [Logs Options](https://git.k8s.io/component-base/logs/api/v1/options.go) 以了解更多信息。 From 86b42664eed2c249ca88db4428aeb4fef88fdcbd Mon Sep 17 00:00:00 2001 From: "yanrong.shi" Date: Sun, 24 Jul 2022 18:49:11 +0800 Subject: [PATCH 270/292] Update managing-secret-using-config-file.md --- .../managing-secret-using-config-file.md | 72 +++++++++++++------ 1 file changed, 51 insertions(+), 21 deletions(-) diff --git a/content/zh-cn/docs/tasks/configmap-secret/managing-secret-using-config-file.md b/content/zh-cn/docs/tasks/configmap-secret/managing-secret-using-config-file.md index 7b796ebf90dba..8fbd7bae829a8 100644 --- a/content/zh-cn/docs/tasks/configmap-secret/managing-secret-using-config-file.md +++ b/content/zh-cn/docs/tasks/configmap-secret/managing-secret-using-config-file.md @@ -19,10 +19,12 @@ description: Creating Secret objects using resource configuration file. - + ## 创建配置文件 {#create-the-config-file} - @@ -50,7 +52,9 @@ the strings to base64 as follows: echo -n 'admin' | base64 ``` - + 输出类似于: ``` @@ -61,14 +65,18 @@ YWRtaW4= echo -n '1f2d1e2e67df' | base64 ``` - + 输出类似于: ``` MWYyZDFlMmU2N2Rm ``` - + 编写一个 Secret 配置文件,如下所示: ```yaml @@ -86,7 +94,7 @@ data: Note that the name of a Secret object must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). --> -注意,Secret 对象的名称必须是有效的 [DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). +注意,Secret 对象的名称必须是有效的 [DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 {{< note >}} 对于某些场景,你可能希望使用 `stringData` 字段。 -这字段可以将一个非 base64 编码的字符串直接放入 Secret 中, +这个字段可以将一个非 base64 编码的字符串直接放入 Secret 中, 当创建或更新该 Secret 时,此字段将被编码。 + 例如,如果你的应用程序使用以下配置文件: ```yaml @@ -130,7 +140,9 @@ username: "" password: "" ``` - + 你可以使用以下定义将其存储在 Secret 中: ```yaml @@ -146,24 +158,32 @@ stringData: password: ``` - + ## 创建 Secret 对象 {#create-the-secret-object} - + 现在使用 [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply) 创建 Secret: ```shell kubectl apply -f ./secret.yaml ``` - + 输出类似于: ``` secret/mysecret created ``` - + ## 检查 Secret {#check-the-secret} + 输出类似于: ```yaml @@ -204,7 +226,7 @@ To check the actual content of the encoded data, please refer to --> 命令 `kubectl get` 和 `kubectl describe` 默认不显示 `Secret` 的内容。 这是为了防止 `Secret` 意外地暴露给旁观者或者保存在终端日志中。 -检查编码数据的实际内容,请参考[解码 secret](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl/#decoding-secret). 
+检查编码数据的实际内容,请参考[解码 secret](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl/#decoding-secret)。 + 结果有以下 Secret: ```yaml @@ -242,13 +266,19 @@ metadata: type: Opaque ``` - + 其中 `YWRtaW5pc3RyYXRvcg==` 解码成 `administrator`。 - + ## 清理 {#clean-up} - + 删除你创建的 Secret: ```shell From 8542a1bb4bbdceb5b3c90acaf3dc63fc850644f9 Mon Sep 17 00:00:00 2001 From: Michael Date: Thu, 28 Jul 2022 20:35:50 +0800 Subject: [PATCH 271/292] [zh-cn] updated /tools/kubeadm/control-plane-flags.md --- .../tools/kubeadm/control-plane-flags.md | 39 +++++++++---------- 1 file changed, 19 insertions(+), 20 deletions(-) diff --git a/content/zh-cn/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md b/content/zh-cn/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md index 5fee60793a186..9aaa23c7ca4d9 100644 --- a/content/zh-cn/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md +++ b/content/zh-cn/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md @@ -4,13 +4,11 @@ content_type: concept weight: 40 --- @@ -29,8 +27,8 @@ For more details on each field in the configuration you can navigate to our 你可以使用 `KubeletConfiguration` 和 `KubeProxyConfiguration` 结构分别定制 kubelet 和 kube-proxy 组件。 所有这些选项都可以通过 kubeadm 配置 API 实现。 -有关配置中的每个字段的详细信息,你可以导航到我们的 -[API 参考页面](/docs/reference/config-api/kubeadm-config.v1beta3/) 。 +有关配置中的每个字段的详细信息,你可以导航到我们的 +[API 参考页面](/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3/) 。 {{< note >}} kubeadm 目前不支持对 CoreDNS 部署进行定制。 -你必须手动更新 `kube-system/coredns` {{< glossary_tooltip text="ConfigMap" term_id="configmap" >}} -并在更新后重新创建 CoreDNS {{< glossary_tooltip text="Pods" term_id="pod" >}}。 +你必须手动更新 `kube-system/coredns` {{< glossary_tooltip text="ConfigMap" term_id="configmap" >}} +并在更新后重新创建 CoreDNS {{< glossary_tooltip text="Pod" term_id="pod" >}}。 或者,你可以跳过默认的 CoreDNS 部署并部署你自己的 CoreDNS 变种。 -有关更多详细信息,请参阅[在 kubeadm 中使用 init phases](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-phases). +有关更多详细信息,请参阅[在 kubeadm 中使用 init phase](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-phases). {{< /note >}} {{< note >}} @@ -94,8 +92,8 @@ To override a flag for a control plane component: {{< note >}} 你可以通过运行 `kubeadm config print init-defaults` 并将输出保存到你所选的文件中, 以默认值形式生成 `ClusterConfiguration` 对象。 @@ -129,12 +127,13 @@ To workaround that you must use [patches](#patches). 
-有关详细信息,请参阅 [kube-apiserver 参考文档](/docs/reference/command-line-tools-reference/kube-apiserver/)。 +有关详细信息,请参阅 [kube-apiserver 参考文档](/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/)。 使用示例: + ```yaml apiVersion: kubeadm.k8s.io/v1beta3 kind: ClusterConfiguration @@ -154,12 +153,13 @@ apiServer: -有关详细信息,请参阅 [kube-controller-manager 参考文档](/docs/reference/command-line-tools-reference/kube-controller-manager/)。 +有关详细信息,请参阅 [kube-controller-manager 参考文档](/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager/)。 使用示例: + ```yaml apiVersion: kubeadm.k8s.io/v1beta3 kind: ClusterConfiguration @@ -178,12 +178,13 @@ controllerManager: -有关详细信息,请参阅 [kube-scheduler 参考文档](/docs/reference/command-line-tools-reference/kube-scheduler/)。 +有关详细信息,请参阅 [kube-scheduler 参考文档](/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/)。 使用示例: + ```yaml apiVersion: kubeadm.k8s.io/v1beta3 kind: ClusterConfiguration @@ -222,8 +223,6 @@ etcd: 补丁目录必须包含名为 `target[suffix][+patchtype].extension` 的文件。 -例如,`kube-apiserver0+merge.yaml` 或只是 `etcd.json`。 +例如,`kube-apiserver0+merge.yaml` 或只是 `etcd.json`。 - `target` 可以是 `kube-apiserver`、`kube-controller-manager`、`kube-scheduler` 和 `etcd` 之一。 -- `patchtype` 可以是 `strategy`、`merge` 或 `json` 之一,并且这些必须匹配 +- `patchtype` 可以是 `strategy`、`merge` 或 `json` 之一,并且这些必须匹配 [kubectl 支持](/zh-cn/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch) 的补丁格式。 默认补丁类型是 `strategic` 的。 - `extension` 必须是 `json` 或 `yaml`。 @@ -308,7 +307,7 @@ To customize the kubelet you can add a `KubeletConfiguration` next to the `Clust --> ## 自定义 kubelet {#customizing-the-kubelet} -要自定义 kubelet,你可以在同一配置文件中的 `ClusterConfiguration` 或 `InitConfiguration` +要自定义 kubelet,你可以在同一配置文件中的 `ClusterConfiguration` 或 `InitConfiguration` 之外添加一个 `KubeletConfiguration`,用 `---` 分隔。 然后可以将此文件传递给 `kubeadm init`。 @@ -341,10 +340,10 @@ For more details you can navigate to our [API reference pages](/docs/reference/c --> ## 自定义 kube-proxy {#customizing-kube-proxy} -要自定义 kube-proxy,你可以在 `ClusterConfiguration` 或 `InitConfiguration` 之外添加一个 -由 `---` 分隔的 `KubeProxyConfiguration`, 传递给 `kubeadm init`。 +要自定义 kube-proxy,你可以在 `ClusterConfiguration` 或 `InitConfiguration` +之外添加一个由 `---` 分隔的 `KubeProxyConfiguration`, 传递给 `kubeadm init`。 -可以导航到 [API 参考页面](/docs/reference/config-api/kubeadm-config.v1beta3/) 查看更多详情, +可以导航到 [API 参考页面](/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3/)查看更多详情, {{< note >}} ## 生产环境考量 {#production-considerations} -通常,一个生产用 Kubernetes 集群环境与个人学习、开发或测试环境所使用的 -Kubernetes 相比有更多的需求。生产环境可能需要被很多用户安全地访问,需要 -提供一致的可用性,以及能够与需求变化相适配的资源。 +通常,一个生产用 Kubernetes 集群环境与个人学习、开发或测试环境所使用的 Kubernetes 相比有更多的需求。 +生产环境可能需要被很多用户安全地访问,需要提供一致的可用性,以及能够与需求变化相适配的资源。 -在你决定在何处运行你的生产用 Kubernetes 环境(在本地或者在云端),以及 -你希望承担或交由他人承担的管理工作量时,需要考察以下因素如何影响你对 -Kubernetes 集群的需求: +在你决定在何处运行你的生产用 Kubernetes 环境(在本地或者在云端), +以及你希望承担或交由他人承担的管理工作量时, +需要考察以下因素如何影响你对 Kubernetes 集群的需求: - **规模**:如果你预期你的生产用 Kubernetes 环境要承受固定量的请求, 你可能可以针对所需要的容量来一次性完成安装。 - 不过,如果你预期服务请求会随着时间增长,或者因为类似季节或者特殊事件的 - 原因而发生剧烈变化,你就需要规划如何处理请求上升时对控制面和工作节点 - 的压力,或者如何缩减集群规模以减少未使用资源的消耗。 + 不过,如果你预期服务请求会随着时间增长,或者因为类似季节或者特殊事件的原因而发生剧烈变化, + 你就需要规划如何处理请求上升时对控制面和工作节点的压力,或者如何缩减集群规模以减少未使用资源的消耗。 - **安全性与访问管理**:在你自己的学习环境 Kubernetes 集群上,你拥有完全的管理员特权。 - 但是针对运行着重要工作负载的共享集群,用户账户不止一两个时,就需要更细粒度 - 的方案来确定谁或者哪些主体可以访问集群资源。 + 但是针对运行着重要工作负载的共享集群,用户账户不止一两个时, + 就需要更细粒度的方案来确定谁或者哪些主体可以访问集群资源。 你可以使用基于角色的访问控制([RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/)) - 和其他安全机制来确保用户和负载能够访问到所需要的资源,同时确保工作负载及集群 - 自身仍然是安全的。 + 和其他安全机制来确保用户和负载能够访问到所需要的资源, + 同时确保工作负载及集群自身仍然是安全的。 
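A namespaced Role and RoleBinding of the kind RBAC relies on could look roughly like this; the namespace, user name, and verbs are placeholders chosen only for illustration:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a                  # placeholder namespace
  name: pod-reader
rules:
- apiGroups: [""]                    # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
- kind: User
  name: jane                         # placeholder user known to your authentication method
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```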
你可以通过管理[策略](/zh-cn/docs/concepts/policy/)和 - [容器资源](/zh-cn/docs/concepts/configuration/manage-resources-containers)来 - 针对用户和工作负载所可访问的资源设置约束, + [容器资源](/zh-cn/docs/concepts/configuration/manage-resources-containers) + 来针对用户和工作负载所可访问的资源设置约束。 -在自行构建 Kubernetes 生产环境之前,请考虑将这一任务的部分或者全部交给 -[云方案承包服务](/zh-cn/docs/setup/production-environment/turnkey-solutions) -提供商或者其他 [Kubernetes 合作伙伴](/zh-cn/partners/)。 -选项有: +在自行构建 Kubernetes 生产环境之前, +请考虑将这一任务的部分或者全部交给[云方案承包服务](/zh-cn/docs/setup/production-environment/turnkey-solutions)提供商或者其他 +[Kubernetes 合作伙伴](/zh-cn/partners/)。选项有: -- **无服务**:仅是在第三方设备上运行负载,完全不必管理集群本身。你需要为 - CPU 用量、内存和磁盘请求等付费。 +- **无服务**:仅是在第三方设备上运行负载,完全不必管理集群本身。 + 你需要为 CPU 用量、内存和磁盘请求等付费。 - **托管控制面**:让供应商决定集群控制面的规模和可用性,并负责打补丁和升级等操作。 -- **托管工作节点**:配置一个节点池来满足你的需要,由供应商来确保节点始终可用, - 并在需要的时候完成升级。 +- **托管工作节点**:配置一个节点池来满足你的需要,由供应商来确保节点始终可用,并在需要的时候完成升级。 - **集成**:有一些供应商能够将 Kubernetes 与一些你可能需要的其他服务集成, 这类服务包括存储、容器镜像仓库、身份认证方法以及开发工具等。 @@ -136,9 +132,8 @@ partners, review the following sections to evaluate your needs as they relate to your cluster’s *control plane*, *worker nodes*, *user access*, and *workload resources*. --> -无论你是自行构造一个生产用 Kubernetes 集群还是与合作伙伴一起协作,请审阅 -下面章节以评估你的需求,因为这关系到你的集群的**控制面**、**工作节点**、 -**用户访问**以及**负载资源**。 +无论你是自行构造一个生产用 Kubernetes 集群还是与合作伙伴一起协作, +请审阅下面章节以评估你的需求,因为这关系到你的集群的**控制面**、**工作节点**、**用户访问**以及**负载资源**。 ## 生产用集群安装 {#production-cluster-setup} -在生产质量的 Kubernetes 集群中,控制面用不同的方式来管理集群和可以 -分布到多个计算机上的服务。每个工作节点则代表的是一个可配置来运行 -Kubernetes Pod 的实体。 +在生产质量的 Kubernetes 集群中,控制面用不同的方式来管理集群和可以分布到多个计算机上的服务。 +每个工作节点则代表的是一个可配置来运行 Kubernetes Pod 的实体。 如果你需要一个更为持久的、高可用的集群,那么就需要考虑扩展控制面的方式。 根据设计,运行在一台机器上的单机控制面服务不是高可用的。 -如果你认为保持集群的正常运行的并需要确保它在出错时可以被修复是很重要的, +如果你认为保持集群的正常运行并需要确保它在出错时可以被修复是很重要的, 可以考虑以下步骤: -- **管理证书**:控制面服务之间的安全通信是通过证书来完成的。证书是在部署期间 - 自动生成的,或者你也可以使用自己的证书机构来生成它们。 +- **管理证书**:控制面服务之间的安全通信是通过证书来完成的。 + 证书是在部署期间自动生成的,或者你也可以使用自己的证书机构来生成它们。 参阅 [PKI 证书和需求](/zh-cn/docs/setup/best-practices/certificates/)了解细节。 -- **为 API 服务器配置负载均衡**:配置负载均衡器来将外部的 API 请求散布给运行在 - 不同节点上的 API 服务实例。参阅 - [创建外部负载均衡器](/zh-cn/docs/tasks/access-application-cluster/create-external-load-balancer/) - 了解细节。 +- **为 API 服务器配置负载均衡**:配置负载均衡器来将外部的 API 请求散布给运行在不同节点上的 API 服务实例。 + 参阅[创建外部负载均衡器](/zh-cn/docs/tasks/access-application-cluster/create-external-load-balancer/)了解细节。 - **创建多控制面系统**:为了实现高可用性,控制面不应被限制在一台机器上。 - 如果控制面服务是使用某 init 服务(例如 systemd)来运行的,每个服务应该 - 至少运行在三台机器上。不过,将控制面作为服务运行在 Kubernetes Pods - 中可以确保你所请求的个数的服务始终保持可用。 + 如果控制面服务是使用某 init 服务(例如 systemd)来运行的,每个服务应该至少运行在三台机器上。 + 不过,将控制面作为服务运行在 Kubernetes Pod 中可以确保你所请求的个数的服务始终保持可用。 调度器应该是可容错的,但不是高可用的。 - 某些部署工具会安装 [Raft](https://raft.github.io/) 票选算法来对 Kubernetes - 服务执行领导者选举。如果主节点消失,另一个服务会被选中并接手相应服务。 + 某些部署工具会安装 [Raft](https://raft.github.io/) 票选算法来对 Kubernetes 服务执行领导者选举。 + 如果主节点消失,另一个服务会被选中并接手相应服务。 -- **跨多个可用区**:如果保持你的集群一直可用这点非常重要,可以考虑创建一个跨 - 多个数据中心的集群;在云环境中,这些数据中心被视为可用区。 - 若干个可用区在一起可构成地理区域。 - 通过将集群分散到同一区域中的多个可用区内,即使某个可用区不可用,整个集群 - 能够继续工作的机会也大大增加。 +- **跨多个可用区**:如果保持你的集群一直可用这点非常重要,可以考虑创建一个跨多个数据中心的集群; + 在云环境中,这些数据中心被视为可用区。若干个可用区在一起可构成地理区域。 + 通过将集群分散到同一区域中的多个可用区内,即使某个可用区不可用,整个集群能够继续工作的机会也大大增加。 更多的细节可参阅[跨多个可用区运行](/zh-cn/docs/setup/best-practices/multiple-zones/)。 -- **管理演进中的特性**:如果你计划长时间保留你的集群,就需要执行一些维护其 - 健康和安全的任务。例如,如果你采用 kubeadm 安装的集群,则有一些可以帮助你完成 +- **管理演进中的特性**:如果你计划长时间保留你的集群,就需要执行一些维护其健康和安全的任务。 + 例如,如果你采用 kubeadm 安装的集群, + 则有一些可以帮助你完成 [证书管理](/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/) 和[升级 kubeadm 集群](/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade) 的指令。 @@ -301,13 +291,12 @@ for information on making an etcd backup plan. 
如要了解运行控制面服务时可使用的选项,可参阅 [kube-apiserver](/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/)、 [kube-controller-manager](/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager/) 和 -[kube-scheduler](/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/) +[kube-scheduler](/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/) 组件参考页面。 -如要了解高可用控制面的例子,可参阅 -[高可用拓扑结构选项](/zh-cn/docs/setup/production-environment/tools/kubeadm/ha-topology/)、 -[使用 kubeadm 创建高可用集群](/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability/) 以及[为 Kubernetes 运维 etcd 集群](/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd/)。 -关于制定 etcd 备份计划,可参阅 -[对 etcd 集群执行备份](/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd/#backing-up-an-etcd-cluster)。 +如要了解高可用控制面的例子,可参阅[高可用拓扑结构选项](/zh-cn/docs/setup/production-environment/tools/kubeadm/ha-topology/)、 +[使用 kubeadm 创建高可用集群](/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability/) +以及[为 Kubernetes 运维 etcd 集群](/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd/)。 +关于制定 etcd 备份计划,可参阅[对 etcd 集群执行备份](/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd/#backing-up-an-etcd-cluster)。 - **配置节点**:节点可以是物理机或者虚拟机。如果你希望自行创建和管理节点, - 你可以安装一个受支持的操作系统,之后添加并运行合适的 - [节点服务](/zh-cn/docs/concepts/overview/components/#node-components)。 - 考虑: + 你可以安装一个受支持的操作系统,之后添加并运行合适的[节点服务](/zh-cn/docs/concepts/overview/components/#node-components)。考虑: - - 在安装节点时要通过配置适当的内存、CPU 和磁盘读写速率、存储容量来满足 - 你的负载的需求。 - - 是否通用的计算机系统即足够,还是你有负载需要使用 GPU 处理器、Windows 节点 - 或者 VM 隔离。 + - 在安装节点时要通过配置适当的内存、CPU 和磁盘读写速率、存储容量来满足你的负载的需求。 + - 是否通用的计算机系统即足够,还是你有负载需要使用 GPU 处理器、Windows 节点或者 VM 隔离。 -- **验证节点**:参阅[验证节点配置](/zh-cn/docs/setup/best-practices/node-conformance/) - 以了解如何确保节点满足加入到 Kubernetes 集群的需求。 +- **验证节点**:参阅[验证节点配置](/zh-cn/docs/setup/best-practices/node-conformance/)以了解如何确保节点满足加入到 Kubernetes 集群的需求。 - **添加节点到集群中**:如果你自行管理你的集群,你可以通过安装配置你的机器, - 之后或者手动加入集群,或者让它们自动注册到集群的 API 服务器。参阅 - [节点](/zh-cn/docs/concepts/architecture/nodes/)节,了解如何配置 Kubernetes - 以便以这些方式来添加节点。 + 之后或者手动加入集群,或者让它们自动注册到集群的 API 服务器。 + 参阅[节点](/zh-cn/docs/concepts/architecture/nodes/)节,了解如何配置 Kubernetes 以便以这些方式来添加节点。 - **扩缩节点**:制定一个扩充集群容量的规划,你的集群最终会需要这一能力。 参阅[大规模集群考察事项](/zh-cn/docs/setup/best-practices/cluster-large/) - 以确定你所需要的节点数;这一规模是基于你要运行的 Pod 和容器个数来确定的。 + 以确定你所需要的节点数; + 这一规模是基于你要运行的 Pod 和容器个数来确定的。 如果你自行管理集群节点,这可能意味着要购买和安装你自己的物理设备。 - **节点自动扩缩容**:大多数云供应商支持 [集群自动扩缩器(Cluster Autoscaler)](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#readme) - 以便替换不健康的节点、根据需求来增加或缩减节点个数。参阅 - [常见问题](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md) + 以便替换不健康的节点、根据需求来增加或缩减节点个数。 + 参阅[常见问题](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md) 了解自动扩缩器的工作方式,并参阅 [Deployment](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#deployment) 了解不同云供应商是如何实现集群自动扩缩器的。 @@ -395,9 +379,8 @@ that the nodes and pods running on those nodes are healthy. Using the [Node Problem Detector](/docs/tasks/debug/debug-cluster/monitor-node-health/) daemon, you can ensure your nodes are healthy. 
--> -- **安装节点健康检查**:对于重要的工作负载,你会希望确保节点以及在节点上 - 运行的 Pod 处于健康状态。通过使用 - [Node Problem Detector](/zh-cn/docs/tasks/debug/debug-cluster/monitor-node-health/), +- **安装节点健康检查**:对于重要的工作负载,你会希望确保节点以及在节点上运行的 Pod 处于健康状态。 + 通过使用 [Node Problem Detector](/zh-cn/docs/tasks/debug/debug-cluster/monitor-node-health/), 你可以确保你的节点是健康的。 ### 生产级用户环境 {#production-user-management} -在生产环境中,情况可能不再是你或者一小组人在访问集群,而是几十 -上百人需要访问集群。在学习环境或者平台原型环境中,你可能具有一个 -可以执行任何操作的管理账号。在生产环境中,你会需要对不同名字空间 -具有不同访问权限级别的很多账号。 +在生产环境中,情况可能不再是你或者一小组人在访问集群,而是几十上百人需要访问集群。 +在学习环境或者平台原型环境中,你可能具有一个可以执行任何操作的管理账号。 +在生产环境中,你会需要对不同名字空间具有不同访问权限级别的很多账号。 建立一个生产级别的集群意味着你需要决定如何有选择地允许其他用户访问集群。 -具体而言,你需要选择验证尝试访问集群的人的身份标识(身份认证),并确定 -他们是否被许可执行他们所请求的操作(鉴权): +具体而言,你需要选择验证尝试访问集群的人的身份标识(身份认证), +并确定他们是否被许可执行他们所请求的操作(鉴权): -- **认证(Authentication)**:API 服务器可以使用客户端证书、持有者令牌、身份 - 认证代理或者 HTTP 基本认证机制来完成身份认证操作。 - 你可以选择你要使用的认证方法。通过使用插件,API 服务器可以充分利用你所在 - 组织的现有身份认证方法,例如 LDAP 或者 Kerberos。 - 关于认证 Kubernetes 用户身份的不同方法的描述,可参阅 - [身份认证](/zh-cn/docs/reference/access-authn-authz/authentication/)。 +- **认证(Authentication)**:API 服务器可以使用客户端证书、持有者令牌、 + 身份认证代理或者 HTTP 基本认证机制来完成身份认证操作。 + 你可以选择你要使用的认证方法。通过使用插件, + API 服务器可以充分利用你所在组织的现有身份认证方法, + 例如 LDAP 或者 Kerberos。 + 关于认证 Kubernetes 用户身份的不同方法的描述, + 可参阅[身份认证](/zh-cn/docs/reference/access-authn-authz/authentication/)。 -- **鉴权(Authorization)**:当你准备为一般用户执行权限判定时,你可能会需要 - 在 RBAC 和 ABAC 鉴权机制之间做出选择。参阅 - [鉴权概述](/zh-cn/docs/reference/access-authn-authz/authorization/),了解 - 对用户账户(以及访问你的集群的服务账户)执行鉴权的不同模式。 +- **鉴权(Authorization)**:当你准备为一般用户执行权限判定时, + 你可能会需要在 RBAC 和 ABAC 鉴权机制之间做出选择。 + 参阅[鉴权概述](/zh-cn/docs/reference/access-authn-authz/authorization/), + 了解对用户账户(以及访问你的集群的服务账户)执行鉴权的不同模式。 - **基于角色的访问控制**([RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/)): 让你通过为通过身份认证的用户授权特定的许可集合来控制集群访问。 访问许可可以针对某特定名字空间(Role)或者针对整个集群(ClusterRole)。 - 通过使用 RoleBinding 和 ClusterRoleBinding 对象,这些访问许可可以被 - 关联到特定的用户身上。 + 通过使用 RoleBinding 和 ClusterRoleBinding 对象,这些访问许可可以被关联到特定的用户身上。 - **基于属性的访问控制**([ABAC](/zh-cn/docs/reference/access-authn-authz/abac/)): - 让你能够基于集群中资源的属性来创建访问控制策略,基于对应的属性来决定 - 允许还是拒绝访问。策略文件的每一行都给出版本属性(apiVersion 和 kind) - 以及一个规约属性的映射,用来匹配主体(用户或组)、资源属性、非资源属性 - (/version 或/apis)和只读属性。 + 让你能够基于集群中资源的属性来创建访问控制策略,基于对应的属性来决定允许还是拒绝访问。 + 策略文件的每一行都给出版本属性(apiVersion 和 kind)以及一个规约属性的映射, + 用来匹配主体(用户或组)、资源属性、非资源属性(/version 或 /apis)和只读属性。 参阅[示例](/zh-cn/docs/reference/access-authn-authz/abac/#examples)以了解细节。 -作为在你的生产用 Kubernetes 集群中安装身份认证和鉴权机制的负责人, -要考虑的事情如下: +作为在你的生产用 Kubernetes 集群中安装身份认证和鉴权机制的负责人,要考虑的事情如下: -- **设置鉴权模式**:当 Kubernetes API 服务器 - ([kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/)) - 启动时,所支持的鉴权模式必须使用 `--authorization-mode` 标志配置。 - 例如,`kube-apiserver.yaml`(位于 `/etc/kubernetes/manifests` 下)中对应的 - 标志可以设置为 `Node,RBAC`。这样就会针对已完成身份认证的请求执行 Node 和 RBAC - 鉴权。 +- **设置鉴权模式**:当 Kubernetes API 服务器([kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/))启动时, + 所支持的鉴权模式必须使用 `--authorization-mode` 标志配置。 + 例如,`kube-apiserver.yaml`(位于 `/etc/kubernetes/manifests` 下)中对应的标志可以设置为 `Node,RBAC`。 + 这样就会针对已完成身份认证的请求执行 Node 和 RBAC 鉴权。 -- **创建用户证书和角色绑定(RBAC)**:如果你在使用 RBAC 鉴权,用户可以创建 - 由集群 CA 签名的 CertificateSigningRequest(CSR)。接下来你就可以将 Role - 和 ClusterRole 绑定到每个用户身上。 - 参阅[证书签名请求](/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests/) - 了解细节。 +- **创建用户证书和角色绑定(RBAC)**:如果你在使用 RBAC 鉴权,用户可以创建由集群 CA 签名的 + CertificateSigningRequest(CSR)。接下来你就可以将 Role 和 ClusterRole 绑定到每个用户身上。 + 参阅[证书签名请求](/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests/)了解细节。 -- **创建组合属性的策略(ABAC)**:如果你在使用 ABAC 鉴权,你可以设置属性组合 - 
以构造策略对所选用户或用户组执行鉴权,判定他们是否可访问特定的资源 - (例如 Pod)、名字空间或者 apiGroup。进一步的详细信息可参阅 - [示例](/zh-cn/docs/reference/access-authn-authz/abac/#examples)。 +- **创建组合属性的策略(ABAC)**:如果你在使用 ABAC 鉴权, + 你可以设置属性组合以构造策略对所选用户或用户组执行鉴权, + 判定他们是否可访问特定的资源(例如 Pod)、名字空间或者 apiGroup。 + 进一步的详细信息可参阅[示例](/zh-cn/docs/reference/access-authn-authz/abac/#examples)。 - **考虑准入控制器**:针对指向 API 服务器的请求的其他鉴权形式还包括 [Webhook 令牌认证](/zh-cn/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)。 - Webhook 和其他特殊的鉴权类型需要通过向 API 服务器添加 - [准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/) - 来启用。 + Webhook 和其他特殊的鉴权类型需要通过向 API + 服务器添加[准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/)来启用。 - **设置名字空间限制**:为每个名字空间的内存和 CPU 设置配额。 - 参阅[管理内存、CPU 和 API 资源](/zh-cn/docs/tasks/administer-cluster/manage-resources/) - 以了解细节。你也可以设置 - [层次化名字空间](/blog/2020/08/14/introducing-hierarchical-namespaces/) - 来继承这类约束。 + 参阅[管理内存、CPU 和 API 资源](/zh-cn/docs/tasks/administer-cluster/manage-resources/)以了解细节。 + 你也可以设置[层次化名字空间](/blog/2020/08/14/introducing-hierarchical-namespaces/)来继承这类约束。 -- **为 DNS 请求做准备**:如果你希望工作负载能够完成大规模扩展,你的 DNS 服务 - 也必须能够扩大规模。参阅 - [自动扩缩集群中 DNS 服务](/zh-cn/docs/tasks/administer-cluster/dns-horizontal-autoscaling/)。 +- **为 DNS 请求做准备**:如果你希望工作负载能够完成大规模扩展,你的 DNS 服务也必须能够扩大规模。 + 参阅[自动扩缩集群中 DNS 服务](/zh-cn/docs/tasks/administer-cluster/dns-horizontal-autoscaling/)。 -- **创建额外的服务账户**:用户账户决定用户可以在集群上执行的操作,服务账号则定义的 - 是在特定名字空间中 Pod 的访问权限。 - 默认情况下,Pod 使用所在名字空间中的 default 服务账号。 - 参阅[管理服务账号](/zh-cn/docs/reference/access-authn-authz/service-accounts-admin/) - 以了解如何创建新的服务账号。例如,你可能需要: +- **创建额外的服务账户**:用户账户决定用户可以在集群上执行的操作,服务账号则定义的是在特定名字空间中 + Pod 的访问权限。默认情况下,Pod 使用所在名字空间中的 default 服务账号。 + 参阅[管理服务账号](/zh-cn/docs/reference/access-authn-authz/service-accounts-admin/)以了解如何创建新的服务账号。 + 例如,你可能需要: - 为 Pod 添加 Secret,以便 Pod 能够从某特定的容器镜像仓库拉取镜像。 - 参阅[为 Pod 配置服务账号](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/) - 以获得示例。 - - 为服务账号设置 RBAC 访问许可。参阅 - [服务账号访问许可](/zh-cn/docs/reference/access-authn-authz/rbac/#service-account-permissions) - 了解细节。 + 参阅[为 Pod 配置服务账号](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/)以获得示例。 + - 为服务账号设置 RBAC 访问许可。参阅[服务账号访问许可](/zh-cn/docs/reference/access-authn-authz/rbac/#service-account-permissions)了解细节。 ## {{% heading "whatsnext" %}} @@ -591,31 +559,25 @@ and set up high availability for features such as and the [API server](/docs/setup/production-environment/tools/kubeadm/ha-topology/). 
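The per-namespace memory and CPU quotas mentioned above are usually expressed as a ResourceQuota; a minimal sketch with arbitrary limits:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: team-a                  # placeholder namespace
spec:
  hard:
    requests.cpu: "4"                # total CPU that Pods in this namespace may request
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"                       # optional cap on the number of Pods
```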
--> -- 决定你是想自行构造自己的生产用 Kubernetes 还是从某可用的 - [云服务外包厂商](/zh-cn/docs/setup/production-environment/turnkey-solutions/) - 或 [Kubernetes 合作伙伴](/zh-cn/partners/)获得集群。 -- 如果你决定自行构造集群,则需要规划如何处理 - [证书](/zh-cn/docs/setup/best-practices/certificates/) - 并为类似 - [etcd](/zh-cn/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/) - 和 - [API 服务器](/zh-cn/docs/setup/production-environment/tools/kubeadm/ha-topology/) - 这些功能组件配置高可用能力。 +- 决定你是想自行构造自己的生产用 Kubernetes, + 还是从某可用的[云服务外包厂商](/zh-cn/docs/setup/production-environment/turnkey-solutions/)或 + [Kubernetes 合作伙伴](/zh-cn/partners/)获得集群。 +- 如果你决定自行构造集群,则需要规划如何处理[证书](/zh-cn/docs/setup/best-practices/certificates/)并为类似 + [etcd](/zh-cn/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/) 和 + [API 服务器](/zh-cn/docs/setup/production-environment/tools/kubeadm/ha-topology/)这些功能组件配置高可用能力。 - 选择使用 [kubeadm](/zh-cn/docs/setup/production-environment/tools/kubeadm/)、 [kops](/zh-cn/docs/setup/production-environment/tools/kops/) 或 - [Kubespray](/zh-cn/docs/setup/production-environment/tools/kubespray/) - 作为部署方法。 + [Kubespray](/zh-cn/docs/setup/production-environment/tools/kubespray/) 作为部署方法。 -- 通过决定[身份认证](/zh-cn/docs/reference/access-authn-authz/authentication/)和 - [鉴权](/zh-cn/docs/reference/access-authn-authz/authorization/)方法来配置用户管理。 +- 通过决定[身份认证](/zh-cn/docs/reference/access-authn-authz/authentication/)和[鉴权](/zh-cn/docs/reference/access-authn-authz/authorization/)方法来配置用户管理。 - 通过配置[资源限制](/zh-cn/docs/tasks/administer-cluster/manage-resources/)、 - [DNS 自动扩缩](/zh-cn/docs/tasks/administer-cluster/dns-horizontal-autoscaling/) - 和[服务账号](/zh-cn/docs/reference/access-authn-authz/service-accounts-admin/) - 来为应用负载作准备。 + [DNS 自动扩缩](/zh-cn/docs/tasks/administer-cluster/dns-horizontal-autoscaling/)和[服务账号](/zh-cn/docs/reference/access-authn-authz/service-accounts-admin/)来为应用负载作准备。 From 30636edd3d89e23d9b928386e6bda89c26663479 Mon Sep 17 00:00:00 2001 From: Michael Date: Thu, 28 Jul 2022 23:08:39 +0800 Subject: [PATCH 273/292] [zh-cn] resync /kubeadm/create-cluster-kubeadm.md --- .../tools/kubeadm/create-cluster-kubeadm.md | 44 +++++++++++-------- 1 file changed, 26 insertions(+), 18 deletions(-) diff --git a/content/zh-cn/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md b/content/zh-cn/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md index 6f05c91de39aa..0dca978174820 100644 --- a/content/zh-cn/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md +++ b/content/zh-cn/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md @@ -17,14 +17,14 @@ weight: 30 Using `kubeadm`, you can create a minimum viable Kubernetes cluster that conforms to best practices. In fact, you can use `kubeadm` to set up a cluster that will pass the -[Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification). +[Kubernetes Conformance tests](/blog/2017/10/software-conformance-certification/). `kubeadm` also supports other cluster lifecycle functions, such as [bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) and cluster upgrades. 
--> 使用 `kubeadm`,你能创建一个符合最佳实践的最小化 Kubernetes 集群。 事实上,你可以使用 `kubeadm` 配置一个通过 -[Kubernetes 一致性测试](https://kubernetes.io/blog/2017/10/software-conformance-certification)的集群。 +[Kubernetes 一致性测试](/blog/2017/10/software-conformance-certification/)的集群。 `kubeadm` 还支持其他集群生命周期功能, 例如[启动引导令牌](/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/)和集群升级。 @@ -68,7 +68,7 @@ To follow this guide, you need: - 一台或多台运行兼容 deb/rpm 的 Linux 操作系统的计算机;例如:Ubuntu 或 CentOS。 - 每台机器 2 GB 以上的内存,内存不足时应用会受限制。 -- 用作控制平面节点的计算机上至少有2个 CPU。 +- 用作控制平面节点的计算机上至少有 2 个 CPU。 - 集群中所有计算机之间具有完全的网络连接。你可以使用公共网络或专用网络。 {{< note >}} -如果你已经安装了kubeadm,执行 `apt-get update && -apt-get upgrade` 或 `yum update` 以获取 kubeadm 的最新版本。 +如果你已经安装了kubeadm,执行 `apt-get update && apt-get upgrade` 或 `yum update` +以获取 kubeadm 的最新版本。 升级时,kubelet 每隔几秒钟重新启动一次, 在 crashloop 状态中等待 kubeadm 发布指令。crashloop 状态是正常现象。 @@ -162,7 +163,8 @@ to not download the default container images which are hosted at `k8s.gcr.io`. Kubeadm has commands that can help you pre-pull the required images when creating a cluster without an internet connection on its nodes. -See [Running kubeadm without an internet connection](/docs/reference/setup-tools/kubeadm/kubeadm-init#without-internet-connection) for more details. +See [Running kubeadm without an internet connection](/docs/reference/setup-tools/kubeadm/kubeadm-init#without-internet-connection) +for more details. Kubeadm allows you to use a custom image repository for the required images. See [Using custom images](/docs/reference/setup-tools/kubeadm/kubeadm-init#custom-images) @@ -171,7 +173,7 @@ for more details. 这个步骤是可选的,只适用于你希望 `kubeadm init` 和 `kubeadm join` 不去下载存放在 `k8s.gcr.io` 上的默认的容器镜像的情况。 当你在离线的节点上创建一个集群的时候,Kubeadm 有一些命令可以帮助你预拉取所需的镜像。 -阅读[离线运行 kubeadm](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init#custom-images) +阅读[离线运行 kubeadm](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init#without-internet-connection) 获取更多的详情。 Kubeadm 允许你给所需要的镜像指定一个自定义的镜像仓库。 @@ -519,7 +521,8 @@ Once a Pod network has been installed, you can confirm that it is working by checking that the CoreDNS Pod is `Running` in the output of `kubectl get pods --all-namespaces`. And once the CoreDNS Pod is up and running, you can continue by joining your nodes. --> -安装 Pod 网络后,你可以通过在 `kubectl get pods --all-namespaces` 输出中检查 CoreDNS Pod 是否 `Running` 来确认其是否正常运行。 +安装 Pod 网络后,你可以通过在 `kubectl get pods --all-namespaces` 输出中检查 +CoreDNS Pod 是否 `Running` 来确认其是否正常运行。 一旦 CoreDNS Pod 启用并运行,你就可以继续加入节点。 * SSH 到机器 * 成为 root (例如 `sudo su -`) +* 必要时[安装一个运行时](/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime) * 运行 `kubeadm init` 输出的命令,例如: ```bash @@ -662,7 +668,8 @@ The output is similar to this: ``` 如果你没有 `--discovery-token-ca-cert-hash` 的值,则可以通过在控制平面节点上执行以下命令链来获取它: @@ -717,9 +724,9 @@ on the first control-plane node. To provide higher availability, please rebalanc with `kubectl -n kube-system rollout restart deployment coredns` after at least one new node is joined. 
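The `--control-plane-endpoint` and `--pod-network-cidr` options discussed above can also be supplied through a kubeadm configuration file passed with `kubeadm init --config`; a sketch in which the endpoint, version, and CIDR are placeholders that depend on your environment:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.24.0                       # placeholder; use the release you are installing
controlPlaneEndpoint: "cluster-endpoint:6443"    # placeholder DNS name or VIP for the API server
networking:
  podSubnet: "10.244.0.0/16"                     # must match the Pod network add-on you deploy
```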
--> {{< note >}} -由于集群节点通常是按顺序初始化的,CoreDNS Pods 很可能都运行在第一个控制面节点上。 +由于集群节点通常是按顺序初始化的,CoreDNS Pod 很可能都运行在第一个控制面节点上。 为了提供更高的可用性,请在加入至少一个新节点后 -使用 `kubectl -n kube-system rollout restart deployment coredns` 命令,重新平衡 CoreDNS Pods。 +使用 `kubectl -n kube-system rollout restart deployment coredns` 命令,重新平衡这些 CoreDNS Pod。 {{< /note >}} kubeadm 可以与 Kubernetes 组件一起使用,这些组件的版本与 kubeadm 相同,或者比它大一个版本。 Kubernetes 版本可以通过使用 `--kubeadm init` 的 `--kubernetes-version` 标志或使用 `--config` 时的 -[`ClusterConfiguration.kubernetesVersion`](/zh-cn/docs/reference/configapi/kubeadm-config.v1beta3/) +[`ClusterConfiguration.kubernetesVersion`](/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3/) 字段指定给 kubeadm。 这个选项将控制 kube-apiserver、kube-controller-manager、kube-scheduler 和 kube-proxy 的版本。 @@ -1051,7 +1058,7 @@ or {{< skew currentVersion >}} 要了解更多关于不同 Kubernetes 组件之间的版本偏差,请参见 [版本偏差策略](/zh-cn/releases/version-skew-policy/)。 @@ -1126,7 +1133,8 @@ supports your chosen platform. ## 故障排除 {#troubleshooting} 如果你在使用 kubeadm 时遇到困难, 请查阅我们的[故障排除文档](/zh-cn/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/)。 From 3f28bb4289a313ef3378872cec79bb94969e3d2f Mon Sep 17 00:00:00 2001 From: Kinzhi Date: Fri, 29 Jul 2022 01:20:11 +0800 Subject: [PATCH 274/292] [zh-cn]Update content/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl.md --- .../tasks/configmap-secret/managing-secret-using-kubectl.md | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/content/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl.md b/content/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl.md index 2dcf7ad266b33..be100e17d8a7c 100644 --- a/content/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl.md +++ b/content/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl.md @@ -159,6 +159,11 @@ accidentally, or from being stored in a terminal log. 
`kubectl get` 和 `kubectl describe` 命令默认不显示 `Secret` 的内容。 这是为了防止 `Secret` 被意外暴露或存储在终端日志中。 + +查看编码数据的实际内容,请参考[解码 Secret](#decoding-secret)。 + ## 解码 Secret {#decoding-secret} From f22b616faf57e1019db62c2713402f6c0edce870 Mon Sep 17 00:00:00 2001 From: Kinzhi Date: Fri, 29 Jul 2022 01:31:35 +0800 Subject: [PATCH 275/292] [zh-cn]Update content/zh-cn/docs/concepts/services-networking/ingress-controllers.md --- .../docs/concepts/services-networking/ingress-controllers.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/content/zh-cn/docs/concepts/services-networking/ingress-controllers.md b/content/zh-cn/docs/concepts/services-networking/ingress-controllers.md index 1e2d6ae078ba5..0e21abff8c006 100644 --- a/content/zh-cn/docs/concepts/services-networking/ingress-controllers.md +++ b/content/zh-cn/docs/concepts/services-networking/ingress-controllers.md @@ -103,6 +103,7 @@ Kubernetes 作为一个项目,目前支持和维护 * [用于 Kubernetes 的 Kong Ingress 控制器](https://github.com/Kong/kubernetes-ingress-controller#readme) 是一个用来驱动 [Kong Gateway](https://konghq.com/kong/) 的 Ingress 控制器。 +* [Kusk Gateway](https://kusk.kubeshop.io/) 是基于 [Envoy](https://www.envoyproxy.io) OpenAPI 驱动的入口控制器。 * [用于 Kubernetes 的 NGINX Ingress 控制器](https://www.nginx.com/products/nginx-ingress-controller/) 能够与 [NGINX](https://www.nginx.com/resources/glossary/nginx/) 网页服务器(作为代理)一起使用。 From 919a54083cb88566fc09ea233c1efb419fecc6a4 Mon Sep 17 00:00:00 2001 From: Kinzhi Date: Fri, 29 Jul 2022 01:34:26 +0800 Subject: [PATCH 276/292] [zh-cn]Update content/zh-cn/examples/service/networking/dual-stack-preferred-svc.yaml --- .../examples/service/networking/dual-stack-preferred-svc.yaml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/zh-cn/examples/service/networking/dual-stack-preferred-svc.yaml b/content/zh-cn/examples/service/networking/dual-stack-preferred-svc.yaml index 8fb5bfa3d349f..66d42b961291d 100644 --- a/content/zh-cn/examples/service/networking/dual-stack-preferred-svc.yaml +++ b/content/zh-cn/examples/service/networking/dual-stack-preferred-svc.yaml @@ -3,11 +3,11 @@ kind: Service metadata: name: my-service labels: - app: MyApp + app.kubernetes.io/name: MyApp spec: ipFamilyPolicy: PreferDualStack selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - protocol: TCP port: 80 From 8268eb35735d316ee760d1908dd2e91c0d67bac8 Mon Sep 17 00:00:00 2001 From: Arhell Date: Fri, 29 Jul 2022 00:30:26 +0300 Subject: [PATCH 277/292] [pl] update KubeCon date --- content/pl/_index.html | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/pl/_index.html b/content/pl/_index.html index 0bb06fe15fe07..93df2863a951c 100644 --- a/content/pl/_index.html +++ b/content/pl/_index.html @@ -49,7 +49,7 @@

The Challenges of Migrating 150+ Microservices to Kubernetes




- Weź udział w KubeCon Europe 17-21.04.2023 + Weź udział w KubeCon Europe 17-21.04.2023
From b01385728e488f6d7bcc411107917979e62bf167 Mon Sep 17 00:00:00 2001 From: "Sang Hong, Kim" <58922834+bconfiden2@users.noreply.github.com> Date: Fri, 29 Jul 2022 10:44:58 +0900 Subject: [PATCH 278/292] [en] Fix unclosed code block --- .../en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md | 1 + 1 file changed, 1 insertion(+) diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md index 66f59a3dadb10..1601c8316044b 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md @@ -266,6 +266,7 @@ without compromising the minimum required capacity for running your workloads. apt-mark unhold kubelet kubectl && \ apt-get update && apt-get install -y kubelet={{< skew currentVersion >}}.x-00 kubectl={{< skew currentVersion >}}.x-00 && \ apt-mark hold kubelet kubectl + ``` {{% /tab %}} {{% tab name="CentOS, RHEL or Fedora" %}} ```shell From 594713b5b03b4483a409070f73d3403e45f76524 Mon Sep 17 00:00:00 2001 From: Michael Date: Fri, 29 Jul 2022 09:46:02 +0800 Subject: [PATCH 279/292] [zh-cn] updated /production-environment/tools/kubeadm/ha-topology.md --- .../tools/kubeadm/ha-topology.md | 15 +++++---------- 1 file changed, 5 insertions(+), 10 deletions(-) diff --git a/content/zh-cn/docs/setup/production-environment/tools/kubeadm/ha-topology.md b/content/zh-cn/docs/setup/production-environment/tools/kubeadm/ha-topology.md index 4110bdd673794..4694d741217dc 100644 --- a/content/zh-cn/docs/setup/production-environment/tools/kubeadm/ha-topology.md +++ b/content/zh-cn/docs/setup/production-environment/tools/kubeadm/ha-topology.md @@ -44,8 +44,6 @@ kubeadm 静态引导 etcd 集群。 阅读 etcd [集群指南](https://github.com/etcd-io/etcd/blob/release-3.4/Documentation/op-guide/clustering.md#static)以获得更多详细信息。 {{< /note >}} - - -堆叠(Stacked)HA 集群是一种这样的[拓扑](https://en.wikipedia.org/wiki/Network_topology), +堆叠(Stacked)HA 集群是一种这样的[拓扑](https://zh.wikipedia.org/wiki/%E7%BD%91%E7%BB%9C%E6%8B%93%E6%89%91), 其中 etcd 分布式数据存储集群堆叠在 kubeadm 管理的控制平面节点上,作为控制平面的一个组件运行。 -每个控制平面节点运行 `kube-apiserver`、`kube-scheduler` 和 `kube-controller-manager` 实例。 - +每个控制平面节点运行 `kube-apiserver`、`kube-scheduler` 和 `kube-controller-manager` 实例。 `kube-apiserver` 使用负载均衡器暴露给工作节点。 -就像堆叠的 etcd 拓扑一样,外部 etcd 拓扑中的每个控制平面节点都运行 `kube-apiserver`,`kube-scheduler` 和 `kube-controller-manager` 实例。 +就像堆叠的 etcd 拓扑一样,外部 etcd 拓扑中的每个控制平面节点都会运行 +`kube-apiserver`、`kube-scheduler` 和 `kube-controller-manager` 实例。 同样,`kube-apiserver` 使用负载均衡器暴露给工作节点。但是 etcd 成员在不同的主机上运行, 每个 etcd 主机与每个控制平面节点的 `kube-apiserver` 通信。 @@ -137,11 +134,9 @@ the cluster redundancy as much as the stacked HA topology. -但此拓扑需要两倍于堆叠 HA 拓扑的主机数量。 - +但此拓扑需要两倍于堆叠 HA 拓扑的主机数量。 具有此拓扑的 HA 集群至少需要三个用于控制平面节点的主机和三个用于 etcd 节点的主机。 Service 是软件服务(例如 mysql)的命名抽象,包含代理要侦听的本地端口(例如 3306)和一个选择算符, @@ -77,7 +77,7 @@ ServiceSpec 描述用户在服务上创建的属性。 - **selector** (map[string]string) 将 Service 流量路由到具有与此 selector 匹配的标签键值对的 Pod。 @@ -89,9 +89,9 @@ ServiceSpec 描述用户在服务上创建的属性。 @@ -119,9 +119,13 @@ ServiceSpec 描述用户在服务上创建的属性。 Service 将公开的端口。 - + + + - **ports.targetPort** (IntOrString) 在 Service 所针对的 Pod 上要访问的端口号或名称。 编号必须在 1 到 65535 的范围内。名称必须是 IANA_SVC_NAME。 @@ -134,7 +138,8 @@ ServiceSpec 描述用户在服务上创建的属性。 + *IntOrString is a type that can hold an int32 or a string. When used in JSON or YAML marshalling and unmarshalling, it produces or consumes the inner type. 
This allows you to have, for example, a JSON field that can accept a name or number.* + --> IntOrString 是一种可以保存 int32 或字符串的类型。 在 JSON 或 YAML 编组和解组中使用时,它会生成或使用内部类型。 @@ -162,7 +167,8 @@ ServiceSpec 描述用户在服务上创建的属性。 - **ports.nodePort** (int32) + The port on each node on which this service is exposed when type is NodePort or LoadBalancer. Usually assigned by the system. If a value is specified, in-range, and not in use it will be used, otherwise the operation will fail. If not specified, a port will be allocated if this Service requires one. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type from NodePort to ClusterIP). More info: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport + --> 当类型为 NodePort 或 LoadBalancer 时,Service 公开在节点上的端口, 通常由系统分配。如果指定了一个在范围内且未使用的值,则将使用该值,否则操作将失败。 @@ -331,14 +337,15 @@ ServiceSpec 描述用户在服务上创建的属性。 - **loadBalancerClass** (string) - loadBalancerClass 是此 Service 所属的负载均衡器实现的类。 如果设置了此字段,则字段值必须是标签风格的标识符,带有可选前缀,例如 ”internal-vip” 或 “example.com/internal-vip”。 无前缀名称是为最终用户保留的。该字段只能在 Service 类型为 “LoadBalancer” 时设置。 如果未设置此字段,则使用默认负载均衡器实现。默认负载均衡器现在通常通过云提供商集成完成,但应适用于任何默认实现。 - 如果设置了此字段,则假定负载均衡器实现正在监视具有对应负载均衡器类的 Service。 + 如果设置了此字段,则假定负载均衡器实现正在监测具有对应负载均衡器类的 Service。 任何默认负载均衡器实现(例如云提供商)都应忽略设置此字段的 Service。 只有在创建或更新的 Service 的 type 为 “LoadBalancer” 时,才可设置此字段。 一经设定,不可更改。当 Service 的 type 更新为 “LoadBalancer” 之外的其他类型时,此字段将被移除。 @@ -486,7 +493,7 @@ ServiceStatus 表示 Service 的当前状态。 lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. --> - - **conditions.lastTransitionTime**(时间),必需 + - **conditions.lastTransitionTime**(Time),必需 lastTransitionTime 是状况最近一次状态转化的时间。 变化应该发生在下层状况发生变化的时候。如果不知道下层状况发生变化的时间, @@ -498,7 +505,7 @@ ServiceStatus 表示 Service 的当前状态。 *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.* --> - time 是 time.Time 的包装类,支持正确地序列化为 YAML 和 JSON。 + Time 是 time.Time 的包装类,支持正确地序列化为 YAML 和 JSON。 为 time 包提供的许多工厂方法提供了包装类。 ## 操作 {#operations} @@ -705,12 +712,12 @@ ServiceList 包含一个 Service 列表。
-### `get` 读取指定的 APIService +### `get` 读取指定的 Service #### HTTP 请求 @@ -756,7 +763,7 @@ GET /api/v1/namespaces/{namespace}/services/{name} #### HTTP 请求 -获取 /api/v1/namespaces/{namespace}/services/{name}/status +GET /api/v1/namespaces/{namespace}/services/{name}/status -### `list` 列出或观察 Service 类型的对象 +### `list` 列出或监测 Service 类型的对象 #### HTTP 请求 -获取 /api/v1/namespaces/{namespace}/services +GET /api/v1/namespaces/{namespace}/services -### `list` 列出或观察 Service 类型的对象 +### `list` 列出或监测 Service 类型的对象 #### HTTP 请求 -获取 /api/v1/服务 +GET /api/v1/services ### `patch` 部分更新指定 Service 的状态 #### HTTP 请求 From 0aa515845ff9eb7306ff3b95f647ee63d7b8c2b5 Mon Sep 17 00:00:00 2001 From: "yanrong.shi" Date: Fri, 29 Jul 2022 00:10:55 +0800 Subject: [PATCH 281/292] Update securing-a-cluster.md --- .../administer-cluster/securing-a-cluster.md | 50 +++++++++++++++++++ 1 file changed, 50 insertions(+) diff --git a/content/zh-cn/docs/tasks/administer-cluster/securing-a-cluster.md b/content/zh-cn/docs/tasks/administer-cluster/securing-a-cluster.md index 7ec999ec5c130..2cc223d515c48 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/securing-a-cluster.md +++ b/content/zh-cn/docs/tasks/administer-cluster/securing-a-cluster.md @@ -224,6 +224,56 @@ or **Restricted** Pod Security Standard. 类似地,希望阻止客户端应用程序从其容器中逃逸的管理员,应该应用 **Baseline** 或 **Restricted** Pod 安全标准。 + + +### 防止容器加载不需要的内核模块 {#preventing-containers-from-loading-unwanted-kernel-modules} + +如果在某些情况下,Linux 内核会根据需要自动从磁盘加载内核模块, +这类情况的例子有挂接了一个硬件或挂载了一个文件系统。 +与 Kubernetes 特别相关的是,即使是非特权的进程也可能导致某些网络协议相关的内核模块被加载, +而这只需创建一个适当类型的套接字。 +这就可能允许攻击者利用管理员假定未使用的内核模块中的安全漏洞。 + + +为了防止特定模块被自动加载,你可以将它们从节点上卸载或者添加规则来阻止这些模块。 +在大多数 Linux 发行版上,你可以通过创建类似 `/etc/modprobe.d/kubernetes-blacklist.conf` +这种文件来做到这一点,其中的内容如下所示: + +``` +# DCCP is unlikely to be needed, has had multiple serious +# vulnerabilities, and is not well-maintained. +blacklist dccp + +# SCTP is not used in most Kubernetes clusters, and has also had +# vulnerabilities in the past. 
+blacklist sctp +``` + + +为了更大范围地阻止内核模块被加载,你可以使用 Linux 安全模块(如 SELinux) +来彻底拒绝容器的 `module_request` 权限,从而防止在任何情况下系统为容器加载内核模块。 +(Pod 仍然可以使用手动加载的模块,或者使用由内核代表某些特权进程所加载的模块。) + + -## 时区 {#time-zones} -对于没有指定时区的 CronJob,kube-controller-manager 会根据其本地时区来解释其排期表(schedule)。 - -{{< feature-state for_k8s_version="v1.24" state="alpha" >}} - -如果启用 `CronJobTimeZone` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/), -你可以为 CronJob 指定时区(如果你不启用该特性门控,或者如果你使用的 Kubernetes 版本不支持实验中的时区特性, -则集群中的所有 CronJob 都属于未指定时区)。 - - -当你启用该特性时,你可以将 `spec.timeZone` 设置为有效的[时区](https://zh.wikipedia.org/wiki/%E6%97%B6%E5%8C%BA%E4%BF%A1%E6%81%AF%E6%95%B0%E6%8D%AE%E5%BA%93)名称。 -例如,设置 `spec.timeZone: "Etc/UTC"` 表示 Kubernetes -使用协调世界时(Coordinated Universal Time)进行解释排期表。 - -Go 标准库中的时区数据库包含在二进制文件中,并用作备用数据库,以防系统上没有外部数据库可用。 - 要确定移除 dockershim 是否会对你或你的组织的影响,可以查阅: -[检查弃用 Dockershim 对你的影响](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/) +[检查弃用 Dockershim 对你的影响](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/) 这篇文章。 @@ -65,7 +65,7 @@ These tasks will help you to migrate: 下面这些任务可以帮助你完成迁移: -* [检查弃用 Dockershim 是否影响到你](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/) +* [检查弃用 Dockershim 是否影响到你](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/) * [将 Docker Engine 节点从 dockershim 迁移到 cri-dockerd](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd/) * [从 dockershim 迁移遥测和安全代理](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents/) From 2c57c71930942bde03ebbddb001e3b9aa1588456 Mon Sep 17 00:00:00 2001 From: Mengjiao Liu Date: Fri, 29 Jul 2022 15:44:14 +0800 Subject: [PATCH 284/292] Update dockershim removal link --- .../2022-04-07-Kubernetes-1-24-removals-and-deprecations.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/2022-04-07-Kubernetes-1-24-removals-and-deprecations.md b/content/en/blog/_posts/2022-04-07-Kubernetes-1-24-removals-and-deprecations.md index 7bc79b38d1e83..6428e64ad5921 100644 --- a/content/en/blog/_posts/2022-04-07-Kubernetes-1-24-removals-and-deprecations.md +++ b/content/en/blog/_posts/2022-04-07-Kubernetes-1-24-removals-and-deprecations.md @@ -84,7 +84,7 @@ As stated earlier, there are several guides about You can start with [Finding what container runtime are on your nodes](/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/). If your nodes are using dockershim, there are other possible Docker Engine dependencies such as Pods or third-party tools executing Docker commands or private registries in the Docker configuration file. You can follow the -[Check whether Dockershim deprecation affects you](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/) guide to review possible +[Check whether Dockershim removal affects you](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/) guide to review possible Docker Engine dependencies. Before upgrading to v1.24, you decide to either remain using Docker Engine and [Migrate Docker Engine nodes from dockershim to cri-dockerd](/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd/) or migrate to a CRI-compatible runtime. 
Here's a guide to [change the container runtime on a node from Docker Engine to containerd](/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/). From 4495a87f196e5c437a2c14975041bc0ba7fbee93 Mon Sep 17 00:00:00 2001 From: sarazqy Date: Thu, 21 Jul 2022 17:26:43 +0800 Subject: [PATCH 285/292] translate content/zh-cn/blog/_posts/2022-04-28-Increasing-the-security-bar-in-Ingress-NGINX/ into Chinese --- .../index.md | 286 ++++++++++++++++++ .../ingress-post-chroot.png | Bin 0 -> 60860 bytes .../ingress-pre-chroot.png | Bin 0 -> 51860 bytes 3 files changed, 286 insertions(+) create mode 100644 content/zh-cn/blog/_posts/2022-04-28-Increasing-the-security-bar-in-Ingress-NGINX/index.md create mode 100644 content/zh-cn/blog/_posts/2022-04-28-Increasing-the-security-bar-in-Ingress-NGINX/ingress-post-chroot.png create mode 100644 content/zh-cn/blog/_posts/2022-04-28-Increasing-the-security-bar-in-Ingress-NGINX/ingress-pre-chroot.png diff --git a/content/zh-cn/blog/_posts/2022-04-28-Increasing-the-security-bar-in-Ingress-NGINX/index.md b/content/zh-cn/blog/_posts/2022-04-28-Increasing-the-security-bar-in-Ingress-NGINX/index.md new file mode 100644 index 0000000000000..4e2082cde65e2 --- /dev/null +++ b/content/zh-cn/blog/_posts/2022-04-28-Increasing-the-security-bar-in-Ingress-NGINX/index.md @@ -0,0 +1,286 @@ +--- +layout: blog +title: '在 Ingress-NGINX v1.2.0 中提高安全标准' +date: 2022-04-28 +slug: ingress-nginx-1-2-0 +--- + + + + +**作者:** Ricardo Katz (VMware), James Strong (Chainguard) + + +[Ingress](/zh-cn/docs/concepts/services-networking/ingress/) 可能是 Kubernetes 最容易受攻击的组件之一。 +Ingress 通常定义一个 HTTP 反向代理,暴露在互联网上,包含多个网站,并具有对 Kubernetes API +的一些特权访问(例如读取与 TLS 证书及其私钥相关的 Secret)。 + + +虽然它是架构中的一个风险组件,但它仍然是正常公开服务的最流行方式。 + + +Ingress-NGINX 一直是安全评估的重头戏,这类评估会发现我们有着很大的问题: +在将配置转换为 `nginx.conf` 文件之前,我们没有进行所有适当的清理,这可能会导致信息泄露风险。 + + +虽然我们了解此风险以及解决此问题的真正需求,但这并不是一个容易的过程, +因此我们在当前(v1.2.0)版本中采取了另一种方法来减少(但不是消除!)这种风险。 + + +## 了解 Ingress NGINX v1.2.0 和 chrooted NGINX 进程 + + +主要挑战之一是 Ingress-NGINX 运行着 Web 代理服务器(NGINX),并与 Ingress 控制器一起运行 +(后者是一个可以访问 Kubernetes API 并创建 `nginx.conf` 的组件)。 + + +因此,NGINX 对控制器的文件系统(和 Kubernetes 服务帐户令牌,以及容器中的其他配置)具有相同的访问权限。 +虽然拆分这些组件是我们的最终目标,但该项目需要快速响应;这让我们想到了使用 `chroot()`。 + + +让我们看一下 Ingress-NGINX 容器在此更改之前的样子: + +![Ingress NGINX pre chroot](ingress-pre-chroot.png) + + +正如我们所见,用来提供 HTTP Proxy 的容器(不是 Pod,是容器!)也是是监视 Ingress +对象并将数据写入容器卷的容器。 + + +现在,见识一下新架构: + +![Ingress NGINX post chroot](ingress-post-chroot.png) + + +这一切意味着什么?一个基本的总结是:我们将 NGINX 服务隔离为控制器容器内的容器。 + + +虽然这并不完全正确,但要了解这里所做的事情,最好了解 Linux 容器(以及内核命名空间等底层机制)是如何工作的。 +你可以在 Kubernetes 词汇表中阅读有关 cgroup 的信息:[`cgroup`](/zh-cn/docs/reference/glossary/?fundamental=true#term-cgroup), +并在 NGINX 项目文章[什么是命名空间和 cgroup,以及它们如何工作?](https://www.nginx.com/blog/what-are-namespaces-cgroups-how-do-they-work/) +中了解有关 cgroup 与命名空间交互的更多信息。(当你阅读时,请记住 Linux 内核命名空间与 +[Kubernetes 命名空间](/zh-cn/docs/concepts/overview/working-with-objects/namespaces/)不同)。 + + +## 跳过谈话,我需要什么才能使用这种新方法? 
+ + +虽然这增加了安全性,但我们在这个版本中把这个功能作为一个选项,这样你就可以有时间在你的环境中做出正确的调整。 +此新功能仅在 Ingress-NGINX 控制器的 v1.2.0 版本中可用。 + + +要使用这个功能,在你的部署中有两个必要的改变: +* 将后缀 "-chroot" 添加到容器镜像名称中。例如:`gcr.io/k8s-staging-ingress-nginx/controller-chroot:v1.2.0` +* 在你的 Ingress 控制器的 Pod 模板中,找到添加 `NET_BIND_SERVICE` 权能的位置并添加 `SYS_CHROOT` 权能。 + 编辑清单后,你将看到如下代码段: + +```yaml +capabilities: + drop: + - ALL + add: + - NET_BIND_SERVICE + - SYS_CHROOT +``` + +如果你使用官方 Helm Chart 部署控制器,则在 `values.yaml` 中更改以下设置: + +```yaml +controller: + image: + chroot: true +``` + +Ingress 控制器通常部署在集群作用域(IngressClass API 是集群作用域的)。 +如果你管理 Ingress-NGINX 控制器但你不是整个集群的操作员, +请在部署中启用它**之前**与集群管理员确认你是否可以使用 `SYS_CHROOT` 功能。 + + +## 好吧,但这如何能提高我的 Ingress 控制器的安全性呢? + +以下面的配置片段为例,想象一下,由于某种原因,它被添加到你的 `nginx.conf` 中: + +``` +location /randomthing/ { + alias /; + autoindex on; +} +``` + +如果你部署了这种配置,有人可以调用 `http://website.example/randomthing` 并获取对 Ingress 控制器的整个文件系统的一些列表(和访问权限)。 + +现在,你能在下面的列表中发现 chroot 处理过和未经 chroot 处理过的 Nginx 之间的区别吗? + +| 不额外调用 `chroot()` | 额外调用 `chroot()` | +|----------------------------|--------| +| `bin` | `bin` | +| `dev` | `dev` | +| `etc` | `etc` | +| `home` | | +| `lib` | `lib` | +| `media` | | +| `mnt` | | +| `opt` | `opt` | +| `proc` | `proc` | +| `root` | | +| `run` | `run` | +| `sbin` | | +| `srv` | | +| `sys` | | +| `tmp` | `tmp` | +| `usr` | `usr` | +| `var` | `var` | +| `dbg` | | +| `nginx-ingress-controller` | | +| `wait-shutdown` | | + + +左侧的那个没有 chroot 处理。所以 NGINX 可以完全访问文件系统。右侧的那个经过 chroot 处理, +因此创建了一个新文件系统,其中只有使 NGINX 工作所需的文件。 + + +## 此版本中的其他安全改进如何? + + +我们知道新的 `chroot()` 机制有助于解决部分风险,但仍然有人可以尝试注入命令来读取,例如 `nginx.conf` 文件并提取敏感信息。 + + +所以,这个版本的另一个变化(可选择取消)是 **深度探测(Deep Inspector)**。 +我们知道某些指令或正则表达式可能对 NGINX 造成危险,因此深度探测器会检查 Ingress 对象中的所有字段 +(在其协调期间,并且还使用[验证准入 webhook](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook)) +验证是否有任何字段包含这些危险指令。 + + +Ingress 控制器已经通过注解做了这个工作,我们的目标是把现有的验证转移到深度探测中,作为未来版本的一部分。 + + +你可以在 [https://github.com/kubernetes/ingress-nginx/blob/main/internal/ingress/inspector/rules.go](https://github.com/kubernetes/ingress-nginx/blob/main/internal/ingress/inspector/rules.go) 中查看现有规则。 + + +由于检查和匹配相关 Ingress 对象中的所有字符串的性质,此新功能可能会消耗更多 CPU。 +你可以通过使用命令行参数 `--deep-inspect=false` 运行 Ingress 控制器来禁用它。 + + +## 下一步是什么? + +这不是我们的最终目标。我们的最终目标是拆分控制平面和数据平面进程。 +事实上,这样做也将帮助我们实现 [Gateway](https://gateway-api.sigs.k8s.io/) API 实现, +因为一旦它“知道”要提供什么,我们可能会有不同的控制器 数据平面(我们需要一些帮助!!) + + +Kubernetes 中的其他一些项目已经采用了这种方法(如 [KPNG](https://github.com/kubernetes-sigs/kpng), +建议替换 `kube-proxy`),我们计划与他们保持一致,并为 Ingress-NGINX 获得相同的体验。 + + +## 延伸阅读 + +如果你想了解如何在 Ingress NGINX 中完成 chrooting,请查看 +[https://github.com/kubernetes/ingress-nginx/pull/8337](https://github.com/kubernetes/ingress-nginx/pull/8337)。 +包含所有更改的版本 v1.2.0 可以在以下位置找到 +[https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.2.0](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.2.0) diff --git a/content/zh-cn/blog/_posts/2022-04-28-Increasing-the-security-bar-in-Ingress-NGINX/ingress-post-chroot.png b/content/zh-cn/blog/_posts/2022-04-28-Increasing-the-security-bar-in-Ingress-NGINX/ingress-post-chroot.png new file mode 100644 index 0000000000000000000000000000000000000000..d5d588a3bb61ce4db5f593f8ef69fe35fd35a07f GIT binary patch literal 60860 zcmcF~1zS|#7w&+83W9)uw6ugEAe~A|mvnb`HzHk9(jX$;&CoS;igXO!-Ob(e```N! 
[base85-encoded GIT binary patch data for ingress-pre-chroot.png and ingress-post-chroot.png omitted]
zdtxcqD}yF#aF+pUS@uSDai9^(O45GWBf^(AqEdq4FS9&D#zx*}Sqwg#DL0n-=65;> z54wg5SetoA^+0JEpb#13k(g#c>0P7pJwZ+J%~Tnafkb=8Wk%Y&ew><|AFNTFn)JR3 zJNDL`ti*cF=gy<#%{V~$=o4_qP>PYA*LK}kp##-$FpJam%z+F#pLv8!O--#k&)k8i z7}Py74t+XiZ+7{O?LdV&}l>PMtKqk9Ct-rI@VN@?J#s)e&Jp)_wE&&tS z9<2>w&=UQV){w+tFoM?;T0L}r9GaXpIAE?_Z~ZKu{?!1r+MgRL|I7Q|_M;O%#ALDj zMVZCCwt?i$Q8gc!wihnIj{zIGJ>E_&qS)B1z1H$P>QR(5Po#2nF9`b+_x%8AFzL--Q~nj`Hf^ny44f zUxa54sQq9Qe4UhC!jNNt{1(cFVmF4~8qzi)%BWVOqAyvb!3-qKsHY_IzV6QW{jM%( zN~<3{LsBNT~!+HJ{`1i~N4EflQ0 zWUlqDW<-)@uNb>pR$1?EF-FN>=^#i3#8>|i%|__Fc~wpa6t%z;GKlaaN+4sbMp(h` z7lcT)lr|h%NTDADn0T;Mnt;PztSy`RMsHqeb&$_8y5z1NAnj#P)ezv6_WBgTROVca zXZ{rM#NmEVWx(eJxAQe4tTNimHU$V4fDY7KT-w3X;V1u#|CR8^cVvY@%K|Eg!A59; zmUqRO$^n=F5$o@NOF&3W#X-rjdN=#3N%3(^2L%+9Jhrg4>&|ny0R@QXYyKjpQ|E>E zQM$UvZ=#y7{;rDsQw$iWxvebhOjlGBuvN$VOT@q$?b6|lXS@5@Q}{SmUIf>9%Ky4GD@GSsVPQ>Ig4MFJm%AD)rDYe#B_&d$lQLCakERr!Ycp zrGlIPoYVs>>dD|c8#2yVG|(aW;kO&j*vORc^_>V50)_2wV$f_$Fk{CIJVwY?jHA2D z>sy(|c_p6b8KVx)b(>Y_DT*XUcNC>8HOX9tM`=}x4)+XON0N+(@+D&L+&nObx)fAR!{ZNLY)m0&^OO5 z2zAkO^m`YsSKYH)B6zN!rweLO@GYcnSC8U{uP1An7=lu~8vloS_c~+B zfGN1#wY>wVCft=`6K#?oaUxIy_uOTZ7ppN`;*Z6D^<*qQZA;r6D9UVcd!P+_3O6-# zN-(pGw_0GJqdI08CvQdV!hZp>tc%A;;@4AL(WSlZ2Yxo79?sV?KjFkz^{_x4C+|R2 zN5R%8SCv+TnS2p7;19%psf7lCKJwxL6oe2UNl4UCd33esKzd29(m@64a!?3@QS!xQ z1>4uzYSh5+1TJBqTL_dQC>0vW008)GktgWmpLH+H?1$^Wh};Avm94Rvm^(+NW*=|9 zMfSyx`bAgUbNAOCsJTNWjr7xN421e7B;)Ef5Oah*FoD?%IC;SU9V2`h%TzK5m#!Ay zS=#wM(BUlHTWKBSVz0sCshtmbo8Jd&Duttbv5xWBZMho=vT?Guh>Z{voh7}(RDIoT zhAb4L%u}j!&o@DK2Bd1)ID6>EVQR*)Kw^TK3^JysF66xK_H}#Z2R5Cc0M>U9#JhHn zuytIZfOw4-ky3#P;Iw!)Bh54k62KpkZ+#uqy&0*fs+=OEH0^gEXj=zm`Ub)n!c zkdK8y4nS`?{ksdpn-r%?!|_IqL0TkWqp6saAvJYIhuT#xdgaaQeK3o0u`(S~10l*`6T%J&pWt&O!uJ`sYaqT|-5uhk!23l&#{M7hc zq4QfaSq79c-^bA=Ep7Dcpd9f-Px5kcX~6CS1CnmL5h{+p-vpp*7v|iFks5P{k9_SeCC2SrVd+QOP_qw^ zcJ(*u(*Q3l5Mi?fbs-5vl2uJ^ZKHMq4p~U+U6(pp-6^m{ZMfrStQ#M|m^`#q=`x{@ zi{*fa>F?ckzGo7ACeFa!0(4ndCsEvvjOQ{ul=KPm*~Na5WEPOiAhIg@P%m=IDJK`t zyE*)D$fI-~05&Jfc}mT4V%+c`!)Df}Al-$~K(-uM;TZ#gjZwllSE>9>!^K$2(Ks^dnKNfcc5_DI;q#p`jz~THh}PBts)}@TI#}>^$$p zU#NUlzsj59)T-b-z{|VR1KURwz}|c@7x&|tGp4z~!EJ{BLh~QKnw-`+ssZ_)*)bTJ zk@V=+NL=QsDpN;$HZqB8NtEk5NgJ0pVd2oOC!4w?q7B3;n19MwtKju+91wwu#0N33 zL{R%jy>lOufrvYyRD6bn<~wgAYx*HS>~kVh${s{Cw82&VdDn|;j+G8iw8(h;D4C-} zmz#3LRs&kHkLElsP6+3o_^L^RbbKkN(|>zMiUgJG-R5Xv@8JvlyHd*M8+a7QMh8s6 z0)J-Xt?!SLeffMX$bso2q)AsOwO!1P=KVbO7_`P7h+jILCp*?%jI3p?2L{1gPZj3( z{;PRD7sr%AWHVSm*F7AZdS#phIBxpY1(H&6yGK)9YF1Xn4NBAI{o2(&j|-}v$3AkT zgffYAaayjZZ%k|25C!)rKDH1-A7po0B98)yNMbyvp$lQ_DfbNM>WmxhZ2)u*^@DM1*!Yv^WIdGf^+8rr98E9Pp{4o zFN77l=I+TR?2DPlR=x-kDxtT~TW#arX@XD-KVC^{xxiu5J51&$AH^b=Tit(s>RCga~yN4uXUG-$zqj5|u93R2(_oEQxz$~_d7eE|$?3dD zEMpVty>MSc>U=T%B!rGMMHa_MK1b~p89`hKB&kbr69j#8kJ;Xa!dQ+%G((J45KAq#1< zDveq-H`_eYVonn+u#UE)y|Kp)Z!D?j<6Z7R3W|rJJ2m^Aq;U9MJNNC*FLPfSzPEOv zubs%r|u^yR9I9k9FjmgzOcG}lYlG&B}un+qR!v0Sn;pLfuZ_n=eoV+-{ z1z1>l(S=u{O-7z(%k{n|WziNYUmsts}NCI4q6Rh`97 zKk^^7k0uw!uW<}i4~pv27$LcGI06{9G^HL(x3iDu=T;!R>Ltq3n0>DIB(ua1r5yXf zK6%!IDFDl_1p)BzaVbPRZA~^RTEampfz+A3x0_jK;~mkp9L|fFPgiG+{b9b2=qte_ ze0;pTx7mEB{=Fw3W9f&cZMYX+?=U@%1mzV0z~7&9=1V#if`e@Kd3*#KlA&G61`QV~SqMVLPwiXoj z@#Hbalc-sCiqtM)sSMbNoMCFc$e(I$z!c#G?q z&q(Ad@=KryuFm?0ba7Zg?r`iL3z1rj*9#FX?hPQGs%>D&W}ctZ0P{Ta zv7MQn1I4cU+vHEI=K{Bt4X-U@o-pBSolP9?oIFlyCUjj_ny(jU-;^lMO@d4OCTj!z z_bT<9NA!JNJ_!ek2uDH->o@g32R`Cd3kiMwOhhaSt0F-nbIn5~vEDmQSxZ1$SZiWw z9{(Z#4Pf40Noo`OGm5C#eI}j7g>NAW*RGPN*SOYN<44CW?w-b6h74be*?D-o;U`YB zcsffCxYjq)YH(3lu!aH@@-#71hAq%gHNOUXu^2h<6}wx@zUBA&h#5#NFD$G`ETq$B zbF9OZQSe6!v;!P5GG+w2>~nLX(s%BK11YL*q# zu+5q6VHqS-Y5YT3nfGwv$ENrE&l`Qa^(ts5l3a@TxHwXkCh%I?{iPMc 
z9igh?{Wh|m0Um&`OC*B&8v}Z*jAVivT}CY_cRf?hSy6Hp`PDW}?Dc-Pepr=scXwBJ zRrGqC>5a=xDflDGM5oM4Z(Nm?MLmLrW%XEci ziP}&z+IMa>hJhbSTxtyU_MXNw6{k7tU>aZ5dX1Xi&8*{Vxk%g!4S_0&i%r_Z)s=g* z%x|;ZNu)bTt!_K;lbWLtmWqjakZ<6Jytp__O}t#2v%F|P`&gWLAs|DWW;T`XMKJw6 z>453A{4=gRAKsfGi%jLeN&5`jmlyGaiCmc2VBPmrDJHY&3?GqO1MpsoZh#YdkZT=D zUs!nYAhz#wR>Gn7ypE!pw4~qc;Q4DS@jZf!Uxvp>hW@B^v@%0g3;uW}8*A{XvzrK* zUb8r+e`=hSOw94$@g<<9p=Q3pxlE&e)t(PH?)F>%;ruh%LFY_h|H|EScKLpxeIS0> z!awtGGM?5oPMnwFC+h4`56Rs|5*7n6g+a|wM`c*k7H?)lN~GV!PX## zUC8YiH)#QhwNZi1ZZ;Gv(1D<{;bPu<7nqMu7%2d+vxBgoY{}&BLL)B(0 zj`qK4OD;!wwd{@_TpA3<-QOe*0`~e1v>t3sX&4xW*Lnt}QL!zV5bW)N1|q_LMG82* z4{gNJdbiV4mt2rTJkNQtN=a5$meH_XFNMa8wKAZg4htLK+RECJCMxz@Mq&vKR$40a z0N+3JsG@82iB)8I`0XJQ!vwOD3?suoARp_kYP&5jp9Oo`w>Ds2>HY!aZ`5PSkP{V1 zpjQ|!Rs$x_{@?OP3Yd4?4(1GlZ)PVxe?H@|F8R^QQt#@Dageyo_Rd0vOJ7Uze`U&x zf0wtND*sfG6-OY4xN)tLr?rQJk(_C<2I(7jW5}Vge^=q_$LQF4@$iA)OFLND>7Jzt zcF?cgG1CB#IAh)Ig#Uk~(JN_&1f}8MyJ@2+M#++_BIaUF4YR+;c{I(Z9R_BOj*f44 zyKw%$x4*=a@fdL5*x_>FV5i@#*z@&41hJ<39^Ai@`TvTk{jNV0nF{{ZIrB8hS^39Y z<__*FoqI<{^%iSSW~T}Oxo<0k3BfaC*VP3Xf_!3PEu1EI(IW24?pMAfG8Y6g<8z4r zGb;ahixLZC-dqk%|xTC&e*@0Dr&YGH? zD|}4%Jf`pXDPaC74iGX3H?jpP4F)-skdZ z3q0sGa7?siD=zUwzLAVFj!T1*H`y!`zv&_Hc*)}S9X|$$U~P!Bxx-j8+lV%&fjYk+?R3_U2(zCj#5Fu}Ts-DlU5601tOGZMg3iL! z2Hc9I<69T~$QYPMHEcA>if2kpPUQdpewVLj^Q~AQuN5=!0xX zF6q|-S#iP9b0PZD@1q_Q;rkCrd#Ww@ELN=JomaAzGXB~^z89rM1g|8_RUcUS;k!}A z>bIBSKsfL&^nY(DZM-k!7D<1Sh#~#Zhu6Sd(xtwFx`wX9>SnfyXG0q17t=@m`|j@% zE(nh%SQu*wT228Z3B??Ko7D$`rw$aS!xCcvnWu^Bi9H55W?G^C1FC1A5yFb}+f$yL z#@{@hq3ZbLg;p88W_;mEEKFusHNVXGe&w{@@Bbbzl9F;*@{~fgj@Q$S7b^<2y+Mw& zk8f2T=tVwAH6PBaqzP8iy-ic&<42(&Izwif%@0f}-e!~3PssilASgxB5cwh5^NStw z$#m|str?MxH?v#ht%#Zf+?&+hTPYjs*@cqWe-ce4m)XzrYWBUwswWn^MM8=XYqhl^ zE-u~MH-TCYVdu!Pl;{=R5_#!uHEQXU^v#M z?F~0?fpTC~09SnvQU#6EtiyxIZLG^j!@X?BNp1WavZ&4Ta8!eAyvebIVO-#rhLkS^ z#^ytsKxv342}J);^FdthZ7O_+K+7)f*5dHAVq|7eff*kRdDok5zX{GSRVEjNdntm% z6OuDkPPRf9zSO609)}@<6hgU>EQv7rP2#DjeS|9EL#1GAkos^qd21=h;6os;Z?On> zGEKr0LJC?*3G$M&vX~UHD$Mdpe#;^D&zD4hgvm#aHbl5Pm1O-?a?*_7_%x5gk<(N9 zC3C7kR|-&rofbnvWu6O}zL%l+qZMwyOA~|93=~o0AVprdr7FiA14ImWoTWm!KoFw^ zb5-$*7mNztL9;FyY(7?l;LCP&Q_Ce&!*_#(^FM@4a+Bi{!&RJyBuhI|a09UaOtC|M ziu-F9-@44YUG@3UY_%!=+<)SdC=p(#^?;d71gSdv z&~aY%UJ4;VUa`u;csmB}AqUw##yD(HC?qeO4wUuQCRy(e;7&dc*`hE-rizlwV#9=3 zO^!>al*Lx0iIy3*rF}d4uuxrr(12$5Hvn?LnQCNFNPxHYdhF9&DZRQ7MropL)LId* z$z#|2201}!@mro;q2O?sp|kv<^Tw8~lX&%I88OSOz%<(toQZG%7z^%xqTO2?4_@u_ z2e3G;PrO|HOptkx9}y+>4reQm>I6lc%5Vc_gR&9_b0@;(dAx^h5LeD=1?3j|18@8v*Z}>4J1ncf|JkR z3}b4cH&FA*L*R$&nI=zf)ybDi#6M2rY2Hfew^SgU1hPP%!!Cs-(S^L9jDxBp=-%I6 zcAez?ohaM+Wex@8+)we$$6IBPpe}Q3zPVEJGlTBCrfHj0wbs^sk1$`IJrmhlJ6fY2 znmQMXMy)yqssD9!tnKaVSW0?UO#VBly2c5b($0nJ3rXOxZ)^n$@P#)mmfZMw>m>7m zb=!Ba)P)=Gpa{6~iJ6(JIRd8`e{K`LGT*d$5$1lUD^$qqDZf=W$P!T?v1mQa`6Rk!Pl&JVPtE>$jBjZC7?&-whp9YMWX*CAHNs5cd`D zJzq1_thUodL4gJtald9VA-Jbqb2Td&b)v3!i#62fb3VNrkjI1&b0Pu&`+%kwn<0{^yj#bri;2IIxgN3xG>S+QR}&u>n=_gBeR zxqV(hfO{x1F0-5a&>iuj1S5; zF}1!Xbt=A&Wpb`X6(j%x^`N2NqUO8Xtq#-J2nMBY;->nRX^ z%{Tfzm(^qfSNzSp((P=mEh%Ka)Akgp#q}=coD}zi6AspK@0`~A&y7p3e3ikr{L4#OuoTaV-VaGQ zY?~B`a=l zB<+rqS@i2(ffM>ws^O8it8e1H^|^y9HnuUSz7%Oya#U&i8*~Mb!N@3s=U>s}?t1q& zP8|tEi}9mw9ioGijG`n8GI#f@w$AqAW!Ah0*fF0GwEV_HKg&qscrj3xa`HHtyvyMa zLaYuxjqNd~leDzR`4k_Q&_7cj?&5OCi^HUe_r(ug)_wbtbg7-xlZaRH3;bBW20WU6 zLtkr}jEl*)&`)Qtc5PoC#Bqo^$K-@nF{PCG9knXG~)h0#>~7TWcnCnVDoqH z`Ui&0S966oKX~9J5Qew=-xde0tklEUmnug`W)luRy$C89wo;KvfUh;zq$9TG8Wqm( z0vSrx28M?I98f!-OlKEkoQ;rTlP3Bb&0`X$b!Mmp1dx!e+=Sh&>c=GeznrSCVcCJ5qbHdpS-_d@<* zeg0fE3n)M=+$ijATph<|jZj=V!v3f+j9OOlOH%>`Lq%HICXUoBli1=u0|N 
zyBSt9c0|EP&%U}q{dx-S`#Lc7%?qxfS{pavqsWoJBR(%`~3 zD?%b|rqHBByTSowTp+Z7mDf$XquXVS#CzQsPIU(HJZaF1zUT>vu5Q5sz@xnk@4{w` z*s;VbcBN;jT9$*2ei}5N20)IdY`^U>JUymj>1|*=oxI`|t<_YN|Gl&HLN}HF*nW*s@7DuTvpeYlg8TNvwB% zq@AfeP5`9msH?msq3Z2p4{}kSVLpgq9kVae{1WHFwimIgFZ%jEq%8TzREe+S48vaf zL-1;Kq<@>4s_Z#&x-0jfvBF-FMS6i97xtlH{R7e-(FQk&!U)&khN2g?gI4@EOB94#&3&vd} z|1FKgg`^$=ZbVXHXz3onz8mQIXj-+TK(veljXi9X|!~k zB^=@4XI$j~krTt8p6%X!Iwcab($R;Hu4ffM0sML7cZ-3Q9$yjRa0|56!*6ylpSGU;A z*>8j)RwHnx>y}D)A53*L`au4{?S{-rNNbzAqEEpu)|4HuHrE58MUN56zo z4JZ`cec-T}tq=sxnh06H?@#4VQ*D-Gau9Sw5==6dD*8xTXERu<)3LpZ^^>6pVe?PN zvQFKh^OOGztCq7Usy(rwBEb?dTb?K!=yF6U1c)nAz?s4Y5gQyqW`{kDe$0KdC z-(N-5&X~Boos1$|OJ)%s%XF5P`bsPoXRJOtQ}<4N_YCwg?25SH!ZTZM71U{SeTS+k zO)%4e#+oOZae5Ea-n2P-5uU{vObA)x?drqkZDM7XpxmRx=vWcupt3Otw7*a!s`b=C zE^WJZll#0(KLgOG)kh4OXRB)9J02W$_6j6GMtP1>?ygCDSLe0If#b%S6ifYJHdM+} zF%+bj-$m}%P!kOjH6DF-hj|eW#z)a|mg=?_y|!BhnNl8TBfT_QkU1QaoOcy$O+M`9 zva@q5X3~uyJ)q}>JSj;*KmkB`tltL)9X@z-O}FHRyc(VIH_1tIm9CXZ<*MyuNl)$A z(RY?PEScGyxL@;!LNI-N&UgiQYGMI$H~L`i7^Ta7ZHcvseSUwce!T2Kz!k+THgp&` z`NILU{2a@36jix-|p}5%Thq*zE((#eG5} ziNR-xebN8FX90qO_UAi=NgNzp7Js3j%+*HE_eC%4R0LX6Gt$D73H$dxE^>L;uaQ_9 z8o0a+qABWqgU#&tQD-|&0-R&*?;qS8ZL~*3Mw1eAVUNS356Nob!x_%RsPsPWKB#=P zKC9{b+}Q?73a(TsQ!$WtjdIv0;B@%;0X-TG-ds|EKyonTPhV7@ixB}T!8Z$G3XZh2x}NtVb$;eebf2%`2%ez_g-J;8`8{wy2S3DGhlaM`GJ+kj ze!1+v-?!tv=JC1JrtfX@t4biX`O8bBydxK)ohvzAHVTERF6Tl%A@FTY{ z-rrUC<_h>G)f+sl8d4=-hnnCi0E;N1bF8qa@P57RWP9Wbeh4Fvo%?33WX{v0>{E^u z7u{x7U-n<7ccy3gN#4}7V}t7xna{bgnY_D9Dlx5HFR`ti${BfQ*g^W7$$p8Sx%@_c z-23(oPHa>pzf;&#Dlu<1T38fxIlXpe)TmKfmA1R8gpQ13_gfO0tJRa&u|Wh`D?i7# zw(_s~VRX-pqpMh6)Yo%t|2)77e$<XNFfjkczgYZ`Mq3LhW)(nRT-vjl0Wh8Z>FOXyN!)Ap$UkNV$U zQFz>Ls3!u&quWYrN;m#I+r#nNyc993;NU8kCyCS5X5qP2FinYIeMsVWjl;zQ|BQfO zaf9@cu)_WT>3CC<4=H$igQ*{6WTq(c;G9vN=Xoi?w*!ACp%PmMI!1<)`|xB+as_D@ zc;~+*4xeagX~nWB#Ck~`9gLAc4#cm(>?@F6IXI1yuKhALj=Eh4jR=un@WGde70}9M zIV(^w-`x;4)4K@as@!|!l zIO}^GU0Z#FiSz?#HKz8vhpLAJ6z^-jm+B)M8yoMJhEnNpA}wbO@2ZUW&Tr*er8b|A z@$QIAV(tB>5*b-M>laxoWQGzbt8kheo;bc&=$TCVLV7OcAwjGNJiu65TB4?N2^-ik z%cG(WYgPTcnyaXBest2BErA!>!)7SdDJKL|$r*O7w$)lLbG#q^?+Gu`EW&`uY-v$0HdCUc*AffP zmn+gP(|W!U?dsyL`342iRZ#$+YP(&w2)!FD&g}a$K&! 
zEua>DLr|pA#Ph`8;BdPwI91h_%<=zPT%IF$?bBzU+`h12!(U*9`<3W3#n=6(|AM%B}dX2p5D`4!gD(^K#0;N?CW{(cx)K)8W^?B2B0j9>b-B%PrI36zb369lk zB0TXiUEo6fxys}z({g{YL_f!<`3sTgS(!fCu+d#6{CxFf+^=I=p0%KgxTh z38t5P*-tAU=oFrx?;Blg0X(0taiXCJOY(N>FdV{uv#}-&iu+PG#~edLjk+InM!O>* zyLM0Zr*lyVhn@aE7gu|lEsj(SM>Ackvk4rL_!Id`UwIcP#}AxJSKHGK1Cg%&cp$EQ zbTVIvO_z+S^xTq=xRZL$Jog=ls}uDF5-#SQ5Ekm?r6~M~HK0$sySt?;O>uXFzc>8v zR=Wl#V>LyCQ-+@$$B3mdX)rQXSoRN3N6$5D=4)*;r3~}TPZqaF@&vvlrKWCe9?MSg z{FrxGho}ic>7vx;*xcN_H=1KSR4bF-@O%IyvYKy6yrWZ#0PNUvrRviM946DLb}!1% zcm;g+&IvsRc=ngd1igr2!&AsxU%^8N>we<)MJf--^CQW!LyNv{g7x+NTq3|4G1BGR zP%|D~Kj09A%Vcy*t+&F;#=*laa^9>yo`@Gj5)IZ?<@XUmuTd5I9nK#h;VpE)^Kz7% z^!3(hXlQ*cT7<7Xy2O&xs)xWN1wWC%i32CT8!wDZ4XH=&I+4V`-FD@C`D2+C(VW-` z3EKneoyLUIVE>dj424!=V86BeRmx6nHo5t0f{3vx@ml2ixqw2#NUCsK??~OZqx=zN_!~ajw(HlGJjY6KKK4IbXJ7+sr|&y zQ68t+4cM7`zY#&a{iijaK-tst!+@h*b-{}IY3)wNYb!+41 z{*Q5=iMZ-xy~|QXspM_G3N2pvU-yC%J@`LXwpF%vFoc=hTgPOW^VG@qRc$$&#Kx~| z=E(mTo0{%!@BiYrH`Or41snk)QndW(nrZbUe-$f$k2yw$l>-U)OZ88BddT+&RK=== zol${ERg)EYXDivzC>_tS6%vt~m-W_MygaP-oe<5a1QKHtpsL!+EUmzI?B$wqfRj-C z7(H+Z_aRt5V|#V0PlZ7}Dl>sTlq5PT^w02c1eSLFxoCyn#hvwdzS1Xv4JAj!+dG1s z<7v003{!V;1~Pd$MLQnD24oT^hr6IXw6)(X@5=qgY_5@|exc?A4o=@mFj`f8ZEjji z%PnO{MA+tSS1_fKI1;6Rp4VzMg*F;t=T-mvMRtqs807-j?X9jxZ!pBnQI8;X$kkjq zg_|Hm198%+bnG-ZmkMNEuj_j$gp$QN4)MkN)rNFcgvtHN+PZ9&iejdgos92*LUk8A z9}z(Y+~6bXohCiNDrNh6pj8T!m5@ijR2dBKV(WO_$@s$W;Jf8Rv^y;*iFoZJ5nRsC z(okuz-*|8h_Ga%xJ>I>q@_+teMiOo+w|RYG^lZC=I#Nbec5xD+&IFptRbIb)9~Z_qFCNRjstVWw+1!yKqRh`xZ5|Fai}i#vh|+@ZeJWaHFIgF{%+Gpw+$+b{QN#-D*-+^BEfej8ybfnlq&+JuXn3mpe1SUX!J zlsH;go@&atrcQNp&Sguhu77XyFOP!4o4o;l2v1Jz=UxSr9;P@+6-;W4fOqjF7Ym7A_s2s5 zuKdjkCGM*8ZPw64&9~ct)xq#xfkHiaQ@kfL@-u6xI4I$wM^6Fzk1+P^8FwN#UMOz) zW`ZgD*eRYZ0NK3^V;{^$~Isa9ga^)`#o7&pJ>Z}%)Nab&~mWkp)s20PRP z%iG#b9)2cMM%JM$IzNPUJ@F-f{swcUW+Pi2ixN`t=z+)Rc6kAwfAjot1E3F&w0PPa zK5(zZ{6Yn%#Gdn&K$SLsw7_c%e>j~l;qdcOez9Bm7d-faTskuG{L9vN=)MGtehps? zG+cj!{I4rUD*yfk^_yDb5nyNNY$#90zGPCPpqxzj1GdAuT~fj2{la%OTN)aTm*Amc z(GG^LY)}H@klh@egZ+P2R#s+xf4a3wV+)3Vx~p+o22R_`Noo?YZYjV*$Ly~6hX?$- zIHUDS%6_e5jJs&Z_L$)3Iaw0D_I6AfMc<7!cYA{ehtrcK$um@<#;bEuOPAB`#ld$t z-@jLXAL>2VYr{wvaB(RB`dCbr>;Ai(HkQKqeNFInedlOs7yM*C*J$rKqi6#(*VK+< zlZ+C(V`)`q;%P7wo-?M^WJS2SIZws$dvyMIP%)veYqO4_s7!#a84lR{36IY}AX~wF zEysqd%h|1*8hx5IJDc0tTyZ#HTWVn2@&S&~_Z#0L@TXHrI?iOAoG`PDhF$d89RJ$I zc}p+UEd7+n$)wk8@q7>JcuEPrJMA8;s;un>IHc8rwXE~}#AD?*W!et?aBH7()0bL% zU{7XupA!#}Wdef%xHu1!Ni|{Dng?q9I_|c+r?=W{xBI>+%RMvg@s9c?wf2j>qcpUr z^55}aI6hAcjV9W|BO{vZj$b{M#eI1>E2ffl@mOiPRgUVA!y;kZ=>DvG4aob4#+nSV+1=mY+wbWxK680N0zoPAQ$BCZ zdnt9Sccj&&S)B%6MJ4^eeu|J))>TQ{EU3S^OT&BYZ;A~zznL^+FOYB?wpRHeiW8Zl z0nPgyFPeB_L2#U`MFU~4>QLfdRWzD7K8JCCM1UpxSeEaRJ;oE|(iC>9=o`k-lYRrG?scw88Re>x=0D zXyOn{s6;twExdxmDJHCpu};W#&>Uva=Hvv3{3c@9cQc&+s;|D;Yvb7i@SY3xEJP`u zLZ__{emG22gEmG(N#;ndx6_HJkrvF3e-U=&@rDXUEF8B6#o;1w6@YeCH7;4Hl0Ksu zkBK&A_>~5|FuStEq7AbsfHzzpYWIiOe>g#hXg34DXQPTRxG2rse{!>Y;dl3~c=Wzo zCJzSmH-!V1NOm}@b~X|c4fC2DACvAC^XiuJW<-4IM=TPmC`xAtD! 
zU%F$9BmtWfWlgPX`q6*>>>9AKPH%R=((C&A5t>d$W;YsN+@GnvbE}VJp~Z>DBn%4# zMAM~<&GAfT)y{RI!Bu6^3mM)>*XfoRPMO_B-}6@ zM%gbKxLY`k-5egyi=WM*I?>wPtF{2bg+|yNq4t zcW@|u^A2m?`<^*340{J0mam-}1>Plyaz2wEYWKoB*znW-^HZnAF_`6Q?=1#-9TVhC z&I~n%UWCZIPK<_bk#bKYMam*lghPOQfrv;Q!-T-k#-MMA=(YB(7qXu&7Yg^O%M=P%J7YQX*<3A^p#|kty8(KP!N=}#%}7H-W3uhcMJyngW&5byxIqbJmnhu< z2!+{tWa1+`%7s^=1x#;lyGpfKqdU+}D9HFC-lFp-7Ue=srVXx#o2>VkG+GfrO}WOX zOY4}Bun33_R`}r7)pFP&c2>*&hK)SkTMzI7yIbS{6z-8p__mm%3ILhvvo%E@cOnv9 z1F-=R^K#dI=WVdn`^3nR;0cQ~^>UR_``Rk`%rOeL^_^iCy!d0G?gzB?+^fXuGwcN} zdz!qyYtpdwbn9pCv=4WGi@$<^Vn1R$JGaBG;b5!&LgTG^P7WC&5=yoZsrvGg3#AK& zoQ20NiS)v|vnup%<(02-GBe`2|SpXn2h9n z)_ySB!9YD*7?{23*x28eom*X-95E7l?ufQpu(GpVORmO}O^rZ`(VSmg+yHd8vnJ!t z?b75@!ff@7%=ql`H`nD|VfP!Z?2nh>EX`JIEFWy~dE24!dcYrkwvj)9lQVmiY7yhL zn6X?=%MzNxf*a9x$+4R-q%?wSa@gDHOAhB=ZwhocD>m`s3WU{B1TTn7xlC-#_egElX)bt z<1IbEM6@Z8k)mvq?q$;H2UwkiyX(1mjVX{(W_26Dzq^Pylcy~Ng;zZ0EQ04VB zsh9cerh8nX7Wf-XAXjwR$&p1(i}x*9I_eqP6YBnO56`{y7~TR}U1phz7dhh?Mw~?&PP##>@JpN>gsPOROw;=&8Al3Z(B>R|+7Fy$77U zp=3(Mz%NYkVO&X8E-CRQ`CAd_cv7Z3qx&wf(nEK(n+hVbxnx}CJ zxTuG%+4$5|irT6@sR5+tJPf)bEFhSKwyje&iFtmAW3U85)D6H{t0g3it)NhULPAB^ z(KPso>g*hvGJ%iu4)Kp5H3F4~hXYr>z9HL3>X6P~=ng#G8(9xQ_j1x(-O?N?KfB+X zp+1+8ykcquO8+YSp-EeHiFZyydvO8NA6{%l<0tLTM`>?J!E1;2L!ShMK?kv-Bo&+K z`~I;ixl_o1*+64!7GfsX8HaX486loI86skG3=dTnM&RRyi%jsYm}-5$hOM51n78Otvm1!H+LO(8SQo*RUEf`71lK4;crtXq^PVI8>mGUQvTjM>;B}jk=q zDm1NWQCh+w(3Tca4W(m-SCy*f@eV4wDO~wU88dDRn+45`9DjcKUTz!tYERTVoB8%Tb^{FANTk8L@7M9AYBwglP(GeQ%nl`oZsDeY>QKb zY=30|eBpkP zo!BZrzOZ!S#DBbCOZna^*0+wSh!KxWlSje}f7}fm5Cubvc$NKP?C0*Otr$I&#b{x! z#Ha$)Zd6gGU49jv@Xeg3^wyw`rLGUg|MaQL z_nm1)-Mr`cG*)BaBgdIQcz6w1Sii%-np)hL<`;U=_rPLS^MVe~CF=hf9{SkT(V4b< z)1gyP8*O)L?$m)s^`qJFh=&4|g%J%!C>H70igy+dit3#o>YgS46P)zxrjxPC@Kl+{ zXxlU@JDD}n^AkP`OO*l$1k!(WOea?h_|k?{_(G|w1XduQq5q$WN9s$+tvos$$&00E zHSZ=drqo^xMQh+6wDExPHd zQyqiuzc)o7?bhzrFPH+aUOlPq)k0o84r{a~@-d?FDy0UsvP+fzPtpMNinZ>MmeIDY zBOJm)l#uJYYfS{lM52gh$Jc{~MGXr8EdH;lEBN%tskv?i{>5FGz*Tb!`*fqADMPHU z+d|Z~Z9hrFJLPDgHqfIsJCXIr9~&KqMO(;IOQWJU9sMUZ`sPSd?Nw)<3$Y-gYhE5T z+=;fz#@g$tV$}v0>Wh~gfwv{m;~ozdLDW{}RD=0iP8dg;#j1cXz19!%!%=VnbZu24 z>Hq&U*V3tjgt}vn*J6%y)-deRo`fB*B-R`heN-)B$!H3l4zL*SNtyjt6k2+_-d`MO zZ*j@CB7?w@dT?-&%nDSt@0>yL1s90e0-W z-ZP7Ww%=`%<(1ZB4_U!_kl?zi#GAj}VL-7@$4V#7StG;=ivR}*C^dJrajE~;f4&H0 z^r)B503JRzRyImIed@6q5bM*-MMZT{6z~J;=`z~4C||vvl^-gpRm9eiE%*7FXqq!p-| zUZN@dH^ebm$Dv`N{Nv4if0Y+IOB|f5O(rYVboehnJzVi9RjeRuIj4?gbR~v{PXTkp znI}EUOS0lz>JE0=oIi58iLlY#m@qE5;m^W=8ZS6LBCYu|BV!c*w{I~Si}NL}-wb;O zCnuk(qNCsXzM1cXXJe}gUbO$8arEcHo$z}uSa*%SPFf{?G-`6>W;RtsI!(Vvyz9ln z6L4b_FGew%&yCb+PM|E4__{NQ!W5*i8ew@ug3J01dIUdVZ#6XY-ri2;vXRTIpraa) ztQ|ovRQq_ZUVNUj5FHuo$>+$Hkl;*p__K@mNkHwdk;T}I^rP__MiPL%7+>dlgzwXbCq<-=ftlRqEWR3|X0(36tS#c52t}rS$ zWFEJ|ljCshZ+yON$yO z5Q)pumP`t*?YZgh98cZW-Enu*>jive!yU=pkOGE`zi2euDkU zQxC?})X31#5Y=+ad?PRv%is)1`P8hq-J>^%1lF>UPA;bhqy2Cp7I1>NwJ`cWX$M9S zQ9}#WKs}fpj!zO^St$95v{<@DUnk<MXY88;SaKB-0G7+~q0BVyV`@Nrisehila(B&$6WmQL7d)E4&5s;sY4c5 zliFzgapKp`@uEcyEh#BO3R(_I)5!)w@O4-H!_|~`b{sl>TW`cgRA?j^FJ!Ve;0p>8 z5)!a99Z2=DsZ7P;o}+1ZGx=f?rpeG8Rjux4&sSkzFi zv9PyLW3XW*evcQ~pnwi3{0#;sFtt04(8|uan!?sb(yF@Pgf%Derb{8!(D}H-j+dFb zHI5@8;cKi0DJ7##?*J0cqZO}Hf3``R`(h=!x#`g#|A|-KCJe~oH>t`98RdZ*pGbx$ zY%M8ix6Gx_E#{sMcU_leLQQxL;rD&LOxJ8&Y#h8sdtk-TGs#~cO^RaiAG zbF=cUAF99CNNHft?J0hlpf3XLP+^shr4Y zZX}W$3jDp;33(zrb`@Sy^J@f&G3o$KLMKpG?7>&W!cX6zU&i7nGf zCHf}Ty{pGYtU?r?uQZ-+>#cv(cPQ3q zt`>U;7uCyyjSjcJpI<+TX*B~YZOSQfliU2XlSR__ZveDHxOfMo3JDq}9n#+a?f;bZ zl9mTgNZ_3I;9{i%VE9kp`FSqj@Cdo?`GX~rfB*>rWzc=)f!MDG+sVJi`b{5fT$D(;Up#Mi5d+4w0#{(>AA(@jEeNQbGYb5Y4!A~ 
zyAsFV19JL31f27YE=*p%ksz_D{0Y$RaMQug$;#l|+Nu<-4m1`Tnp|L3fFnXsRH>*R#>y^RepKh z=51n3{1hg3zU5g#=5YyLePi8GdHD^X*c!|;zu*Sx^T_9MOJJwXs}NFqyV3Xgeybrc z;>Em*gFN6%-ir2XuEhzhhF06;Qn2h^m+4u}w;QI2Bz8$a#@xxEnU3>sHFtGgA|Ws= zLKT1}At@{(ak(93pQgYb?4>#Bz>OLUPGt3acO^h^#Rw>Ac8^T=>f^AT_@a{yji0nu z%kw521i%DNwmcXlyi^4nQVf~_Ps0og3)_xETQj*|wxzy^O65q)E7}^qd%o-=UY7py zIU1em2#AS2XR=wsU1YmfC*%PR^qFo6aoaKR3&|N|30(pMrl;$c%Z?-0qG<-sC=v`@ z@RZkjSk_7~<3nJeWF6$HS?J>sBP@nUNW;%I5YdT8z42@}>EbVRGAxcVUEwO1urFUJ z5O1TLo%Rn#kEK2V=psftoWcGRm4Agz^{aS@lQVXq`^#WgoEsf7szwL6BYY7tdE!(w zaZ7rd5bxOSd~>n#ywIU_BljIZMG<0PYwv4WTx@lX_I8K$vpq^21<2AYiA%;+qK|<_q4VSt^+)Omfv!0iH7hR0{r(GrI5IMjOpf;&vM9i(L+Jn1 ziIKv|9`|JF5nZ@nYq{uu*etuf+K$m!=pN5bY~dhh*O9@@dF{^doGXcYgKE+W$-9Sg zbt>SWLM=l@0Jtank>i)llU{yYE}F>L{F?-GqP{Q3r!4~p&Vs5PrPb@_z4P>$_rn?i z#&gG7)XRH3EM>x^&_{kY%(u6%j<6t6%P~4jaI)0Zr&SMdg#=(M;zE?m!GK^*?g5{T z?Vu$IV@AL;8kLygA?Jr^k9!d_!4n=t8U3FI;+cq-x{C=y58R28xgP`UTmuj2HDXd zMhg_h&UOqNaE^J>S)VLOvqT~FZ|B{dM^-nEH$M??M43($KSdO%!{&FEQXgfGQU)!r4`|G2bCi#2L6@uSG z=|ig$yccqAkMsebQ{`dkHQcziZ~668_xI=VI>|cqc{xAD;xwXbswc>mLM(UO{6Ykq zZXBwG{Q>1G*SgZ2V3JnncA%Y^4kn#ZtJR}JQ@I(KSm&jaecgX43zj{;V*?J%G9B-y z0U#Z!(c_4RjS6|^^u#hEX(^Mehwb%}1PJnxyYbDSoY>jeER}M7uU2f^OG)PH!0>0= z13SOuc)8mi9${;Kj5TFFbP&I5wY!ONl$Bk^UUxCzM9)jk$R0q_2`QzMp;ChY2-HHm zDd$n5705fzwDxFIc){Uo7=3mARA2P@(Pq(dlYP2gSBvSR?Jxbrq$CaJ(uFP(nfi$= zDVfC|2U0+p$)W%;=0La2DN4{s`>)^1(*Af2#fYF~L|$H?vsONNC{~Q{0>r71M^<*a zb$w^?S@UD9wHl2_Nq&477UO!_?oKhl`xacT!(J)JYP@~YTg^$(F>P=N2pZru(5xu@ zkI%1Iw_zE@X5)80(7}-kXItr2#fbt3=-|Mk2-u9as9MG_>A!c_V-RYGN`75pkEwGN z^bU!>gKlj*wLiQkMFmc%om~8xke)hzS<_ohC&?2cIC^|fpj(QEB4{JkQ~bFmtDw=YHI- zx{|qJG`VYZu1<0Em&+OrruNNBjbfU%ip+%O<9x;%>MFr+YW*;AxbLS{lYz#2a%9>qph zGnBp-sNXm9BXhqI8e+t@WicQr2UOod*+s!;{?KXmR z`CyO4e83-wAjD7tDU$$nvaWr>b{wA*{yztxWYd8%^i7mY@Aa<5Kmt6j31~J)8r-+G z0{?4U=~8QS{=0pcxG4lea0W&WfmAO@zQE|VU)*W`0TEW`%KR-4D3$``ZQo?b^PH(> zx6#hT6Qma^n0osyhKzUigo1}BNKH2t@BnXZWB+TrSJQ5dh=ln4N?O7C3-H|iflvBs zV^7Kn5QwA!t_=SZpsKRcpug6;RKGe|Tm97H?TPdN{EdCWnL4F&dDnymAPdklJJr9; z0N&TtpG^n+&q(>G8KTBwxqu%)m%4oRA^}Kl?L$rzzy{Doa*_TeK>8~WIZC7s!YDM^ z?E~L_2)*a?9xQG;!s9xy)Pw@Q*sS~CYn>b^tu)zPvih8Gm!kmEg=R`JTtg-qmvwF6HYrl~OKp4zDf&&5rrgp=z{~Lz#|GgQVW;?ERGw1ik z^``wzVx2Lyi&Jzw)I^Cu-2-|=7JK?R!q+b z-i@G^i={-&W+33xt78d;+KlB@AuU;hFofu`-me#i;?=+kKrKV6%gL4yjH~b#eh+s? 
zbujOZ`-(?hKr}rB=Lt~@xjNA;hW(%FY+>0P+1gVl6HhE}?RJKLSZg4}h@Y(5{mSw+ zSoEIxN`OIDQiMz>yX%B;4hz>aCQo20O%gJ1YEo(U!zvOq~7a+nehQcRK_}VAagBnoZ6DWln%r8LQGMhC*}|Au>ZCz?eel zV&u118+NjGiW7YK>y$;rjBN)eosA)Y+c6cWtlFuh~M*-5g&j>JCKxCd>tS7*k!ii9K zoX=*56+e!PVg=tm*B9{_Rvtv_5U~TB07XONnC|aRsI&+%4jtckfcOj@kWFjz5?t&+ zlgg}WtQxEFUIgJ{_Tlh}ak=H;S6eEoB|xF*4bSqsVE9mSy%b#DETI<3m!z1id)}k4 zIL-%1Q`ltl};O z6zdh0;MqbQ&qhE#|CFwm9fxCI1Yz*%L$KtN3h{GVD~?51eSdW1jZ=!D_?Hdr*6!m; z#S;DQIaGa6;^sM~4KNF12XBEF`pl{nziOB-A=CYkT#y11Mwy`EFZiHhfz#~Ayfo1p zE0)6W;*SgORA2z0nKj+_k+AQ1F%0W->Fz@^Q z$1x-{bOwNhL$DCNgfBs{?{JicG~0z4Mv8turZsB$ z*eF8P5mE^jvzqWRtB@>(SQtKu`Eb+dgC0AqBpK8%C2j-{0xc|%iq*sM&Q*LClCnj* z8#85$KH}cbnU%5^XAa?JWL`5vdy9V_ATr07R?}92UK%PqOf@Cf-65;S{f9ZOmURGq zZo1qx-T_vv0bHE2&S&U-hQLLEXocDvJRG1cv+(plH7N3)9E&sDK-RVPVApk1X$Zm< z-wi-zAuD#75gOnh-imzj63ON-b~}+XV5>y>L9q6#l5arThqy?P({$g*FbTT2W6&Fe zI2?x5sO{-OLK$t*JH(aHc3To#UJ^zo*LwvKy~y(WV?$ZA z>>o*muUH2`?iqw446v-^@`JIJf%X(1bp*?;6_E_e1Ubothn0rBB1+`EVWe{&dvTdo zCW;|sp(L{0$8x8o!H6nQmK+uyyJwry4~3hOHR-ivuWdvL0Tjh<#r9 zTq(g2s1+rBYsDF6J|a%>LyIglo?amV>dieJwP}M0QC5zaExXCyW}y_}vNLjKpTfcZ zgccxEE1~G!-Cs&jSjo$?8^7|`E;S#U{W!znb1p(9sl-(Bw!o_Fzq_$7t%$TNj`}_e z?`F(62o%TmRpd!3vO26HwFa*dW&-AzD~M?qVzpmuvP0wGnPpyUs<@_4aB0oOumOey zWKNSs5^Z_W*KzvmXBHzN1JG@GUkf1VldHr0;@T4^ghX=Fp&zftWF_K93)?At%%Smv z-X-98mGcz{+J0j#W{1jALiq;ioRt8J6`q-03lAmbD!XW%KmuVSmG*eJ#ebdX?q10zL@vw=GTlrqX^cBaFK$lTkgl2>TMzmcb`BK0)X3<4p=1wT=6D2OF!Saaumc= z0SNb}wW{OXJ~#-IN&9V_T3h{rnw39Ld}_?06;H^nSI_7Tr&EKv*<(=ReaB7E6JZTC z3UWTg^DTy>fu` zmtgvKNH9BPRc|&r@}r41x}YcpVR2491GlS=x5}G6yH| zI@5Ry1W_yjiHCvrV?M5~Pwz8t%uC+b5jtMjzha89w8<)i)X=D0l7JI&Ggmr5#`-&< zGGVJ+C*Q%zKjPC_(g7p`@cN4X8@0pv60Bw}q9dP$V&FNfm9Sfh7fK~)G^*K+|A?D0 z*FTpSW1gIx%Hlf6`4*;F?%4vQFa&vj{&Z`(?ZTH4ya;$nK++%}E_$0HDKhP8o^18- z${m?9uKVE}&;Z&=0r#KMr!=Ms1McdLT#5a|8`icI5>?P7JXVsRO)}77zYuVPpgi;) z&+l>#5x{_fJOurJ%n`#Kgytu_V(E^KVT$fbB&2P+ksiqoq3 zNb-s!T%-W|IhlQ)S-q1MNREytv4uj|$B8Oo)y$y_+v#q6bsmQj{o*c^lzWs^(Gof} z*VD{#AtbZ1U!D=|AVOG^A+d#sI1xf)9y4pgf~2 ze-0CjCOC&QfaQaW$|8bUNL~&k2oy}jwH^`pt3eE`>cpV?JJ#f0{<*)N*%5gT|sQuPfj)&`JaZlxgEG! 
z?Q-DRf`I#`cWJn|7Ay+@4({I*r>0mSg|b?t^AFt9Y?TnBAVyqW46xT>x??IGHmJuj z5vKI(p%b7_lt_+^lt}hpVR+(xaUH-v=Vp?i&O$DF>4g)g6te5-u%X#W^RyUU6z-5Dvu%_y6Y9k%a6h%=XI;Xy`-QRpC# z?U-cY*(nGXuvQXNOZ+uLf_;SU*x$oV69Rj$eGHACllbS<`eSM5ioc{s3$E|Qguj`G zrQP$od3i>gaDyMw+vgz~)ZgS6!3QqBuaE`6N(ovFc!T%FDcj-q$;@bG@W{dxa>u@i z5=L3ERE~->gs_ULlsA6Momwi@e~8b;R1K#^S28~o$C_e~WB0(ut1S36SELEis2G~i zAO=DRf-s2lt{sn+?q<-;vNGd--zkNr9TWxxexT*v40zN`R7yj^5HC>fhAB?Ch&qz0W4H_ei8)2}Xt{(z;-dW|ch?RC?5-A?NB>K?{G6Ql& zacqa#Ak%w!9qOE05pt@SFh>4AG#sj~tv23|u*F+;L&hpj$OCTrS)?J0ce3YcC3##2 z_XksF8SBf0=n%>Q8TKdhQo+94q0+uRZrmP?^ap*A=e|^aUXh-k zUo`l>=PNVX3cZ{G!E@1{;Q({@hYc^qG%~&5QLBZPye5xpAx>0c_N~q zkn+)E>g$t0W!zug?|$<18b!y#8`uigb z!oms?HsfCS*kDzf)vrqGQHWSvuC#>;nqZJ|#pGXoSV3c&Rr z?_y3miB(?FjXV?)gOEBUrq8Z8nQSKAG3#1Qv5F5(aSoe(a7So<9>!Fs2W%)`>l@X2gzDiEcp+`f$R&!DOrtdT19Uv(jvoi|A{e@oLCuCaKGB9DldPf6T{a_cb-g1+3G<*Ze)&8gv z@$&88BR@{qcY6hjF@G@~e&mH%(iE#u%f|fe=}+uHce~u~`7S_PuvnNfe z!yjS4`+L4WLq`I4XKR0sF1$|ln}!5i#xBlZpj(MuV>;~g-r>**2{%AtGO9r|OqK(Q zx9D{L^xIt+yUTt&MzBl#LuzElF_uU!q^R;@07eJQitnGo-jb|hPI;;Tzko1Ik~k&h z1;u!VpuxoL!r?|~4)Q+O8PD5KDuJ>ez=-Skw4_y9K5ie2H}iOln~ljWn#LVeqGRpA zd}SVmOKAbcIV`|I9#ES*P-?~(UG-aO;tzM$Z~$b~rl$ImtOw(4)V!E}3-nteVxG^_Xpj zx9h7l`@t{e`||K-8g(TvaYChd`bb5@Uu%jpobXBwcl@+?myi~mIRK>$|6D9h&Mm1l z^XWmcVFh<4ZSDiLoXA&G3|gr;3@NaDi(+&G8{!O|4^YxRsaBTlw;M^|Exv3b{p|C!#*}N()~525kLl>Qm7GjzNTM z_MVJkw0^?S=y+F$51%r{J=mm-gRn?E{Bn{FV~s`%+|v3=3#sz1aIl2QOQ^?p@Hf3` z?{g*LfMJzlwF;he0EAu!owJB!v1-8SrH?aLZ?FG`++7Ac+(W&bYa7`NoHDsN zA`G!#*4+Ey(b(#Sn2vA^q+SAv8It1W?4Z_=JpQi-TaTN97x z%)y|37jrW&k0B<^tzIZ+cTyEAEa=zeCkdq-9QEMk-KI+~7>S8ZsPU`8=3Sx3?h7jB zceFChJgmno^0yb=`}tB#)4A&UXogmnZm>ogr|-&b(l2`U9W)(xo(O0HShDV5lEcvs z=FocX!Ot+Q;TJU#ZLyBsT%%2}z5lPcuYQQ?`Tj;x@eK+h2q>jA(#_H;4bsxxjWkOt zigY(DAt}v*bW1l#cP-uB@XX@p`!_uMgR646cVTmbUgVKr?>vLiZjB3 zJI-#zJ8UoNPH0J=Y#$BaE5AP_(Qm8plFR?i^i*T?r-RS)6-usUox1xBI;LG6^-E_C z3JsQJnXUAUR3T5h%i((KPMBNhKPAh|*)47;NKY;T6f(OLV!O2zX8ThPJZ~Ia4z*B1 zh%gP7=K{|!PtSw4*f1M%mviRQsGgD(MHd+VZfU1tHr}Ifb-!WkV_e@|xT*PGDcqM- zEhN+%fO}AP@uqGhp)EdgRkg;HoNl!s{VWdE+e@xor6KqmbePuZ$NYU zaj;aR&ItYfxTmn{XtEBd1IC~x5gEM)>wT_eh6C7Vw+^p zVuyK9SuhI^M zypqjl&Vx7v{`)l=fwsTxp9cMraxNT0Sz4`};H0SPMmot(@7&RXmN|Q6`-L8n18_Jz z_FS1BlX6~kTfqk0X`hO$Pp7R$C^_h=-b&QhwCv&&t3KD?)x1-9mi7#Z`$A8vbo+!r z&!5obNAtnjOp2S~=2=?}ZtF89YeRaEy=1gH*ox*Xf6O#@nwOOrjmV5LjCb{YbTd6N zRpfEn5?N|bSX_+om~9c1mW*bU%K2-wekP-`xU{5NY6r1jz3K`jua+;9F?v;2_%W$H z*T5!aHs|(p0=xUGGLru?B$M2Ru8&%zdQFE*3L!n6&Dm?`2d_OZF1QlPt)dEey?2B! 
zu1gGdResIxrQ9~W!ScZkinP#fqOohy@n`xICw1_)Eo|sLd zwv(Sn9&jxd9+tl*-Whs5SWxuz#OD9fr}UqThEvYq0*9JMFhd#3vH}j)IQ%j*_O*d| zC8bgy7o$xVSK%)Pj{VoS1X`~|3R?u)TtWz+MtqF*hI6bN1*L_Mi|S<&#@D#&Goy?5 z>*HG58Sveb8qRj%dhgf%9BTmwW8s!~GE*~q&RS4cQtMq|rZk^Hh*@}ysUn~^y<-G3 zJWK`-zcuSij9-nV&>(!_y_-8_Xk85odLqIU80h(gjX;v6UM4i5e70r9B_7u#2P>zG z>zDS5mX1iM79^28nn+M`g4ab}7-vuTQQ= z;6y?k=4tTzMCTQ2pJmd;^b9jZB$!h!PE*#_@_%-4KN=5+?Q&d=J4{3JSYxH0vFHrkH4ir`;ttlx#0XhbSqjY)x$c1%aBe~d18%hsQlE-1nrosNBTjwEd{~Xc zFXmWtZ(U@Fd7FQaES=eJka@o3BK%?_m>l$h5p88V2~(&enYsRAk5?vz+k)?&9MF@x zAI@#JIxw$+FO`Bsdx^|D{^f*$&N?{ zeM+K(xZat_z|x0s_+9dLZ&gp)B|}Kj`&BjS-o1DfvhAN9Nq#*?(A7!aMqQlZO!%}h zYT3ZXO~~{s*}02zIxbD% zPaSRrZvXdmwtu*D30fl}tIjv~eRbA#@#h2;E@@(8+&i3eUDNEK%fHun&_SB=m9FhL z>_Bd>cqX?{qutRw``232u=TSAy5BDK5~B1}Z)4=I+e(c_LV^gs#8yyWH|&T4VE7WA zS*pSUJ!4X4Zl_Cf4jqJcRh|=+pg`-bu8l8 zE!DFyo6|Bq&i1-bX}4AGe^C^ZJ+w}ob=VmDD#8|Pf3P=#RcFD8ZrB({!oAWbl5+dB zv$Dc#(ue{@u;`Uo{d*|eX!i%&CriC%5%SJ&UKQVhDrV>`m8*IvD_g~tP@;-@vEeBCinJC=V|iH(!GsM_ ztK{Y0c1p9_vm=ZT)Th+oy|Q~Yo^9tt>!mwS6veIsbabI@{g)BD_#tU}fjnEP4Zc1N zOlb3mq=Jjb$xNsNx0^fCY62%Aa}?~BYWTMsFXHwll^noOvz z^s6SoS5l-&Sdv5Pg#7JV3Amge z9DXz2TW|3MxM<+^_@^eH8|@H>7tuM!Jq|kKX{JrCyjHNls;m|dQJTF-gng~IQ3%eg zWV7f=rDm@nCZ^7uV^{{99f2#~4c)Gp11xZQ{;R6E^B`%tqd6_`Vw{%tR~iY<`E%cZ zqLjIcE@Y@*cOhd&?G#P@))Wa0yFs~76Y~kl>6?!!);$!2V!D|rEOq9sNATNn(OlaVu zQJ}(aCG2YVY3#t(_iM?_D76?Xt>mG+C?L;J7@)qjvO;BbGikKgn#2B6-I~~wL%+E= z(f!2Lp5n5C$Y>{`?8!?3^j-IgF*vzeq;vh1JC8g2VN4;pVYkGiS^*e_2`fHnX@pkd z3EbeCWOnO10OB|nOv>XkmRO&%f2?p_0WH6YrRbat1-ZR><^S@Hf(TgFhS(XpmtdE-x6sw@Yb28UDD@|fk$RgrfoSCfyTJcM<7wkvruwe13u8Zii_0ucg#m0x3r)90Gk zq1;?RvER7;`K8>z3ev8ue6O6KKktC&fUDAP&EkIdIz|U+cN=-=r6b$Bu8Y{}^a6Uf z@&^mc+Y*Mt?BUodQ_J^&ggS@c+c+_=F@r2{*g~03aatd4NVE@&ifd^~ZDu#sUahuz z3gT3U4mT+%d-cS;HIJ_#t#YegKr9L?=-XwT%`75di2~#`tZUm$pgMWWaYjI`w`r#R zy!O&mU~M8)WsbQzjN+13z4n1%Kdk*XDfQpY%s1}bxfnMiUGcH}a{t->7xa=lyzs+p zTkH0$&AqqElUz$msih0};}zQ|vkwpdI(BFbo2qgQJ&E8?^hQblWmzA39!1kr&Qas% z9{@BqIJE7gr-PD;!gi$v67qwD`fAP0(v0)(-@nBb6(+l*s6ZS^@)Lqn&Q0Sdh=7dmQ`ov8+uD_?SW!XR4W^RGqZ*1ON8T&}; zXX+VgTd>y6GXB?+%ZmS$G0Yc$7{dTajkcib-@ZP^CB-GTpVF-ne5b~Q^;dfuKW^D+ zx;@j}Ty(fWfNrfMHI~aOWQi3ksZg17Nd#$K?d3}M5$WN*Wx7~SCaA6_a8I8$A8C%g zb1sULeW+gJfOS7 z;(ogveXKxt#qW`_Q$VlNIgdNcX8L zFJzusY>YI`a7B7HTxIw1w$q*dA>>CB(s$QyR8_ zp1R!x6mnjr%reUzShs|Pj9zDcv?_hf?bZ5TSrTb$7cM2xcfQ(bnzcy5r97c<_k6YA z;ixr*n4=tY%54PCq1t00ii~L=yH#;0Bw5hHpX8D#IP0~rmE57 zAtPohTWqP9m>=Z2u{Wh8A*)wIvErs6Nr-GhYUU!?rS?3s=Uguw;y6dyLVeKNxVwJQ zsc~wAF(3$CuQQ=p(h1nMa4xkx$qjPZFA;%fPEJj^$Ho+sYXbebS_F@>6Z9jfmI9yOk|i@ol4NUJ9T4 zD*!j+Ln1r}z!1|mvy;wp`<+3+jvvArjv3K_!adOJRqev?b)dmup%R-_z-^X zSQb)Gu$>DGER5p=JZwo_D0@^F=eoRutKj`}7V&Is5awa67CcP};(sY|(X z1zPa#1nZEQh1dt#VxtkvfY{?!efYr9dK(d@KunIfp;Rf0?RdXx!6?0B11@%_+3y$P zAm7cC%cSpPlu-#$tG@{jFxb(2FuZ5be!$?) 
z+~`=Q_P}*uzggv;>D)O4bBm{AyeU$(#)Ta0ek|z@V2Ai7A>zk0hEkzcL!U_V)q_zd z%6{?~Jd(|t`-RoBjz&d~c%9j8T(FR7=M~MU5kme`tastk9E1Q%wW@EnSf4S%)N;@% zO)n>LuJ)Vx+OU%Te6gus(kOYH^vFT?YJk?kaxkgU^&aTTUFvu&1Jg=c2C!G2%}>$uG0oL~Lt!z8d+1@6Z*g8mjbopq;Xmojy9q zXH#}k8X{RT24|a++b7vG!ffvKC07d}T4}n@ViyYpivBL1=e0+vf0^#9LKbpA3@{Yn z8P|u;>(uc99MtW%%n;_=RrXff*|lv(F0)*)l&ZJ&O)P5WM`~t%nxVFy43W)j8l_4< z2qi>#@8)%uY|>SuF$xIpVEdSPmheb4OEB;>q1Tuk&=^XEYWqJ<1-}H0bcJ`b;A_v4 zMA4{3P0Li#;>#sWhALB2LVVb$`@)SsrptF%ADvxkp}$~wOmLbl%CMf1~n@H`c|&u1BG=$R>DV*p>^%um6Z_1s+e0r z%$5qNcZ+=<;P+5yXxm<=`(pnEQ_1_D^ZAN{54L7qd_nEeqCOi zrpQts?sA}7gKIFveeS{bW4-1%jrHMMuazi2Kk<-Muvy{PL2P=guZWJ@**-jo zmGz;~@+c$z=w_ZchDps-MuM~4a-UCNYczf^(u*s5CorRhqJBs4Yu~romXVne7#?6l zvF|pHsy+ZZy9Fc!<5hY^Ag)B87WKr@mcAUX;B6Kl7Lv3pf%+Sg({^^cpGfFDI^FCA zKcwBgoeNQKP!tTlNkaeBt#CQG@o!*1{|(IU)FFL!uP=y%`77X#>@Vu+R9L>u#aOPK z`F~CjR0tfQyX9mGx6~a8pyZpaq*_=oIk)Ph>iYBqxiNvc^~7lAZ?D5fQF=XK`b>}! zMHnIY3#c;4v0Bz{mf#bXD@PuyD?3dCMP8}xjSKE9hOcGIOgmn^P%ZHpp^F*Ff};NL zxcHABEm(JLEYjA#p=`;0w;a)8D2f#)wEER%erMfSCeY6hL4yR3epg~ewYPjAVD%){ z55rt_!MSvQm@vkMZhEbs5c0%{a$KL^)W~ZF_%Sm@({CvVKUiw_lxR-4d2(y{xX59v z4_qr>F*vf+LHa$Vy(hymQ(@Rlp&9lZEV3<8&pog`SH*vFa^ilre0k)uPL8ZV0dHCK zNRh-*6(pxgb@=TTg=#@K?+Zrj$FMFtLwYPV~E+|Y=y!ML*kLfD#;o*($ z*r*k|*)h9LP*Q-7P%3mC6}$HGwFRE_d`%U34=px!IFZC^P9Q?FT%<{Di5(|=OMm3* z`0C_3CFpty0|emSb+@c&9bTs-KNepb8kRT79NqY(`ujST+BFeh8guy$#+TSVk8H25 zNPF@zzs_G@s+QUPXQ6!aabSmk*lIYcUA_FL?NAIiq$VzQPEMD_8C8xh;s0s@c8)#x zA+$bV7jodI8}9~Jo{>yo$J5^j7HF<>lo16v21r3NLXSEhAsGz+X3wndcH-W3`lika zpA}gFsYB`%N-!h_!qH;0-npK$9AY@j;IPHGUqac2VyTnI+cT9vv$M1A|DA{CE7QHX z{5D<<<)13I(jA&~tc1EKWb5H*yKj4lj@Y<&w?X)GN7ls%9K%k>*$ z|B=zsfz>eRlMCxpfWfp(r+-rhCzd~$-^z)!oGAV=!^_3ColW{!&@)K#Q?49kF;8^f zaB8iNNLS2hbJIH}QtMCV=5O_WHhq7zYtp}CwKY>0nac0B&!Fx71suZj>y$gm%}ex6 zO4^FAG32Kk7Tj)@r>*z;@0~^r4o6T$*qhOXb?`s0K7E!_drE0=bLQ>tbC59PJhN^2 zu$tHw&ayRA6&-f%_TOyHH*&o^%sD2OF1610b5)s|(+JEx+$?=ylN{fs_)RPSq1xXVX>{kNV|6T2%NrKddNQYi2rU z>X$?oj%mbm9Pa0Me-^$q5%D{O39j}mBKDhM+^)DPTq*%)c&+VY_@r}-xD4trviepU zx#DN%hxn;-*eSjxQw7pbK4{QhWIM#;Eo^9E0US~(pKmA; zfXDO!9%1xrq!_|fHxaC0L>D&IIpN;v+jj&n1@Qh?z+2!%x%T$zs)2n$AM@QB6M}Hx zy^C|3Hp6SxyY8!OO>J^@|GtgO`(7V=6K-Np@6zmMh}&D(`~b6ZfqEbK&nhC@@KTe$ z`Nc(Vm03RA{r%RVZVW6eCce6Z`So}-CMKq7em+}A`*u265|k}^r?WHTS*pnPDMA&M z3TZeYIN*J`WZh+5ae-Q*H1pmw=s7D8U5l_7_RE(NO2d?CWaQxBc#5|Ntl6?jWCwqZ zz-|w%QU){hV1?YP=@0G!t<29q^?Zvu-`U08d`42~m%?Z*#Z;$GG4nYFUY6u>qB95b(J!F!rP(si z9nYWd@56<)>xJDd=}nq3vGH+4q#Y<&2IW!4|NiB+f6F(-!6+ge78e#q>qIWo*M~Sh zsee^0v_Mv`y0wM!`IH$QVr`x5;4uml&rys} zB$)ByOS*7wWGnK8Fo0nqw^zyR`d#olnlDM64TX2%?z|@4jMiAUF=4k|GtD^8ShDVc z(ahNt7#Bv?J`v$wFp<5+KWPw=|i^S4uW=KD$4zbY=Vv%|I%mH?cM1 zKyvg0LJ4j^CU+MTNZdA?RUpsZgQ5>d;}yx&6wD>xreDa;7t|LmE+^Sny!ZF>n?`>D zkel!hix{=oMbK7P#=;P{{UQv)5V3s4W6V?$?g2 zegf$SYc|r~J^JT(LaV(_FzHqb4Ek5DSE4#lPcOd<(#lbgh6+qHBHcoG;=efw9CR$n zgfh(1vz`nyV;X`xpj04_p^^x-P>ffmqgSjLiU*sUi5s3%zgzHj*RSp z>28!z6e6On^}|mp#_pGBAk#`5B0KFJ9KfMJGoszaf==B%UE|-=Q566p2drE20)KlL zEJ|M{aN1ru&lY`PlYeSS>viazGF1FqC$wK6QNv|g>c-NM0VCb10twMVbbq9=72(~= z+Xeo6d?R2l!>ufw-~juBVJ1UJoFbq@_(R7yR`#$UH0)BYfp| zhn7j1`pXe7KH};IVXz_4YlaNDPw!&8A*g0tHoZPr3fej(E@w}KFxaW|-slOCXz#Fl zoyh+aXC#slAyX5sk+DK1_$N{4Kt@{ji=Vl@Zhrd}rkGmi~tZMz2IuYzTt-e z#11W5o1f z=pxV*(mQ7q;lqF35CbN0*zH`}n}gP6T#EhM4ejtYuIWn9xY)7TJR&l(r@I6e6=hg% zqb_!~pT440j?8Uc8y`$MLqog;&i_dX<4*LQf9UZr_#0e8jY?IpfAf#SJYVBopYzHN z=e6Wnk?6^VIui^p5%bg?Q81XR=nkedB*xA;nkmyY+_Hyz)RZSG-oE8IV6~|m4^SKO zV$LqfZ1B_;2FyIzJBYt?8P5LyL<|EPRx>~9n?YP7=j7>w)fd&1UOz+?So13l#7OaA zi-%gA7VCa^2LAfe#03A5`#^XVTXrI;lL~_dv#%bi+jCNU1DR_&U(Jx=UsGXetx^e! 
z6MgGTCh-t6MsVYF1|hIYr(xe9uaO!hd;H_muep3Yq^g2U7xkG3h5Lt1s%D@VeE_lMQ)#}cF@7W))CB*|lZ7duGEJHT*w)YY6Z zhacZOA$hb+0Jv@fe%<>_b(c?k0`Qobkgl^v{LV4?wpj3Mr(Fi3s`1%FE}g%Z7%gLF z6%pEKJM~|&d$MyT277)GGkTkwK_~a?G4b4PZ(#wUf8}QZMLnbIuFU*%H|+M1@A7F% zYOmPM-YsP@upNN4j%*7`)n3ZtuBm3djHM;wbqHI`?Ak-vGcD<4@}Llo(VMg`R-yFV zE~$8*ZZDKM*W3$(!CKM;O-h_s@&1I8QrgttKU2vOopK`nY=4UKT7WUc45wW#Q`BmM z-TlcI(gs3_%*I!^j0#a|Lsa%OM<@a!fkTTqt2x@)_r^*lEb~?QVxQ*4*sdsXQ+CH> z_ksuGKZ}E}TulsJoo~wW?Qcn4)$PfiSk6fw^E`FQG>CPZg+xTf3H!G5EP=8dKr~ZN zadG`yCuxnZK=(Y0%as zv4lh?3Lr#FY&Joqv)j^E&gY28=d_zDov;fruj>=GL*w50Wv~b+5wP*<>o^5L$3cS&p%V-x% zk*{e=N0CmsV$geY`I2K@g~V!>XUUb^_sHM^VKY%>^g-0OD;PO9RNvs`_*fm(`L&;N z+Lqd(>yZtaAf-Zy6FBEpX3`KZYHBvUH8yEn`+ffFaw_QT>_hHIp=|uYiy@}YK8DSB z0WS{E=v=#->G+_ty5G=NCj~_X__&&QkZguq4QMSjB^Q9pWT=fqo6i0`NH9V2TqFQr z{~)5sY=7E-8)5hQYK`{9dIDINY=*OYWp#Bt>#gNZkC}^$>;AjjH}O22)hFi!yY3I; zuDhcwls^cSCA<`gKHBvNz*}`V^S4meDw2=HbH5rnlNNdLMg1wsz1If{!daDS@Q6&d zfVU`=-BH=KUMMCqNj-kOH_Kv@QH-(jO81bBgEVes;aOR%Hlsm7+(JXjvPe^OIyu38 zyO>DpTm;fAG_dIg*zc_Mq2M#mCh-@a!1#Aq9FbWspD?IYSwFB9(}vzcMUM&lZ<5}u zP@W2d_QNQCApz`0H)^Wl&+dIK`|Qw@L~Fe}F9ggpU!#iMfcKfu;9_Qxnh~&L3d+{C zzg6)8(RuQ4JC64!yKm^Z{cyyv?2qJVYRX?}_!N4C5YnnIUaUGg9?nZw%B%VZ1}`Jv zO+`56A*K|NRVH)L>P`-fxRYtJS> zi64*c@T1YT9_WPCZvwt@DS9liIP7}dvOKX51LC}bJH=Iy?(|KpLnZZ@zrR0sji%rl zUL(jc+4G{K5ES%IqBvHJ?V4GiDyXhnSX`Z5eL$iGI3(mr`3EmhUGiHauUMAE4i7dF zU5?j>+L!%tp19n?y8jH1NVtKq$bR8U4IE&t@A@6+JmSG5I|USF7m2wi zCZ=a^g~UgVUG06qBRBRw-Ek0)QkH9nPz z5~IJ^Uk+UtWOi!{&3}r%nyzQCcJAb+ZFTrK8G3+Bz9=PQ4UQ0VYx|>8AeZMDg7-P6 z0p{ch5ogR`%G1>C^bxj25Zv8QdJRF6TBL@|ogl5j)cyYAGP)YLSBVfRGOVAQ@ljBNEk2q7{y52giNxb~ALlzn12i0<8(#23AVlP+gZK`$ z0<{B5k9lW&cbD7w-r>o+>#IK8;brj{Jg{(0mrn7`$V=3`|6#L;=qFn@oQ#FmJ*vfR z$jk;CIzJ32i?Gj}u;c_(FZk>{Pwf5Csy~Rq7;Y)!^CFz6OCR1s-A0{ocO$!43Mz_) zu(IocbML&yO+t}K$WLX6_P}-~4%Gav@Zd@>>kZb89U4`oATw>j- zkPMpF)wfC^b+n~cjSX-q-Zk0yWt)QUH&?aER2bW(L+eH(s!F--Hzf@j+gFY) z=YEGc6*YXzRv`g!%4h*L=du&c^?0)Gn-RuJSllyN0DDZwe&U#wvm4*=`CAc=0r@uK zSWWGlf&{rRJ|d#$&qM=|j1Z@{{HX=fc<7S^d6!%V<-;9Zs%bx5IJ^NkUmJQ* zqI+V)HiBPNt_y-1ra-oR`-vs^f$ScRV3MVXN4zTN`(Wo-lKd-wd0Dc1P z{dAj{sx@_PKjK8yui3M;JVgWeYyjveW7EkG$99eW?goO!0gs4??b%LG&0dBQLNb_s zqMfihZIA{f-r#>|NN$2!0d(BC8kzfKmcCx&x?Z76(Kh=WKQE^hq5Yv&_ zjqxA)5M01~Aksfl(N^NbgL#hJwT;DJsLPK_-!(le@ z2rxJxIdg04i4d{(Ojqsv`(QL5Lr8Vc6Huu%I`D%stcR(Rz18_@^>Ki^RLaqwg4J8q zx3czsu(fk~lv*5WlU12+uoJ|X2ttTB@Dl(HJlQ#y$#UGcZF9>go==sw#l|kJjN|?y zZm%g^f)C&dLem%P5wSAJRCjpVr)7S-Y(PiB8_NJWg8&D2yIcvfe(|N{y9*4K1XnFg zN!i}ox=&zX-QF7#*1sJ!MRdPWzv{d@^}EZ`NnBJ^2VfoJ>&ckJ^$tQ9h$7ZNCbVs{v7E=*B_0kJ=TC4#lK;{wT&48W;s@HHl2ViM{ z5v^C>zED+ER4cPw;0W~wpyrbl{`}5@M2=m2sD;ex;)baH`#+AZ$0RjuL?dSVF=;(h`Ge`v z5ipB{4aZk;B$Q+k6FEyJL|^3YWDhWfU7=JtyT;-&M!2&5dtrpNOPUogz3S)Fy&eMn z#J=8{Z?;3XWI8wAwYQV@N_0E#A{!O*HCa;%uYU~O^!CM%>f~)p5&s(6V?k01p=0!7 zdDSL64X*$dNMtZ)>IW`=N=ffsIY5^Nsfk zE{d6%*@NAw*6`wI2d=Bx=DpRG)m0|y_BUH*0I7ATygn=*pZBQY#&!Jy*z*7K)nX9{ z3b3#lxHUeuspx}28ji?5em zU0C<=&5nh*YrUgPQqkVs6|fi-VA~i}gF)_X_CUZ*ka{_xV8uCA z-J4eq8V%nC{k{jgn9ZG#Q@z&~n2pcFYa{{lp$2zdxKOp1y9?eu$-VDJ5iL;gR+j7Mwmz-(y@K+cCztZ$zlWROD_!Uwn>07+>3)A&HJYfhO?Nh zAR(vUbOO7@tM1i;vP`OM8IDz9ic2JPK}oYR1IPDfF+mJvkXLmSOn^#;4BH2f+5Yd6 zzaQhdJp3bjvkE^USB00wd2;br<@lK`azZv>LgLlYaCzI+QTl0@N!T;wHC}jtvU980 zLObh3j^`s7Pt|DHZo?Vmlhz%isxSduwR{fkXwerJq$YC#`$DK=q%nz!fojcj?|&2L za@rYw$XPV0ml6}hIf}fKEVxr=i*cz3>q2SDb0Avqseoma3ipr2 z=5$K3)WuLPrh3%M;Gc{(+QX3Mu`rWO9Z)L{gsRo{l zP;UrIBqk)psd%Moj2s_w8{jH3g%KRH>vB<1+LwTX)CI`MesN?^d&$mvUoCLtj~rBbN5QtUU4s&jn4qo8DX#HU-0d=QeIwxf#y z6&NAww`~J#W~1z2$w-P7mb-2D8U>IAP2bQ?{W$#N;19jCCAH+RlakyUKApK zKei5#k#;znX=DZo?V2HuMHAcuBQV@>H0lF))=Q_YOu#-MD;JOipJ|o5QlVB83>XyR 
z?C0kn4mWS#P(m_kKexf~!(mcLfRgvuQh7=$&{K!xbgVy+tXBdX`Z?KiNq)3IwddsH zW)E%WJwW-kPcCCX%hS;<$WA4}t1m~^x3-3oc3r$eU_Xxxk_db*2s-Fv#{A3|Fp8X4yASm)ySZCP8j;#r15&F0vf~^TV z{1(yn{TushMiCKVk4Mt%t0KZkFE;feKE_z1)~dJgHu`{0Z&xF&c!wOU^{jW{8Je=u z=P_-teLb&}nE>v}R+{zExMe>1h;Swv{gYHY@~#xrR6}($je)DpOwxdNO^G`sBSXQn z7ZOI`jr1(XZSM9{o!aLqm8J#Ik*T$0mV(ZJt8SWFb!%J>xPY=ujP3Za175Un-ZtZn%f@lin2gV;;Pc_Ux!t2q;dn|;7g^-K7DwC}?RGzP0xgW3K zNx3y#MHa!qT?08#P2_12r93p-)rLVFBLbpPZJh>t)_rF({V;-Fa>j+HoGin$HxBz6 zV?Q5$W?{ks4}q}6-`G5{`i)`kf{kz2PNiL&I~(*#pZ+x?gM#vyQbPEh(wp{J^@33+pJSVG3nHkZK(5p_9oUuyUKE_Ff zgH+0v_EOtkyO_S)Z@m^Oa%D2FgEhv|P7FT5D^w!k9d%BNa+L)tmre4Z?*7Bq@ReG> zj$WiqJ4fbyOK%Rk?^8#JbzYyM0*)#KkLx5`)L$GwIwcupnC<#H^G+#O{o9X^>I3tG zD}ZHctJT~IUlPNFgA)gV(MS*M5TAW25O2mC$vJg-syQ>WF^*f)CHLIWcyqaaNd~vwdoY?OY@I#5;_KJJ9~BqY5=$CV zSM1TdBSlM_(~p9N)Xe`06Sr@d8pgUaVM<$jAMgmh-8~*p6dDqu%-|-3$ z!Eo&GvxiDQ-)EX5X&`q{9!sY-dBw~Eu{~l2IZ=ttn6mdbC+rd%Chb{mrQ8KyuV7a2 zCH9W=DGnY~4pbQaKK8O-%lzdRcMfn<(pQ(mSJ>ei$jYK=slE`IkR@1>z2KBlhlcfW z;^<#~F~l2OoEP0qb-s}2cHYGvV*l}e{vOvs07%l0>vk__73-gaUhmn$IVHFZ^5}t- zt5@Co>J+;f0ZjFd~O7m_VosveS>i3WS0K;;In=s%8As}Dtj#s4Sc3XLnEkm+A>~8?$fqJu1%gR@p>Vev^rtFs6shZelHAVNI z%+F0%6+sP+StFqoY!a|Ee~vV~cY&`NMj|F(Ibw^@h|uWczpqHcXv#DdX>D&yTl?e9 z8~t23|6>?>nOSN=IYj|S4@JDn3V(c6XcePmt7d>)L1)JGwel_M9AvZE`7DJk;ksh0 z8?o>$K5lw51b6)Fjorb-MMc9w`p9uvFm+<^O0z`PBpZrNl>Wm??EoMBg#~^1S+Kz= zDZ|X|@=~;UY|VrDf~>NcuH~Ew&E(|Xo?%Ny*dG}I&_=$ROdJ**&!iJS2$HBr&anfV2bMtata8i^|QX0ALcD|iZTV@upjUw#eeB9%hl&^9;ng~R;pk?sZ^Tt0KkD9G z2@XnO#b9Cb#%W}TEQ!~i>bfEp5d&nzry_PznLqGQDn340%BnObldfA$XQE?r!!-sk zMsX_N$itD`f^JiSrvzReKe9YRAtA=|SCW;D-@Z4Y$i%^&omfKMj8ekscvsTdhC_m` z&Vzq_VkF6~4EFWs&C$+=gshJ9qeqvDg)wYA0o*p$57d;*+4K;^n_@U8gKQ5-&P*7~fhv%49 zvggj1dxpIDw5-?@1f>sVDD1Z(U%0*jAr79X)a2<*S#vggZm%OXn>m~ab)0jZ`g!sM zrP1s6%yeITcCy9w26aRIRmBjKGU0I>xrAq!VHH2dp0Pqc3U6!jd-z_adFFv!l{!f+ z=!g1A@MfJi;1V;WY!wCNW_cN3yK^b@+Vcjh0rJV3v%G)I;#e&5iauHPQ6Ww%dNIz7 z@_H^`9O5h7KRhJVbMa&c%l)jb^~0{4NkD7g2|dalr$W)l;Z?jn^a4>TC3|xs8=To1 zt)4#W!rsXg0kh~wy=tIU{*sg*hA33XbkNCEsCTv~Jqd9pj-NgoNFv>GyWJK;zqENN z>ujTPNrW=bUw(#LoS}Npn#x0B zCtpd@8DFY3T_dp!HB+(*zT3FA0t}?Jvl;E;l$c0LMOLLNTES`erTB9uHqgh;V|a8XC-alU4}>EkB}Q*N7`_ln z06A`*P(fuQJ|t!KX-F;q4G&QkvoV1WXcUd0VfegX9aXl{t*I(dyZ41SV?ga!Y*!|o zS($pPzHL3&Hsz?$$U82$&^g>-;%B0QBq9#V*zTeHfBbw&oD{tG;5gfSJ2B<}ISvUC L8R5eBy59c}JXG!b literal 0 HcmV?d00001 From c9ab8aec4bb42e090ffafc424dade16dbfcf177f Mon Sep 17 00:00:00 2001 From: Mengjiao Liu Date: Fri, 29 Jul 2022 16:15:39 +0800 Subject: [PATCH 286/292] [zh-cn] Resync node affinity yaml --- .../zh-cn/examples/pods/pod-with-affinity-anti-affinity.yaml | 5 ++--- content/zh-cn/examples/pods/pod-with-node-affinity.yaml | 5 +++-- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/content/zh-cn/examples/pods/pod-with-affinity-anti-affinity.yaml b/content/zh-cn/examples/pods/pod-with-affinity-anti-affinity.yaml index eeb3cb372bf31..a7d14b2d6f755 100644 --- a/content/zh-cn/examples/pods/pod-with-affinity-anti-affinity.yaml +++ b/content/zh-cn/examples/pods/pod-with-affinity-anti-affinity.yaml @@ -8,11 +8,10 @@ spec: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - - key: topology.kubernetes.io/zone + - key: kubernetes.io/os operator: In values: - - antarctica-east1 - - antarctica-west1 + - linux preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 preference: diff --git a/content/zh-cn/examples/pods/pod-with-node-affinity.yaml b/content/zh-cn/examples/pods/pod-with-node-affinity.yaml index e077f79883eff..ebc6f14490351 100644 --- 
a/content/zh-cn/examples/pods/pod-with-node-affinity.yaml +++ b/content/zh-cn/examples/pods/pod-with-node-affinity.yaml @@ -8,10 +8,11 @@ spec: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - - key: kubernetes.io/os + - key: topology.kubernetes.io/zone operator: In values: - - linux + - antarctica-east1 + - antarctica-west1 preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 preference: From a7a74c81efebe2f22d18800efbf597adeacdfa31 Mon Sep 17 00:00:00 2001 From: "yanrong.shi" Date: Thu, 28 Jul 2022 21:35:54 +0800 Subject: [PATCH 287/292] Update translate-compose-kubernetes.md --- .../pull-image-private-registry.md | 32 ++++++++++++++----- 1 file changed, 24 insertions(+), 8 deletions(-) diff --git a/content/zh-cn/docs/tasks/configure-pod-container/pull-image-private-registry.md b/content/zh-cn/docs/tasks/configure-pod-container/pull-image-private-registry.md index 7fd8b2ad7f6c7..69a10a8035a8a 100644 --- a/content/zh-cn/docs/tasks/configure-pod-container/pull-image-private-registry.md +++ b/content/zh-cn/docs/tasks/configure-pod-container/pull-image-private-registry.md @@ -17,11 +17,12 @@ This page shows how to create a Pod that uses a {{< glossary_tooltip text="Secret" term_id="secret" >}} to pull an image from a private container image registry or repository. There are many private registries in use. This task uses [Docker Hub](https://www.docker.com/products/docker-hub) +as an example registry. --> 本文介绍如何使用 {{< glossary_tooltip text="Secret" term_id="secret" >}} 从私有的镜像仓库或代码仓库拉取镜像来创建 Pod。 有很多私有镜像仓库正在使用中。这个任务使用的镜像仓库是 -[Docker Hub](https://www.docker.com/products/docker-hub) +[Docker Hub](https://www.docker.com/products/docker-hub)。 {{% thirdparty-content single="true" %}} @@ -45,12 +46,20 @@ registries in use. This task uses [Docker Hub](https://www.docker.com/products/d ## 登录 Docker 镜像仓库 {#log-in-to-docker} 在个人电脑上,要想拉取私有镜像必须在镜像仓库上进行身份验证。 + +使用 `docker` 命令工具来登录到 Docker Hub。 +更多详细信息,请查阅 +[Docker ID accounts](https://docs.docker.com/docker-id/#log-in) 中的 _log in_ 部分。 + ```shell docker login ``` @@ -161,7 +170,8 @@ the base64 encoded string in the data was successfully decoded, but could not be 如果你收到错误消息:`error: no objects passed to create`, 这可能意味着 base64 编码的字符串是无效的。 如果你收到类似 `Secret "myregistrykey" is invalid: data[.dockerconfigjson]: invalid value ...` -的错误消息,则表示数据中的 base64 编码字符串已成功解码,但无法解析为 `.docker/config.json` 文件。 +的错误消息,则表示数据中的 base64 编码字符串已成功解码, +但无法解析为 `.docker/config.json` 文件。 + 输出和下面类似: ```yaml @@ -239,6 +251,8 @@ metadata: ... name: regcred ... +data: + .dockerconfigjson: eyJodHRwczovL2luZGV4L ... J0QUl6RTIifX0= type: kubernetes.io/dockerconfigjson ``` @@ -256,11 +270,13 @@ readable format: kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode ``` - + 输出和下面类似: ```json -{"auths":{"yourprivateregistry.com":{"username":"janedoe","password":"xxxxxxxxxxx","email":"jdoe@example.com","auth":"c3R...zE2"}}} +{"auths":{"your.private.registry.example.com":{"username":"janedoe","password":"xxxxxxxxxxx","email":"jdoe@example.com","auth":"c3R...zE2"}}} ``` 本页讲解你的集群把 Docker 用作容器运行时的运作机制, 并提供使用 `dockershim` 时,它所扮演角色的详细信息, -继而展示了一组操作,可用来检查弃用 `dockershim` 对你的工作负载是否有影响。 +继而展示了一组操作,可用来检查移除 `dockershim` 对你的工作负载是否有影响。 ## 检查你的应用是否依赖于 Docker {#find-docker-dependencies} @@ -106,7 +105,7 @@ execute the containers that make up a Kubernetes pod. 
Kubernetes is responsible and scheduling of Pods; on each node, the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} uses the container runtime interface as an abstraction so that you can use any compatible container runtime. - --> +--> [容器运行时](/zh-cn/docs/concepts/containers/#container-runtimes)是一个软件, 用来运行组成 Kubernetes Pod 的容器。 Kubernetes 负责编排和调度 Pod;在每一个节点上,{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} @@ -119,7 +118,7 @@ The CRI was designed to allow this kind of flexibility - and the kubelet began s because Docker existed before the CRI specification was invented, the Kubernetes project created an adapter component, `dockershim`. The dockershim adapter allows the kubelet to interact with Docker as if Docker were a CRI compatible runtime. - --> +--> 在早期版本中,Kubernetes 提供的兼容性支持一个容器运行时:Docker。 在 Kubernetes 后来的发展历史中,集群运营人员希望采用别的容器运行时。 于是 CRI 被设计出来满足这类灵活性需求 - 而 kubelet 亦开始支持 CRI。 @@ -128,9 +127,9 @@ dockershim 适配器允许 kubelet 与 Docker 交互,就好像 Docker 是一 +--> 你可以阅读博文 -[Kubernetes 正式支持集成 Containerd](/zh-cn/blog/2018/05/24/kubernetes-containerd-integration-goes-ga/)。 +[Kubernetes 正式支持集成 Containerd](/blog/2018/05/24/kubernetes-containerd-integration-goes-ga/)。 ![Dockershim 和 Containerd CRI 的实现对比图](/images/blog/2018-05-24-kubernetes-containerd-integration-goes-ga/cri-containerd.png) @@ -141,7 +140,7 @@ same containers can be run by container runtimes like Containerd as before. But now, since containers schedule directly with the container runtime, they are not visible to Docker. So any Docker tooling or fancy UI you might have used before to check on these containers is no longer available. - --> +--> 切换到 Containerd 容器运行时可以消除掉中间环节。 所有相同的容器都可由 Containerd 这类容器运行时来运行。 但是现在,由于直接用容器运行时调度容器,它们对 Docker 是不可见的。 @@ -151,7 +150,7 @@ before to check on these containers is no longer available. You cannot get container information using `docker ps` or `docker inspect` commands. As you cannot list containers, you cannot get logs, stop containers, or execute something inside container using `docker exec`. - --> +--> 你不能再使用 `docker ps` 或 `docker inspect` 命令来获取容器信息。 由于你不能列出容器,因此你不能获取日志、停止容器,甚至不能通过 `docker exec` 在容器中执行命令。 @@ -160,10 +159,9 @@ or execute something inside container using `docker exec`. If you're running workloads via Kubernetes, the best way to stop a container is through the Kubernetes API rather than directly through the container runtime (this advice applies for all container runtimes, not only Docker). - --> +--> 如果你在用 Kubernetes 运行工作负载,最好通过 Kubernetes API 停止容器, -而不是通过容器运行时来停止它们 -(此建议适用于所有容器运行时,不仅仅是针对 Docker)。 +而不是通过容器运行时来停止它们(此建议适用于所有容器运行时,不仅仅是针对 Docker)。 {{< /note >}} - 阅读[从 dockershim 迁移](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/), 以了解你的下一步工作。 -- 阅读[dockershim 弃用常见问题解答](/zh-cn/blog/2020/12/02/dockershim-faq/)文章,了解更多信息。 +- 阅读[弃用 Dockershim 的常见问题](/zh-cn/blog/2020/12/02/dockershim-faq/),了解更多信息。 From 40c626c27609052f08dde24260efa3b674c4898d Mon Sep 17 00:00:00 2001 From: Shubham Date: Fri, 29 Jul 2022 16:43:11 +0530 Subject: [PATCH 289/292] Fixed markdown in tabs. (#35504) * Fixed markdown in tabs. * Removed {{< note >}} shortcode. 
--- .../docs/tasks/tools/install-kubectl-linux.md | 21 ++++++++----------- 1 file changed, 9 insertions(+), 12 deletions(-) diff --git a/content/en/docs/tasks/tools/install-kubectl-linux.md b/content/en/docs/tasks/tools/install-kubectl-linux.md index 77af85d887c49..9ecd3b03c4c8a 100644 --- a/content/en/docs/tasks/tools/install-kubectl-linux.md +++ b/content/en/docs/tasks/tools/install-kubectl-linux.md @@ -112,17 +112,11 @@ For example, to download version {{< param "fullversion" >}} on Linux, type: sudo apt-get update sudo apt-get install -y ca-certificates curl ``` - - {{< note >}} - If you use Debian 9 (stretch) or earlier you would also need to install `apt-transport-https`: - - ```shell - sudo apt-get install -y apt-transport-https - ``` - - {{< /note >}} - + ```shell + sudo apt-get install -y apt-transport-https + ``` + 2. Download the Google Cloud public signing key: ```shell @@ -144,7 +138,8 @@ For example, to download version {{< param "fullversion" >}} on Linux, type: {{% /tab %}} -{{< tab name="Red Hat-based distributions" codelang="bash" >}} +{{% tab name="Red Hat-based distributions" %}} +```bash cat <}} +``` + +{{% /tab %}} {{< /tabs >}} ### Install using other package management From 691da5bd0b7597cd6daafc0a122f1efbca0f8fc8 Mon Sep 17 00:00:00 2001 From: Michael Date: Fri, 29 Jul 2022 20:24:32 +0800 Subject: [PATCH 290/292] [zh-cn] updated /migrate-dockershim-dockerd.md --- .../2022-02-17-updated-dockershim-faq.md | 4 +-- .../docs/reference/glossary/dockershim.md | 4 +-- .../tools/kubeadm/install-kubeadm.md | 25 +++++++++++-------- .../migrate-dockershim-dockerd.md | 7 +++--- 4 files changed, 21 insertions(+), 19 deletions(-) diff --git a/content/zh-cn/blog/_posts/2022-02-17-updated-dockershim-faq.md b/content/zh-cn/blog/_posts/2022-02-17-updated-dockershim-faq.md index 19be63a86d38e..aa3ee8ee5edcd 100644 --- a/content/zh-cn/blog/_posts/2022-02-17-updated-dockershim-faq.md +++ b/content/zh-cn/blog/_posts/2022-02-17-updated-dockershim-faq.md @@ -4,7 +4,7 @@ title: "更新:移除 Dockershim 的常见问题" linkTitle: "移除 Dockershim 的常见问题" date: 2022-02-17 slug: dockershim-faq -aliases: [ 'zh/dockershim' ] +aliases: [ '/zh-cn/dockershim' ] --- Kubernetes 的早期版本仅适用于特定的容器运行时:Docker Engine。 -后来,Kubernetes 增加了对使用其他容器运行时的支持。[创建](/blog/2016/12/container-runtime-interface-cri-in-kubernetes/) CRI +后来,Kubernetes 增加了对使用其他容器运行时的支持。[创建](/blog/2016/12/container-runtime-interface-cri-in-kubernetes/) CRI 标准是为了实现编排器(如 Kubernetes)和许多不同的容器运行时之间交互操作。 Docker Engine 没有实现(CRI)接口,因此 Kubernetes 项目创建了特殊代码来帮助过渡, 并使 dockershim 代码成为 Kubernetes 的一部分。 diff --git a/content/zh-cn/docs/reference/glossary/dockershim.md b/content/zh-cn/docs/reference/glossary/dockershim.md index b12b08d54a4dd..daad4679bb4e2 100644 --- a/content/zh-cn/docs/reference/glossary/dockershim.md +++ b/content/zh-cn/docs/reference/glossary/dockershim.md @@ -36,5 +36,5 @@ Kubernetes 系统组件通过它与 {{< glossary_tooltip text="Docker Engine" te -从 Kubernetes v1.24 开始,dockershim 已从 Kubernetes 中移除. 
-想了解更多信息,可参考[移除 Dockershim 的常见问题](/zh-cn/dockershim)。 \ No newline at end of file +从 Kubernetes v1.24 开始,dockershim 已从 Kubernetes 中移除。 +想了解更多信息,可参考[移除 Dockershim 的常见问题](/zh-cn/dockershim)。 diff --git a/content/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md index 889619b1dbd51..2c38d620bb520 100644 --- a/content/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md +++ b/content/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md @@ -67,8 +67,7 @@ may [fail](https://github.com/kubernetes/kubeadm/issues/31). 一般来讲,硬件设备会拥有唯一的地址,但是有些虚拟机的地址可能会重复。 Kubernetes 使用这些值来唯一确定集群中的节点。 -如果这些值在每个节点上不唯一,可能会导致安装 -[失败](https://github.com/kubernetes/kubeadm/issues/31)。 +如果这些值在每个节点上不唯一,可能会导致安装[失败](https://github.com/kubernetes/kubeadm/issues/31)。 ## 检查网络适配器 -如果你有一个以上的网络适配器,同时你的 Kubernetes 组件通过默认路由不可达,我们建议你预先添加 IP 路由规则,这样 Kubernetes 集群就可以通过对应的适配器完成连接。 +如果你有一个以上的网络适配器,同时你的 Kubernetes 组件通过默认路由不可达,我们建议你预先添加 IP 路由规则, +这样 Kubernetes 集群就可以通过对应的适配器完成连接。 -你使用的 Pod 网络插件 (详见后续章节) 也可能需要开启某些特定端口。由于各个 Pod 网络插件的功能都有所不同, -请参阅他们各自文档中对端口的要求。 +你使用的 Pod 网络插件 (详见后续章节) 也可能需要开启某些特定端口。 +由于各个 Pod 网络插件的功能都有所不同,请参阅他们各自文档中对端口的要求。 -Docker Engine 没有实现 [CRI](/zh-cn/docs/concepts/architecture/cri/),而这是容器运行时在 Kubernetes 中工作所需要的。 +Docker Engine 没有实现 [CRI](/zh-cn/docs/concepts/architecture/cri/), +而这是容器运行时在 Kubernetes 中工作所需要的。 为此,必须安装一个额外的服务 [cri-dockerd](https://github.com/Mirantis/cri-dockerd)。 -cri-dockerd 是一个基于传统的内置Docker引擎支持的项目,它在 1.24 版本从 kubelet 中[移除](/zh-cn/dockershim)。 +cri-dockerd 是一个基于传统的内置 Docker 引擎支持的项目, +它在 1.24 版本从 kubelet 中[移除](/zh-cn/dockershim)。 {{< /note >}} -kubeadm **不能** 帮你安装或者管理 `kubelet` 或 `kubectl`,所以你需要 -确保它们与通过 kubeadm 安装的控制平面的版本相匹配。 +kubeadm **不能** 帮你安装或者管理 `kubelet` 或 `kubectl`, +所以你需要确保它们与通过 kubeadm 安装的控制平面的版本相匹配。 如果不这样做,则存在发生版本偏差的风险,可能会导致一些预料之外的错误和问题。 然而,控制平面与 kubelet 间的相差一个次要版本不一致是支持的,但 kubelet 的版本不可以超过 API 服务器的版本。 例如,1.7.0 版本的 kubelet 可以完全兼容 1.8.0 版本的 API 服务器,反之则不可以。 -有关安装 `kubectl` 的信息,请参阅[安装和设置 kubectl](/zh-cn/docs/tasks/tools/)文档。 +有关安装 `kubectl` 的信息,请参阅[安装和设置 kubectl](/zh-cn/docs/tasks/tools/) 文档。 {{< warning >}} 2. 腾空节点以安全地逐出所有运行中的 Pod: @@ -186,8 +186,7 @@ kubeadm 工具将节点上的套接字存储为控制面上 `Node` 对象的注 1. 将 `kubeadm.alpha.kubernetes.io/cri-socket` 标志从 `/var/run/dockershim.sock` 更改为 `unix:///var/run/cri-dockerd.sock`; -1. 保存所作更改。保存时,`Node` 对象被更新 - +1. 
保存所作更改。保存时,`Node` 对象被更新。 -* 阅读 [dockershim 移除常见问题](/zh-cn/dockershim)。 +* 阅读 [移除 Dockershim 的常见问题](/zh-cn/dockershim)。 * [了解如何从基于 dockershim 的 Docker Engine 迁移到 containerd](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/)。 From 6ebfa53e76c63d19a4149d5f82835d2fec35255c Mon Sep 17 00:00:00 2001 From: Michael Date: Sat, 30 Jul 2022 10:27:52 +0800 Subject: [PATCH 291/292] [zh-cn] updated /concepts/configuration/secret.md --- .../docs/concepts/configuration/secret.md | 19 +++++++++---------- 1 file changed, 9 insertions(+), 10 deletions(-) diff --git a/content/zh-cn/docs/concepts/configuration/secret.md b/content/zh-cn/docs/concepts/configuration/secret.md index 59ac5e6313b5c..5c0549f1f3578 100644 --- a/content/zh-cn/docs/concepts/configuration/secret.md +++ b/content/zh-cn/docs/concepts/configuration/secret.md @@ -4,8 +4,8 @@ content_type: concept feature: title: Secret 和配置管理 description: > - 部署和更新 Secret 和应用程序的配置而不必重新构建容器镜像,且 - 不必将软件堆栈配置中的秘密信息暴露出来。 + 部署和更新 Secret 和应用程序的配置而不必重新构建容器镜像, + 且不必将软件堆栈配置中的秘密信息暴露出来。 weight: 30 --- ### 在静态 Pod 中使用 Secret {#restriction-static-pod} -你不可以在{{< glossary_tooltip text="静态 Pod" term_id="static-pod" >}}. +你不可以在{{< glossary_tooltip text="静态 Pod" term_id="static-pod" >}} 中使用 ConfigMap 或 Secret。 特殊字符(例如 `$`、`\`、`*`、`=` 和 `!`)会被你的 -[Shell](https://zh.wikipedia.org/wiki/Shell_(computing)) 解释,因此需要转义。 +[Shell](https://zh.wikipedia.org/wiki/%E6%AE%BC%E5%B1%A4) 解释,因此需要转义。 ### Docker 配置 Secret {#docker-config-secrets} -你可以使用下面两种 `type` 值之一来创建 Secret,用以存放用于访问容器鏡像倉庫的凭据: +你可以使用下面两种 `type` 值之一来创建 Secret,用以存放用于访问容器镜像仓库的凭据: - `kubernetes.io/dockercfg` - `kubernetes.io/dockerconfigjson` @@ -1631,7 +1630,7 @@ to create a Secret for accessing a container registry, you can do: 不过,API 服务器不会检查 JSON 数据本身是否是一个合法的 Docker 配置文件内容。 当你没有 Docker 配置文件,或者你想使用 `kubectl` 创建一个 Secret -来访问容器倉庫时,你可以这样做: +来访问容器仓库时,你可以这样做: ```shell kubectl create secret docker-registry secret-tiger-docker \ From 0b3516f02ce4a0dab56afb97aaba94aed4628f0f Mon Sep 17 00:00:00 2001 From: Arhell Date: Sat, 30 Jul 2022 11:07:06 +0300 Subject: [PATCH 292/292] [de] update KubeCon date --- content/de/_index.html | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/de/_index.html b/content/de/_index.html index 7eea7bf947fa7..b1bc57459cdbf 100644 --- a/content/de/_index.html +++ b/content/de/_index.html @@ -42,12 +42,12 @@

Die Herausforderungen bei der Migration von über 150 Microservices auf Kubernetes

- Besuche die KubeCon Europe vom 16. bis 20. Mai 2022
+ Besuchen die KubeCon North America vom 24. bis 28. Oktober 2022



- Besuchen die KubeCon North America vom 24. bis 28. Oktober 2022
+ Besuche die KubeCon Europe vom 17. bis 21. April 2023
