Correct obvious spelling errors
animatedmax committed Jan 4, 2019
1 parent 61c144f commit da2a5d9
Showing 10 changed files with 25 additions and 25 deletions.
2 changes: 1 addition & 1 deletion _uaa.html.md.erb
@@ -72,7 +72,7 @@ For example, `mail`.
1. For **LDAP Referrals**, choose how UAA handles LDAP server referrals to other user stores.
UAA can follow the external referrals, ignore them without returning errors, or generate an error for each external referral and abort the authentication.

1. For **External Groups Whitelist**, enter a comma-separated list of group patterns which need to be populated in the user's `id_token`. For further information on accepted patterns see the description of the `config.externalGroupsWhitelist` in the OAuth/OIDC [Identity Provider Documentation](https://docs.cloudfoundry.org/api/uaa/version/4.19.0/index.html#oauth-oidc).
1. For **External Groups Whitelist**, enter a comma-separated list of group patterns which need to be populated in the user's `id_token`. For further information on accepted patterns see the description of the `config.externalGroupsWhitelist` in the OAuth/OIDC [Identity Provider Documentation](https://docs.cloudfoundry.org/api/uaa/version/4.19.0/index.html#oauth-oidc).
<p class="note"><strong>Note</strong>: When sent as a Bearer token in the Authentication header, wide pattern queries for users who are members of multiple groups, can cause the size of the <code>id_token</code> to extend beyond what is supported by web servers.</p>
![External Groups Whitelist field](images/external-groups-whitelist.png)

9 changes: 5 additions & 4 deletions azure-managed-identities.html.md.erb
@@ -26,7 +26,7 @@ To retrieve your subscription ID and the name of your PKS resource group, you mu

Perform the following steps to create the managed identity for the master nodes:

1. Create a role definition using the following template, replacing `SUBSCRIPTION_ID` and`RESOURCE_GROUP` with your subscription ID and the name of your PKS resource group. For more information about custom roles in Azure, see [Custom Roles in Azure](https://docs.microsoft.com/en-us/azure/role-based-access-control/custom-roles) in the Azure documentation.
1. Create a role definition using the following template, replacing `SUBSCRIPTION_ID` and `RESOURCE_GROUP` with your subscription ID and the name of your PKS resource group. For more information about custom roles in Azure, see [Custom Roles in Azure](https://docs.microsoft.com/en-us/azure/role-based-access-control/custom-roles) in the Azure documentation.

```
{
@@ -50,7 +50,7 @@ Perform the following steps to create the managed identity for the master nodes:

],
"AssignableScopes": [
"/subscriptions/SUBSCRIPTION_ID/resourceGroups/RESOURCE_GROUP"
"/subscriptions/SUBSCRIPTION-ID/resourceGroups/RESOURCE-GROUP"
]
}
```
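Once the role definition JSON is saved to a file, you can register it with the Azure CLI. The commands below are a minimal sketch; the file name `pks-master-role.json` is a placeholder:

```
# Register the custom role from the saved definition file.
az role definition create --role-definition @pks-master-role.json

# Confirm that the custom role now exists.
az role definition list --custom-role-only true --output table
```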
@@ -88,7 +88,8 @@ Perform the following steps to create the managed identity for the master nodes:

Perform the following steps to create the managed identity for the worker nodes:

1. Create a role definition using the following template, replacing `SUBSCRIPTION_ID` and`RESOURCE_GROUP` with your subscription ID and the name of your PKS resource group:
1. Create a role definition using the following template, replacing `SUBSCRIPTION-ID` and
`RESOURCE-GROUP` with your subscription ID and the name of your PKS resource group:

```
{
@@ -108,7 +109,7 @@ Perform the following steps to create the managed identity for the worker nodes:

],
"AssignableScopes": [
"/subscriptions/SUBSCRIPTION_ID/resourceGroups/RESOURCE_GROUP"
"/subscriptions/SUBSCRIPTION-ID/resourceGroups/RESOURCE-GROUP"
]
}
```
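With both role definitions in place, the usual next step is to create a user-assigned managed identity and bind the role to it at the resource-group scope. This is a sketch only; the identity name and `ROLE-NAME` are placeholders:

```
# Create a user-assigned managed identity for the worker nodes.
az identity create --name pks-worker-identity --resource-group RESOURCE-GROUP

# Assign the custom worker role to the identity, scoped to the PKS resource group.
az role assignment create \
  --assignee-object-id "$(az identity show --name pks-worker-identity \
      --resource-group RESOURCE-GROUP --query principalId --output tsv)" \
  --role "ROLE-NAME" \
  --scope "/subscriptions/SUBSCRIPTION-ID/resourceGroups/RESOURCE-GROUP"
```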
2 changes: 1 addition & 1 deletion generate-nsx-pi-cert.html.md.erb
@@ -22,7 +22,7 @@ Before you begin this procedure, ensure that you have successfully completed all
<a href="./nsxt-prepare-compute-plane.html">Creating the PKS Compute Plane</a>
</li>
<li>
<a href="./vsphere-nsxt-om-deploy.html">Deployoing Ops Manager with NSX-T for PKS</a>
<a href="./vsphere-nsxt-om-deploy.html">Deploying Ops Manager with NSX-T for PKS</a>
</li>
<li>
<a href="./generate-nsx-ca-cert.html">Generating and Registering the NSX Manager Certificate for PKS</a>
2 changes: 1 addition & 1 deletion installing-nsx-t.html.md.erb
@@ -109,7 +109,7 @@ To configure networking, do the following:
1. For **Floating IP Pool ID**, enter the `ip-pool-vips` ID that you created for load balancer VIPs. For more information, see [Plan Network CIDRs](nsxt-prepare-env.html#plan-cidrs). PKS uses the floating IP pool to allocate IP addresses to the load balancers created for each of the clusters. The load balancer routes the API requests to the master nodes and the data plane.
1. For **Nodes DNS**, enter one or more Domain Name Servers used by the Kubernetes nodes.
1. For **vSphere Cluster Names**, enter a comma-separated list of the vSphere clusters where you will deploy Kubernetes clusters.
The NSX-T precheck errand uses this field to verify that the hosts from the specified clusters are available in NSX-T. You can specify clusters in this format: `cluster1,cluster2,cluster3`.
The NSX-T pre-check errand uses this field to verify that the hosts from the specified clusters are available in NSX-T. You can specify clusters in this format: `cluster1,cluster2,cluster3`.
1. (Optional) Configure a global proxy for all outgoing HTTP and HTTPS traffic from your Kubernetes clusters and the PKS API server. See [Using Proxies with PKS on NSX-T](proxies.html) for instructions on how to enable a proxy.
1. Under **Allow outbound internet access from Kubernetes cluster vms (IaaS-dependent)**, ignore the **Enable outbound internet access** checkbox.
1. Click **Save**.
13 changes: 6 additions & 7 deletions nsxt-deploy.html.md.erb
@@ -253,7 +253,7 @@ To create an Edge Transport Node for PKS:
To verify the creation of Edge Transport Nodes:

1. In NSX Manager, select **Fabric > Nodes > Edges**.
1. Verify that Controller Connectivty and Manager Connectivity are `UP` for both Edge Nodes.
1. Verify that Controller Connectivity and Manager Connectivity are `UP` for both Edge Nodes.
<img src="images/nsxt/nsx-misc/edge-transport-node-08.png">
1. In NSX Manager, select **Fabric > Nodes > Transport Node**.
1. Verify that the configuration state is `Success`.
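Connectivity can also be spot-checked from each Edge Node console with the NSX-T CLI. This is a sketch; the prompt shown is a placeholder for the Edge Node hostname:

```
# On the Edge Node console, confirm management and control plane connectivity.
nsx-edge-1> get managers
nsx-edge-1> get controllers
```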
@@ -362,15 +362,15 @@ Configure T0 routing to the rest of your environment using the appropriate routi

### Verify T0 Router Creation

The T0 router uplink IP should be reachable from the corporate network. From your local laptop or worksation, ping the uplink IP address. For example:
The T0 router uplink IP should be reachable from the corporate network. From your local laptop or workstation, ping the uplink IP address. For example:

PING 10.40.206.24 (10.40.206.24): 56 data bytes
64 bytes from 10.40.206.24: icmp_seq=0 ttl=53 time=33.738 ms
64 bytes from 10.40.206.24: icmp_seq=1 ttl=53 time=36.965 ms

##<a id='create-edge-ha'></a> Step 14: Configure Edge Nodes for HA

Configure <a href="nsxt-prepare-env.html#nsx-edge-ha">high-availability (HA) for NSX Edge Nodes</a>. If the T0 Router is not correctly cofigured for HA, failover to the standby Edge Node will not occur.
Configure <a href="nsxt-prepare-env.html#nsx-edge-ha">high-availability (HA) for NSX Edge Nodes</a>. If the T0 Router is not correctly configured for HA, failover to the standby Edge Node will not occur.

Proper configuration requires two new uplinks on the T0 router: one attached to Edge TN1, and the other attached to Edge TN2. In addition, you need to create a VIP that is the IP address used for the T0 uplink defined when the T0 Router was created.

@@ -434,7 +434,7 @@ Create an HA virtual IP (VIP) address. Once created the HA VIP becomes the offic

For each ESXi host in the NSX-T Fabric to be used for PKS Compute purposes, create an associated transport node. For example, if you have three ESXi hosts in the NSX-T Fabric, create three nodes named `tnode-host-1`, `tnode-host-2`, and `tnode-host-3`. Add the Overlay Transport Zone to each ESXi Host Transport Node.

Prepare each ESXi server dedciated for the PKS Compute Cluster as a Transport Node. These instructions assume that for each participating ESXi host the ESXi hypervisor is installed and the `vmk0` is configured. In addition, each ESXi host must have at least one **free nic/vmnic** for use with NSX Host Transport Nodes that is not already in use by other vSwitches on the ESXi host. Make sure the `vmnic1` (second physical interface) of the ESXi host is not used. NSX will take ownership of it (opaque NSX vswitch will use it as uplink). For more information, see [Add a Hypervisor Host to the NSX-T Fabric](https://docs.vmware.com/en/VMware-NSX-T/2.2/com.vmware.nsxt.install.doc/GUID-8C0EEC08-3A63-4918-A5E2-7A94AD50B0E6.html) in the VMware NSX-T documentation.
Prepare each ESXi server dedicated for the PKS Compute Cluster as a Transport Node. These instructions assume that for each participating ESXi host the ESXi hypervisor is installed and the `vmk0` is configured. In addition, each ESXi host must have at least one **free nic/vmnic** for use with NSX Host Transport Nodes that is not already in use by other vSwitches on the ESXi host. Make sure the `vmnic1` (second physical interface) of the ESXi host is not used. NSX will take ownership of it (opaque NSX vswitch will use it as uplink). For more information, see [Add a Hypervisor Host to the NSX-T Fabric](https://docs.vmware.com/en/VMware-NSX-T/2.2/com.vmware.nsxt.install.doc/GUID-8C0EEC08-3A63-4918-A5E2-7A94AD50B0E6.html) in the VMware NSX-T documentation.
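Before assigning `vmnic1` to NSX, it can help to confirm from the host shell that the NIC is not already claimed as a vSwitch uplink. A quick sketch (interface names vary by host):

```
# List the physical NICs on the ESXi host.
[root@ESXi-1:~] esxcli network nic list

# List existing vSwitches and their uplinks; vmnic1 should not appear as an uplink here.
[root@ESXi-1:~] esxcfg-vswitch -l
```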

### Add ESXi Host to NSX-T Fabric

@@ -473,9 +473,9 @@ Complete the following operation for each ESXi host to be used by the PKS Comput

1. Verify that you see the ESXi Compute Transport Node:
<img src="images/nsxt/nsx-misc/esxi-prep-06.png">
1. Verfiy the status is `Up`.
1. Verify the status is `Up`.
<img src="images/nsxt/nsx-misc/esxi-prep-07.png">
<p class="note"><strong>Note</strong>: If you are using NSX-T 2.3, the status should be up. If you are using NSX-T 2.2, the status may incorectly show as down (because the Tunnel Status is Down.) Either way, verify TEP communications as described in the next step.</p>
<p class="note"><strong>Note</strong>: If you are using NSX-T 2.3, the status should be up. If you are using NSX-T 2.2, the status may incorrectly show as down (because the Tunnel Status is Down.) Either way, verify TEP communications as described in the next step.</p>
1. Make sure the NSX TEP vmk is created on the ESXi host and that TEP-to-TEP communication (with an Edge TN, for instance) works.

[root@ESXi-1:~] esxcfg-vmknic -l
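TEP-to-TEP reachability can then be tested with `vmkping` over the TEP netstack. This is a sketch; the vmk interface name and the Edge TEP address are placeholders:

```
# Ping an Edge Transport Node TEP from the host TEP vmkernel interface.
[root@ESXi-1:~] vmkping ++netstack=vxlan -I vmk10 192.168.130.11
```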
@@ -485,4 +485,3 @@ Complete the following operation for each ESXi host to be used by the PKS Comput
##<a id='next'></a> Next Step

After you complete this procedure, follow the instructions in <a href="./nsxt-prepare-mgmt-plane.html">Creating the PKS Management Plane</a>.

10 changes: 5 additions & 5 deletions nsxt-prepare-env.html.md.erb
@@ -43,7 +43,7 @@ When you install PKS on NSX-T, you are required to specify the **Pods IP Block I

####<a id='pods-ip-block'></a>Pods IP Block

Each time a Kubernetes namespace is created, a subnet from the **Pods IP Block** is allocated. The subnet size carved out from this block is /24, which means a maximum of 256 pods can be created per namespace. When a Kubernetes cluster is deployed by PKS, by default 3 namespaces are created. Often additional namespaces will be created by operators to faciliate cluster use. As a result, when creating the **Pods IP Block**, you must use a CIDR range larger than /24 to ensure that NSX has enough IP addresses to allocate for all pods. The recommended size is /16. For more information, see [Creating NSX-T Objects for PKS](nsxt-create-objects.html).
Each time a Kubernetes namespace is created, a subnet from the **Pods IP Block** is allocated. The subnet size carved out from this block is /24, which means a maximum of 256 pods can be created per namespace. When a Kubernetes cluster is deployed by PKS, by default 3 namespaces are created. Often additional namespaces will be created by operators to facilitate cluster use. As a result, when creating the **Pods IP Block**, you must use a CIDR range larger than /24 to ensure that NSX has enough IP addresses to allocate for all pods. The recommended size is /16. For more information, see [Creating NSX-T Objects for PKS](nsxt-create-objects.html).

<img src="images/nsxt/pods-ip-block.png" alt="Pods IP Block">

@@ -134,7 +134,7 @@ The VIB repository service provides access to native libraries for NSX Transport

##<a id='nsx-tep'></a> Step 8: Create TEP IP Pool

Create Tunnel Endpoint IP Pool (TEP IP Pool) within the usable range of the **VTEP CIDR** that was defined in [prepartion for installing NSX-T)(#plan-cidrs). The TEP IP Pool is used for [NSX Transport Nodes](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.install.doc/GUID-370D06E1-1BB6-4144-A654-7AF2542C3136.html?hWord=N4IghgNiBcICoFEAKACAcgZQBoFo4gF8g). For instructions, see [Create TEP IP Pool](nsxt-deploy.html#create-tep).
Create Tunnel Endpoint IP Pool (TEP IP Pool) within the usable range of the **VTEP CIDR** that was defined in [preparation for installing NSX-T)(#plan-cidrs). The TEP IP Pool is used for [NSX Transport Nodes](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.install.doc/GUID-370D06E1-1BB6-4144-A654-7AF2542C3136.html?hWord=N4IghgNiBcICoFEAKACAcgZQBoFo4gF8g). For instructions, see [Create TEP IP Pool](nsxt-deploy.html#create-tep).

##<a id='nsx-overlay'></a> Step 9: Create Overlay Transport Zone

@@ -164,7 +164,7 @@ Create an [NSX Edge Cluster](https://docs.vmware.com/en/VMware-NSX-T-Data-Center

Configure NSX Edge for high availability (HA) using Active/Standby mode to support failover, as shown in the following figure. For instructions, see [Configure Edge HA](nsxt-deploy.html#configure-edge-ha).

<p class="note"><strong>Note</strong>: If the T0 Router is not <a href="nsxt-deploy.html#configure-edge-ha">properly cofigured for HA</a>, failover to the standby Edge Node will not occur.</p>
<p class="note"><strong>Note</strong>: If the T0 Router is not <a href="nsxt-deploy.html#configure-edge-ha">properly configured for HA</a>, failover to the standby Edge Node will not occur.</p>

![NSX Edge High Availability](images/vsphere/nsxt-edge-ha.png)

@@ -258,15 +258,15 @@ At this point your NSX-T environment is prepared for PKS installation using the

##<a id='nsx-pks-harbor'></a> Step 25: Install Harbor Registry for PKS

The VMware Harbor Registry is recommended for PKS. Install Harbor in the NSX Management Plane with other PKS componets (PKS API, Ops Manager, and BOSH). For instructions, see <a href="https://docs.pivotal.io/partners/vmware-harbor/integrating-pks.html">Installing Harbor Registry on vSphere with NSX-T</a> in the PKS Harbor documentation.
The VMware Harbor Registry is recommended for PKS. Install Harbor in the NSX Management Plane with other PKS components (PKS API, Ops Manager, and BOSH). For instructions, see <a href="https://docs.pivotal.io/partners/vmware-harbor/integrating-pks.html">Installing Harbor Registry on vSphere with NSX-T</a> in the PKS Harbor documentation.

If you are using the [NAT deployment topology](nsxt-topologies.html#topology-nat) for PKS, create a DNAT rule that maps the private Harbor IP address to a routable IP address from the floating IP pool on the PKS management network. See <a href="https://docs.pivotal.io/partners/vmware-harbor/integrating-pks.html#create-dnat">Create DNAT Rule</a>.

##<a id='nsx-pks-adv'></a> Step 26: Perform Post-Installation NSX-T Configurations as Necessary

Once PKS is installed, you may want to perform additional NSX-T configurations to support customization of Kubernetes clusters at deployment time, such as:

- <a href="./proxies.html">Configuring an HTTTP Proxy</a> to proxy outgoing HTTP/S traffic from NCP, PKS, BOSH, and Ops Manager to vSphere infrastrcture components (vCenter, NSX Manager)
- <a href="./proxies.html">Configuring an HTTTP Proxy</a> to proxy outgoing HTTP/S traffic from NCP, PKS, BOSH, and Ops Manager to vSphere infrastructure components (vCenter, NSX Manager)
- <a href="./network-profiles-define.html">Defining Network Profiles</a> to customize NSX-T networking objects, such as load balancer size, custom Pods IP Block, routable Pods IP Block, configurable CIDR range for the Pods IP Block, custom Floating IP block, and more.
- <a href="./nsxt-multi-t0.html">Configuring Multiple Tier-0 Routers</a> to support customer/tenant isolation

4 changes: 2 additions & 2 deletions nsxt-prepare-mgmt-plane.html.md.erb
@@ -61,7 +61,7 @@ If you are using the <a href="nsxt-topologies.html#topology-nat">NAT Topology</a

##<a id='create-rp'></a>Step 3. Create NSX-T Tier-1 Router for the PKS Management Plane

Defining a T1 router involves creating the router and attaching it to the logical switch, creating a router port, and adversiting the routes.
Defining a T1 router involves creating the router and attaching it to the logical switch, creating a router port, and advertising the routes.
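The steps below use the NSX Manager UI. For reference, a Tier-1 router can also be created through the NSX-T Manager API; the following is a rough sketch only, with the Manager address, credentials, display name, and edge cluster ID as placeholders:

```
# Create a Tier-1 logical router via the NSX-T Manager API (all values are placeholders).
curl -k -u 'admin:PASSWORD' -X POST \
  -H 'Content-Type: application/json' \
  -d '{
        "resource_type": "LogicalRouter",
        "display_name": "T1-Router-PKS-MGMT",
        "router_type": "TIER1",
        "edge_cluster_id": "EDGE-CLUSTER-UUID"
      }' \
  https://NSX-MANAGER-IP/api/v1/logical-routers
```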

### Create T1 Router

@@ -122,7 +122,7 @@ To create a DNAT rule for Ops Manager:
1. In NSX Manager, select **Routing > Routers**.
1. Select the **T0 Router > Services > NAT**.
<img src="images/nsxt/mgmt-plane/create-mgmt-plane-19.png">
1. Add and configure a DNAT rule with the routable IP address as the detination and the internal IP address for Ops Manager as the translated IP. For example:
1. Add and configure a DNAT rule with the routable IP address as the destination and the internal IP address for Ops Manager as the translated IP. For example:
* **Priority**: 1000
* **Action**: DNAT
* **Destination IP**: 10.40.14.1
4 changes: 2 additions & 2 deletions vsphere-nsxt-om-config.html.md.erb
@@ -127,8 +127,8 @@ Ops Manager Availability Zones correspond to your vCenter clusters and resource
* Enter the name of an existing vCenter **Cluster** to use as an Availability Zone, such as `COMP-Cluster-1`.
* Enter the name of the **PKS Management Resource Pool** in the vCenter cluster that you specified above, such as `RP-MGMT-PKS`. The jobs running in this Availability Zone share the CPU and memory resources defined by the pool.
* Click **Add Cluster** and create at least one PKS Compute AZ.
* Sepecify the **Cluster** and the **Resource Pool**, such as `RP-PKS-AZ`.
* Add addional clusters as necessary. Click the trash icon to delete a cluster. The first cluster cannot be deleted.
* Specify the **Cluster** and the **Resource Pool**, such as `RP-PKS-AZ`.
* Add additional clusters as necessary. Click the trash icon to delete a cluster. The first cluster cannot be deleted.

<%= image_tag("images/nsxt/bosh/config-bosh-09.png") %>
2 changes: 1 addition & 1 deletion vsphere-nsxt-om-deploy.html.md.erb
@@ -25,7 +25,7 @@ This topic provides instructions for deploying Ops Manager on VMware vSphere wit

## <a id="deploy-om"></a>Deploy Ops Manager for PKS

1. Before starting, refer to the [PKS Release Notes](release-notes.html) for supported Ops Manager versions for PKS. Or, download the [Compatiblity Matrix](https://network.pivotal.io/products/ops-manager/) from the Ops Manager download page.
1. Before starting, refer to the [PKS Release Notes](release-notes.html) for supported Ops Manager versions for PKS. Or, download the [Compatibility Matrix](https://network.pivotal.io/products/ops-manager/) from the Ops Manager download page.
1. Before starting, refer to the known issues in the [PCF Ops Manager Release v2.3 Release Notes](http://docs.pivotal.io/pivotalcf/2-3/pcf-release-notes/opsmanager-rn.html) or the [PCF Ops Manager Release v2.4 Release Notes](http://docs.pivotal.io/pivotalcf/2-4/pcf-release-notes/opsmanager-rn.html).
1. Download the [Pivotal Cloud Foundry Ops Manager for vSphere](https://network.pivotal.io/products/ops-manager) `.ova` file at [Pivotal Network](https://network.pivotal.io). Use the dropdown menu to select the supported Ops Manager release.

2 changes: 1 addition & 1 deletion vsphere-persistent-storage.html.md.erb
@@ -151,7 +151,7 @@ This section describes PKS support for vSphere environments with multiple comput

With this topology, each vSAN datastore is only visible from each vSphere compute cluster. It is not possible to have a vSAN datastore shared across all vSphere compute clusters.

You can insert a shared NFS, iSCSI (VMFS), or FC (VMFS) datastore across all vSAN-based vSphere compute clusters to support both static and dynamic PV provisiong.
You can insert a shared NFS, iSCSI (VMFS), or FC (VMFS) datastore across all vSAN-based vSphere compute clusters to support both static and dynamic PV provisioning.

Refer to the following diagram:

