
Commit 24a25cf

doc: fix doc build (#517)
1 parent f9d8f34 commit 24a25cf

7 files changed: +127 −111 lines changed

docs/api-types/target-group-policy.md

Lines changed: 10 additions & 9 deletions
@@ -8,23 +8,24 @@ health check configurations of those backend resources.
 
 When attaching a policy to a resource, the following restrictions apply:
 
-* A policy can be only attached to `Service` resources.
-* The attached resource can only be `backendRef` of `HTTPRoute` and `GRPCRoute`.
-* The attached resource should exist in the same namespace as the policy resource.
+- A policy can be only attached to `Service` resources.
+- The attached resource can only be `backendRef` of `HTTPRoute` and `GRPCRoute`.
+- The attached resource should exist in the same namespace as the policy resource.
 
 The policy will not take effect if:
 
-* The resource does not exist
-* The resource is not referenced by any route
-* The resource is referenced by a route of unsupported type
+- The resource does not exist
+- The resource is not referenced by any route
+- The resource is referenced by a route of unsupported type
 
 These restrictions are not forced; for example, users may create a policy that targets a service that is not created yet.
 However, the policy will not take effect unless the target is valid.
 
 **Limitations and Considerations**
-* Attaching TargetGroupPolicy to a resource that is already referenced by a route will result in a replacement
-of VPC Lattice TargetGroup resource, except for health check updates.
-* Removing TargetGroupPolicy of a resource will roll back protocol configuration to default setting. (HTTP1/HTTP plaintext)
+
+- Attaching TargetGroupPolicy to a resource that is already referenced by a route will result in a replacement
+of VPC Lattice TargetGroup resource, except for health check updates.
+- Removing TargetGroupPolicy of a resource will roll back protocol configuration to default setting. (HTTP1/HTTP plaintext)
 
 ## Example Configuration
 

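The diff above reworks the TargetGroupPolicy restriction lists just before the file's Example Configuration section. For readers of this commit, a minimal TargetGroupPolicy manifest consistent with those restrictions might look like the following sketch. The resource names and health check values are illustrative, and the field layout follows the controller's `v1alpha1` API type from memory rather than anything shown in this diff, so verify against the installed CRD:

```yaml
apiVersion: application-networking.k8s.aws/v1alpha1
kind: TargetGroupPolicy
metadata:
  name: my-policy
  namespace: default        # must match the namespace of the target Service
spec:
  targetRef:
    group: ""               # core API group: policies attach only to Service resources
    kind: Service
    name: my-service        # illustrative name; the Service may not exist yet
  protocol: HTTPS
  protocolVersion: HTTP1
  healthCheck:
    enabled: true
    path: /healthcheck      # illustrative health check path
```

Deleting such a policy would roll the target group's protocol configuration back to the default HTTP1/HTTP plaintext setting, as the limitations list above notes.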
docs/concepts/index.md

Lines changed: 2 additions & 1 deletion
@@ -1,3 +1,4 @@
 # Configure AWS Gateway API Controller
+
 Refer to this section to further configure your use of the AWS Gateway API Controller.
-The features here build on the examples shown in [Get Started Using the AWS Gateway API Controller](../getstarted.md).
+The features here build on the examples shown in [Get Started Using the AWS Gateway API Controller](../guides/getstarted.md).

docs/concepts/overview.md

Lines changed: 18 additions & 18 deletions
@@ -4,10 +4,10 @@ For medium and large-scale customers, applications can often spread across multi
 For example, information pertaining to a company’s authentication, billing, and inventory may each be served by services running on different VPCs in AWS.
 Someone wanting to run an application that is spread out in this way might find themselves having to work with multiple ways to configure:
 
-* Authentication and authorization
-* Observability
-* Service discovery
-* Network connectivity and traffic routing
+- Authentication and authorization
+- Observability
+- Service discovery
+- Network connectivity and traffic routing
 
 This is not a new problem.
 A common approach to interconnecting services that span multiple VPCs is to use service meshes. But these require sidecars, which can introduce scaling problems and present their own management challenges, such as dealing with control plane and data plane at scale.
@@ -20,38 +20,38 @@ The goal of VPC Lattice is to provide a way to have a single, over-arching servi
 You should also have consistent ways of working with assets across your VPCs, even if those assets include different combinations of instances, clusters, containers, and serverless.
 The components making up that view include:
 
-* Service Directory: This is an account-level directory for gathering your services in once place.
-This can provide a view from the VPC Lattice section of the AWS console into all the services you own, as well as services that are shared with you.
-A service might direct traffic to a particular service type (such as HTTP) and port (such as port 80).
-However, using different rules, a request for the service could be sent to different targets such as a Kubernetes pod or a Lambda function, based on path or query string parameter.
+- Service Directory: This is an account-level directory for gathering your services in once place.
+This can provide a view from the VPC Lattice section of the AWS console into all the services you own, as well as services that are shared with you.
+A service might direct traffic to a particular service type (such as HTTP) and port (such as port 80).
+However, using different rules, a request for the service could be sent to different targets such as a Kubernetes pod or a Lambda function, based on path or query string parameter.
 
-* Service Network: Because applications might span multiple VPCs and accounts, there is a need to create networks that span those items.
+- Service Network: Because applications might span multiple VPCs and accounts, there is a need to create networks that span those items.
 These networks let you register services to run across accounts and VPCs.
 You can create common authorization rules to simplify connectivity.
 
-* Service Policies: You can build service policies to configure observability, access, and traffic management across any service network or gateway.
+- Service Policies: You can build service policies to configure observability, access, and traffic management across any service network or gateway.
 You configure rules for handling traffic and for authorizing access.
 For now, you can assign IAM roles to allow certain requests.
 These are similar to S3 or IAM resource policies.
 Overall, this provides a common way to apply access rules at the service or service network levels.
 
-* Service Gateway: This feature is not yet implemented.
+- Service Gateway: This feature is not yet implemented.
 It is meant to centralize management of ingress and egress gateways.
 The Service Gateway will also let you manage access to external dependencies and clients using a centrally managed VPC.
 
 If all goes well, you should be able to achieve some of the following goals:
 
-* Kubernetes multi-cluster connectivity: Say that you have multiple clusters across multiple VPCs.
+- Kubernetes multi-cluster connectivity: Say that you have multiple clusters across multiple VPCs.
 After configuring your services with the Kubernetes Gateway API, you can facilitate communications between services on those clusters without dealing with the underlying infrastructure.
 VPC Lattice handles a lot of the details for you without needing things like sidecars.
-* Serverless access: VPC Lattice allows access to serverless features, as well as Kubernetes cluster features.
+- Serverless access: VPC Lattice allows access to serverless features, as well as Kubernetes cluster features.
 This gives you a way to have a consistent interface to multiple types of platforms.
 
 With VPC Lattice you can also avoid some of these common problems:
 
-* Overlapping IP addresses: Even with well-managed IP addresses, overlapping address use can occur by mistake or when organizations or companies merge together.
+- Overlapping IP addresses: Even with well-managed IP addresses, overlapping address use can occur by mistake or when organizations or companies merge together.
 IP address conflicts can also occur if you wanted to manage resources across multiple Kubernetes clusters.
-* Sidecar management: Changes to sidecars might require those sidecars to be reconfigured or rebooted.
+- Sidecar management: Changes to sidecars might require those sidecars to be reconfigured or rebooted.
 While this might not be a big issue for a handful of sidecars, it can be disruptive if you have thousands of pods, each with its own sidecar.
 
 ## Relationship between VPC Lattice and Kubernetes
@@ -64,8 +64,8 @@ The following figure illustrates how VPC Lattice objects connect to [Kubernetes
 As shown in the figure, there are different personas associated with different levels of control in VPC Lattice.
 Notice that the Kubernetes Gateway API syntax is used to create the gateway, HTTPRoute and services, but Kubernetes gets the details of those items from VPC Lattice:
 
-* Infrastructure provider: Creates the Kubernetes GatewayClass to identify VPC Lattice as the GatewayClass.
-* Cluster operator: Creates the Kubernetes Gateway, which gets information from VPC Lattice related to the Service Gateway and Service Networks, as well as their related Service Policies.
-* Application developer: Creates HTTPRoute objects that point to Kubernetes services, which in turn are directed to particular pods, in this case.
+- Infrastructure provider: Creates the Kubernetes GatewayClass to identify VPC Lattice as the GatewayClass.
+- Cluster operator: Creates the Kubernetes Gateway, which gets information from VPC Lattice related to the Service Gateway and Service Networks, as well as their related Service Policies.
+- Application developer: Creates HTTPRoute objects that point to Kubernetes services, which in turn are directed to particular pods, in this case.
 This is all done by checking the related VPC Lattice Services (and related policies), Target Groups, and Targets
 Keep in mind that Target Groups v1 and v2 can be on different clusters in different VPCs.

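The persona list in the overview diff maps naturally onto concrete manifests. As a sketch (names are illustrative; the `controllerName` shown is the value this controller is commonly configured to match, but verify it against your installed release), the infrastructure provider's GatewayClass and the cluster operator's Gateway might look like:

```yaml
# Infrastructure provider: GatewayClass identifying VPC Lattice as the implementation.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: amazon-vpc-lattice
spec:
  controllerName: application-networking.k8s.aws/gateway-api-controller
---
# Cluster operator: Gateway bound to that class. "my-hotel" matches the
# service network name used in the getting-started diff later in this commit.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: my-hotel
spec:
  gatewayClassName: amazon-vpc-lattice
  listeners:
  - name: http
    protocol: HTTP
    port: 80
```

Application developers then attach HTTPRoute objects to this Gateway via `parentRefs`, which is what drives the VPC Lattice Services and Target Groups shown in the figure.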
docs/guides/deploy.md

Lines changed: 9 additions & 1 deletion
@@ -20,6 +20,7 @@ EKS is a simple, recommended way of preparing a cluster for running services wit
 ```bash
 eksctl create cluster --name $CLUSTER_NAME --region $AWS_REGION
 ```
+
 1. Configure security group to receive traffic from the VPC Lattice network. You must set up security groups so that they allow all Pods communicating with VPC Lattice to allow traffic from the VPC Lattice managed prefix lists. See [Control traffic to resources using security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) for details. Lattice has both an IPv4 and IPv6 prefix lists available.
 ```bash
 CLUSTER_SG=$(aws eks describe-cluster --name $CLUSTER_NAME --output json| jq -r '.cluster.resourcesVpcConfig.clusterSecurityGroupId')
@@ -33,6 +34,7 @@ EKS is a simple, recommended way of preparing a cluster for running services wit
 eksctl utils associate-iam-oidc-provider --cluster $CLUSTER_NAME --approve --region $AWS_REGION
 ```
 1. Create a policy (`recommended-inline-policy.json`) in IAM with the following content that can invoke the gateway API and copy the policy arn for later use:
+
 ```bash
 {
 "Version": "2012-10-17",
@@ -62,6 +64,7 @@ EKS is a simple, recommended way of preparing a cluster for running services wit
 --policy-name VPCLatticeControllerIAMPolicy \
 --policy-document file://examples/recommended-inline-policy.json
 ```
+
 1. Create the `aws-application-networking-system` namespace:
 ```bash
 kubectl apply -f examples/deploy-namesystem.yaml
@@ -71,6 +74,7 @@ EKS is a simple, recommended way of preparing a cluster for running services wit
 export VPCLatticeControllerIAMPolicyArn=$(aws iam list-policies --query 'Policies[?PolicyName==`VPCLatticeControllerIAMPolicy`].Arn' --output text)
 ```
 1. Create an iamserviceaccount for pod level permission:
+
 ```bash
 eksctl create iamserviceaccount \
 --cluster=$CLUSTER_NAME \
@@ -128,10 +132,13 @@ Alternatively, you can manually provide configuration variables when installing
 ## Controller Installation
 
 1. Run either `kubectl` or `helm` to deploy the controller. Check [Environment Variables](../concepts/environment.md) for detailed explanation of each configuration option.
+
 ```bash
 kubectl apply -f examples/deploy-v0.0.18.yaml
 ```
+
 or
+
 ```bash
 # login to ECR
 aws ecr-public get-login-password --region us-east-1 | helm registry login --username AWS --password-stdin public.ecr.aws
@@ -153,4 +160,5 @@ Alternatively, you can manually provide configuration variables when installing
 ```bash
 kubectl apply -f examples/gatewayclass.yaml
 ```
-1. You are all set! Check our [Getting Started Guide](getstarted.md) to try setting up service-to-service communication.
+1. You are all set! Check our [Getting Started Guide](getstarted.md) to try setting up service-to-service communication.

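Several steps in the deploy guide's diff extract fields from AWS CLI JSON with `jq` (for example, the cluster security group lookup). The filter itself can be sanity-checked offline against a stub of the CLI's output; the security group ID below is made up for illustration, and `jq` must be installed locally:

```shell
# Minimal stub of `aws eks describe-cluster` output (hypothetical ID), used to
# verify the jq filter from the security-group step without calling AWS.
CLUSTER_SG=$(jq -r '.cluster.resourcesVpcConfig.clusterSecurityGroupId' <<'EOF'
{"cluster": {"resourcesVpcConfig": {"clusterSecurityGroupId": "sg-0123456789abcdef0"}}}
EOF
)
echo "$CLUSTER_SG"   # → sg-0123456789abcdef0
```

Running the filter against a stub first makes it easier to spot a typo in the path before wiring it into the real `aws eks describe-cluster` pipeline.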
docs/guides/getstarted.md

Lines changed: 51 additions & 37 deletions
@@ -8,7 +8,6 @@ Both clusters are created using `eksctl`, with both clusters created from the sa
 
 Using these examples as a foundation, see the [Configuration](../concepts/index.md) section for ways to further configure service-to-service communications.
 
-
 **NOTE**: You can get the yaml files used on this page by cloning the [AWS Gateway API Controller](https://github.com/aws/aws-application-networking-k8s) repository.
 
 ## Set up single-cluster/VPC service-to-service communications
@@ -25,12 +24,14 @@ This example creates a single cluster in a single VPC, then configures two route
 When `DEFAULT_SERVICE_NETWORK` environment variable is specified, the controller will automatically configure a service network for you.
 For example:
 ```bash
+
 helm upgrade gateway-api-controller \
 oci://281979210680.dkr.ecr.us-west-2.amazonaws.com/aws-gateway-controller-chart \
 --reuse-values \
 --set=defaultServiceNetwork=my-hotel
 ```
 Alternatively, you can use AWS CLI to manually create a VPC Lattice service network, with the name `my-hotel`:
+
 ```bash
 aws vpc-lattice create-service-network --name my-hotel # grab service network ID
 aws vpc-lattice create-service-network-vpc-association --service-network-identifier <service-network-id> --vpc-identifier <k8s-cluster-vpc-id>
@@ -48,14 +49,18 @@ This example creates a single cluster in a single VPC, then configures two route
 ]
 }
 ```
+
 1. Create the Kubernetes Gateway `my-hotel`:
+
 ```bash
 kubectl apply -f examples/my-hotel-gateway.yaml
 ```
+
 Verify that `my-hotel` Gateway is created with `PROGRAMMED` status equals to `True`:
+
 ```bash
-kubectl get gateway
-
+kubectl get gateway
+
 NAME CLASS ADDRESS PROGRAMMED AGE
 my-hotel amazon-vpc-lattice True 7d12h
 ```
@@ -72,37 +77,40 @@ This example creates a single cluster in a single VPC, then configures two route
 kubectl apply -f examples/inventory-route.yaml
 ```
 1. Find out HTTPRoute's DNS name from HTTPRoute status:
+
 ```bash
 kubectl get httproute
-
+
 NAME HOSTNAMES AGE
 inventory 51s
 rates 6m11s
 ```
+
 1. Check VPC Lattice generated DNS address for HTTPRoute `inventory` and `rates` :
-```bash
-kubectl get httproute inventory -o yaml
-
-apiVersion: gateway.networking.k8s.io/v1beta1
-kind: HTTPRoute
-metadata:
-annotations:
-application-networking.k8s.aws/lattice-assigned-domain-name: inventory-default-02fb06f1acdeb5b55.7d67968.vpc-lattice-svcs.us-west-2.on.aws
-...
-```
-
-```bash
-kubectl get httproute rates -o yaml
-
-apiVersion: v1
-items:
-- apiVersion: gateway.networking.k8s.io/v1beta1
-kind: HTTPRoute
-metadata:
-annotations:
-application-networking.k8s.aws/lattice-assigned-domain-name: rates-default-0d38139624f20d213.7d67968.vpc-lattice-svcs.us-west-2.on.aws
-...
-```
+
+```bash
+kubectl get httproute inventory -o yaml
+
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: HTTPRoute
+metadata:
+annotations:
+application-networking.k8s.aws/lattice-assigned-domain-name: inventory-default-02fb06f1acdeb5b55.7d67968.vpc-lattice-svcs.us-west-2.on.aws
+...
+```
+
+```bash
+kubectl get httproute rates -o yaml
+
+apiVersion: v1
+items:
+- apiVersion: gateway.networking.k8s.io/v1beta1
+kind: HTTPRoute
+metadata:
+annotations:
+application-networking.k8s.aws/lattice-assigned-domain-name: rates-default-0d38139624f20d213.7d67968.vpc-lattice-svcs.us-west-2.on.aws
+...
+```
 
 1. If the previous step returns the expected response, store VPC Lattice assigned DNS names to variables.
 
@@ -112,18 +120,20 @@ This example creates a single cluster in a single VPC, then configures two route
 ```
 
 Confirm that the URLs are stored correctly:
-
+
 ```bash
 echo $ratesFQDN $inventoryFQDN
 rates-default-034e0056410499722.7d67968.vpc-lattice-svcs.us-west-2.on.aws inventory-default-0c54a5e5a426f92c2.7d67968.vpc-lattice-svcs.us-west-2.on.aws
 ```
 
 #### Verify service-to-service communications
 
-1. Check connectivity from the `inventory-ver1` service to `parking` and `review` services:
+1. Check connectivity from the `inventory-ver1` service to `parking` and `review` services:
+
 ```bash
 kubectl exec deploy/inventory-ver1 -- curl $ratesFQDN/parking $ratesFQDN/review
 ```
+
 ```
 Requsting to Pod(parking-8548d7f98d-57whb): parking handler pod
 Requsting to Pod(review-6df847686d-dhzwc): review handler pod
@@ -136,29 +146,29 @@ This example creates a single cluster in a single VPC, then configures two route
 ```
 Requsting to Pod(inventory-ver1-99d48958c-whr2q): Inventory-ver1 handler pod
 ```
-Now you could confirm the service-to-service communications within one cluster is working as expected.
+Now you could confirm the service-to-service communications within one cluster is working as expected.
 
 ## Set up multi-cluster/multi-VPC service-to-service communications
 
 This sections builds on the previous section by migrating a Kubernetes service (HTTPRoute inventory) from one Kubernetes cluster to a different Kubernetes cluster.
 For example, it will:
 
-* Migrate the Kubernetes inventory service from a Kubernetes v1.21 cluster to a Kubernetes v1.23 cluster in a different VPC.
-* Scale up the Kubernetes inventory service to run it in another cluster (and another VPC) in addition to the current cluster.
+- Migrate the Kubernetes inventory service from a Kubernetes v1.21 cluster to a Kubernetes v1.23 cluster in a different VPC.
+- Scale up the Kubernetes inventory service to run it in another cluster (and another VPC) in addition to the current cluster.
 
 The following figure illustrates this:
 
 ![Multiple clusters/VPCs service-to-service communications](../images/example2.png)
 
 ### Steps
 
-**Set up `inventory-ver2` service and serviceExport in the second cluster**
+**Set up `inventory-ver2` service and serviceExport in the second cluster**
 
 1. Create a second Kubernetes cluster `cluster2` (using the same instructions used to create the first).
 
-1. Ensure you're using the second cluster's `kubectl` context.
+1. Ensure you're using the second cluster's `kubectl` context.
 ```bash
-kubectl config get-contexts
+kubectl config get-contexts
 ```
 If your context is set to the first cluster, switch it to use the second cluster one:
 ```bash
@@ -169,10 +179,11 @@ The following figure illustrates this:
 kubectl apply -f examples/inventory-ver2.yaml
 ```
 1. Export this Kubernetes inventory-ver2 from the second cluster, so that it can be referenced by HTTPRoute in the first cluster:
+
 ```bash
 kubectl apply -f examples/inventory-ver2-export.yaml
 ```
-
+
 **Switch back to the first cluster**
 
 1. Switch context back to the first cluster
@@ -188,9 +199,10 @@ The following figure illustrates this:
 kubectl apply -f examples/inventory-route-bluegreen.yaml
 ```
 1. Check the service-to-service connectivity from `parking`(in cluster1) to `inventory-ver1`(in cluster1) and `inventory-ver2`(in cluster2):
+
 ```bash
 kubectl exec deploy/parking -- sh -c 'for ((i=1; i<=30; i++)); do curl "$0"; done' "$inventoryFQDN"
-
+
 Requsting to Pod(inventory-ver2-6dc74b45d8-rlnlt): Inventory-ver2 handler pod <----> in 2nd cluster
 Requsting to Pod(inventory-ver2-6dc74b45d8-rlnlt): Inventory-ver2 handler pod
 Requsting to Pod(inventory-ver2-6dc74b45d8-rlnlt): Inventory-ver2 handler pod
@@ -201,4 +213,6 @@ The following figure illustrates this:
 Requsting to Pod(inventory-ver2-6dc74b45d8-95rsr): Inventory-ver2 handler pod
 Requsting to Pod(inventory-ver1-74fc59977-wg8br): Inventory-ver1 handler pod....
 ```
+
 You can see that the traffic is distributed between *inventory-ver1* and *inventory-ver2* as expected.
+
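The blue/green step in the getting-started diff applies `examples/inventory-route-bluegreen.yaml` without showing its contents. A sketch of what such a weighted route can look like follows; the weights, port, and names are illustrative, and `ServiceImport` is the kind this controller uses to reference a service exported from another cluster (check the repository's example file for the authoritative version):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: inventory
spec:
  parentRefs:
  - name: my-hotel            # the Gateway created earlier in the guide
    sectionName: http
  rules:
  - backendRefs:
    - name: inventory-ver1    # "blue": Service in this cluster
      kind: Service
      port: 80
      weight: 10
    - name: inventory-ver2    # "green": ServiceImport from cluster2
      kind: ServiceImport
      weight: 90
```

With weights like these, roughly nine in ten requests land on `inventory-ver2`, which matches the mostly-ver2 output captured in the diff's connectivity check.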

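The connectivity check above passes `"$inventoryFQDN"` as the inline script's `$0`, so the URL can be referenced inside the single-quoted loop without being expanded by the local shell. Substituting `echo` for `curl` shows the mechanics without a cluster (`example.fqdn` is a placeholder, not a real Lattice DNS name):

```shell
# Same argument-passing pattern as the in-pod check, with echo in place of curl.
# The final argument becomes $0 inside the single-quoted script.
bash -c 'for ((i=1; i<=3; i++)); do echo "request $i -> $0"; done' "example.fqdn"
# → request 1 -> example.fqdn
#   request 2 -> example.fqdn
#   request 3 -> example.fqdn
```

Note that `for ((…))` is a bash extension, so the `sh -c` form used in the pod assumes the container's `sh` is bash-compatible.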