From f610113b67d96bcc89b49c5148da4adbc624f96c Mon Sep 17 00:00:00 2001 From: Ehren Graber Date: Fri, 17 Nov 2023 00:18:40 -0800 Subject: [PATCH] doc: fix doc build --- docs/api-types/target-group-policy.md | 19 ++--- docs/concepts/index.md | 3 +- docs/concepts/overview.md | 38 +++++----- docs/guides/deploy.md | 45 ++++++----- docs/guides/getstarted.md | 103 +++++++++++++++----------- docs/guides/multi-sn.md | 74 ++++++++---------- mkdocs.yml | 12 +-- 7 files changed, 153 insertions(+), 141 deletions(-) diff --git a/docs/api-types/target-group-policy.md b/docs/api-types/target-group-policy.md index 4f206ec4..530650c7 100644 --- a/docs/api-types/target-group-policy.md +++ b/docs/api-types/target-group-policy.md @@ -8,23 +8,24 @@ health check configurations of those backend resources. When attaching a policy to a resource, the following restrictions apply: -* A policy can be only attached to `Service` resources. -* The attached resource can only be `backendRef` of `HTTPRoute` and `GRPCRoute`. -* The attached resource should exist in the same namespace as the policy resource. +- A policy can only be attached to `Service` resources. +- The attached resource can only be a `backendRef` of `HTTPRoute` and `GRPCRoute`. +- The attached resource must exist in the same namespace as the policy resource. The policy will not take effect if: -* The resource does not exist -* The resource is not referenced by any route -* The resource is referenced by a route of unsupported type +- The resource does not exist +- The resource is not referenced by any route +- The resource is referenced by a route of an unsupported type These restrictions are not forced; for example, users may create a policy that targets a service that is not created yet. However, the policy will not take effect unless the target is valid. 
**Limitations and Considerations** -* Attaching TargetGroupPolicy to a resource that is already referenced by a route will result in a replacement -of VPC Lattice TargetGroup resource, except for health check updates. -* Removing TargetGroupPolicy of a resource will roll back protocol configuration to default setting. (HTTP1/HTTP plaintext) + +- Attaching a TargetGroupPolicy to a resource that is already referenced by a route will result in a replacement + of the VPC Lattice TargetGroup resource, except for health check updates. +- Removing the TargetGroupPolicy of a resource will roll back the protocol configuration to the default setting (HTTP1/HTTP plaintext).
Someone wanting to run an application that is spread out in this way might find themselves having to work with multiple ways to configure: -* Authentication and authorization -* Observability -* Service discovery -* Network connectivity and traffic routing +- Authentication and authorization +- Observability +- Service discovery +- Network connectivity and traffic routing This is not a new problem. A common approach to interconnecting services that span multiple VPCs is to use service meshes. But these require sidecars, which can introduce scaling problems and present their own management challenges, such as dealing with control plane and data plane at scale. @@ -20,38 +20,38 @@ The goal of VPC Lattice is to provide a way to have a single, over-arching servi You should also have consistent ways of working with assets across your VPCs, even if those assets include different combinations of instances, clusters, containers, and serverless. The components making up that view include: -* Service Directory: This is an account-level directory for gathering your services in once place. -This can provide a view from the VPC Lattice section of the AWS console into all the services you own, as well as services that are shared with you. -A service might direct traffic to a particular service type (such as HTTP) and port (such as port 80). -However, using different rules, a request for the service could be sent to different targets such as a Kubernetes pod or a Lambda function, based on path or query string parameter. +- Service Directory: This is an account-level directory for gathering your services in one place. + This can provide a view from the VPC Lattice section of the AWS console into all the services you own, as well as services that are shared with you. + A service might direct traffic to a particular service type (such as HTTP) and port (such as port 80). 
+ However, using different rules, a request for the service could be sent to different targets such as a Kubernetes pod or a Lambda function, based on path or query string parameter. -* Service Network: Because applications might span multiple VPCs and accounts, there is a need to create networks that span those items. +- Service Network: Because applications might span multiple VPCs and accounts, there is a need to create networks that span those items. These networks let you register services to run across accounts and VPCs. You can create common authorization rules to simplify connectivity. -* Service Policies: You can build service policies to configure observability, access, and traffic management across any service network or gateway. +- Service Policies: You can build service policies to configure observability, access, and traffic management across any service network or gateway. You configure rules for handling traffic and for authorizing access. For now, you can assign IAM roles to allow certain requests. These are similar to S3 or IAM resource policies. Overall, this provides a common way to apply access rules at the service or service network levels. -* Service Gateway: This feature is not yet implemented. +- Service Gateway: This feature is not yet implemented. It is meant to centralize management of ingress and egress gateways. The Service Gateway will also let you manage access to external dependencies and clients using a centrally managed VPC. If all goes well, you should be able to achieve some of the following goals: -* Kubernetes multi-cluster connectivity: Say that you have multiple clusters across multiple VPCs. +- Kubernetes multi-cluster connectivity: Say that you have multiple clusters across multiple VPCs. After configuring your services with the Kubernetes Gateway API, you can facilitate communications between services on those clusters without dealing with the underlying infrastructure. 
VPC Lattice handles a lot of the details for you without needing things like sidecars. -* Serverless access: VPC Lattice allows access to serverless features, as well as Kubernetes cluster features. +- Serverless access: VPC Lattice allows access to serverless features, as well as Kubernetes cluster features. This gives you a way to have a consistent interface to multiple types of platforms. With VPC Lattice you can also avoid some of these common problems: -* Overlapping IP addresses: Even with well-managed IP addresses, overlapping address use can occur by mistake or when organizations or companies merge together. +- Overlapping IP addresses: Even with well-managed IP addresses, overlapping address use can occur by mistake or when organizations or companies merge together. IP address conflicts can also occur if you wanted to manage resources across multiple Kubernetes clusters. -* Sidecar management: Changes to sidecars might require those sidecars to be reconfigured or rebooted. +- Sidecar management: Changes to sidecars might require those sidecars to be reconfigured or rebooted. While this might not be a big issue for a handful of sidecars, it can be disruptive if you have thousands of pods, each with its own sidecar. ## Relationship between VPC Lattice and Kubernetes @@ -59,13 +59,13 @@ With VPC Lattice you can also avoid some of these common problems: As a Kubernetes user, you can have a very Kubernetes-native experience using the VPC Lattice APIs. The following figure illustrates how VPC Lattice objects connect to [Kubernetes Gateway API](https://gateway-api.sigs.k8s.io/) objects: -![VPC Lattice objects relation to Kubernetes objects](images/personae.png) +![VPC Lattice objects relation to Kubernetes objects](../images/personae.png) As shown in the figure, there are different personas associated with different levels of control in VPC Lattice. 
Notice that the Kubernetes Gateway API syntax is used to create the gateway, HTTPRoute and services, but Kubernetes gets the details of those items from VPC Lattice: -* Infrastructure provider: Creates the Kubernetes GatewayClass to identify VPC Lattice as the GatewayClass. -* Cluster operator: Creates the Kubernetes Gateway, which gets information from VPC Lattice related to the Service Gateway and Service Networks, as well as their related Service Policies. -* Application developer: Creates HTTPRoute objects that point to Kubernetes services, which in turn are directed to particular pods, in this case. +- Infrastructure provider: Creates the Kubernetes GatewayClass to identify VPC Lattice as the GatewayClass. +- Cluster operator: Creates the Kubernetes Gateway, which gets information from VPC Lattice related to the Service Gateway and Service Networks, as well as their related Service Policies. +- Application developer: Creates HTTPRoute objects that point to Kubernetes services, which in turn are directed to particular pods, in this case. This is all done by checking the related VPC Lattice Services (and related policies), Target Groups, and Targets Keep in mind that Target Groups v1 and v2 can be on different clusters in different VPCs. diff --git a/docs/guides/deploy.md b/docs/guides/deploy.md index 01760930..b7057597 100644 --- a/docs/guides/deploy.md +++ b/docs/guides/deploy.md @@ -14,20 +14,21 @@ Run through them again for a second cluster to use with the extended example sho ```bash eksctl create cluster --name $CLUSTER_NAME --region $AWS_REGION ``` -3. Configure security group to receive traffic from the VPC Lattice network. You must set up security groups so that they allow all Pods communicating with VPC Lattice to allow traffic from the VPC Lattice managed prefix lists. See [Control traffic to resources using security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) for details. 
Lattice has both an IPv4 and IPv6 prefix lists available. +3. Configure the security group to receive traffic from the VPC Lattice network. You must set up security groups so that all Pods communicating with VPC Lattice allow traffic from the VPC Lattice managed prefix lists. See [Control traffic to resources using security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) for details. Lattice has both IPv4 and IPv6 prefix lists available. - ```bash - CLUSTER_SG=$(aws eks describe-cluster --name $CLUSTER_NAME --output json| jq -r '.cluster.resourcesVpcConfig.clusterSecurityGroupId') - PREFIX_LIST_ID=$(aws ec2 describe-managed-prefix-lists --query "PrefixLists[?PrefixListName=="\'com.amazonaws.$AWS_REGION.vpc-lattice\'"].PrefixListId" | jq -r '.[]') - aws ec2 authorize-security-group-ingress --group-id $CLUSTER_SG --ip-permissions "PrefixListIds=[{PrefixListId=${PREFIX_LIST_ID}}],IpProtocol=-1" - PREFIX_LIST_ID_IPV6=$(aws ec2 describe-managed-prefix-lists --query "PrefixLists[?PrefixListName=="\'com.amazonaws.$AWS_REGION.ipv6.vpc-lattice\'"].PrefixListId" | jq -r '.[]') - aws ec2 authorize-security-group-ingress --group-id $CLUSTER_SG --ip-permissions "PrefixListIds=[{PrefixListId=${PREFIX_LIST_ID_IPV6}}],IpProtocol=-1" - ``` -3. Create an IAM OIDC provider: See [Creating an IAM OIDC provider for your cluster](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html) for details.
+ ```bash + CLUSTER_SG=$(aws eks describe-cluster --name $CLUSTER_NAME --output json| jq -r '.cluster.resourcesVpcConfig.clusterSecurityGroupId') + PREFIX_LIST_ID=$(aws ec2 describe-managed-prefix-lists --query "PrefixLists[?PrefixListName=="\'com.amazonaws.$AWS_REGION.vpc-lattice\'"].PrefixListId" | jq -r '.[]') + aws ec2 authorize-security-group-ingress --group-id $CLUSTER_SG --ip-permissions "PrefixListIds=[{PrefixListId=${PREFIX_LIST_ID}}],IpProtocol=-1" + PREFIX_LIST_ID_IPV6=$(aws ec2 describe-managed-prefix-lists --query "PrefixLists[?PrefixListName=="\'com.amazonaws.$AWS_REGION.ipv6.vpc-lattice\'"].PrefixListId" | jq -r '.[]') + aws ec2 authorize-security-group-ingress --group-id $CLUSTER_SG --ip-permissions "PrefixListIds=[{PrefixListId=${PREFIX_LIST_ID_IPV6}}],IpProtocol=-1" + ``` + +4. Create an IAM OIDC provider: See [Creating an IAM OIDC provider for your cluster](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html) for details. ```bash eksctl utils associate-iam-oidc-provider --cluster $CLUSTER_NAME --approve --region $AWS_REGION ``` -4. Create a policy (`recommended-inline-policy.json`) in IAM with the following content that can invoke the gateway API and copy the policy arn for later use: +5. Create a policy (`recommended-inline-policy.json`) in IAM with the following content that can invoke the gateway API and copy the policy arn for later use: ```bash { "Version": "2012-10-17", @@ -57,15 +58,15 @@ Run through them again for a second cluster to use with the extended example sho --policy-name VPCLatticeControllerIAMPolicy \ --policy-document file://examples/recommended-inline-policy.json ``` -5. Create the `aws-application-networking-system` namespace: +6. Create the `aws-application-networking-system` namespace: ```bash kubectl apply -f examples/deploy-namesystem.yaml ``` -6. Retrieve the policy ARN: +7. 
Retrieve the policy ARN: ```bash export VPCLatticeControllerIAMPolicyArn=$(aws iam list-policies --query 'Policies[?PolicyName==`VPCLatticeControllerIAMPolicy`].Arn' --output text) ``` -7. Create an iamserviceaccount for pod level permission: +8. Create an iamserviceaccount for pod level permission: ```bash eksctl create iamserviceaccount \ --cluster=$CLUSTER_NAME \ @@ -76,11 +77,14 @@ Run through them again for a second cluster to use with the extended example sho --region $AWS_REGION \ --approve ``` -8. Run either `kubectl` or `helm` to deploy the controller: +9. Run either `kubectl` or `helm` to deploy the controller: + ```bash kubectl apply -f examples/deploy-v0.0.18.yaml ``` + or + ```bash # login to ECR aws ecr-public get-login-password --region us-east-1 | helm registry login --username AWS --password-stdin public.ecr.aws @@ -89,16 +93,17 @@ Run through them again for a second cluster to use with the extended example sho oci://public.ecr.aws/aws-application-networking-k8s/aws-gateway-controller-chart\ --version=v0.0.18 \ --set=serviceAccount.create=false --namespace aws-application-networking-system \ - # awsRegion, clusterVpcId, clusterName, awsAccountId are required for case where IMDS is NOT AVAILABLE, e.g Fargate, self-managed clusters with IMDS access blocked + # awsRegion, clusterVpcId, clusterName, awsAccountId are required for case where IMDS is NOT AVAILABLE, e.g Fargate, self-managed clusters with IMDS access blocked --set=awsRegion= \ --set=clusterVpcId= \ --set=clusterName= \ --set=awsAccountId= \ # latticeEndpoint is required for the case where the VPC Lattice endpoint is being overridden --set=latticeEndpoint= \ - - ``` -9. Create the `amazon-vpc-lattice` GatewayClass: - ```bash - kubectl apply -f examples/gatewayclass.yaml + ``` + +10. 
Create the `amazon-vpc-lattice` GatewayClass: + ```bash + kubectl apply -f examples/gatewayclass.yaml + ``` diff --git a/docs/guides/getstarted.md b/docs/guides/getstarted.md index 6fd18747..4f4d347a 100644 --- a/docs/guides/getstarted.md +++ b/docs/guides/getstarted.md @@ -6,8 +6,7 @@ The first part of this section provides an example of setting up of service-to-s The second section extends that example by creating another inventory service on a second cluster on a different VPC, and spreading traffic to that service across the two clusters and VPCs. Both clusters are created using `eksctl`, with both clusters created from the same account by the same cluster admin. -Using these examples as a foundation, see the [Configuration](configure/index.md) section for ways to further configure service-to-service communications. - +Using these examples as a foundation, see the [Configuration](../concepts/index.md) section for ways to further configure service-to-service communications. **NOTE**: You can get the yaml files used on this page by cloning the [AWS Gateway API Controller for VPC Lattice](https://github.com/aws/aws-application-networking-k8s) site. The files are in the `examples/` directory. @@ -15,7 +14,7 @@ Using these examples as a foundation, see the [Configuration](configure/index.md This example creates a single cluster in a single VPC, then configures two routes (rates and inventory) and three services (parking, review, and inventory-1). The following figure illustrates this setup: -![Single cluster/VPC service-to-service communications](images/example1.png) +![Single cluster/VPC service-to-service communications](../images/example1.png) ### Steps @@ -23,7 +22,7 @@ This example creates a single cluster in a single VPC, then configures two route 1. 
Use AWS CLI to create a VPC Lattice service network, with the name `my-hotel`: ```bash - aws vpc-lattice create-service-network --name my-hotel + aws vpc-lattice create-service-network --name my-hotel { "arn": "", "authType": "NONE", @@ -31,8 +30,8 @@ This example creates a single cluster in a single VPC, then configures two route "name": "my-hotel" } ``` - 1. Create the service network VPC association between current k8s cluster VPC and `my-hotel` service network: + ```bash aws vpc-lattice create-service-network-vpc-association --service-network-identifier --vpc-identifier { @@ -41,9 +40,10 @@ This example creates a single cluster in a single VPC, then configures two route "id": "", "status": "CREATE_IN_PROGRESS" } - ``` - + ``` + Wait until above ServiceNetworkVpcAssociation status change to `ACTIVE`: + ```bash aws vpc-lattice get-service-network-vpc-association --service-network-vpc-association-identifier snva-0041ace3a8658371e { @@ -51,14 +51,18 @@ This example creates a single cluster in a single VPC, then configures two route "status": "ACTIVE", } ``` + 1. Create the Kubernetes Gateway `my-hotel`: + ```bash kubectl apply -f examples/my-hotel-gateway.yaml ``` + Verify that `my-hotel` Gateway is created with `PROGRAMMED` status equals to `True`: + ```bash - kubectl get gateway - + kubectl get gateway + NAME CLASS ADDRESS PROGRAMMED AGE my-hotel amazon-vpc-lattice True 7d12h ``` @@ -75,37 +79,40 @@ This example creates a single cluster in a single VPC, then configures two route kubectl apply -f examples/inventory-route.yaml ``` 1. Find out HTTPRoute's DNS name from HTTPRoute status: + ```bash kubectl get httproute - + NAME HOSTNAMES AGE inventory 51s rates 6m11s ``` + 1. 
Check VPC Lattice generated DNS address for HTTPRoute `inventory` and `rates` : - ```bash - kubectl get httproute inventory -o yaml - - apiVersion: gateway.networking.k8s.io/v1beta1 - kind: HTTPRoute - metadata: - annotations: - application-networking.k8s.aws/lattice-assigned-domain-name: inventory-default-02fb06f1acdeb5b55.7d67968.vpc-lattice-svcs.us-west-2.on.aws - ... - ``` - - ```bash - kubectl get httproute rates -o yaml - - apiVersion: v1 - items: - - apiVersion: gateway.networking.k8s.io/v1beta1 - kind: HTTPRoute - metadata: - annotations: - application-networking.k8s.aws/lattice-assigned-domain-name: rates-default-0d38139624f20d213.7d67968.vpc-lattice-svcs.us-west-2.on.aws - ... - ``` + + ```bash + kubectl get httproute inventory -o yaml + + apiVersion: gateway.networking.k8s.io/v1beta1 + kind: HTTPRoute + metadata: + annotations: + application-networking.k8s.aws/lattice-assigned-domain-name: inventory-default-02fb06f1acdeb5b55.7d67968.vpc-lattice-svcs.us-west-2.on.aws + ... + ``` + + ```bash + kubectl get httproute rates -o yaml + + apiVersion: v1 + items: + - apiVersion: gateway.networking.k8s.io/v1beta1 + kind: HTTPRoute + metadata: + annotations: + application-networking.k8s.aws/lattice-assigned-domain-name: rates-default-0d38139624f20d213.7d67968.vpc-lattice-svcs.us-west-2.on.aws + ... + ``` 1. If the previous step returns the expected response, store VPC Lattice assigned DNS names to variables. @@ -115,17 +122,20 @@ This example creates a single cluster in a single VPC, then configures two route ``` Confirm that the URLs are stored correctly: - + ```bash echo $ratesFQDN $inventoryFQDN rates-default-034e0056410499722.7d67968.vpc-lattice-svcs.us-west-2.on.aws inventory-default-0c54a5e5a426f92c2.7d67968.vpc-lattice-svcs.us-west-2.on.aws ``` + **Verify service-to-service communications** -1. Check connectivity from the `inventory-ver1` service to `parking` and `review` services: +1. 
Check connectivity from the `inventory-ver1` service to `parking` and `review` services: + ```bash kubectl exec deploy/inventory-ver1 -- curl $ratesFQDN/parking $ratesFQDN/review ``` + ``` Requsting to Pod(parking-8548d7f98d-57whb): parking handler pod Requsting to Pod(review-6df847686d-dhzwc): review handler pod @@ -138,29 +148,29 @@ This example creates a single cluster in a single VPC, then configures two route ``` Requsting to Pod(inventory-ver1-99d48958c-whr2q): Inventory-ver1 handler pod ``` -Now you could confirm the service-to-service communications within one cluster is working as expected. + Now you can confirm that service-to-service communication within one cluster is working as expected. ## Set up multi-cluster/multi-VPC service-to-service communications This sections builds on the previous section by migrating a Kubernetes service (HTTPRoute inventory) from one Kubernetes cluster to a different Kubernetes cluster. For example, it will: -* Migrate the Kubernetes inventory service from a Kubernetes v1.21 cluster to a Kubernetes v1.23 cluster in a different VPC. -* Scale up the Kubernetes inventory service to run it in another cluster (and another VPC) in addition to the current cluster. +- Migrate the Kubernetes inventory service from a Kubernetes v1.21 cluster to a Kubernetes v1.23 cluster in a different VPC. +- Scale up the Kubernetes inventory service to run it in another cluster (and another VPC) in addition to the current cluster. The following figure illustrates this: -![Multiple clusters/VPCs service-to-service communications](images/example2.png) +![Multiple clusters/VPCs service-to-service communications](../images/example2.png) ### Steps -**Set up `inventory-ver2` service and serviceExport in the second cluster** +**Set up `inventory-ver2` service and serviceExport in the second cluster** 1. Create a second Kubernetes cluster `cluster2` (using the same instructions used to create the first). -1.
Ensure you're using the second cluster's `kubectl` context. +1. Ensure you're using the second cluster's `kubectl` context. ```bash - kubectl config get-contexts + kubectl config get-contexts ``` If your context is set to the first cluster, switch it to use the second cluster one: ```bash @@ -171,10 +181,11 @@ The following figure illustrates this: kubectl apply -f examples/inventory-ver2.yaml ``` 1. Export this Kubernetes inventory-ver2 from the second cluster, so that it can be referenced by HTTPRoute in the first cluster: + ```bash kubectl apply -f examples/inventory-ver2-export.yaml ``` - + **Switch back to the first cluster** 1. Switch context back to the first cluster @@ -190,9 +201,10 @@ The following figure illustrates this: kubectl apply -f examples/inventory-route-bluegreen.yaml ``` 1. Check the service-to-service connectivity from `parking`(in cluster1) to `inventory-ver1`(in cluster1) and `inventory-ver2`(in cluster2): + ```bash kubectl exec deploy/parking -- sh -c 'for ((i=1; i<=30; i++)); do curl "$0"; done' "$inventoryFQDN" - + Requsting to Pod(inventory-ver2-6dc74b45d8-rlnlt): Inventory-ver2 handler pod <----> in 2nd cluster Requsting to Pod(inventory-ver2-6dc74b45d8-rlnlt): Inventory-ver2 handler pod Requsting to Pod(inventory-ver2-6dc74b45d8-rlnlt): Inventory-ver2 handler pod @@ -203,7 +215,8 @@ The following figure illustrates this: Requsting to Pod(inventory-ver2-6dc74b45d8-95rsr): Inventory-ver2 handler pod Requsting to Pod(inventory-ver1-74fc59977-wg8br): Inventory-ver1 handler pod.... ``` - You can see that the traffic is distributed between *inventory-ver1* and *inventory-ver2* as expected. + + You can see that the traffic is distributed between _inventory-ver1_ and _inventory-ver2_ as expected. 
## IPv6 Support diff --git a/docs/guides/multi-sn.md b/docs/guides/multi-sn.md index a6272c64..f4b8ef3f 100644 --- a/docs/guides/multi-sn.md +++ b/docs/guides/multi-sn.md @@ -4,22 +4,22 @@ Here is one popular multi-cluster architecture: -* config cluster, where is used for configuration management -* multiple work-load cluster(s), where are used to run application workload(s) +- config cluster, which is used for configuration management +- multiple workload cluster(s), which are used to run application workload(s) You can see a production usecase at AirBnb [airbnb mullti-cluster](https://www.youtube.com/watch?v=1D8lg36ZNHs) Here is our example -![Config Cluster and multiple workload cluster](images/multi-sn.png) +![Config Cluster and multiple workload cluster](../images/multi-sn.png) -* there are 2 gateway(s), gateway-1/lattice-service-network-1 and gateway-2/lattice-service-network-2 -* gateway-1 contains HTTPRoute1 and HTTPRoute2 -* gateway-2 contains HTTPRoute2 and HTTPRoute3 -* blue workload cluster(s) are using gateway-1 to access HTTPRoute1 an HTTPRoute2 -* orange workload cluster(s) are using gateway-2 to access HTTPRoute2 an HTTPRoute3 +- there are two gateways, gateway-1/lattice-service-network-1 and gateway-2/lattice-service-network-2 +- gateway-1 contains HTTPRoute1 and HTTPRoute2 +- gateway-2 contains HTTPRoute2 and HTTPRoute3 +- blue workload cluster(s) are using gateway-1 to access HTTPRoute1 and HTTPRoute2 +- orange workload cluster(s) are using gateway-2 to access HTTPRoute2 and HTTPRoute3 -### Config Cluster Gateway Configuration +### Config Cluster Gateway Configuration ``` # gateway-1 @@ -33,7 +33,7 @@ spec: gatewayClassName: amazon-vpc-lattice listeners: ... -``` +``` ``` # gateway-2 @@ -47,7 +47,7 @@ spec: gatewayClassName: amazon-vpc-lattice listeners: ... -``` +``` ``` # httproute-1 @@ -59,7 +59,7 @@ spec: parentRefs: - name: gateway-1 # part of gateway-1/service-network-1 ...
-``` +``` ``` # httproute-2 @@ -69,26 +69,27 @@ metadata: name: httproute-2 spec: parentRefs: - - name: gateway-1 # part of both gateway-1 and gateway-2 + - name: gateway-1 # part of both gateway-1 and gateway-2 sectionName: http - name: gateway-2 sectionName: http - ... - ``` + ... +``` - ``` +``` # httproute-3 apiVersion: gateway.networking.k8s.io/v1beta1 kind: HTTPRoute metadata: - name: httproute-3 + name: httproute-3 spec: - parentRefs: - - name: gateway-2 # part of gateway-2/service-network-2 - ... + parentRefs: + - name: gateway-2 # part of gateway-2/service-network-2 + ... ``` ### blue workload cluster(s) + Associate cluster's VPC to gateway-1/service-network-1 so that all Pod(s) in blue workload clusters can access HTTPRoute(s)of gateway-1, HTTPRoute-1 and HTTPRoute-2 ``` # gateway-1 @@ -101,9 +102,10 @@ spec: gatewayClassName: amazon-vpc-lattice listeners: ... -``` +``` ### orange workload cluster(s) + Associate cluster's VPC to gateway-2/service-network-2, so that all Pod(s) in orange workload clusters can access HTTPRoute(s) of gateway-2, HTTPRoute-2 an HTTPRoute-3 ``` # gateway-2 @@ -116,11 +118,11 @@ spec: gatewayClassName: amazon-vpc-lattice listeners: ...
-``` +``` ## Defining HTTPRoute in Config Cluster -![ServiceImport](images/serviceimport.png) +![ServiceImport](../images/serviceimport.png) ### Exporting Kubernetes Service to AWS Lattice Service @@ -134,7 +136,7 @@ metadata: name: service-1 annotations: application-networking.k8s.aws/federation: "amazon-vpc-lattice" # AWS VPC Lattice -``` +``` ### Configure HTTPRoute in config cluster to reference K8S service(s) in worload cluster(s) @@ -152,7 +154,7 @@ spec: ``` ``` -# httproute +# httproute apiVersion: gateway.networking.k8s.io/v1beta1 kind: HTTPRoute metadata: @@ -160,31 +162,19 @@ metadata: spec: parentRefs: - name: gateway-1 - sectionName: http + sectionName: http rules: - - backendRefs: + - backendRefs: - name: service-1 kind: ServiceImport weight: 25 - name: service-2 kind: ServiceImport - weight: 25 + weight: 25 - name: service-3 kind: ServiceImport weight: 25 - name: service-4 kind: ServiceImport - weight: 25 -``` - - - - - - - - - - - - + weight: 25 +``` diff --git a/mkdocs.yml b/mkdocs.yml index 5ce66327..aeeb817c 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -19,11 +19,13 @@ nav: - Cross-Account Sharing: concepts/ram-sharing.md - API Specification: api-reference.md - API Reference: - - GRPCRoute: reference/grpc-route.md - - TargetGroupPolicy: reference/target-group-policy.md - - VpcAssociationPolicy: reference/vpc-association-policy.md - - AccessLogPolicy: reference/access-log-policy.md - - IAMAuthPolicy: reference/iam-auth-policy.md + - AccessLogPolicy: api-types/access-log-policy.md + - GRPCRoute: api-types/grpc-route.md + - HttpRoute: api-types/http-route.md + - IAMAuthPolicy: api-types/iam-auth-policy.md + - Service: api-types/service.md + - TargetGroupPolicy: api-types/target-group-policy.md + - VpcAssociationPolicy: api-types/vpc-association-policy.md - Contributing: - Developer Guide: contributing/developer.md - Developer Cheat Sheet: contributing/developer-cheat-sheet.md
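A quick way to validate changes like the relative-link and `mkdocs.yml` nav fixes in this patch is to build the site locally before sending it. A minimal sketch, assuming the repository builds its docs with `mkdocs` (as the `mkdocs.yml` touched above suggests) and that the `mkdocs` package plus whatever theme `mkdocs.yml` names are installed:

```shell
# Build the docs with warnings promoted to hard failures; under
# --strict, a nav entry pointing at a missing page or a broken
# relative link fails the build instead of merely logging a warning.
# Guarded so the command only runs from the repository root.
if [ -f mkdocs.yml ]; then
    mkdocs build --strict
else
    echo "run this from the repository root (where mkdocs.yml lives)"
fi
```

This catches exactly the class of errors fixed above, such as nav paths still pointing at the old `reference/` locations.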