`docs/concepts/overview.md`
For medium and large-scale customers, applications can often spread across multiple VPCs.
For example, information pertaining to a company's authentication, billing, and inventory may each be served by services running on different VPCs in AWS.
Someone wanting to run an application that is spread out in this way might find themselves having to work with multiple ways to configure:

- Authentication and authorization
- Observability
- Service discovery
- Network connectivity and traffic routing
This is not a new problem.
A common approach to interconnecting services that span multiple VPCs is to use service meshes. But these require sidecars, which can introduce scaling problems and present their own management challenges, such as dealing with control plane and data plane at scale.
The goal of VPC Lattice is to provide a way to have a single, over-arching service view.
You should also have consistent ways of working with assets across your VPCs, even if those assets include different combinations of instances, clusters, containers, and serverless.
The components making up that view include:

- Service Directory: This is an account-level directory for gathering your services in one place. This can provide a view from the VPC Lattice section of the AWS console into all the services you own, as well as services that are shared with you. A service might direct traffic to a particular service type (such as HTTP) and port (such as port 80). However, using different rules, a request for the service could be sent to different targets, such as a Kubernetes pod or a Lambda function, based on path or query string parameter.
- Service Network: Because applications might span multiple VPCs and accounts, there is a need to create networks that span those items. These networks let you register services to run across accounts and VPCs. You can create common authorization rules to simplify connectivity.
- Service Policies: You can build service policies to configure observability, access, and traffic management across any service network or gateway. You configure rules for handling traffic and for authorizing access. For now, you can assign IAM roles to allow certain requests. These are similar to S3 or IAM resource policies. Overall, this provides a common way to apply access rules at the service or service network levels.
- Service Gateway: This feature is not yet implemented. It is meant to centralize management of ingress and egress gateways. The Service Gateway will also let you manage access to external dependencies and clients using a centrally managed VPC.
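To make the service-policy idea concrete, the following is a sketch of an auth policy in IAM resource-policy form. The action name, principal ARN, and account number are illustrative assumptions, not taken from this document:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::123456789012:role/my-client-role" },
            "Action": "vpc-lattice-svcs:Invoke",
            "Resource": "*"
        }
    ]
}
```

Because these policies use the familiar IAM policy grammar, the same conditions and principals you use for S3 bucket policies carry over to service networks.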
If all goes well, you should be able to achieve some of the following goals:

- Kubernetes multi-cluster connectivity: Say that you have multiple clusters across multiple VPCs. After configuring your services with the Kubernetes Gateway API, you can facilitate communications between services on those clusters without dealing with the underlying infrastructure. VPC Lattice handles a lot of the details for you without needing things like sidecars.
- Serverless access: VPC Lattice allows access to serverless features, as well as Kubernetes cluster features. This gives you a way to have a consistent interface to multiple types of platforms.
With VPC Lattice you can also avoid some of these common problems:

- Overlapping IP addresses: Even with well-managed IP addresses, overlapping address use can occur by mistake or when organizations or companies merge. IP address conflicts can also occur if you want to manage resources across multiple Kubernetes clusters.
- Sidecar management: Changes to sidecars might require those sidecars to be reconfigured or rebooted. While this might not be a big issue for a handful of sidecars, it can be disruptive if you have thousands of pods, each with its own sidecar.
## Relationship between VPC Lattice and Kubernetes
The following figure illustrates how VPC Lattice objects connect to Kubernetes Gateway API objects:

As shown in the figure, there are different personas associated with different levels of control in VPC Lattice.
Notice that the Kubernetes Gateway API syntax is used to create the gateway, HTTPRoute and services, but Kubernetes gets the details of those items from VPC Lattice:

- Infrastructure provider: Creates the Kubernetes GatewayClass to identify VPC Lattice as the GatewayClass.
- Cluster operator: Creates the Kubernetes Gateway, which gets information from VPC Lattice related to the Service Gateway and Service Networks, as well as their related Service Policies.
- Application developer: Creates HTTPRoute objects that point to Kubernetes services, which in turn are directed to particular pods, in this case.

This is all done by checking the related VPC Lattice Services (and related policies), Target Groups, and Targets.
Keep in mind that Target Groups v1 and v2 can be on different clusters in different VPCs.
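The persona split above can be sketched with standard Kubernetes Gateway API resources. The resource names and the `controllerName` value below are illustrative assumptions, not values taken from this document:

```yaml
# Infrastructure provider: identify VPC Lattice as the GatewayClass.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: amazon-vpc-lattice            # hypothetical name
spec:
  controllerName: application-networking.k8s.aws/gateway-api-controller
---
# Cluster operator: create the Gateway backed by that GatewayClass.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: my-service-network            # hypothetical name
spec:
  gatewayClassName: amazon-vpc-lattice
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
# Application developer: route requests to a Kubernetes Service.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: inventory                     # hypothetical name
spec:
  parentRefs:
    - name: my-service-network
  rules:
    - backendRefs:
        - name: inventory-ver1        # Kubernetes Service backing the route
          port: 80
```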
1. Configure security groups to receive traffic from the VPC Lattice network. You must set up security groups so that all Pods communicating with VPC Lattice allow inbound traffic from the VPC Lattice managed prefix lists. See [Control traffic to resources using security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) for details. Lattice has both IPv4 and IPv6 prefix lists available.
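    As a sketch of this step, the commands below look up the managed prefix lists and open the Pods' security group to them. The prefix-list names and the security group ID are assumptions; verify them in your account:

    ```bash
    # Look up the VPC Lattice managed prefix lists (IPv4 and IPv6).
    PL_IPV4=$(aws ec2 describe-managed-prefix-lists \
        --filters "Name=prefix-list-name,Values=com.amazonaws.${AWS_REGION}.vpc-lattice" \
        --query "PrefixLists[0].PrefixListId" --output text)
    PL_IPV6=$(aws ec2 describe-managed-prefix-lists \
        --filters "Name=prefix-list-name,Values=com.amazonaws.${AWS_REGION}.ipv6.vpc-lattice" \
        --query "PrefixLists[0].PrefixListId" --output text)

    # Allow inbound traffic from both prefix lists on the Pods' security group
    # (sg-0123456789abcdef0 is a placeholder).
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
        --ip-permissions "IpProtocol=-1,PrefixListIds=[{PrefixListId=${PL_IPV4}}]"
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
        --ip-permissions "IpProtocol=-1,PrefixListIds=[{PrefixListId=${PL_IPV6}}]"
    ```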
1. Create a policy (`recommended-inline-policy.json`) in IAM with the following content that can invoke the Gateway API, and copy the policy ARN for later use:

    ```bash
    {
        "Version": "2012-10-17",
        ...
    }
    ```
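    With the policy document saved as `recommended-inline-policy.json`, one way to create the policy and print its ARN is the following sketch; the policy name matches the `VPCLatticeControllerIAMPolicy` referenced in a later step:

    ```bash
    aws iam create-policy \
        --policy-name VPCLatticeControllerIAMPolicy \
        --policy-document file://recommended-inline-policy.json \
        --query 'Policy.Arn' --output text
    ```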
EKS is a simple, recommended way of preparing a cluster for running services with VPC Lattice.
1. Create the `aws-application-networking-system` namespace:

    ```bash
    kubectl apply -f examples/deploy-namesystem.yaml
    ```

    ```bash
    export VPCLatticeControllerIAMPolicyArn=$(aws iam list-policies --query 'Policies[?PolicyName==`VPCLatticeControllerIAMPolicy`].Arn' --output text)
    ```
1. Create an `iamserviceaccount` for pod-level permissions:

    ```bash
    eksctl create iamserviceaccount \
        --cluster=$CLUSTER_NAME \
        ...
    ```
Alternatively, you can manually provide configuration variables when installing the controller.
## Controller Installation
1. Run either `kubectl` or `helm` to deploy the controller. Check [Environment Variables](../concepts/environment.md) for a detailed explanation of each configuration option.
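    For example, a `helm` installation might look like the following sketch. The chart location, chart version, release name, and values are assumptions drawn from typical usage; check the project's installation documentation for the exact artifact names:

    ```bash
    # Chart version below is hypothetical; pin to a real release.
    helm install gateway-api-controller \
        oci://public.ecr.aws/aws-application-networking-k8s/aws-gateway-controller-chart \
        --version=1.0.0 \
        --namespace aws-application-networking-system \
        --set=serviceAccount.create=false \
        --set=awsRegion=$AWS_REGION
    ```

    Setting `serviceAccount.create=false` assumes the service account was already created by `eksctl create iamserviceaccount` in the earlier step.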
`docs/guides/getstarted.md`
Both clusters are created using `eksctl`, with both clusters created from the same account.

Using these examples as a foundation, see the [Configuration](../concepts/index.md) section for ways to further configure service-to-service communications.

**NOTE**: You can get the yaml files used on this page by cloning the [AWS Gateway API Controller](https://github.com/aws/aws-application-networking-k8s) repository.
## Set up single-cluster/VPC service-to-service communications
This example creates a single cluster in a single VPC, then configures two routes.

When the `DEFAULT_SERVICE_NETWORK` environment variable is specified, the controller will automatically configure a service network for you.
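For instance, under the assumption that the controller reads this variable at startup, you could export it before deploying (the value `my-hotel` is just an illustrative network name):

```shell
# Hypothetical service network name; the controller is assumed to create
# or adopt a service network with this name when the variable is set.
export DEFAULT_SERVICE_NETWORK=my-hotel
```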
```
Requsting to Pod(parking-8548d7f98d-57whb): parking handler pod
Requsting to Pod(review-6df847686d-dhzwc): review handler pod
```

```
Requsting to Pod(inventory-ver1-99d48958c-whr2q): Inventory-ver1 handler pod
```

Now you can confirm that service-to-service communications within one cluster are working as expected.
## Set up multi-cluster/multi-VPC service-to-service communications

This section builds on the previous section by migrating a Kubernetes service (HTTPRoute inventory) from one Kubernetes cluster to a different Kubernetes cluster.
For example, it will:

- Migrate the Kubernetes inventory service from a Kubernetes v1.21 cluster to a Kubernetes v1.23 cluster in a different VPC.
- Scale up the Kubernetes inventory service to run it in another cluster (and another VPC) in addition to the current cluster.
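At a high level, the migration above amounts to applying the same route and service manifests against the second cluster's context and then retiring them from the first. The kubectl context names and manifest file names below are illustrative assumptions:

```bash
# Hypothetical contexts for the two clusters.
kubectl config use-context cluster-v1-23
kubectl apply -f examples/inventory-ver2.yaml    # run inventory in the new cluster/VPC
kubectl apply -f examples/inventory-route.yaml   # register the same HTTPRoute there

# Once traffic to the new cluster is confirmed, remove the route
# from the original cluster to complete the migration.
kubectl config use-context cluster-v1-21
kubectl delete -f examples/inventory-route.yaml
```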