A simple demo of gRPC load balancing.
The following software is required to run the demo:
Build the native binaries:

```shell
./build.sh
```
The following binaries will be generated in the `out` directory:

- `server`: sends its IP address back to HTTP and gRPC clients.
- `client-http`: connects to the server via HTTP.
- `client-grpc`: connects to the server via gRPC.
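The `server` binary's addr endpoint can be sketched with Go's standard library. This is a minimal, hypothetical sketch rather than the demo's actual source: the `/addr` path and port 80 match the commands below, but the `localAddr` helper, its `"unknown"` fallback, and the omission of the gRPC listener are assumptions.

```go
package main

import (
	"fmt"
	"net"
	"net/http"
)

// localAddr returns the first non-loopback IPv4 address of this host,
// or "unknown" if none is found. (Hypothetical helper.)
func localAddr() string {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return "unknown"
	}
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok && !ipn.IP.IsLoopback() && ipn.IP.To4() != nil {
			return ipn.IP.String()
		}
	}
	return "unknown"
}

func main() {
	// Each request is answered with this instance's own IP address,
	// which is what lets the demo show which pod served the call.
	http.HandleFunc("/addr", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, localAddr())
	})
	http.ListenAndServe(":80", nil)
}
```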
Arrange your terminal as follows for a better experience:
Pane A: start the server:

```shell
out/server
```

The server provides the same addr service on two TCP ports:

- Port 80 for HTTP
- Port 30051 for gRPC
Pane B: connect to the server's addr service via the HTTP endpoint:

```shell
out/client-http http://127.0.0.1:80/addr
```

You can see the IP address of the connected server instance.
Pane C: connect to the server's addr service via the gRPC endpoint:

```shell
out/client-grpc localhost:30051
```

You can see the IP address of the connected server instance.
Create the `grpc-lb` namespace for this demo:

```shell
kubectl create ns grpc-lb
```
Build the images:

```shell
skaffold build
```
Run the server (only one pod) in the `grpc-lb` namespace:

```shell
skaffold dev -n grpc-lb
```

The server provides the same addr service on two TCP ports:

- Port 30080 for HTTP
- Port 30051 for gRPC
Run the native client-http:

```shell
out/client-http http://127.0.0.1:30080/addr
```

Run the native client-grpc:

```shell
out/client-grpc localhost:30051
```
You can see that the IP address of the single server pod is 10.1.0.8.
Now, scale the server to 5 pods:

```shell
kubectl scale -n grpc-lb --replicas=5 deployment/addr-server
```
You can see that only one of the gRPC server pods serves the same client-grpc instance. This is expected: gRPC multiplexes all RPCs over a single long-lived HTTP/2 connection, and Kubernetes balances traffic at the connection (L4) level, so every call from one client sticks to the pod that accepted that connection.
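This pinning behavior can be illustrated with Go's standard library alone: one HTTP client reuses a single keep-alive connection across many requests, just as a gRPC channel multiplexes all RPCs over one HTTP/2 connection. The sketch below is an analogy, not part of the demo; `countConnections` is a hypothetical helper.

```go
package main

import (
	"fmt"
	"io"
	"net"
	"net/http"
	"net/http/httptest"
	"sync/atomic"
)

// countConnections issues n sequential requests from a single client
// and returns how many TCP connections the server actually accepted.
func countConnections(n int) int {
	var opened atomic.Int32
	ts := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, "ok")
	}))
	// Count every freshly accepted connection.
	ts.Config.ConnState = func(c net.Conn, s http.ConnState) {
		if s == http.StateNew {
			opened.Add(1)
		}
	}
	ts.Start()
	defer ts.Close()

	client := ts.Client()
	for i := 0; i < n; i++ {
		resp, err := client.Get(ts.URL)
		if err != nil {
			panic(err)
		}
		io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
		resp.Body.Close()
	}
	return int(opened.Load())
}

func main() {
	// All requests ride on one keep-alive connection, so this prints a
	// number far smaller than the request count (typically 1).
	fmt.Println(countConnections(10))
}
```

A connection-level load balancer never sees the individual requests inside such a connection, which is why per-request gRPC balancing needs an L7-aware proxy like Linkerd.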
Install Linkerd 2 into the active Kubernetes cluster:

```shell
# Install
linkerd install | kubectl apply -f -

# Check
linkerd check
```
Delete the old `grpc-lb` namespace:

```shell
kubectl delete ns grpc-lb
```
Create a new `grpc-lb` namespace with Linkerd injected:

```shell
kubectl apply -f ns.yml
```
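The contents of `ns.yml` are not shown in this README. A minimal sketch of what it plausibly contains, assuming it uses Linkerd 2's standard `linkerd.io/inject` auto-injection annotation:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: grpc-lb
  annotations:
    linkerd.io/inject: enabled
```

With this annotation, Linkerd's proxy sidecar is injected into every pod created in the namespace, so both the server and the in-cluster client are meshed automatically.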
Check the data plane again, if necessary:

```shell
linkerd -n grpc-lb check --proxy
```
Uncomment the following line in `skaffold.yaml`:

```yaml
- client-grpc.yml
```
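For context, this line plausibly sits in the `kubectl` deploy section of `skaffold.yaml`. The surrounding structure and the sibling manifest name below are assumptions, not the file's actual contents:

```yaml
deploy:
  kubectl:
    manifests:
      - server.yml
      - client-grpc.yml   # uncomment this line
```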
Run the server and the Kubernetes version of client-grpc in the `grpc-lb` namespace:

```shell
skaffold dev -n grpc-lb
```
Run the native client-grpc, as a comparison:

```shell
out/client-grpc localhost:30051
```
Make sure you have run all of the commands above, then wait for the clients and the server to become stable.
Now, scale the server to 5 pods:

```shell
kubectl scale -n grpc-lb --replicas=5 deployment/addr-server
```
You can see that all gRPC server pods (10.1.0.71 through 10.1.0.75) take turns serving the same client instance within the same `grpc-lb` namespace.
View the Linkerd dashboard:

```shell
linkerd dashboard &
```
You can see the topology of the native client-grpc, the meshed client-grpc, and the meshed server. You can also see that all gRPC server pods take turns serving the same client instance within the same `grpc-lb` namespace.