This sample shows a dedicated game server deployment method for Unreal Engine-based games on Amazon EKS. It demonstrates (1) dynamic port allocation with Kubernetes-native networking (v1/Service); and (2) public IPv4 connectivity via an Elastic IP or the EC2 public IPv4 address. Dedicated game servers that run in pods require public IPv4 access; an Elastic IP (EIP) provides predictable address allocation, which lets you mask public IPv4 addresses with your own DNS rather than .compute.amazonaws.com, or mitigate network volumetric (layer 3) attacks with AWS Shield.
- Build the game image manually or with CodePipeline
- Create an EKS cluster with Karpenter
- Deploy Container Insights
- Download the Windows game client. Discover the game endpoint (`kubectl get gs`) and connect, or spectate the bot playing.
Follow https://docs.unrealengine.com/5.0/en-US/setting-up-dedicated-servers-in-unreal-engine/ to build two sets of server binaries and content (we already built the server binaries and content). The steps below describe the manual build on your local development host.
1/ Populate the following environment variables.
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
export AWS_REGION=us-west-2
export BUILDX_VER=v0.10.3
export BASE_REPO=baseimage
export BASE_IMAGE_TAG=multiarch-py3
export BASE_IMAGE_ARM_TAG=arm-py3
export BASE_IMAGE_AMD_TAG=amd-py3
export GAME_REPO=lyra
export GAME_SERVER_TAG=lyra-server-multiarch
export GAME_ASSETS_TAG=lyra_starter_game
export GAME_ARM_ASSETS_TAG=LinuxArm64Server
export GAME_AMD_ASSETS_TAG=LinuxServer
export GAME_ARM_SERVER_TAG=lyra_starter_game_arm64
export GAME_AMD_SERVER_TAG=lyra_starter_game_amd64
export GAME_ATTACKER_TAG=lyra_attacker
export GAME_ATTACKER_ARM_IMAGE=lyra_attacker_arm64
export GAME_ATTACKER_AMD_IMAGE=lyra_attacker_amd64
export S3_LYRA_ASSETS=lyra-starter-game
export GITHUB_USER=mygithubusername
export GITHUB_BRANCH=main # or master, etc.
export GITHUB_REPO=containerized-game-servers
export CLUSTER_NAME=lyra-usw-2
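Before building, you may want to sanity-check that the key variables are set. A small bash helper (the subset of variables checked here is illustrative, not exhaustive):

```shell
# Fail fast if any required environment variable is empty or unset.
required_vars="AWS_ACCOUNT_ID AWS_REGION BASE_REPO GAME_REPO CLUSTER_NAME"

check_env() {
  local v missing=0
  for v in $required_vars; do
    # ${!v} is bash indirect expansion: the value of the variable named by $v.
    if [ -z "${!v}" ]; then
      echo "missing: $v" >&2
      missing=1
    fi
  done
  return "$missing"
}
```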
2/ Build the base image
This is the image that includes the generic tools and libraries needed by the game. We use CPU architecture-agnostic packages to allow dynamic compilation and linkage against the local architecture. The persona most interested in this build is the IT/DevOps engineer who optimizes for stability and security.
cd ./server/base-image-multiarch-python3
./buildx.sh
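The contents of buildx.sh are not reproduced here; as a rough bash sketch, assuming it performs a standard docker buildx multi-architecture build and push to ECR (the function name and tag composition below follow the variables above and are assumptions):

```shell
# Sketch: build and push a multi-arch image (linux/amd64 + linux/arm64)
# to ECR in a single step. Assumes a buildx builder is already configured
# and docker is logged in to the registry.
build_multiarch() {
  local repo_uri="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${BASE_REPO}"
  docker buildx build \
    --platform linux/amd64,linux/arm64 \
    -t "${repo_uri}:${BASE_IMAGE_TAG}" \
    --push .
}
```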
3/ Link the game assets
This step packages the game's image, video, and audio files. The persona that owns this step is the game artist. The media files (audio, images, and videos) can be stored on storage solutions such as S3.
cd ./server/lyra-assets-image-multiarch
./buildx.sh
4/ Build the game server image
This is the game governance piece. It includes the scripts that control the game lifecycle and the game ecosystem, such as matchmaking, leaderboard, and messaging applications. The persona that owns this step is the game live/DevOps team that operates the game.
cd ./server/stk-game-server-image-multiarch
./buildx.sh
The following creates a CodePipeline that copies the build scripts in the server/ folder into a CodeCommit repository and runs the steps above as separate CodeBuild jobs.
1/ deploy the pipeline that creates the base image
./deploy-base-pipeline.sh
2/ deploy the pipeline that creates the Lyra game image
./deploy-lyra-pipeline.sh
The Source stage includes the code and config. Note that the Dockerfile contains nothing processor-architecture specific, so libraries and packages are linked dynamically via the packaged tools, e.g., apt or yum.
The base-image pipeline produces two Docker images: 601.11 MB (AMD64) and 582.56 MB (ARM64).
The source stage includes the code and config.
The developer pipeline produces two images, lyra-assets-amd for AMD64 and lyra-assets-arm for ARM64, plus an Image Index.
The source stage includes the code and config.
Note that this step pulls only the compiled code produced by the game developer pipeline and the assets from the game artist pipeline.
FROM stk_base AS lyra_game
COPY --from=1 /lyra-assets /lyra-assets
COPY --from=2 /lyra-code /lyra-code
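The fragment above can be read as the final stage of a multi-stage Dockerfile. A hedged sketch of the full pattern it implies (the stage ordering matches the `--from=1` and `--from=2` indexes; the ECR URIs and tags are assumptions based on the variables defined earlier):

```dockerfile
ARG AWS_ACCOUNT_ID
ARG AWS_REGION

# Stage 0: runtime base with the generic tools and libraries (stk_base).
FROM ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/baseimage:multiarch-py3 AS stk_base

# Stage 1: assets image produced by the game artist pipeline.
FROM ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/lyra:lyra_starter_game AS assets

# Stage 2: compiled server code produced by the game developer pipeline.
FROM ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/lyra:lyra-server-multiarch AS code

# Final image: base + assets + compiled code only; build tooling stays behind.
FROM stk_base AS lyra_game
COPY --from=1 /lyra-assets /lyra-assets
COPY --from=2 /lyra-code /lyra-code
```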
cat lyra-deploy.yaml | envsubst | kubectl apply -f -
cat lyra-provisioner | envsubst | kubectl apply -f -
We create a Service that exposes the game server pod on each node's IP at a static port (the NodePort). We use a Service definition template that is mutated by a postStart pod lifecycle hook.
apiVersion: v1
kind: Service
metadata:
  name: $SVC_NAME
spec:
  selector:
    gamepod: $POD_NAME
  ports:
    - protocol: UDP
      port: 7777
      targetPort: 7777
  type: NodePort
To make the node port available, Kubernetes sets up a cluster IP address, the same as if you had requested a Service of type: ClusterIP.
Next, we create a pod-specific label that we use for the Service selector to ensure 1:1 traffic routing between the Service and the pod.
kubectl label pod $POD_NAME gamepod=$POD_NAME
Finally, we build the Service name from the external IPv4 address of the node that exposes the game:
service_name=$POD_NAME-svc-$(kubectl get no -o wide `kubectl get po $POD_NAME -o wide | awk '{print $7}' | grep -v NODE` | awk '{print $7}' | grep -v EXTERNAL-IP | sed "s/\./-/g")
export SVC_NAME=$service_name
and create the Service in the pod lifecycle.postStart phase.
We delete the Service upon pod termination using the pod lifecycle preStop hook.
lifecycle:
  postStart:
    exec:
      command: ["/usr/local/lyra_starter_game/create_node_port_svc.sh"]
  preStop:
    exec:
      command: ["/bin/sh","-c","kubectl delete svc `kubectl get svc | grep $POD_NAME | awk '{print $1}'`"]
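The create script invoked by the postStart hook is not shown in this section; a minimal bash sketch of what create_node_port_svc.sh might do (the template path in SVC_TEMPLATE and the jsonpath queries are assumptions):

```shell
# Hypothetical sketch of create_node_port_svc.sh: derive a per-pod Service
# name from the node's external IPv4 address, then render and apply the
# Service template shown earlier. SVC_TEMPLATE is an assumed path.
create_node_port_svc() {
  local node_name node_ip

  # Find the node this pod landed on, then that node's external IPv4 address.
  node_name=$(kubectl get po "$POD_NAME" -o jsonpath='{.spec.nodeName}')
  node_ip=$(kubectl get no "$node_name" \
    -o jsonpath='{.status.addresses[?(@.type=="ExternalIP")].address}')

  # Dots become dashes so the name is a valid DNS label.
  export SVC_NAME="${POD_NAME}-svc-${node_ip//./-}"

  # Render the Service template and create the Service.
  envsubst < "$SVC_TEMPLATE" | kubectl apply -f -
}
```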
Two methods are reviewed for allocating public IPv4 addresses to game servers: (1) the public IPv4 address of the EC2 instance; and (2) an EC2 Elastic IP (EIP).
The EC2 instance public IPv4 address requires the instance to be launched in a public subnet whose route table has a route to an internet gateway. We use Karpenter to provision EC2 instances, so we configure
subnetSelector:
  karpenter.sh/subnet/discovery: lyra-usw2-public
and decorate the public subnets with the tag karpenter.sh/subnet/discovery: lyra-usw2-public.
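Tagging the subnets can be done with the AWS CLI; the subnet IDs passed in below are placeholders for your VPC's public subnets:

```shell
# Tag the public subnets so Karpenter's subnetSelector can discover them.
tag_public_subnets() {
  aws ec2 create-tags \
    --resources "$@" \
    --tags Key=karpenter.sh/subnet/discovery,Value=lyra-usw2-public
}

# Example: tag_public_subnets subnet-0aaa subnet-0bbb
```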
The Elastic IP option is beneficial if you require predictable public IPv4 address allocation, e.g., to mask public IPv4 addresses with your own DNS rather than .compute.amazonaws.com, or to mitigate network volumetric (layer 3) attacks with AWS Shield.
We use the karpenter.k8s.aws/v1alpha1 API to define an AWSNodeTemplate that provisions an Elastic IP on demand upon EC2 instance launch. The Elastic IP provisioning logic:
- Get one of the available Elastic IPs
elastic_ip=`aws ec2 describe-addresses --filters "Name=tag:game-name,Values=lyra" --query "Addresses[?AssociationId==null].AllocationId"`
- Create one if none exists
elastic_ip=`aws ec2 allocate-address --tag-specifications 'ResourceType=elastic-ip,Tags=[{Key=game-name,Value=lyra},{Key=Name,Value=lyra}]' --query "AllocationId"`
- Discover the EC2 instance ID
instance_id=`curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-id`
- Discover the elastic network interface to use for the Elastic IP using the EC2 instance ID
eni_id=`aws ec2 describe-network-interfaces --filters Name=tag:Name,Values=karpenter.sh/provisioner-name/lyra --query "NetworkInterfaces[?Attachment.InstanceId == '$instance_id'].NetworkInterfaceId"`
- Finally, associate the Elastic IP with the network interface
aws ec2 associate-address --network-interface-id $eni_id --allocation-id $elastic_ip
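The steps above can be sketched as one bash function run at instance launch (assumptions: the IMDSv2 token is already in $TOKEN, and --output text is added so the query results are plain strings):

```shell
# Sketch of the full EIP-attach flow for a freshly launched instance.
attach_eip() {
  local elastic_ip instance_id eni_id

  # Reuse an unassociated EIP tagged for this game, or allocate a new one.
  elastic_ip=$(aws ec2 describe-addresses \
    --filters "Name=tag:game-name,Values=lyra" \
    --query "Addresses[?AssociationId==null].AllocationId" --output text)
  if [ -z "$elastic_ip" ]; then
    elastic_ip=$(aws ec2 allocate-address \
      --tag-specifications 'ResourceType=elastic-ip,Tags=[{Key=game-name,Value=lyra},{Key=Name,Value=lyra}]' \
      --query "AllocationId" --output text)
  fi

  # Discover this instance's ID from the instance metadata service (IMDSv2).
  instance_id=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
    http://169.254.169.254/latest/meta-data/instance-id)

  # Find the ENI Karpenter attached to this instance.
  eni_id=$(aws ec2 describe-network-interfaces \
    --filters Name=tag:Name,Values=karpenter.sh/provisioner-name/lyra \
    --query "NetworkInterfaces[?Attachment.InstanceId == '$instance_id'].NetworkInterfaceId" \
    --output text)

  # Bind the EIP to the instance's network interface.
  aws ec2 associate-address --network-interface-id "$eni_id" --allocation-id "$elastic_ip"
}
```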