An experiment in testing the performance difference between container-to-container communications when both containers are co-located with AWS Fargate.
Install runtime and dev dependencies with:
```shell
poetry install --no-root
```

Run the local app via:

```shell
poetry run uvicorn example:app --reload
```

For a production-level workload, run:

```shell
poetry run gunicorn -k uvicorn.workers.UvicornWorker example:app
```

Open http://localhost:8000 to see the response.
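The README doesn't show the application module itself, but since uvicorn serves any ASGI callable, `example:app` could be as small as the following sketch (the module contents and response payload are assumptions, not taken from the repository):

```python
# example.py - minimal ASGI app sketch; the real example:app may differ.
import json


async def app(scope, receive, send):
    """Respond to any HTTP request with a small JSON body."""
    assert scope["type"] == "http"
    body = json.dumps({"message": "hello"}).encode()
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"application/json")],
    })
    await send({"type": "http.response.body", "body": body})
```

Run it with `poetry run uvicorn example:app --reload` exactly as above.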
Using docker-compose, we can create the same images that we would expect to
run in the target ECS environment.
Build and run with:

```shell
docker-compose up
```
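For orientation, each service in the compose file pairs an application container with an nginx proxy. A sketch of what one such pair might look like (the `tcp_nginx` name comes from the build commands later in this README; the `tcp_app` service name, image details, and port mapping are illustrative assumptions, not the repository's actual file):

```yaml
# docker-compose.yml (sketch of one of the three service pairs)
services:
  tcp_app:
    build:
      context: .
      dockerfile: tcp.Dockerfile   # assumed; matches the Dockerfile names used below
  tcp_nginx:
    build:
      context: .                   # nginx proxy image, pushed to ECR later as tcp:nginx
    ports:
      - "8080:80"                  # matches the curl test port below
    depends_on:
      - tcp_app
```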
Test with:

```shell
# tcp service
curl http://localhost:8080

# combined service
curl http://localhost:8081

# sharedvol service
curl http://localhost:8082
```

All three should respond similarly, with the request flowing through the nginx container to the Python application container and returning a JSON response.
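Since the point of the experiment is to compare the three proxying configurations, a small timing script can replace eyeballing curl output. This is a sketch, not part of the repository; the port-to-service mapping is taken from the curl commands above:

```python
# bench.py - rough mean-latency comparison across the three local services.
import time
import urllib.request

# Ports assumed from the curl examples above.
ENDPOINTS = {
    "tcp": "http://localhost:8080",
    "combined": "http://localhost:8081",
    "sharedvol": "http://localhost:8082",
}


def time_request(url: str, samples: int = 50) -> float:
    """Return the mean request latency in milliseconds over `samples` requests."""
    start = time.perf_counter()
    for _ in range(samples):
        with urllib.request.urlopen(url) as resp:
            resp.read()
    return (time.perf_counter() - start) / samples * 1000.0


if __name__ == "__main__":
    for name, url in ENDPOINTS.items():
        print(f"{name}: {time_request(url):.2f} ms mean")
```

Note this measures end-to-end latency from the host, so it captures the proxy hop but also Docker's port publishing; treat the numbers as relative, not absolute.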
Make sure to run `docker-compose down` to stop and remove any running containers.
Read about the Copilot concepts here: https://aws.github.io/copilot-cli/docs/concepts/overview/
For the purpose of this test, we'll create one Environment named test, and
deploy multiple Services with the different configurations we want to compare.
We need to create the Environment before we can deploy Services to it.
```shell
copilot env init --name test --profile <named profile> --default-config
```

Again, in a production environment, you may want to audit and/or customize these choices.
As the Copilot CLI doesn't yet know how to build and push a "sidecar" (proxy) container, we have to build and push it ourselves, then reference it in the manifest. See:
- https://aws.github.io/copilot-cli/docs/developing/sidecars/
- https://aws.github.io/copilot-cli/docs/manifest/lb-web-service/#http-target-container
The steps below show where and when to push the sidecar for each service.
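For reference, the sidecar wiring in a manifest looks roughly like this. The `target_container` and `sidecars` keys follow the Copilot docs linked above; the specific ports and paths here are illustrative guesses, not copied from the repository's copilot/tcp/manifest.yml:

```yaml
# copilot/tcp/manifest.yml (sketch)
name: tcp
type: Load Balanced Web Service

image:
  build: tcp.Dockerfile
  port: 8000            # assumed app port

http:
  path: '/'
  # Route ALB traffic to the nginx sidecar instead of the main container.
  target_container: nginx

sidecars:
  nginx:
    port: 80            # assumed proxy port
    # The image we build and push manually in the steps below.
    image: <account-id>.dkr.ecr.<region>.amazonaws.com/ecs-network-perf/tcp:nginx
```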
Initialize the tcp service:
```shell
copilot svc init --name tcp --svc-type "Load Balanced Web Service" --dockerfile tcp.Dockerfile
```

This will initialize from the existing copilot/tcp/manifest.yml file.
Before we can deploy the service, we need to push the nginx proxy sidecar image:
```shell
docker compose build tcp_nginx
docker push "$COPILOT_AWS_ACCOUNT_ID.dkr.ecr.$COPILOT_AWS_REGION.amazonaws.com/ecs-network-perf/tcp:nginx"
```

Deploy the tcp Service to the test Environment:

```shell
copilot svc deploy --name tcp --env test
```

After about 5 minutes or so, you'll have a URL that looks something like:
http://ecs-n-Publi-<some weird set of characters>.us-east-1.elb.amazonaws.com.
And the service should be available!
Clean up the service with:
```shell
copilot svc delete --name tcp --env test
```

Initialize and deploy the combined service:

```shell
copilot svc init --name combined --svc-type "Load Balanced Web Service" --dockerfile combined.Dockerfile
copilot svc deploy --name combined --env test
```

Keep in mind: the output URL should have the service name as part of the URL,
like http://..../combined, as all services share the same Application Load Balancer.
Clean up with:

```shell
copilot svc delete --name combined --env test
```

Initialize the sharedvol service, then build and push its nginx sidecar:

```shell
copilot svc init --name sharedvol --svc-type "Load Balanced Web Service" --dockerfile sharedvol.Dockerfile
docker compose build sharedvol_nginx
docker push "$COPILOT_AWS_ACCOUNT_ID.dkr.ecr.$COPILOT_AWS_REGION.amazonaws.com/ecs-network-perf/sharedvol:nginx"
```

Deploy it, and clean up when done:

```shell
copilot svc deploy --name sharedvol --env test
copilot svc delete --name sharedvol --env test
```