From 15120af91f4c046298d4b03ec8c9e8caa9473293 Mon Sep 17 00:00:00 2001
From: Claudia
Date: Fri, 5 Aug 2022 12:40:26 +0100
Subject: [PATCH] Apply suggestions from code review

Co-authored-by: Richard Case <198425+richardcase@users.noreply.github.com>
---
 README.md     | 4 ++--
 cmd/README.md | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 9aceaa9..a96ade6 100644
--- a/README.md
+++ b/README.md
@@ -57,8 +57,8 @@ The sequence of events for a full run is:
 - E2E section (streamed over SSH from the management host)...
   - Create a kind cluster
   - Initialise the cluster with required CAPI controllers
-  - Generate a template for the CAPMVM workload
-  - Apply the workload to the kind cluster
+  - Generate a template for the CAPMVM workload cluster
+  - Apply the workload cluster yaml to the kind cluster
   - Ensure all supplied flintlock hosts have been used
   - Deploy an application to the workload cluster
   - Teardown
diff --git a/cmd/README.md b/cmd/README.md
index db12ef9..1cbe023 100644
--- a/cmd/README.md
+++ b/cmd/README.md
@@ -13,7 +13,7 @@ There are 2 reasons it exists:
 1. To save time on networking complexity during my initial stab at these tests,
    I chose not to set it up so that the CAPI management cluster could be run
    from outside the Equinix infra network.
-   _Technically_ then can be since the flintlock servers are bound to a public
+   _Technically_ it could be since the flintlock servers are bound to a public
    interface, but the next hurdle then would have been the control plane load
    balancer address: I would have had to figure out a way to dynamically
    reserve an IPv4 address and then ensure that it was allocated to the workload cluster.