Load balancers are not torn down after cluster deletion #26
Uninstalling the gaffer Helm chart before destroying the cluster correctly deletes the application load-balancers and the associated target groups. Running …
Just trying to understand what is missing here - it sounds like the CDK construct that is used to deploy a Helm chart will correctly uninstall the chart when you run …
Thanks for the comment - yes, that's right. I think adding the dependency would fix it, but maybe it's not straightforward to do that dynamically at runtime? If not, it would mean a change to the gaffer chart that couples it to the AWS ALB ingress controller - at the moment the Kind deployment of this chart uses an Nginx implementation. Maybe there's a better way; I'm a Helm novice!
Yes, though as @m29827 says, the graphs are added dynamically by users via the REST API. At the moment all the graphs are deployed to the default namespace, so it could be a case of uninstalling all Helm releases in the default namespace before uninstalling the ingress controller. It might be possible to do this using a custom resource. If not, then we should add a note to the docs reminding administrators to uninstall all graphs before destroying the stack for this reason.
Ahh I see - I incorrectly assumed that the project deployed 1 cluster per graph (I haven't actually looked at the code properly :S). In that case everything would be deployed via a single CloudFormation stack, so you would just need to add a DependsOn to the Gaffer Helm Chart that references the ALB Helm Chart. As you are deploying multiple graphs outside of CloudFormation, you will likely need to add a Custom Resource instead (see the sketch below). That can be backed by a Lambda function which detects when it is being told to delete and calls an API endpoint to destroy/uninstall all the deployed graphs. In addition to ALBs, you will probably also find that EBS volumes are currently being left behind too.
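A minimal CDK sketch of both suggestions, purely for illustration - the construct IDs, chart values and the `uninstall-graphs` Lambda asset are assumptions and not taken from this repo's code:

```ts
import * as cdk from "@aws-cdk/core";
import * as eks from "@aws-cdk/aws-eks";
import * as lambda from "@aws-cdk/aws-lambda";
import * as cr from "@aws-cdk/custom-resources";

export class IngressCleanupSketch extends cdk.Construct {
  constructor(scope: cdk.Construct, id: string, cluster: eks.Cluster) {
    super(scope, id);

    // Hypothetical ALB ingress controller chart.
    const albChart = new eks.HelmChart(this, "AlbIngressController", {
      cluster,
      chart: "aws-load-balancer-controller",
      repository: "https://aws.github.io/eks-charts",
      namespace: "kube-system",
    });

    // Static case: a Gaffer chart deployed by the stack itself can declare a
    // dependency, so CloudFormation uninstalls it before the controller.
    const gafferChart = new eks.HelmChart(this, "Gaffer", {
      cluster,
      chart: "gaffer",
      repository: "https://example.com/charts", // placeholder repository
    });
    gafferChart.node.addDependency(albChart);

    // Dynamic case: graphs installed at runtime via the REST API are outside
    // CloudFormation, so a Lambda-backed custom resource can uninstall them
    // all when its Delete event fires during stack teardown.
    const uninstallGraphsFn = new lambda.Function(this, "UninstallGraphsFn", {
      runtime: lambda.Runtime.PYTHON_3_8,
      handler: "uninstall_graphs.handler", // hypothetical handler
      code: lambda.Code.fromAsset("lambdas/uninstall-graphs"), // hypothetical asset
    });

    const provider = new cr.Provider(this, "UninstallGraphsProvider", {
      onEventHandler: uninstallGraphsFn,
    });

    const uninstallAll = new cdk.CustomResource(this, "UninstallAllGraphs", {
      serviceToken: provider.serviceToken,
    });

    // Because the custom resource depends on the ALB chart, CloudFormation
    // deletes it (triggering the Lambda) before removing the controller.
    uninstallAll.node.addDependency(albChart);
  }
}
```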
Remove application load-balancers and target groups when cluster deleted. (#39)

* gh-26 Remove application load-balancers and target groups when cluster deleted.
* gh-26 Removing *.pyc files and ignoring __pycache__ directory
* gh-26 Uninstall graphs is now asynchronous and uses the delete graph SQS queue to initiate deletion.
* gh-26 Code review comments: reverting changes to generated Accumulo passwords.
* gh-26 Uplifting cdk version
* gh-26 Correcting add_graph.py
* gh-26 Fixing bug caused by merging gh-35 changes.

Co-authored-by: d47853 <d47853@users.noreply.github.com>
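For illustration, a rough sketch of what queueing the asynchronous uninstall might look like - the queue URL, message shape and function name are assumptions rather than the actual implementation:

```ts
import { SQS } from "aws-sdk";

const sqs = new SQS();

// Hypothetical helper: enqueue one delete message per deployed graph so a
// worker can uninstall each Helm release before the ingress controller goes.
export async function queueGraphDeletions(queueUrl: string, graphNames: string[]): Promise<void> {
  for (const graphName of graphNames) {
    await sqs
      .sendMessage({
        QueueUrl: queueUrl,
        MessageBody: JSON.stringify({ graphName }), // assumed message shape
      })
      .promise();
  }
}
```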
Merged into develop.
* gh-26 Remove application load-balancers and target groups when cluster deleted.
* gh-26 Removing *.pyc files and ignoring __pycache__ directory
* gh-40 Remove EBS volumes when cluster deleted.
* gh-40 Remove EBS Volumes when graph uninstalled / deleted.
* gh-40 Remove volumes with kubectl instead of the aws-sdk
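A rough sketch of the kubectl-based volume cleanup mentioned above, assuming a label selector and namespace that are not taken from the actual code:

```ts
import { execFileSync } from "child_process";

// Hypothetical cleanup step: delete the PersistentVolumeClaims left behind by
// an uninstalled graph so the backing EBS volumes are released. The label
// selector and default namespace are assumptions for illustration only.
export function deleteGraphVolumes(graphName: string, namespace = "default"): void {
  execFileSync("kubectl", [
    "delete",
    "pvc",
    "--namespace", namespace,
    "--selector", `app.kubernetes.io/instance=${graphName}`,
  ]);
}
```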
When the cluster is torn down as the stack is deleted, the load balancers and their target groups remain. This probably happens because the ALB Helm chart is torn down before the graphs have unregistered.