### Description

Instead of spinning up a GPU nodegroup, spin up a CPU nodegroup with Elastic Inference (GPU-accelerated inference) attached. A rough sketch of how an accelerator is attached at the EC2 API level is included below.

### Additional Context

* https://aws.amazon.com/machine-learning/elastic-inference/
* [Optimizing TensorFlow model serving with Kubernetes and Amazon Elastic Inference](https://aws.amazon.com/blogs/machine-learning/optimizing-tensorflow-model-serving-with-kubernetes-and-amazon-elastic-inference)
* https://github.com/weaveworks/eksctl/issues/643
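
For reference, here is a minimal sketch of how an Elastic Inference accelerator is attached to a plain CPU instance at the EC2 API level, which is roughly what a nodegroup's launch configuration would need to express. This is not eksctl behaviour today; the AMI ID, instance type, subnet and accelerator type below are placeholders, not proposed defaults.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Launch a plain CPU instance with a GPU-backed Elastic Inference accelerator attached.
# All identifiers below are placeholders for illustration only.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder: an EKS-optimized (CPU) AMI
    InstanceType="m5.large",              # ordinary CPU instance, no GPU
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # placeholder subnet
    ElasticInferenceAccelerators=[
        {"Type": "eia2.medium", "Count": 1},
    ],
)
print(response["Instances"][0]["InstanceId"])
```

The accelerator still needs the usual Elastic Inference plumbing (VPC endpoint, security groups, instance role permissions) to be usable by the serving container; the snippet only shows the attachment at launch.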