Description
Hi,
I just ran a `helm upgrade` from v1.0.1 to v1.0.2 using mostly default values:
```yaml
values:
  serviceAccount:
    create: false
    name: vpc-lattice-controller
  deployment:
    replicas: 2
```
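The upgrade itself was along these lines (exact invocation approximated from memory; the chart location is the one in the install guide linked below, and `my-values.yaml` carries the values above):
```sh
helm upgrade gateway-api-controller \
  oci://public.ecr.aws/aws-application-networking-k8s/aws-gateway-controller-chart \
  --version=v1.0.2 \
  --namespace aws-application-networking-system \
  -f my-values.yaml
```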
Now the new pods fail to start with this kube event:
```
3s Warning Failed pod/gateway-api-controller-aws-gateway-controller-chart-647859v68n9 Error: couldn't find key awsRegion in ConfigMap aws-application-networking-system/env-config
```
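For context, that is the event kubelet emits when a container env var references a missing ConfigMap key via `configMapKeyRef`, so presumably the controller Deployment has something like this (the env var name is my assumption; the ConfigMap name and key are from the event):
```yaml
env:
  - name: AWS_REGION          # assumed name, not copied from the chart
    valueFrom:
      configMapKeyRef:
        name: env-config      # ConfigMap named in the event above
        key: awsRegion        # the key kubelet could not find
```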
I investigated, and sure enough the defaulted empty fields are gone:
```yaml
apiVersion: v1
items:
- apiVersion: v1
  data:
    logLevel: info
  kind: ConfigMap
```
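(That output, with the `items:` wrapper, comes from listing the ConfigMaps in the namespace, along the lines of:)
```sh
kubectl get configmaps -n aws-application-networking-system -o yaml
```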
Originally the ConfigMap looked like this:
```yaml
apiVersion: v1
items:
- apiVersion: v1
  data:
    awsAccountId: ""
    awsRegion: ""
    clusterName: ""
    clusterVpcId: ""
    defaultServiceNetwork: ""
    latticeEndpoint: ""
    logLevel: info
  kind: ConfigMap
```
I updated my Helm values to:
```yaml
values:
  awsRegion: ""
  awsAccountId: ""
  clusterVpcId: ""
  clusterName: ""
  defaultServiceNetwork: ""
  latticeEndpoint: ""
  serviceAccount:
    create: false
    name: vpc-lattice-controller
  deployment:
    replicas: 2
```
and the helm upgrade succeeded and the v1.0.2 pods spun up.
So I am not sure: do the defaults in the chart's values.yaml need to be set to `""` instead of being left blank, or am I required to supply the `""` values myself?
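One thing I noticed while digging: in YAML, a blank value (`awsRegion:`) parses as null, while `awsRegion: ""` is an empty string, and a Helm template can distinguish the two. Purely as a guess at the kind of template that would explain this behavior (I have not read the chart's templates), something like the following would drop the key entirely when the default is blank/null, but render it as `""` when an empty string is supplied:
```yaml
# Hypothetical ConfigMap template excerpt, not from the actual chart:
data:
  logLevel: {{ .Values.logLevel | quote }}
  {{- if not (kindIs "invalid" .Values.awsRegion) }}
  awsRegion: {{ .Values.awsRegion | quote }}   # renders "" for an empty string
  {{- end }}
```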
I followed https://www.gateway-api-controller.eks.aws.dev/guides/deploy/#controller-installation and, based on the comments below, was under the impression that I did not need to supply these values, since we are on EKS, don't want to create a defaultServiceNetwork, and expect IMDS to be available:
```sh
# awsRegion, clusterVpcId, awsAccountId are required for case IMDS is not available.
--set=awsRegion= \
--set=clusterVpcId= \
--set=awsAccountId= \
# clusterName is required except for EKS cluster.
--set=clusterName= \
# When specified, the controller will automatically create a service network with the name.
--set=defaultServiceNetwork=my-hotel
```
Please advise, thanks!