waitSignal cluster.yaml section is not respected in the template #371
Comments
Hi @gianrubio. This is not really what we need. The issue is that the waitSignal is not working and is causing the stack to fail, because it does not get a valid response from all the worker nodes even when they are properly created. We want to be able to disable it using:

```yaml
waitSignal:
  enabled: false
```

But that does not work, so we have to manually remove the section from the stack template. Thanks a lot. |
@Camsteack I accidentally pushed the code; I haven't finished it yet. Sorry. |
Hi! Just a quick reply, but as the comment implies, please try changing `worker.nodePools[].waitSignal` to disable it for a node pool. |
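A minimal sketch of what that per-node-pool setting might look like in cluster.yaml; only the `worker.nodePools[].waitSignal` path comes from the comment above, and the pool name is a hypothetical placeholder:

```yaml
worker:
  nodePools:
    - name: nodepool1    # hypothetical pool name
      waitSignal:
        enabled: false   # disable the CloudFormation wait signal for this pool
```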
We will try that. Thanks a lot @mumoshu and @gianrubio |
@javapapo @Camsteack Did it work for you? Basically, you have to differentiate the top-level `waitSignal` from the per-node-pool `worker.nodePools[].waitSignal` setting. |
No, unfortunately |
@javapapo The fix is included in v0.9.5-rc.2. |
Thank you for your hard work and the replies! Will try it asap |
So on the root level of my cluster.yaml I have:

```yaml
waitSignal:
  enabled: false
  maxBatchSize: 1
```

In the generated template, under:

```json
"Resources": {
  "Controllers": {
    "Type": "AWS::AutoScaling::AutoScalingGroup",
```

I see:

```json
"UpdatePolicy": {
  "AutoScalingRollingUpdate": {
    "MinInstancesInService": "2",
    "MaxBatchSize": "1",
    "WaitOnResourceSignals": "true",
    "PauseTime": "PT15M"
  }
}
```

Or should I just place it under:

```yaml
controller:
  autoScalingGroup:
    waitSignal:
      enabled: false
      maxBatchSize: 1
```
|
Apart from the above, the cluster creation was successful, so I think the issue is resolved |
Yes, this is correct. |
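For reference, a sketch combining both placements discussed above. Treat it as illustrative only: the top-level block is taken from the comments, the controller-level nesting is exactly as asked about above, and which one applies may depend on your kube-aws version:

```yaml
# Top-level setting, applied as the default (from the comments above)
waitSignal:
  enabled: false
  maxBatchSize: 1

# Controller-level placement, as asked about above
controller:
  autoScalingGroup:
    waitSignal:
      enabled: false
      maxBatchSize: 1
```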
Assuming that you have the following section configured in your cluster.yaml:
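(The snippet itself did not survive the copy; from the discussion above, it was presumably the top-level wait-signal switch:)

```yaml
waitSignal:
  enabled: false
```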
Unfortunately, in the generated CloudFormation template (after the stack is rendered), or even in the intermediate template, the section is always generated. For example, looking at `node-pool.json.tmpl`, it seems that this part is always rendered into the generated `stack.json`. Since this part is causing problems on our AWS account/setup, we need to manually remove it from `node-pool.json.tmpl` so that it is not rendered at all. Thanks for your time, and many thanks for your great tool!