Hi guys,
I just found an abnormal behaviour:
I am running 3 Nomad nodes in client mode, each running a RabbitMQ container with static ports. For my test (and probably for production as well), my max_parallel (in the update stanza) is equal to count (in the group stanza).

When I update the job file and apply it, Nomad tries to start 3 new containers before stopping the old RabbitMQ instances. Unfortunately, because I only have 3 nodes, the ports are already busy: the new containers cannot start, but the old instances are killed anyway.

Could you implement a retry, or handle this special case? It would be a pity to have to start 3 more VMs just for a rolling upgrade!
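To illustrate the situation described above, here is a minimal sketch of the job spec involved (the job name, image, and port values are hypothetical, and the exact stanza layout may vary between Nomad versions). With count = 3 and max_parallel = 3, all three replacement allocations are placed at once and race the old ones for the same static host ports; lowering max_parallel is the usual workaround, at the cost of a slower rollout:

```hcl
job "rabbitmq" {
  update {
    # Equal to count reproduces the reported failure: all new
    # containers are started before any old one is stopped, but the
    # static ports are still held by the old allocations.
    # Setting this to 1 replaces instances one at a time instead.
    max_parallel = 3
  }

  group "mq" {
    # One RabbitMQ instance per client node.
    count = 3

    task "rabbitmq" {
      driver = "docker"

      config {
        image = "rabbitmq:3"
      }

      resources {
        network {
          # Static port: only one allocation per node can bind it,
          # so a new container cannot start while the old one lives.
          port "amqp" {
            static = 5672
          }
        }
      }
    }
  }
}
```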