Docker version was downgraded in 1.9.x #2589
@jackfrancis can you give me a hand here please?
You can specify a custom version of docker in the `kubernetesConfig` section of your api model.
If you want to update the cluster in-place:
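For anyone landing here later, a minimal sketch of where that setting lives in an acs-engine api model. The `dockerEngineVersion` field name and the surrounding profile structure are assumptions based on acs-engine api models of that era, not something confirmed in this thread:

```json
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "kubernetesConfig": {
        "dockerEngineVersion": "17.05.*"
      }
    },
    "masterProfile": { "count": 1, "dnsPrefix": "mycluster", "vmSize": "Standard_D2_v2" },
    "agentPoolProfiles": [
      { "name": "agentpool1", "count": 2, "vmSize": "Standard_D2_v2" }
    ]
  }
}
```

Note that `kubernetesConfig` is nested under `orchestratorProfile`, not under `agentPoolProfiles`.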
Thanks @jackfrancis. Is it possible that a previous version of acs-engine installed a newer version of docker by default?
Possible, but I don't recall that being the case.
It's interesting because I never set up a specific version of docker, and yet in a previous cluster I have docker 10.x.
Hey @jackfrancis, I'm among those trying to upgrade my docker runtime so that I can run build agents in AKS capable of supporting multi-stage Dockerfiles. Some searching has led me to this page. I wonder if you might be able to elaborate on your guidance for specifying a custom version of docker in the kubernetesConfig. My inference was that I add this section to the Resource Manager template .json file and push it up to Azure via the ACS Engine. I wasn't clear on where exactly to place it, though, since the template file you can generate when creating a new AKS resource via the Portal does not contain a kubernetesConfig section. Omitting the rest for brevity, but I've placed it as shown in the template.json file: The AKS cluster was successfully created, but the docker runtime version on the resulting node is reported back as 1.13.1. Any chance you could expand your guidance on this workaround?
Further to that last question, maybe I should mention that I've tried it here too: "agentPoolProfiles": [ Still using api version "2018-03-31". I guess the initial question, before the 'how', is: can you provide this kubernetesConfig section in the templates for AKS (AgentPoolOnly) builds? (In the API documentation I don't see it listed under this version or vlabs.) But snippets of merge history and issues like this one offer hope that I can?
Currently acs-engine only supports docker-engine (not docker CE) due to distribution licensing issues. Here's the current list of available versions: https://github.com/Azure/acs-engine/blob/master/parts/k8s/kubernetesparams.t#L614
@MarkTopping Correction to the above statement: there are some old CE versions in there from before we stopped updating due to not wanting to violate distribution licensing.
Thanks for responding @jackfrancis. I feel encouraged by your response and by having seen 17.05.* in the list you reference. With respect to the second of my two comments above, you can see I referenced 17.05.* as my desired runtime, but alas the AKS cluster that was subsequently created still had 1.13.1 as its container runtime version. Did I place the configuration values in the wrong section? I nested it under agentPoolProfiles in my last attempt at this.
@MarkTopping It's possible we override any non-docker-engine installs due to said unresolved licensing issues, so I'd expect this to be broken until #3213 is resolved.
Right ok; I'll keep watch on #3213 then.
Is it possible to get a particular docker version, e.g. 17.03.2, installed on the new nodes created by cluster-autoscaler too?
@diwakar-s-maurya It's probably because autoscaler is basing its node configuration on the original deployment template. Thanks for hanging in there while we navigate this annoying landscape.
Does anyone have an example of how to apply a specific version? I've created a cluster with Terraform but am struggling to get the right ARM template. Is it possible to just apply the docker version configuration? Any help is much appreciated!
@pavelm It would be easier to do this manually on the provisioned nodes after deployment. Manually editing the ARM templates is tricky, and would require you to refer to a publicly available download that matches the expected artifacts on the Azure CDN (which only has the docker-engine versions).
I'm also one of those that followed the guidance from https://anthonychu.ca/post/vsts-agent-docker-kubernetes/ and quickly found out that my multi-stage builds were failing because of a very old docker runtime (very unexpected considering all the work in the portal around supporting a "first class"
Moby is now the default starting at acs-engine 0.25.0 https://github.com/Azure/acs-engine/releases/tag/v0.25.0 |
Is this a request for help?:
YES & BUG REPORT
Is this an ISSUE or FEATURE REQUEST? (choose one):
ISSUE
What version of acs-engine?:
0.14.5
My previous cluster, created with an older version of acs-engine, used a docker CE version (I don't remember exactly which); a new cluster created with the latest acs-engine version uses docker 1.13.
This is, again, a breaking change, as 1.13 doesn't support multi-stage Dockerfiles, so CI systems fail.
How can I update docker without creating a new cluster and without destroying it?
(see related: #2567)
For reference:
Version 17.03.x has been validated upstream since k8s 1.8:
Version 17.09 is supported upstream:
Multi-stage support begins at 17.05.
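Since the whole thread hinges on whether a node's docker version is new enough for multi-stage builds (17.05 or later), here is a small sketch of a shell check you could run on a node. `supports_multistage` is a hypothetical helper name, and the version strings are illustrative:

```shell
#!/bin/sh
# Returns success (0) when the given Docker version supports
# multi-stage builds, i.e. is 17.05 or later.
supports_multistage() {
  required="17.05"
  # sort -V orders version strings numerically; if the required version
  # sorts first (or ties), the candidate version is new enough.
  [ "$(printf '%s\n%s\n' "$required" "$1" | sort -V | head -n1)" = "$required" ]
}

# On a node you would feed it the live daemon version, e.g.:
#   supports_multistage "$(docker version --format '{{.Server.Version}}')"
supports_multistage "17.05.0-ce" && echo "17.05.0-ce: multi-stage OK"
supports_multistage "1.13.1" || echo "1.13.1: too old for multi-stage"
```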