This repository has been archived by the owner on Jan 11, 2023. It is now read-only.

Docker version was downgraded in 1.9.x #2589

Closed
jalberto opened this issue Apr 4, 2018 · 19 comments

jalberto commented Apr 4, 2018

Is this a request for help?:
YES & BUG REPORT

Is this an ISSUE or FEATURE REQUEST? (choose one):
ISSUE

What version of acs-engine?:
0.14.5

My previous cluster, created with an older version of acs-engine, used a Docker CE version (I don't remember exactly which one); a new cluster created with the latest acs-engine version uses docker 1.13.

This is, again, a breaking change: 1.13 doesn't support multi-stage Dockerfiles, so CI systems fail.

How can I update docker without creating a new cluster and without destroying the existing one?

(see related: #2567)

For reference:

Version 17.03.x has been validated upstream since k8s 1.8.

Version 17.09 is supported upstream.

Multi-stage support begins with 17.05.
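
To make the breakage concrete, here is a minimal multi-stage Dockerfile; the FROM ... AS / COPY --from syntax below is exactly what docker < 17.05 rejects (image names are just examples):

    # build stage
    FROM golang:1.10 AS build
    WORKDIR /src
    COPY . .
    RUN go build -o /app .

    # runtime stage; COPY --from is the part docker 1.13 cannot parse
    FROM alpine:3.7
    COPY --from=build /app /app
    ENTRYPOINT ["/app"]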


jalberto commented Apr 4, 2018

@jackfrancis can you give me a hand here please?

jackfrancis (Member) commented

You can specify a custom version of docker in the kubernetesConfig of your api model. E.g.,

"kubernetesConfig": {
    "dockerEngineVersion": "17.05.*"
}
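
For placement: in a full acs-engine api model that block sits under properties.orchestratorProfile. A minimal sketch, with everything besides kubernetesConfig illustrative:

    {
      "apiVersion": "vlabs",
      "properties": {
        "orchestratorProfile": {
          "orchestratorType": "Kubernetes",
          "kubernetesConfig": {
            "dockerEngineVersion": "17.05.*"
          }
        }
      }
    }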

If you want to update the cluster in-place:

sudo vi /etc/apt/preferences.d/docker.pref
sudo apt-get update
sudo apt-get install -y docker-engine
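
The pref file holds an apt pin for the docker-engine package; something along these lines (the version string is an example and must match a package available in the repo acs-engine configured):

    Package: docker-engine
    Pin: version 17.05.*
    Pin-Priority: 550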


jalberto commented Apr 6, 2018

thanks @jackfrancis

Is it possible that a previous version of acs-engine installed a newer version of docker by default?

jackfrancis (Member) commented

Possible, but I don't recall that being the case.


jalberto commented Apr 9, 2018

It's interesting, because I never set up a specific version of docker, and yet in the previous cluster I have docker 10.x.

MarkTopping commented

Hey @jackfrancis

I'm amongst those trying to upgrade my docker runtime so that I can run build agents in AKS capable of supporting multi-stage Dockerfiles. Some searching has led me to this page.

I wonder if you might be able to elaborate on your guidance for specifying a custom version of docker in the kubernetesConfig.

My inference was that I should add this section to the Resource Manager template .json file and push it up to Azure via acs-engine.

I wasn't clear on exactly where to place it, though, since the template file you can generate when creating a new AKS resource via the Portal does not contain a kubernetesConfig section.

Omitting the rest for brevity, but I've placed it as shown in the template.json file:
"resources": [
{
"apiVersion": "2018-03-31",
"dependsOn": [],
"type": "Microsoft.ContainerService/managedClusters",
"location": "[parameters('location')]",
"name": "[parameters('resourceName')]",
"properties": {
"kubernetesVersion": "[parameters('kubernetesVersion')]",
"kubernetesConfig": {
"dockerEngineVersion": "17.06.0-ce"
},
"enableRBAC": false,
...

The AKS cluster was created successfully, but the docker runtime version on the resulting node is reported back as 1.13.1.

Any chance you could expand your guidance on this workaround?
P.S. I'm a Windows-based user, if that influences your response at all.

MarkTopping commented

As an extension to that last question, maybe I should mention that I've tried it here too:

"agentPoolProfiles": [
{
"name": "agentpool",
"osDiskSizeGB": "[parameters('osDiskSizeGB')]",
"count": "[parameters('agentCount')]",
"vmSize": "[parameters('agentVMSize')]",
"osType": "[parameters('osType')]",
"storageProfile": "ManagedDisks",
"kubernetesConfig": {
"containerRuntime": "docker",
"dockerEngineVersion": "17.05.*"
}

}
]

Still using api version "2018-03-31".

I guess the initial question, before the 'how', is: can you provide this kubernetesConfig section in the templates for AKS (AgentPoolOnly) builds? (In the API documentation I don't see it listed under this version or vlabs.) But then snippets of merge history, and issues like this one, offer hope that I can.

jackfrancis (Member) commented

Currently acs-engine only supports docker-engine (not docker CE) due to distribution licensing issues. Here's the current list of available versions:

https://github.com/Azure/acs-engine/blob/master/parts/k8s/kubernetesparams.t#L614

jackfrancis (Member) commented

@MarkTopping Correction to the above statement: there are some old CE versions in there, from before we stopped updating the list to avoid violating distribution licensing.

jackfrancis (Member) commented

#3213

MarkTopping commented

Thanks for responding, @jackfrancis. I feel encouraged by your response and by having seen 17.05.* in the list you reference. With respect to the second of my two comments above, you can see I referenced 17.05.* as my desired runtime, but alas the AKS cluster that was subsequently created still had 1.13.1 as its container runtime version.

Did I place the configuration values in the wrong section? I nested them under agentPoolProfiles in my last attempt.

jackfrancis (Member) commented

@MarkTopping It's possible we override any non-docker-engine installs due to said unresolved licensing issues, so I'd expect this to be broken until #3213 is resolved.

MarkTopping commented

Right, ok; I'll keep watch on #3213 then.
Thank you @jackfrancis

diwakar-s-maurya (Contributor) commented

Is it possible to get a particular docker version, e.g. 17.03.2, installed on the new nodes created by cluster-autoscaler too?
I have manually upgraded docker to 17.03.2 on all nodes, including the master, but the new nodes created by cluster-autoscaler always have docker 17.03.1. Where does cluster-autoscaler get this version info from?

jackfrancis (Member) commented

@diwakar-s-maurya It's probably because autoscaler is basing its node configuration on the original deployment template. Thanks for hanging in there while we navigate this annoying landscape.
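
One quick way to confirm what each node actually registered, using plain kubectl (nothing acs-engine-specific):

    # Print each node's name alongside the container runtime version its kubelet reported
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'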


pavelm commented Jun 18, 2018

Does anyone have an example of how to apply a specific version? I've created a cluster with terraform but am struggling to get the right ARM template. Is it possible to just apply the docker version configuration?

any help is much appreciated!

jackfrancis (Member) commented

@pavelm It would be easier to do this manually on the provisioned nodes after deployment. Manually editing the ARM templates is tricky, and would require you to refer to a publicly available download that matches the expected artifacts on the Azure CDN (which only has the docker-engine versions).
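
A sketch of that manual route on a node, assuming the apt repository acs-engine configured is still present (the package version string below is an example; pick one that apt actually lists):

    # See which docker-engine versions the configured repo offers
    apt-cache madison docker-engine
    # Install a specific one (--allow-downgrades in case a newer build is installed)
    sudo apt-get install -y --allow-downgrades docker-engine=17.05.0~ce-0~ubuntu-xenial
    sudo systemctl restart docker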


mnadeau2 commented Jul 7, 2018

I'm also one of those who followed the guidance from https://anthonychu.ca/post/vsts-agent-docker-kubernetes/ and quickly found out that my multi-stage builds were failing because of a very old docker runtime (very unexpected considering all the work in the portal around supporting a "first class" kubernetes update...). Could you point us to an alternative/preview/other solution for quickly setting up a vsts-agent in Azure that would work?

CecileRobertMichon (Contributor) commented

Moby is now the default starting with acs-engine 0.25.0: https://github.com/Azure/acs-engine/releases/tag/v0.25.0
