
[EKS] CoreDNS v1.7.0 fails to start after upgrading cluster to Kubernetes 1.18 #1115

Closed · gitfool opened this issue Oct 14, 2020 · 14 comments
Labels: EKS (Amazon Elastic Kubernetes Service), Proposed (Community submitted issue)

@gitfool

gitfool commented Oct 14, 2020

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?

I upgraded an existing EKS cluster from Kubernetes 1.17 to Kubernetes 1.18 and followed the steps for updating an Amazon EKS cluster Kubernetes version. I updated the VPC CNI and KubeProxy images without incident, however the CoreDNS image fails to start with the following in the logs:

plugin/kubernetes: /etc/coredns/Corefile:6 - Error during parsing: unknown property 'upstream'
stream closed 

coredns config map:

apiVersion: v1
kind: ConfigMap
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }

Are you currently working around this issue?

Yes, by reverting to CoreDNS v1.6.6.

@gitfool gitfool added the Proposed Community submitted issue label Oct 14, 2020
@ueokande

The upstream option is no longer supported in CoreDNS v1.7.0.

plugin/kubernetes: Remove already-deprecated options resyncperiod and upstream (coredns/coredns#3737)
https://coredns.io/2020/06/15/coredns-1.7.0-release/

I cannot find an upgrade guide for the CoreDNS ConfigMap in AWS's documentation.

@gitfool
Author

gitfool commented Oct 14, 2020

@ueokande good to know, thanks! I deleted the upstream option and it works.
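
For reference, the working kubernetes block after the fix is identical to the one in the ConfigMap above with only the upstream line removed:

        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }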

@mikestef9 mikestef9 added the EKS Amazon Elastic Kubernetes Service label Oct 14, 2020
@iamsudip

iamsudip commented Oct 14, 2020

Yes, an upgrade guide would be helpful. I went through this to figure out the correct config: coredns/deploy.sh

@ueokande

ueokande commented Oct 14, 2020

see also awsdocs/amazon-eks-user-guide#212

@mikestef9
Contributor

Thanks for reporting. We have updated our user guide with instructions to remove the upstream directive as part of upgrading to CoreDNS 1.7.0.

https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html
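
As the next comment notes, the documented fix is a manual edit of the ConfigMap; in practice it amounts to something like:

kubectl edit configmap coredns -n kube-system
# in the editor, delete the single line reading "upstream" inside the kubernetes block, then save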

@Ghazgkull

@mikestef9 Thank you for this, but the documentation only includes manual steps for fixing the configmap (kubectl edit...). How are customers who manage infrastructure via automation intended to handle this?

@ghdiska

ghdiska commented Oct 15, 2020

You can run a kubectl patch as one of your IaC steps.

@Ghazgkull

Yes. I'm working on that now. Unfortunately, the change required here is a surgical edit inside the content of a multi-line string value. So it's not a straightforward patch operation.

@Ghazgkull

Ghazgkull commented Oct 15, 2020

Here's a working bash snippet which fetches the configmap, strips out the "upstream" line, and patches the result back into the configmap. Note that it requires kubectl, jq, and sed.

UPDATED_COREFILE="$(kubectl get configmap -n kube-system coredns -o json | jq .data.Corefile | sed 's/\\n[ ]*upstream\\n/\\n/g')"
kubectl patch configmap -n kube-system coredns -p "{\"data\": { \"Corefile\": $UPDATED_COREFILE }}"
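
To confirm the patch took effect, one option (not part of the original comment) is to re-read the Corefile and check that the directive is gone:

kubectl get configmap -n kube-system coredns -o jsonpath='{.data.Corefile}' | grep upstream
# no output (and a non-zero exit code) means the upstream line has been removed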

@ghdiska

ghdiska commented Oct 15, 2020

You don't need jq, you can filter the output with jsonpath. Something like -o=jsonpath="{$.data.Corefile}"

@Ghazgkull

You don't need jq, you can filter the output with jsonpath. Something like -o=jsonpath="{$.data.Corefile}"

You're probably right, but again it's not that simple. :) Your suggestion outputs a multiline string, which won't feed into kubectl patch the way you want it to. If you get a working example without jq, I would be interested though.

@ghdiska

ghdiska commented Oct 16, 2020

This example works for me on a test config map (sorry, I can't try it on a k8s 1.18 cluster), but I think if you can use jq it's clearer:

UPDATED_COREFILE="$(kubectl get cm/coredns -n kube-system -o=jsonpath='"{.data.Corefile}"' | sed -z "s/\n/\\\n/g" | sed 's/\\n[ ]*upstream\\n/\\n/g')"
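
Assuming the jsonpath/sed variant produces the same quoted, JSON-escaped string as the jq version, it should slot into the same patch command from the earlier comment (untested here):

kubectl patch configmap -n kube-system coredns -p "{\"data\": { \"Corefile\": $UPDATED_COREFILE }}"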

@martinwoods

martinwoods commented Mar 10, 2021

We had the same issue recently when updating our EKS cluster from v1.17 to v1.18, and we also like to automate this type of thing as much as possible.

As part of our automation we use Octopus Deploy to manage our EKS cluster deployments, with many of the process steps written in PowerShell.

The below borrows a wee bit from the bash comments by @Ghazgkull and @ghdiska (thanks).

I'm NO PowerShell expert, but this is what I cobbled together yesterday to handle it:

### If you originally deployed your cluster on Kubernetes 1.17 or earlier, then you may need to remove a discontinued term from your CoreDNS manifest ###

### START ###
# Check to see if your CoreDNS manifest has the line "upstream"
$checkCoreDnsUpstream = kubectl get configmap coredns -n kube-system -o jsonpath='{$.data.Corefile}' | Select-String "upstream"


if (! ([string]::IsNullOrEmpty($checkCoreDnsUpstream)))
# If the value of $checkCoreDnsUpstream is NOT null
# then patch the coredns configmap to remove the line containing the string "upstream"
{
    # Print the values of $checkCoreDnsUpstream
    $checkCoreDnsUpstreamToStringTrim = ($checkCoreDnsUpstream.ToString()).Trim()
    Write-Verbose "The variable checkCoreDnsUpstream does NOT contain NULL, it has the string $checkCoreDnsUpstreamToStringTrim"
    
    # Store the json from the coredns configmap under data.Corefile in an array,
    # which contains the line upstream
    $dataCorefile = kubectl get cm/coredns -n kube-system -o=jsonpath='"{.data.Corefile}"'

    # Iterate over the array variable $dataCorefile,
    # to create a new array to remove upstream
    $newDataCorefile = @()
    foreach ($i in $dataCorefile)
    {
        if ($i.Trim() -ne "upstream")
        {
            $newDataCorefile += $i
        }
    }

    # Convert $newDataCorefile to Json, edit to fit syntax when patching back into coredns configmap
    $updatedCorefile = ($newDataCorefile | ConvertTo-Json -Compress ).replace('","',"\n").replace("[","").replace('"]','\n"').replace('"','')
   

    Write-Verbose "New Json block to be patched to coredns data.Corefile:" 
    Write-Verbose "$updatedCorefile"


    # Patch coredns configmap
    kubectl patch configmap -n kube-system coredns -p "{ \`"data\`": { \`"Corefile\`": \`"$updatedCorefile\`" }}"

} else {
    Write-Verbose "The variable checkCoreDnsUpstream contains a NULL value, nothing to do here skipping"
}
### END ###

If anyone has a better grasp of PowerShell syntax and thinks this script can be refactored, feel free to amend, comment and make it better.

Thanks
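
For anyone automating this in bash rather than PowerShell, here is a minimal sketch of the same check-then-patch flow, combining the earlier snippets from @Ghazgkull and @ghdiska (assumes kubectl, grep, jq and sed; it only patches when the deprecated directive is still present):

if kubectl get configmap coredns -n kube-system -o jsonpath='{.data.Corefile}' | grep -q upstream; then
    # strip the upstream line out of the JSON-escaped Corefile and patch it back
    UPDATED_COREFILE="$(kubectl get configmap -n kube-system coredns -o json | jq .data.Corefile | sed 's/\\n[ ]*upstream\\n/\\n/g')"
    kubectl patch configmap -n kube-system coredns -p "{\"data\": { \"Corefile\": $UPDATED_COREFILE }}"
else
    echo "Corefile has no upstream directive; nothing to do"
fi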

@stafot

stafot commented Apr 27, 2021

@mikestef9 Some people may bump two versions at once, so in my understanding it would be great to have the upstream removal instructions in the documentation for all versions after 1.7.0, where it starts to fail.
