
coredns-1.8.yaml not found #737

Open
rlratcliffe opened this issue May 28, 2023 · 9 comments

Comments

@rlratcliffe

Working on the first step of Deploying the DNS Cluster Add-on, I tried both

kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.8.yaml
and
kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml

and received error: unable to read URL "https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.8.yaml", server reported 404 Not Found, status code=404

@exdial

exdial commented May 30, 2023

and received error: unable to read URL "https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.8.yaml", server reported 404 Not Found, status code=404

Use the Helm chart instead.

$ helm repo add coredns https://coredns.github.io/helm
$ helm --namespace=kube-system install coredns coredns/coredns
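
If the install goes through, helm can report the release status (just a quick sanity check, not part of the original guide):

$ helm --namespace=kube-system status coredns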

@hkz-aarvesen

hkz-aarvesen commented Jun 1, 2023

I was able to install coredns using helm, but I think coredns-1.8.yaml was installing more than just coredns. After running the helm chart on controller-0 (from the helpful advice above) and running the check command, there are no pods in the kube-system namespace:

$ kubectl get pods -l k8s-app=kube-dns -n kube-system
No resources found in kube-system namespace.

When I try to go to https://storage.googleapis.com/kubernetes-the-hard-way in the browser, I get the following error:

<Error>
  <Code>NoSuchBucket</Code>
  <Message>The specified bucket does not exist.</Message>
</Error>

I think the bucket has been accidentally destroyed. Or maybe the permissions are now set incorrectly.

Edit: hack to get it working

I took the 1.7.0 yaml file from the repo (https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/deployments/coredns-1.7.0.yaml), updated the coredns image from 1.7.0 to 1.8.0 (just search for the one place in the yaml), and it worked.

Note this is all on controller-0, not locally.

# if you followed the advice to update this via helm, uninstall it
$ helm uninstall coredns

# get the 1.7.0 file
$ wget https://raw.githubusercontent.com/kelseyhightower/kubernetes-the-hard-way/master/deployments/coredns-1.7.0.yaml

# fix it: bump the coredns image tag from 1.7.0 to 1.8.0 (one place in the yaml)
$ cp coredns-1.7.0.yaml coredns-1.8.0.yaml
$ vi coredns-1.8.0.yaml

# apply it
$ kubectl apply -f coredns-1.8.0.yaml 
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

$ kubectl get pods -l k8s-app=kube-dns -n kube-system
NAME                      READY   STATUS    RESTARTS   AGE
coredns-6955db5cc-fz9lp   1/1     Running   0          20s
coredns-6955db5cc-k6476   1/1     Running   0          20s
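
If you'd rather not edit the file by hand, a one-liner can make the same change (a sketch, assuming the image line in coredns-1.7.0.yaml reads image: coredns/coredns:1.7.0):

# rewrite the image tag into a new file instead of editing with vi
$ sed 's#coredns/coredns:1.7.0#coredns/coredns:1.8.0#' coredns-1.7.0.yaml > coredns-1.8.0.yaml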

@rlratcliffe
Author

@hkz-aarvesen I didn't know coredns-1.7.0.yaml was in the repo. Copying it locally, modifying the image version, and applying it worked. Appreciate your help!

@exdial

exdial commented Jun 10, 2023

@hkz-aarvesen

I was able to install coredns using helm, but I think the coredns-1.8.yml is installing more than just coredns.

coredns-1.8.yaml installs 6 types of resources: ServiceAccount, ClusterRole, ClusterRoleBinding, ConfigMap, Deployment and Service.

curl -s https://raw.githubusercontent.com/kelseyhightower/kubernetes-the-hard-way/master/deployments/coredns-1.7.0.yaml | grep ^kind | wc -l

With the default installation of the helm chart you will get all of these resources except the ServiceAccount, ClusterRole and ClusterRoleBinding. The ServiceAccount and roles will only be installed if you set the serviceAccount.create flag (see the configuration options).
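
For example (a sketch based on the serviceAccount.create option mentioned above; check the values reference for your chart version):

$ helm --namespace=kube-system install coredns coredns/coredns \
    --set serviceAccount.create=true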

$ kubectl -n kube-system get all -l k8s-app=coredns

NAME                                   READY   STATUS    RESTARTS   AGE
pod/coredns-coredns-55b8869fc9-qlj2t   1/1     Running   1          11d

NAME                      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/coredns-coredns   ClusterIP   10.32.0.57   <none>        53/UDP,53/TCP   11d

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns-coredns   1/1     1            1           11d

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-coredns-55b8869fc9   1         1         1       11d

After running the helm chart on controller-0 (from this helpful advice) and running the check command, there are no pods running in the system namespace:

$ kubectl get pods -l k8s-app=kube-dns -n kube-system
No resources found in kube-system namespace.

There are no pods because you are trying to select pods with the k8s-app=kube-dns label, but the default label for the coredns helm chart is k8s-app=coredns.
The original manifest was renamed from kube-dns to coredns. Looks like @kelseyhightower forgot to rename the labels.

$ kubectl -n kube-system get deploy coredns-coredns -o jsonpath='{.metadata.labels}' | jq
{
  "app.kubernetes.io/instance": "coredns",
  "app.kubernetes.io/managed-by": "Helm",
  "app.kubernetes.io/name": "coredns",
  "app.kubernetes.io/version": "1.10.1",
  "helm.sh/chart": "coredns-1.23.0",
  "k8s-app": "coredns",
  "kubernetes.io/cluster-service": "true",
  "kubernetes.io/name": "CoreDNS"
}

You can always use the customLabels option of the helm chart to set the necessary labels, e.g. k8s-app=kube-dns.
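
For example, to make the guide's original check command match a helm install (a sketch using the customLabels option mentioned above; the exact value syntax may differ between chart versions):

$ helm --namespace=kube-system install coredns coredns/coredns \
    --set customLabels.k8s-app=kube-dns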

I would say that using the official CoreDNS helm chart is the most correct way to get DNS on the cluster.

@lianzeng

Download coredns-1.7.0.yaml locally, fix the image to 1.8.0, then run locally: kubectl apply -f coredns-1.8.0.yaml

@chungheon

chungheon commented Jul 14, 2023

Hi, I followed the suggestion, but now I have a coredns pod that is running yet 0/1 ready.
[image]
When I call nslookup it fails; this is the log from the coredns pod ("kubectl logs coredns-76cfcdf788-cfv2n -n kube-system"):
[image]
Not sure if it's related, but I think this is why the smoke test for exposing a service through NodePort is failing as well.

@koenry

koenry commented Jul 14, 2023

Hey! You can fork the 1.7.0 file to your own repository, change the image version to 1.8.0, and apply it as raw content; that way you won't need to ssh in and can follow the original guide as intended. I have it here: https://github.com/koenry/k8s-hard-way-core-dns-1.8
Also, you can just use the 1.7.0 version without any issues and still finish the guide with a fully functional cluster:
kubectl apply -f https://raw.githubusercontent.com/kelseyhightower/kubernetes-the-hard-way/master/deployments/coredns-1.7.0.yaml
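
If you go the fork route, the apply step stays the same, just pointed at your fork's raw URL (placeholder path below, adjust to your own repository, branch and file name):

kubectl apply -f https://raw.githubusercontent.com/<your-user>/<your-fork>/master/coredns-1.8.0.yaml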

@krosibahili

Good!

@jg3

jg3 commented Mar 31, 2024

See also: there is a ./manifests/ directory with a coredns-1.10.1.yaml here:
https://github.com/kelseyhightower/kubernetes-the-hard-way/tree/af7ffdb8e610d31a417a3ce1e876f107e777e34b/manifests
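
That manifest can be applied directly from the raw URL of that commit (a sketch built from the link above):

kubectl apply -f https://raw.githubusercontent.com/kelseyhightower/kubernetes-the-hard-way/af7ffdb8e610d31a417a3ce1e876f107e777e34b/manifests/coredns-1.10.1.yaml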
