Statefulset DNS not function well after upgrade to 1.7.3 #50227
/sig area/dns
/sig network
Encountered the same issue with StatefulSets after nodes were upgraded from 1.7.2 to 1.7.3 on GKE. Reverting to 1.7.2 solved the issue.
Having the same issue with StatefulSets on Kubernetes 1.7.3 with the mongo image.
Humm... Checked a bit, didn't spot relevant changes on the kube-dns side. Seems like that random number came from:

```go
// HashServiceRecord hashes the string representation of a DNS
// message.
func HashServiceRecord(msg *msg.Service) string {
	s := fmt.Sprintf("%v", msg)
	h := fnv.New32a()
	h.Write([]byte(s))
	return fmt.Sprintf("%x", h.Sum32())
}
```

That hash number is returned by:

```go
recordValue, endpointName := util.GetSkyMsg(endpointIP, 0)
if hostLabel, exists := getHostname(address); exists {
	endpointName = hostLabel
}
```

And eventually it is passed in. So it looks like the direct cause for this issue is that the received EndpointAddress doesn't have Hostname set (which is supposed to be …).
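For reference, a minimal standalone sketch of that hashing fallback (the record string below is an invented placeholder, not a real kube-dns message): FNV-32a over the record's string form, hex-encoded, always yields a label of up to 8 hex characters.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// hashRecord mirrors the shape of HashServiceRecord: hash the string
// representation of a record with FNV-32a and hex-encode the 32-bit sum.
func hashRecord(record string) string {
	h := fnv.New32a()
	h.Write([]byte(record))
	return fmt.Sprintf("%x", h.Sum32())
}

func main() {
	// When the EndpointAddress has no Hostname, a label like this is
	// used instead of the pod name ("drone-server-0").
	fmt.Println(hashRecord("drone-server-0.drone-server placeholder record"))
}
```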
Could be related to #48327, specifically the bottom comments related to DNS?
@CallMeFoxie Thanks, that seems to be the cause.
Encountered this problem in kube 1.7.6. Found that DNS resolution fails for the StatefulSet if the service name has a hyphen (-) in it, even though the service will list the pods as endpoints. To reproduce, use the web.yaml example from: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with a lifecycle-freeze comment. If this issue is safe to close now, please do so. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now, please do so. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
After I upgraded from 1.6.6 to 1.7.3, for the StatefulSet drone-server with headless service drone-server, nslookup drone-server-0.drone-server returns:

```
nslookup: can't resolve 'drone-server-0.drone-server': Name does not resolve
```
What you expected to happen:
nslookup should return a valid DNS record.
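For comparison, this is the name a StatefulSet pod is expected to resolve under, sketched with assumed values for the namespace (`default`) and cluster domain (`cluster.local`), neither of which is stated in this report:

```go
package main

import "fmt"

// statefulSetPodFQDN builds the DNS name a StatefulSet pod gets via its
// governing headless service: <pod>.<service>.<namespace>.svc.<domain>.
func statefulSetPodFQDN(pod, service, namespace, domain string) string {
	return fmt.Sprintf("%s.%s.%s.svc.%s", pod, service, namespace, domain)
}

func main() {
	fmt.Println(statefulSetPodFQDN("drone-server-0", "drone-server", "default", "cluster.local"))
	// drone-server-0.drone-server.default.svc.cluster.local
}
```

The short name drone-server-0.drone-server used in the nslookup above relies on the resolver's search path expanding it to this full form.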
How to reproduce it (as minimally and precisely as possible):
kubectl apply the yaml from https://github.com/yaoshipu/aslan-platform/tree/spock/reaper/kubernetes/http on a k8s 1.6.6 cluster
I believe my StatefulSet yaml file for drone-server is valid:
Anything else we need to know?:
This happens after I upgraded k8s from 1.6.6 to 1.7.3.
I created this StatefulSet on a k8s 1.6.6 cluster (apiserver, scheduler, controller-manager), and it worked as I expected.
I upgraded the cluster to 1.7.3 by upgrading the apiserver, scheduler, controller-manager and kubelet step by step.
kube-dns version: 1.14.1(gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1)
kube-dns logs
before upgrade:
after upgrade:
Seems drone-server-0 changed to 3465346530386466, a random value?
Environment:
- Kubernetes version (use kubectl version):
- Kernel (e.g. uname -a):