Hi!

On our developer machines, we have an issue where, when we start the operator, the starter pod tries to make HTTP GET calls to the k6 services but can't resolve their DNS names.
We've tried this on four separate machines, with a mix of Linux/Windows and Docker Desktop/minikube, and get the same results.
When we run nslookup manually from the starter pod, it times out, and ping doesn't find the services.
Manually appending the domain ".svc.cluster.local" does enable nslookup to find the services.
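For reference, this is the shape of what we see from inside the starter pod. The service name `k6-sample-service` and namespace `default` below are placeholders — substitute a real pair from `kubectl get svc`:

```shell
# Placeholder names -- substitute a real service/namespace from `kubectl get svc`.
SVC=k6-sample-service
NS=default

# The short name times out when run inside the starter pod:
#   nslookup "$SVC"
# The fully qualified name resolves:
#   nslookup "$SVC.$NS.svc.cluster.local"
echo "$SVC.$NS.svc.cluster.local"
```

That is, resolution only works when the full cluster suffix is spelled out, which points at the pod's DNS search-domain handling rather than at CoreDNS itself.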
It does work after we disconnect the local machine from the network (!).
We don't know if it's related, but the IP addresses of the service nodes and the other pods may conflict with the IP addresses our internal network devices use (all in the 10.*.*.* range).
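To sanity-check the conflict theory, here's a quick way to test whether two CIDR ranges overlap. The ranges below are placeholders — the cluster's actual service/pod CIDRs can be found via `kubectl cluster-info dump` and the nodes' `spec.podCIDR`:

```shell
# Hypothetical ranges -- replace with the values your cluster and office network use.
CLUSTER_CIDR=10.96.0.0/12
OFFICE_CIDR=10.0.0.0/8

# Convert a dotted-quad address to a 32-bit integer.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Two CIDRs overlap iff, under the shorter of the two prefixes,
# both base addresses map to the same network.
overlap() {
  n1=${1%/*}; p1=${1#*/}
  n2=${2%/*}; p2=${2#*/}
  p=$(( p1 < p2 ? p1 : p2 ))
  mask=$(( (0xFFFFFFFF << (32 - p)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$n1") & mask )) -eq $(( $(ip_to_int "$n2") & mask )) ]
}

if overlap "$CLUSTER_CIDR" "$OFFICE_CIDR"; then
  echo "ranges overlap"
fi
```

With those placeholder values the ranges do overlap, which would be consistent with replies to cluster addresses getting routed out to the office network.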
I'm attaching the starter logs, the DNS logs, a successful nslookup, and the list of services. Let me know if you need more details.

Cheers, pekavau
Also, on my Linux machine, switching minikube from the kvm2 virtualization driver to the docker driver also makes it work (it doesn't work with the kvm2 driver).
Attachments:
coredns-6d4b75cb6d-62mf8.log
k6-sample-starter-94jcd.log

@pekavau IIRC, something similar came up on the community forum once before, but it wasn't resolved then. Thank you for the issue and for finding the fix! Would you like to make a PR?