sync-catalog: option to create k8s Service with Endpoints #122
We agree that would be ideal; however, we were worried about the performance of watching each address, which is why we implemented it through DNS. Once streaming APIs have been implemented, we're going to revisit this.
So what is the estimated date for this feature?
This feature would be useful, as DNS discovery does not work when each instance of a service uses a different and/or unpredictable port (in my case they are dynamically allocated).
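To illustrate why dynamically allocated ports point toward an Endpoints-based approach: a Kubernetes Endpoints object groups addresses into subsets, and each subset shares a single port list, so instances listening on distinct ports must land in separate subsets. A minimal sketch (the instance data and helper name are hypothetical, not part of consul-k8s):

```python
from collections import defaultdict

def endpoints_subsets(instances):
    """Group (ip, port) service instances into Endpoints-style subsets.

    Kubernetes Endpoints subsets share one port list across all of
    their addresses, so instances on distinct ports need distinct
    subsets. `instances` is a hypothetical list of (ip, port) pairs.
    """
    by_port = defaultdict(list)
    for ip, port in instances:
        by_port[port].append(ip)
    return [
        {"addresses": [{"ip": ip} for ip in ips],
         "ports": [{"port": port, "protocol": "TCP"}]}
        for port, ips in sorted(by_port.items())
    ]

# Two Consul instances with dynamically allocated ports end up
# in two subsets, one per port.
subsets = endpoints_subsets([("10.0.0.1", 31001), ("10.0.0.2", 31002)])
```

DNS A records carry only addresses, so this per-instance port information has nowhere to live in the current ExternalName/DNS scheme.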
Closing, as we don't intend to support this use case with Catalog Sync. We currently support routing requests to healthy instances of Endpoints within Consul Service Mesh by leveraging the pod health determined by readiness probes, as described here: https://www.consul.io/docs/k8s/connect/health. I realize that may be different from what was described in the original issue, however.
That doesn't meet the original use case, since it requires Consul Service Mesh.
@howardjohn We are happy to review any PRs and designs if you are open to seeing this through!
The Consul to Kubernetes docs explain that an ExternalName Service will be created. Doing a DNS lookup on that Service will return a CNAME to `${SVC_NAME}.service.consul`.

I would like a Service of type `ClusterIP` to be created instead, with the `Endpoints` of that Service populated with the instances of the Consul service that are passing their health checks. This would be equivalent to how the Kubernetes control plane updates the Endpoints of a Service to be the set of Pods that are Ready.

With this, a persistent ClusterIP would be allocated for the Service, and pods running in Kubernetes would not have to worry about DNS lookups and TTLs. Unhealthy instances would be removed from the Endpoints of that Service, and kube-proxy would update the iptables rules to stop sending traffic to them.

I recognize this is a significant change to how Consul Sync creates Services, and the `consul-k8s` controller would need to update the Endpoints each time a Consul service instance transitioned between healthy and unhealthy. So this should be an optional flag rather than a change to the default behavior. But it would give a much more consistent service discovery and traffic routing experience across Kubernetes services and Consul services.
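The proposal above could be sketched as follows: given the healthy instances of a Consul service, a controller would maintain a selector-less `ClusterIP` Service plus a matching Endpoints object carrying only the passing instances. This is a minimal illustration under assumed names, not the consul-k8s implementation:

```python
def build_manifests(svc_name, namespace, healthy_instances, port=80):
    """Build a ClusterIP Service and matching Endpoints for the
    passing instances of a Consul service.

    Sketch only: `healthy_instances` is a hypothetical list of
    (ip, port) pairs as reported by Consul health checks.
    """
    service = {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": svc_name, "namespace": namespace},
        # No selector: the controller owns the Endpoints directly,
        # and kube-proxy routes to whatever addresses appear there.
        "spec": {"type": "ClusterIP",
                 "ports": [{"port": port, "protocol": "TCP"}]},
    }
    endpoints = {
        "apiVersion": "v1",
        "kind": "Endpoints",
        # Endpoints must share the Service's name for kube-proxy
        # to associate the two objects.
        "metadata": {"name": svc_name, "namespace": namespace},
        "subsets": [{
            "addresses": [{"ip": ip} for ip, _ in healthy_instances],
            "ports": [{"port": port, "protocol": "TCP"}],
        }] if healthy_instances else [],
    }
    return service, endpoints

# On each health transition the controller would rebuild and
# re-apply the Endpoints with the current passing instances.
svc, eps = build_manifests(
    "web", "default", [("10.0.0.1", 80), ("10.0.0.2", 80)])
```

Because the Service has no selector, Kubernetes itself never rewrites the Endpoints, which is what lets an external controller stay the source of truth for membership.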