
Add SRV support for proxy upstream #1915

Merged
merged 3 commits into from Nov 6, 2017

Conversation

@Gufran commented Oct 12, 2017

1. What does this change do, exactly?

Adds support to the proxy package for discovering upstreams via SRV lookups.

2. Please link to the relevant issues.

Closes #1803

3. Which documentation changes (if any) need to be made because of this PR?

The proxy module documentation page https://caddyserver.com/docs/proxy requires an update:

- to is the destination endpoint to proxy to. At least one is required, but multiple may be specified. If a scheme (http/https/quic) is not specified, http is used. Unix sockets may also be used by prefixing "unix:". QUIC connections are experimental, but to try it, just use "quic://" for the scheme.
+ to is the destination endpoint to proxy to. At least one is required, but multiple may be specified. If a scheme (http/https/quic/srv) is not specified, http is used. Unix sockets may also be used by prefixing "unix:". QUIC connections are experimental, but to try it, just use "quic://" for the scheme. Service discovery using an SRV lookup is supported: if the endpoint starts with `srv://` or `srv+https://`, it is treated as a service locator and Caddy will attempt to resolve the available services via an SRV DNS lookup.

- policy is the load balancing policy to use; applies only with multiple backends. May be one of random, least_conn, round_robin, first, ip_hash, uri_hash, or header. If header is chosen, the header name must also be provided. Default is random.
+ policy is the load balancing policy to use; applies only with multiple backends. May be one of random, least_conn, round_robin, first, ip_hash, uri_hash, or header. If header is chosen, the header name must also be provided. Default is random. The policy is not applicable if the destination is a service locator and will be silently ignored.

- health_check_port will use port to perform the health check instead of the port provided for the upstream. This is useful if you use an internal port for debug purposes where your health check endpoint is hidden from public view.
+ health_check_port will use port to perform the health check instead of the port provided for the upstream. This is useful if you use an internal port for debug purposes where your health check endpoint is hidden from public view. health_check_port is not supported when the destination is a service locator.

- upstream specifies another backend. It may use a port range like ":8080-8085" if desired. It is often used multiple times when there are many backends to route to.
+ upstream specifies another backend. It may use a port range like ":8080-8085" if desired. It is often used multiple times when there are many backends to route to. The upstream directive is not supported if the target specified by the `to` directive is a service locator.

And a note:

To use SSL with service discovery, the locator must use the srv+https:// protocol.
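To make the two schemes concrete, here is a hypothetical Caddyfile sketch based on the description above (the service names are placeholders, not from this PR):

```
# Backends resolved via SRV lookup, proxied over plain HTTP
proxy / srv://api.service.consul

# Same discovery mechanism, but the locator itself selects HTTPS
proxy /secure srv+https://api.service.consul
```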

Rationale behind using the srv+https:// protocol

An SRV service locator consists mainly of three parts:

  1. service name
  2. protocol
  3. host name

The protocol piece would be the ideal place to specify the protocol, but it is not possible to specify all three pieces without changing the proxy directive syntax, or at least without adding a new directive to the proxy block.
It is also not possible to deduce this information from the FQDN. Take, for example, these FQDNs:

production.redis.http.pod-16.my-org.internal
production.redis.service.nvirginia.consul
redis.service.consul
  • None of the three conforms to RFC 2782, yet such names are more likely to be supported by a reasonably modern infrastructure than RFC 2782 names are
  • It is hard to tell the host name in the record. Is it .pod-16.my-org.internal? Or .my-org.internal?
  • redis.service.consul is not a valid SRV symbolic name but is still supported by a widely used service discovery tool
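The scheme-based approach described above can be sketched in Go. This is an illustrative sketch only, not the code from this PR; the function name `parseLocator` is hypothetical. It shows how a `to` destination could be split into the upstream scheme (`srv://` implying http, `srv+https://` implying https) and the name handed to the SRV resolver:

```go
package main

import (
	"fmt"
	"strings"
)

// parseLocator splits a proxy "to" destination into the upstream scheme
// and the name to resolve. isSRV reports whether the destination is a
// service locator that should go through an SRV DNS lookup.
func parseLocator(to string) (scheme, name string, isSRV bool) {
	switch {
	case strings.HasPrefix(to, "srv+https://"):
		return "https", strings.TrimPrefix(to, "srv+https://"), true
	case strings.HasPrefix(to, "srv://"):
		return "http", strings.TrimPrefix(to, "srv://"), true
	default:
		// Not a service locator; default scheme per the docs is http.
		return "http", to, false
	}
}

func main() {
	scheme, name, ok := parseLocator("srv+https://redis.service.consul")
	fmt.Println(scheme, name, ok) // https redis.service.consul true
}
```

With this split, the resolver only ever sees the bare name, and the TLS decision is carried entirely by the locator prefix, which is why no extra directive is needed.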

4. Checklist

  • I have written tests and verified that they fail without my change
  • I have squashed any insignificant commits
  • This change has comments for package types, values, functions, and non-obvious lines of code
  • I am willing to help maintain this change if there are issues with it later

@Gufran (author) commented Oct 12, 2017

@abiosoft @mholt Freshly baked goodies.

@abiosoft abiosoft added the under review 🧐 Review is pending before merging label Oct 12, 2017
@Gufran (author) commented Oct 27, 2017

@abiosoft I'd be happy if there is anything I can do to make the review easier for you.

@mholt (member) commented Oct 27, 2017

@Gufran Thank you! @abiosoft, do you have the ability to review this one?

@Gufran (author) commented Nov 1, 2017

@abiosoft I'm sorry to be a pest, just want to know if I can help in any way here?

@abiosoft commented Nov 1, 2017

My hands have been tied recently; I should complete the review today. Thanks for your patience.

if u.HealthCheck.ContentString == "" { // don't check for content string
	return false
}
// TODO ReadAll will be replaced if deemed necessary

I think ReadAll is fine here. Considering this is a health check, the response is not expected to have a large body.

@mholt mholt merged commit 63fd264 into caddyserver:master Nov 6, 2017
@mholt (member) commented Nov 6, 2017

Thank you for your contribution! And thanks Abiola for the review!

@mholt mholt added this to the 0.10.11 milestone Nov 6, 2017
@mholt mholt removed the under review 🧐 Review is pending before merging label Feb 16, 2018
3 participants