
Fixes #44. Respect DNS TTL for proxies. #63

Merged
merged 1 commit into master on Mar 25, 2017

Conversation

@hone (Member) commented Mar 18, 2017

Fixes #44

https://tenzer.dk/nginx-with-dynamic-upstreams/

Nginx does not re-resolve domains passed to `proxy_pass`; it resolves them once when the config is loaded and caches the result indefinitely. Passing the upstream through a variable works around the issue by forcing resolution at request time. The resolver is set to the nameservers in /etc/resolv.conf, falling back to 8.8.8.8 (Google's public DNS) if that file is unavailable.

This also fixes a bug where the fallback proxy match would use the first matching proxy rather than the longest match.
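For context, a minimal sketch of the technique from the linked article is below; the resolver address and upstream host are illustrative, not the buildpack's actual generated config:

```nginx
# Inside a server block. The nameserver address would come from
# /etc/resolv.conf, with 8.8.8.8 as the fallback when that file is missing.
resolver 10.1.0.2;

location /api/ {
  # With a literal hostname, nginx resolves proxy_pass once at startup
  # and never again. Routing the upstream through a variable forces a
  # fresh lookup at request time, honoring the DNS TTL.
  set $upstream_endpoint https://api.example.com;
  proxy_pass $upstream_endpoint;
}
```

One caveat of the variable form: when `proxy_pass` contains a variable and no URI part, nginx passes the original request URI through unchanged instead of substituting the part matched by the location.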

@hone (Member, Author) commented Mar 18, 2017

@daemonsy @wcurtis @vladucu @mars can you test the branch proxy_resolver and let me know if this fixes #44 for you?

@daemonsy commented

Amazing, thanks @hone. Going to give this a try now.

@mars (Member) commented Mar 18, 2017

I just verified (using this branch) that the proxy config now honors DNS TTL. Using a CNAME with a 60-second TTL, I confirmed that when the CNAME's target is changed, the proxy re-resolves to the new target within 60 seconds.

Awesome work @hone 🎯

@mars (Member) commented Mar 18, 2017

BTW, I also confirmed that this behavior is broken on master, requiring a restart to re-resolve to the new CNAME target.

@wcurtis commented Mar 19, 2017

This looks fantastic @hone thank you! 🙌

The code looks solid, but I'm having trouble thinking of a good way to test it (mimicking Heroku's change in IPs).

@mars or @hone any tips on how best I can help test on my end?

@daemonsy commented

Our app fails "reliably" every day during dyno restarts, between the client and the server app. Will let you know if the branch fixes it.

On a side note, is there any intention to expose the ability to customize the resolver via config? I see you are already joining multiple resolvers if they exist in /etc/resolv.conf, but the default uses 8.8.8.8?

@hone (Member, Author) commented Mar 20, 2017

@daemonsy thanks, definitely interested to hear if it works for you. Yeah, every dyno should have /etc/resolv.conf, which will contain Amazon's DNS servers. I added 8.8.8.8 (Google DNS) as a fallback in case that file doesn't exist or you're using this outside of Heroku. What's the use case for your own DNS resolution?
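To make that joining behavior concrete, here is a hypothetical result; the addresses below are illustrative, not real dyno nameservers:

```nginx
# Nameservers joined from /etc/resolv.conf on the dyno:
resolver 172.16.0.23 10.1.0.2;

# If /etc/resolv.conf were missing or empty, the fallback would be:
# resolver 8.8.8.8;
```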

@daemonsy commented

It's mostly just to have more than one resolver for fallback. All our apps are on Heroku, so the part about using /etc/resolv.conf already does that for us.

@hone (Member, Author) commented Mar 20, 2017

@daemonsy gotcha. Would you be OK with this as is, then, or would you still want a section in static.json? If we were to add it, I wonder if it should go under a separate section, like an "advanced/more options" thing.

@daemonsy commented

I think we'll never use the buildpack outside a Heroku environment, so abstracting away the DNS resolution is actually a nice thing for us. Based on our other discussion, I feel this would expose too much of NGINX in static.json, just like the max_body_size parameter.

It might be useful for advanced cases if it were exposed in an advanced settings section or as an nginx.conf override.

@hone (Member, Author) commented Mar 20, 2017

@daemonsy I'd still like to resolve that for you; it's a real problem, and the buildpack should support that use case.

@daemonsy commented

@hone the branch has been working beautifully for us over the last two days. No more failed proxy requests 👍

@daemonsy commented

Sorry, let me play the nag on this. If everyone involved is having no problems with the buildpack (we've had it in production for 4 days) and @hone has no reservations, can this be merged?

@hone (Member, Author) commented Mar 24, 2017 via email

@daemonsy commented

Thanks! No rush at all if you're still waiting for feedback. Just checking in to make sure it doesn't slide off our radars by accident.

@hone merged commit 0aebb10 into master on Mar 25, 2017
@hone deleted the proxy_resolver branch on Mar 25, 2017 at 02:27
@wcurtis commented Mar 25, 2017

@hone thanks for the merge! 🙏

Can someone help me understand whether buildpacks that are built on top of heroku-buildpack-static will automatically pick this change up from the master branch?

@mars (Member) commented Mar 25, 2017

Revised to avoid misinformation: @hone will cut a new release.

@wcurtis commented Mar 25, 2017

Fantastic 👌

@hone (Member, Author) commented Mar 25, 2017

@wcurtis I'll need to cut a new release of the static buildpack before it gets picked up. I'll take a look at doing that within the next day.

@wcurtis commented Mar 30, 2017

@hone any update on cutting that new release?

@mars (Member) commented Mar 30, 2017

If you set the buildpack to https://github.com/heroku/heroku-buildpack-static instead of heroku/static, you can deploy right off master with the fix.

Make sure to switch back to heroku/static after its release, because using the official identifier avoids the GitHub dependency when deploying.
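In Heroku CLI terms, that switch would look something like this (standard `heroku buildpacks:set` usage, run against your app):

```shell
# Deploy straight off master while waiting for the release:
heroku buildpacks:set https://github.com/heroku/heroku-buildpack-static

# After the release ships, switch back to the official identifier:
heroku buildpacks:set heroku/static
```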

@wcurtis commented Mar 30, 2017

@mars makes sense. My understanding is that since I'm using an Ember buildpack that uses this one as a dependency, it will require a version bump to pick this fix up.

@hone (Member, Author) commented Mar 30, 2017 via email
