Support load balancing with nginx dynamic upstreams #157
I'd love to see that too. My use case is routing requests to dynamic Mesos tasks with zoidberg; Kong would be a good candidate to do the routing part. I was going to use nginx with 2 ports anyway. Let me know if it makes sense to use Kong for this. Here are the options I see:
Reloading nginx is not an option since it triggers a graceful restart of all the worker processes. We use that mechanism with haproxy in marathoner, but long-lived sessions force previous instances of haproxy to stay alive for extended periods of time. Nginx uses more processes than haproxy, so it would be even worse. Deploying a lot can leave thousands of proxying processes spinning for no good reason.
Nginx can already do this in the Plus version: http://nginx.com/products/on-the-fly-reconfiguration/
Yep, that's another option, starting at $7000 annually for 5 servers.
You can also buy the license for one server at $1500 per year: http://nginx.com/products/pricing/. I'm not saying it's cheap or anything, but it's an alternative in case people weren't aware of it.
Another option: https://github.com/yzprofile/ngx_http_dyups_module
Looks promising. I'm not sure about the stability of this thing, though.
Just a quick update on this: this feature is important and we feel like an initial implementation should be done. It's not going to make it into the next release, though. Of course pull requests are also welcome.
Can you tell me how this is going to be implemented? Regards, Ian Babrou
@bobrik As you pointed out in one of your links, apparently the creator of OpenResty is building a solution for this. The alternative is taking one of the existing pull requests on the lua-upstream module and contributing to them to make them acceptable, implementing any missing features we might need.
@thefosk there is not much public info about it yet; take a look at the links above.
@bobrik yes, we will start working on this feature in the next releases, so we will monitor any announcement about it. The requirement for Kong would be to dynamically create an upstream configuration from Lua, then dynamically populate the upstream object with servers and use it in the proxy_pass directive. The use case wouldn't be to update an existing upstream configuration, but to create a brand new one from scratch, in pseudo-code:

```nginx
set $upstream nil;

access_by_lua '
    local upstream = upstream:new()
    upstream.add_server("backend1.example.com", { weight = 5 })
    upstream.add_server("backend2.example.com:8080", { fail_timeout = 5, slow_start = 30 })
    ngx.var.upstream = upstream
';

proxy_pass http://$upstream;
```
@thefosk I create upstreams on the fly and update them on the fly. Take a look: https://github.com/bobrik/zoidberg-nginx/blob/master/nginx.conf Your pseudo-code implies that you create the upstream on every request; I only do that on every upstream update. In zoidberg-nginx there is also some code for checking whether an upstream exists, to prevent endless loops, but I found out that it is avoidable with a trick in the upstream configuration.
@bobrik thank you, I will look into this
I'm also looking at an API manager that makes sense for Mesos/Marathon. After spending much time with my friend Google, I came to the conclusion that right now there is only one option available: choose a service discovery mechanism (Consul, haproxy bridge, zoidberg, ...) and add an API proxy on top of it (Kong, Repose, Tyk, ApiAxle, WSO2 AM, etc.).
+1 for this feature |
same here 👍 |
Another +1 for me good sir |
The balancer_by_lua* directives from ngx_lua will get open-sourced soon, in the next 3 months or so.
@agentzh very good news, looking forward to trying it |
@thefosk +1 |
+1 |
@agentzh any chance to get TCP support in balancer_by_lua*?
@bobrik I'm not sure I understand that question. Are you talking about doing cosockets in the Lua code run by balancer_by_lua*?
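For context, here is a minimal sketch of how the balancer_by_lua_block directive and the ngx.balancer API from lua-resty-core fit together once open-sourced. The peer list and its selection are hypothetical placeholders; in a real deployment they would come from a shared dict or an admin API rather than being hard-coded:

```nginx
upstream dynamic_backend {
    server 0.0.0.1;   # placeholder; the real peer is chosen in Lua below

    balancer_by_lua_block {
        local balancer = require "ngx.balancer"

        -- hypothetical peer list; normally populated dynamically
        local peers = { { "10.0.0.1", 8080 }, { "10.0.0.2", 8080 } }
        local peer = peers[math.random(#peers)]

        local ok, err = balancer.set_current_peer(peer[1], peer[2])
        if not ok then
            ngx.log(ngx.ERR, "failed to set peer: ", err)
            return ngx.exit(500)
        end
    }
}

server {
    location / {
        proxy_pass http://dynamic_backend;
    }
}
```

No nginx reload is needed when the peer list changes, which is exactly the property this issue asks for.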
@Tieske Hi Tieske, what is the Kong management API? And how do I use POST /upstreams? I didn't find a reference in the Kong documentation (v0.8). I also need this feature, but I don't know how to do it. :-(
@andy-zhangtao this feature is currently being built. We are aiming to release it in the 0.10 version. |
@Tieske Got it! Thanks Tieske |
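For those wondering what using this might look like: a sketch of the upstreams/targets Admin API being discussed, assuming a Kong node running with the admin port on localhost:8001. Endpoint names and fields follow the design in the PR and may change before the 0.10 release:

```shell
# create an upstream (a virtual hostname used for load balancing)
curl -X POST http://localhost:8001/upstreams \
  --data "name=service.v1.internal"

# add two weighted targets to it
curl -X POST http://localhost:8001/upstreams/service.v1.internal/targets \
  --data "target=10.0.0.1:8080" --data "weight=100"
curl -X POST http://localhost:8001/upstreams/service.v1.internal/targets \
  --data "target=10.0.0.2:8080" --data "weight=50"

# point an API at the upstream by using its name as the host
curl -X POST http://localhost:8001/apis \
  --data "name=my-service" \
  --data "upstream_url=http://service.v1.internal/" \
  --data "uris=/my-service"
```

Traffic proxied through /my-service would then be balanced across the registered targets according to their weights, with no nginx reload.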
@thefosk awesome! We dream of building microservices that can be auto-registered against Kong <3
@iam-merlin this is exactly our vision. The next couple of releases will be very exciting as far as microservice orchestration is concerned.
btw, @Tieske, if you want a beta tester or feedback, don't hesitate to ping me. I have some code ready for that :D (I started to write an auto-register library... before I found out that Kong doesn't have this feature yet :'( )
@iam-merlin @thefosk how about a Kong adapter for registrator? This could register Docker-based services with Kong, if the service has the environment n.b. this wouldn't replace other registrator adapters e.g. the Consul adapter, it would compliment them (i.e. could be used in combination, or just use the Kong adapter on it's own). I am planning to deploy Kong for a work project when it supports SRV records fully, in the meantime I have a nginx Docker container which dynamically adds upstreams based on |
@alzadude, I don't know registrator, but from my point of view (and what I read), registrator needs to run on each host... with a feature like the one in this issue, you don't need Docker (or Consul or another service registry), just a Plain Old Request (^^), and maybe we will have health checks later (in another plugin). From my point of view, Consul is very heavy for microservices... I really like the simplicity of Kong (and no dependencies), but right now... doing a microservice architecture with Kong and without a service registry is a pain. Anyway, I don't think my point of view is that relevant... I'm just a user of Kong, not the best one, and not a Lua developer :P xD.
Just so you know, once @Tieske finishes this implementation, I will provide some sample Docker Compose templates that show how to use Kong with a microservice orchestration pattern. I do personally like ContainerPilot more than registrator, though. We will also be able to provide a pattern that doesn't involve having any third-party service discovery in place because, effectively, Kong's new upstreams can play that role.
A PR with intermediate status is available now (see #1541). Any input and testing is highly appreciated, see #1541 (comment), so if anyone else wants to test like @iam-merlin, please check it out.
@Tieske it works xD I've made some tests and it seems to work as expected (just the DNS part; I didn't test everything yet). It seems a DNS dependency is missing from your requirements (I'm not a Lua dev) and I had to install it manually (I got an error at the first start and installed the missing module by hand). Do you have any documentation about your code, or will this be ready later?
@iam-merlin thx for testing 👍. Docs will be later, as it might still change. |
We cannot deploy load balancing with round robin. We need least open connections or fastest response time; anything but round robin.
* adds loadbalancing on specified targets
* adds service registry
* implements #157
* adds entities: upstreams and targets
* modifies timestamps to millisecond precision (except for the non-related tables when using postgres)
* adds collecting health-data on a per-request basis (unused for now)
'Least open connections' does not make sense in a Kong cluster. 'Response time' is being considered, but is not prioritized yet.
closing this as #1735 has been merged.
How do we implement keepalive without an upstream block? An upstream block doesn't support dynamic IP resolution, and we cannot use keepalive outside of an upstream block.
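One pattern that addresses this, assuming the open-sourced balancer_by_lua* directives are available: keep a named upstream block purely as a container for balancer_by_lua_block. The standard keepalive directive then applies, while the peer is still chosen dynamically in Lua. The hard-coded peer below is a placeholder for illustration; in practice it would be looked up dynamically:

```nginx
upstream dynamic_pool {
    server 0.0.0.1;            # placeholder, never contacted directly

    balancer_by_lua_block {
        local balancer = require "ngx.balancer"
        -- hypothetical peer selection; in practice resolved dynamically
        local ok, err = balancer.set_current_peer("10.0.0.1", 8080)
        if not ok then
            ngx.log(ngx.ERR, "failed to set peer: ", err)
            return ngx.exit(500)
        end
    }

    # connection pool to the dynamically chosen peers
    keepalive 32;
}

server {
    location / {
        # HTTP/1.1 with an empty Connection header is required for keepalive
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://dynamic_pool;
    }
}
```

This way upstream keepalive and dynamic IP resolution are no longer mutually exclusive.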
Support for dynamic upstreams that will enable dynamic load balancing per API, so we can proxy_pass like:

```nginx
proxy_pass http://backend;
```