Support load balancing with nginx dynamic upstreams #157

Closed
thefosk opened this Issue Apr 24, 2015 · 97 comments

@thefosk
Member

thefosk commented Apr 24, 2015

Support for dynamic upstreams that will enable dynamic load balancing per API.

 # sample upstream block:
 upstream backend {
     server 127.0.0.1:12354;
     server 127.0.0.1:12355;
     server 127.0.0.1:12356 backup;
 }

So we can proxy_pass like:

proxy_pass http://backend;

@thefosk thefosk self-assigned this Apr 24, 2015

@thibaultcha thibaultcha added the proxy label Apr 24, 2015

@thibaultcha thibaultcha changed the title from Dynamic upstreams to Support load balancing with nginx dynamic upstreams Apr 24, 2015

@bobrik

bobrik commented May 5, 2015

I'd love to see that too. My use-case is routing requests to dynamic mesos tasks with zoidberg, and kong would be a good candidate to do the routing part. I was going to use nginx with 2 ports anyway. Let me know if it makes sense to use kong for this.

Here are the options I see:

  1. openresty/lua-upstream-nginx-module#11 can be used to preallocate a list of upstreams and only use a subset of that list (a rough sketch follows below)
  2. openresty/lua-upstream-nginx-module#12 can be used for dynamic allocation of upstreams, but the list can only grow, so the previous PR is needed as well
  3. https://twitter.com/agentzh/status/580170442150846464 — not public yet, and it needs manual handling of retry logic, but it could replace both previous options

Reloading nginx is not an option since it triggers a graceful restart of all the worker processes. We use that mechanism with haproxy in marathoner, but long-lived sessions force previous instances of haproxy to stay alive for extended periods of time. Nginx uses more processes than haproxy, so it would be even worse. Frequent deploys could leave thousands of proxying processes spinning for no good reason.
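For illustration, the first option could look roughly like this with the existing lua-upstream-nginx-module API; this is only a sketch, and the upstream name and peer index are made up:

    -- minimal sketch, assuming nginx is built with lua-upstream-nginx-module:
    -- mark one peer of a preallocated upstream as down, so that only a subset
    -- of the preallocated servers receives traffic
    local upstream = require "ngx.upstream"

    -- set_peer_down(upstream_name, is_backup, peer_index, down_value)
    local ok, err = upstream.set_peer_down("backend", false, 1, true)
    if not ok then
        ngx.log(ngx.ERR, "failed to mark peer down: ", err)
    end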

@Tenzer

Tenzer commented May 5, 2015

Nginx can already do this in the Plus version: http://nginx.com/products/on-the-fly-reconfiguration/

@bobrik

bobrik commented May 5, 2015

Yep, that's another option starting at $7000 annually for 5 servers.

@Tenzer

Tenzer commented May 5, 2015

You can also buy a license for one server at $1500 per year: http://nginx.com/products/pricing/. I'm not saying it's cheap, but it's an alternative in case people weren't aware of it.

@bobrik

bobrik commented May 15, 2015

@bobrik

bobrik commented May 18, 2015

Looks like dyups can do the trick: https://github.com/bobrik/zoidberg-nginx. The nginx config and Lua scripts there give an idea of how it works.

I'm not sure about the stability of this module, though.
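For illustration, the dyups management interface is wired up along these lines; this is only a sketch, and the port and upstream name are made up:

    # minimal sketch, assuming nginx is built with ngx_http_dyups_module:
    # expose the dyups management interface on a separate port
    server {
        listen 8081;
        location / {
            dyups_interface;
        }
    }

    # upstreams can then be created and updated at runtime over HTTP, e.g.:
    #   curl -d "server 127.0.0.1:8080;" http://127.0.0.1:8081/upstream/backend
    #   curl -i -X DELETE http://127.0.0.1:8081/upstream/backend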

@thefosk

Member

thefosk commented May 20, 2015

Just a quick update on this: the feature is important and we feel an initial implementation should be done. It's not going to make it into 0.3.0 (we are currently working on other major features, including SSL support and path-based routing), but it should definitely show up within the next two releases. We are trying to keep the release cycles very short, so it shouldn't take too much time.

Of course pull-requests are also welcome.

@bobrik

bobrik commented May 20, 2015

Can you tell me how this is going to be implemented?

@thefosk

Member

thefosk commented May 21, 2015

@bobrik As you pointed out in one of your links, the creator of OpenResty is apparently building a balancer_by_lua functionality which should do the job - so I am investigating this option.

The alternative is taking one of the existing pull requests on the lua-upstream module, contributing to them to make them acceptable, and implementing any missing features we might need.

@bobrik

bobrik commented May 21, 2015

@thefosk the only public info about balancer_by_lua is the tweet by @agentzh. Contributing to the existing PRs on the lua-upstream module also involves his approval :)

Take a look at ngx_http_dyups_module: it reuses the upstream logic from nginx, as opposed to balancer_by_lua. It worked for me in my tests without crashes: 1-100 upstreams, updated every second with a full gradual replace of the upstream list, 8k rps on 1 core with literally no Lua code executed when serving user requests. I'm not sure about keepalive to upstreams, https and tcp upstreams, though.

@thefosk

Member

thefosk commented May 21, 2015

@bobrik yes, we will start working on this feature in the next releases, so we will monitor any announcements about balancer_by_lua in the meanwhile. If balancer_by_lua isn't released publicly in that time frame, then we will need to find another solution.

The requirement for Kong would be to dynamically create an upstream configuration from Lua, then dynamically populate the upstream object with servers and use it in the proxy_pass directive. Do you think we can invoke ngx_http_dyups_module functions directly from Lua, bypassing its RESTful API?

The use case wouldn't be to update an existing upstream configuration, but to create a brand new one from scratch; in pseudo-code:

set $upstream nil;
access_by_lua '
  local upstream = upstream:new()
  upstream.add_server("backend1.example.com", { weight = 5 })
  upstream.add_server("backend2.example.com:8080", { fail_timeout = 5, slow_start = 30 })
  ngx.var.upstream = upstream
';
proxy_pass http://$upstream
@bobrik

bobrik commented May 22, 2015

@thefosk I create upstreams on the fly and update them on the fly with ngx_http_dyups_module. Moreover, I do it from Lua code behind a RESTful API.

Take a look:

https://github.com/bobrik/zoidberg-nginx/blob/master/nginx.conf
https://github.com/bobrik/zoidberg-nginx/blob/master/zoidberg-state-handler.lua
https://github.com/bobrik/zoidberg-nginx/blob/master/zoidberg-proxy-rewrite.lua

Your pseudo-code implies that you create the upstream on every request; I only do that on every upstream update. In zoidberg-nginx there is also some code that checks whether an upstream exists, to prevent endless loops, but I found out that this is avoidable with the following trick:

        location / {
            set $where where-am-i.zoidberg;
            proxy_pass http://$where;
        }

The upstream where-am-i.zoidberg is created on the fly and is not a real domain name, so no recursive proxying to itself (until worker_connections are exhausted) happens.
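For illustration, dyups also exposes a Lua API when compiled alongside the Lua module, so upstreams can be created and updated from Lua without going through its RESTful interface; this is only a sketch, and the upstream name and servers are made up:

    -- minimal sketch, assuming ngx_http_dyups_module is built with its Lua API:
    -- create or replace an upstream from Lua
    local dyups = require "ngx.dyups"

    local status, msg = dyups.update("backend",
        "server 10.0.0.1:8080 weight=5; server 10.0.0.2:8080;")
    if status ~= ngx.HTTP_OK then
        ngx.log(ngx.ERR, "failed to update upstream: ", msg)
    end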

@thefosk

Member

thefosk commented May 26, 2015

@bobrik thank you, I will look into this

@Gingonic

Contributor

Gingonic commented Jun 18, 2015

I'm also looking for an API manager that makes sense for mesos/marathon. After spending much time with my friend Google, I came to the conclusion that right now there is only one option available: choose a service discovery mechanism (consul, haproxy bridge, zoidberg, ...) and add an API proxy on top of it (Kong, Repose, Tyk, ApiAxle, WSO2 AM, etc.).
Frankly, I don't see why I should put a proxy in front of a proxy. It would make a lot of sense to have a lightweight API manager plus service discovery in one piece of middleware. So +1 for this feature. What is the planning state?

@krishnaarava

krishnaarava commented Jul 24, 2015

+1 for this feature

@ngbinh

ngbinh commented Jul 24, 2015

same here 👍

@neilalbrock

neilalbrock commented Aug 4, 2015

Another +1 for me good sir

@agentzh

agentzh commented Aug 4, 2015

The balancer_by_lua* directives from ngx_lua will get opensourced soon, in the next 3 months or so.
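For reference, the basic pattern with balancer_by_lua_block, as it eventually shipped in ngx_lua and lua-resty-core, looks roughly like this; the peer address here is made up and would normally come from a shared dict or a database of targets:

    upstream backend {
        server 0.0.0.1;   # placeholder, never actually used
        balancer_by_lua_block {
            local balancer = require "ngx.balancer"
            -- pick the peer dynamically on every request/retry
            local ok, err = balancer.set_current_peer("10.0.0.1", 8080)
            if not ok then
                ngx.log(ngx.ERR, "failed to set the current peer: ", err)
            end
        }
    }

    server {
        location / {
            proxy_pass http://backend;
        }
    }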

@thefosk

Member

thefosk commented Aug 6, 2015

@agentzh very good news, looking forward to trying it

@jdrake

jdrake commented Aug 7, 2015

@sonicaghi

Member

sonicaghi commented Aug 7, 2015

+1

@bobrik

bobrik commented Aug 11, 2015

@agentzh any chance to get TCP support in balancer_by_lua as well?

@agentzh

agentzh commented Aug 12, 2015

@bobrik I'm not sure I understand that question. Are you talking about doing cosockets in the Lua code run by balancer_by_lua or you mean using balancer_by_lua in stream {} configuration blocks instead of http {} blocks?

@VectorHo

VectorHo commented Aug 3, 2016

+1 for this feature!!!

@Tieske

Member

Tieske commented Aug 8, 2016

So I've been working on this, and this is how I am currently implementing it:

As Kong supports multiple APIs, we also need multiple sets of hosts it can load balance on. As such I'm defining a virtual host that gets redirected to a set of real hosts. This requires a new data entity in Kong.

Using the terminology upstream (a load-balanced pool of targets) and target (an individual target; host + port combo):

On the Kong management API:

POST /upstreams/
  name=service-xyz-v1

POST /upstreams/service-xyz-v1/targets
  target=internal.host1:80
  weight=3

POST /upstreams/service-xyz-v1/targets
  target=internal.host2:80
  weight=2

To use this, use the upstream name as the hostname in the upstream_url

POST /apis/
  request_path = "/xyz/v1"
  upstream_url=http://service-xyz-v1/

Currently I'm using the resty-dns library for DNS resolution, resolving A, AAAA and SRV records. The balancer will incorporate the DNS results into the algorithm. As a note: the port and weight in the example above will be ignored for SRV records, as the DNS record already provides that information.
The balancer_by_lua directive will be used to set the targets in nginx.

So if, in the above example, internal.host1 resolves to an A record with 2 entries, and internal.host2 resolves to an SRV record with 3 entries, then the resulting balancer pool will be:

name             ip              port  weight
---------------------------------------------
internal.host1   192.168.23.1      80    3  -> port and weight from mgt api
internal.host1   192.168.23.2      80    3
internal.host2   192.168.23.51   8000    5  -> port and weight from the SRV record
internal.host2   192.168.23.51   8001    5
internal.host2   192.168.23.52   8000   10

For regular upstream_url entries, the resty-dns library will also be used to resolve A, AAAA and SRV records, but only the first entry returned will be used, so that load balancing is left to the DNS server.

Results:

  • support for SRV records
  • load balancing done directly by Kong
  • support for load balancing by DNS
  • service registry; the Kong management API can be used to (un)register services
  • using a 'virtual hostname' provides future flexibility for dynamically changing the upstream pool used, by plugins and other features (e.g. regex substitutions, or header-based versioning)

Though there is still lots of work to do, these are the lines along which I'm implementing it.
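For illustration, resolving an SRV record from OpenResty looks roughly like this with the standard lua-resty-dns resolver (the nameserver and record name are made up, and Kong's actual DNS library may differ); the port and weight fields of the answers are what feed the per-target values in the table above:

    -- minimal sketch, assuming the lua-resty-dns library is available
    local resolver = require "resty.dns.resolver"

    local r, err = resolver:new{ nameservers = { "10.0.0.53" } }
    if not r then
        ngx.log(ngx.ERR, "failed to create the resolver: ", err)
        return
    end

    local answers, err = r:query("internal.host2", { qtype = r.TYPE_SRV })
    if not answers then
        ngx.log(ngx.ERR, "failed to query the nameserver: ", err)
        return
    end

    for _, ans in ipairs(answers) do
        -- each SRV answer carries target, port, weight and priority
        ngx.log(ngx.NOTICE, ans.target, ":", ans.port, " weight=", ans.weight)
    end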

@Tieske Tieske self-assigned this Aug 8, 2016

@andy-zhangtao

andy-zhangtao commented Aug 12, 2016

@Tieske Hi Tieske, what is the Kong mgt API? And how do I use POST /upstreams? I didn't find a reference in the Kong documentation (v0.8). I also need this feature, but I don't know how to do it. :-(

@thefosk

Member

thefosk commented Aug 12, 2016

@andy-zhangtao this feature is currently being built. We are aiming to release it in the 0.10 version.

@andy-zhangtao

andy-zhangtao commented Aug 13, 2016

@Tieske Got it! Thanks Tieske

@iam-merlin

iam-merlin commented Aug 16, 2016

@thefosk awesome!

We dream of building microservices that can be auto-registered against kong <3

@thefosk

Member

thefosk commented Aug 16, 2016

We dream of building microservices that can be auto-registered against kong <3

@iam-merlin this is exactly our vision. The next couple of releases will be very exciting when it comes to microservice orchestration.

@iam-merlin

iam-merlin commented Aug 17, 2016

btw, @Tieske, if you want a beta tester or some feedback, don't hesitate to ping me. I have some code ready for that :D (I started to write an auto-register library... before I found out that kong doesn't have this feature yet :'( )

@alzadude

alzadude commented Aug 19, 2016

We dream of building microservices that can be auto-registered against kong <3

@iam-merlin @thefosk how about a Kong adapter for registrator? It could register Docker-based services with Kong if the service has, for example, the environment variable SERVICE_TAGS=kong.

n.b. this wouldn't replace other registrator adapters, e.g. the Consul adapter; it would complement them (i.e. it could be used in combination with them, or the Kong adapter could be used on its own).

I am planning to deploy Kong for a work project once it supports SRV records fully; in the meantime I have an nginx Docker container which dynamically adds upstreams based on SERVICE_TAGS=nginx.

@iam-merlin

iam-merlin commented Aug 19, 2016

@alzadude, I don't know registrator, but from my point of view (and from what I read) registrator needs to be run on each host... With a feature like this issue, you don't need docker (or consul or another service registry), just a Plain Old Request (^^), and maybe we will get health checks later (in another plugin).

From my point of view, Consul is very heavy for microservices... I really like the simplicity of Kong (and no dependencies), but right now... doing a microservice architecture with Kong and without a service registry is a pain.

Anyway, I don't think my point of view is that relevant... I'm just a user of kong, not the best one, and not a lua developer :P xD.

@thefosk

Member

thefosk commented Aug 19, 2016

Just so you know, once @Tieske finishes this implementation, I will provide some sample Docker-Compose templates that show how to use Kong with a microservice orchestration pattern.

I do personally like ContainerPilot more than registrator though.

We will also be able to provide a pattern that doesn't involve having any third-party service discovery in place because, effectively, Kong's new /upstreams will become the service discovery layer.

@Tieske

Member

Tieske commented Aug 26, 2016

A work-in-progress PR is now available (see #1541).

Any input and testing is highly appreciated (see #1541 (comment)), so if anyone else wants to test, like @iam-merlin, please check it out.

@Tieske Tieske referenced this issue Aug 26, 2016

Closed

feat(core) upstreams #1541

8 of 10 tasks complete
@iam-merlin

iam-merlin commented Aug 29, 2016

@Tieske it works xD

I've made some tests and it seems to work as expected (just the DNS part; I haven't tested balancer_by_lua yet).

It seems the dns dependency is missing from your requirements (I'm not a lua dev) and I had to install it manually (I got an error at the first start and installed https://github.com/Mashape/dns.lua).

Do you have any documentation for your code, or will that come later?

@sonicaghi sonicaghi added this to the 0.10 milestone Aug 30, 2016

@Tieske

Member

Tieske commented Sep 1, 2016

@iam-merlin thx for testing 👍. Docs will come later, as things might still change.

@thibaultcha thibaultcha removed size/L labels Sep 1, 2016

@Tieske Tieske referenced this issue Oct 11, 2016

Merged

feat(core) upstreams #1735

6 of 7 tasks complete
@Tieske

Member

Tieske commented Oct 11, 2016

Besides #1541 (internal DNS) there is now #1735, which implements the upstreams feature discussed above (#1735 builds on top of #1541).

Testing is once again highly appreciated!

@tomdavidson

tomdavidson commented Oct 19, 2016

We can't deploy load balancing with round robin. We need least open connections or fastest response time - anything but round robin.

Tieske added a commit that referenced this issue Dec 28, 2016

feat(core) upstreams (#1735)
* adds loadbalancing on specified targets
* adds service registry
* implements #157 
* adds entities: upstreams and targets
* modifies timestamps to millisecond precision (except for the non-related tables when using postgres)
* adds collecting health-data on a per-request basis (unused for now)
@Tieske

Member

Tieske commented Dec 29, 2016

'least open connections' does not make sense in a Kong cluster. 'response time' is being considered, but not prioritized yet.

@Tieske

Member

Tieske commented Dec 29, 2016

closing this as #1735 has been merged into the next branch for the upcoming release.

@Tieske Tieske closed this Dec 29, 2016

thibaultcha added a commit that referenced this issue Jan 12, 2017

feat(core) upstreams (#1735)
* adds loadbalancing on specified targets
* adds service registry
* implements #157
* adds entities: upstreams and targets
* modifies timestamps to millisecond precision (except for the non-related tables when using postgres)
* adds collecting health-data on a per-request basis (unused for now)
@avanathan

avanathan commented Nov 30, 2017

How can we implement keepalive without an upstream block? An upstream block doesn't support dynamic IP resolution, and the keepalive directive cannot be used outside of an upstream block.
