public network identities for pets #28660
Comments
bprashanth added sig/network, team/cluster, area/stateful-apps labels on Jul 8, 2016
smarterclayton
Jul 8, 2016
Contributor
I do have some guys working on bare metal l4 ingress, but it's not going to be available anytime soon in a way that would be useful for this.
Service per pet seems like we're not solving the right problem. Should loadbalancer service work differently with a headless service?
You'd need DNS outside too
thockin
Jul 8, 2016
Member
On Thu, Jul 7, 2016 at 7:09 PM, Prashanth B notifications@github.com wrote:
This has been asked for more than once by petset dogfooders in the context of running dbaas or WAN petset deployment. There are at least 2 ways to achieve it:
- Headless service (already a requirement for petset) of type={loadbalancer, nodeport}, resulting in each endpoint getting a public identity
I remembered one of the issues here. NodePorts are allocated as part of the apiserver, and the current API has room for 1 nodePort per service-port. Making a headless Service of type NodePort means we need to a) dynamically allocate NodePorts and b) report more than one NP per SP. Puke.
- PetSet controller creates a service per pet
Works with existing infrastructure... that's a biggggg win to me.
Public DNS per pet would probably work just as well, but the advantage with IPs is one can write a type=loadbalancer nginx backend.
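The "service per pet" option can be pictured as a manifest like the following. This is a hypothetical sketch, not something PetSet generates today: it assumes the controller (or the user) stamps a unique per-pet label (here `pet: web-0`) on each pod, which the 1.3 PetSet does not do out of the box.

```yaml
# One of these Services would exist per pet (web-0, web-1, ...).
# The per-pet selector label is an assumption; PetSet does not add one.
apiVersion: v1
kind: Service
metadata:
  name: web-0
spec:
  type: LoadBalancer        # or NodePort
  selector:
    app: web
    pet: web-0              # hypothetical per-pet label
  ports:
  - port: 6379
    targetPort: 6379
```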
bprashanth
Jul 8, 2016
Member
Service per pet seems like we're not solving the right problem. Should loadbalancer service work differently with a headless service?
I meant if one were to flip the "governingService" that manages the network identity of the petset to type=lb (it's currently just a headless service) each endpoint will get a public ip.
I can't put my finger on why I don't like the Service per pet, it feels like allocating an array to store a loop counter. I will try to think of some valid counter arguments :)
thockin
Jul 8, 2016
Member
Service per pet is a clunky substitute for migratable IPs. I'm the first to admit it. But migratable IP is v.hard and we have Services...
On Thu, Jul 7, 2016 at 10:01 PM, Prashanth B notifications@github.com wrote:
Service per pet seems like we're not solving the right problem. Should loadbalancer service work differently with a headless service?
I meant if one were to flip the "governingService" that manages the network identity of the petset to type=lb (it's currently just a headless service) each endpoint will get a public ip.
I can't put my finger on why I don't like the Service per pet, it feels like allocating an array to store a loop counter. I will try to think of some valid counter arguments :)
webwurst referenced this issue on Jul 25, 2016: Create example for a distributed database with PetSet #1 (Open)
jeremyong
Aug 22, 2016
I'm curious what the status on this discussion is. PetSets are potentially a big win, but one of the main reasons to use them is that they might allow an elegant way to provide sticky public endpoints. As things stand currently, I'm doing this awful replication controller per node + service per replication controller. I could get binding preallocated public IPs to each service to work without making them LoadBalancer types (to direct traffic to a single node!!).
The service per pet abstraction might seem heavy, but frankly, it could be a win if we want more sophisticated associations with multiple services (imagine a service hierarchy, for example).
Incidentally, is there anywhere that documents the actual specification of the PetSet? I was only able to find the user guide after 30 minutes of searching.
bprashanth
Aug 22, 2016
Member
Incidentally, is there anywhere that documents the actual specification of the PetSet? I was only able to find the user guide after 30 minutes of searching.
User guide (http://kubernetes.io/docs/search/?q=petset) and comments around types.go (https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/apps/v1alpha1/types.go) that translate to swagger on the public website.
We're currently focusing on adding features that would progress production readiness of existing database petset examples. Public identities isn't one of these blockers. It is probably necessary for many other use cases (dbaas, single cluster going cross AZ), but I'm not focusing on those right now.
The workaround is to deploy a haproxy/nginx pod that uses session stickiness to address individual pets. The petset already has a governing service, so all you need to do is point something like service-loadbalancer at this service and turn on sticky sessions, like this example: https://gist.github.com/bprashanth/507f61f9cefa465c3d6d (assuming l7), or do something like SNI at l4.
Probably the fastest way public identities will become available is if we find a need for sticky ips, which results in a service per pet, which opens up the possibility of just converting that service per pet to a loadbalancer per pet.
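For reference, the governing service a petset already requires is just a headless Service. A minimal sketch (the names `web`/`app: web` are assumed, not from the thread):

```yaml
# Headless governing Service. It produces per-pod DNS records
# (web-0.web.default.svc.cluster.local, ...) that an in-cluster
# haproxy/nginx can target for sticky routing to individual pets.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  clusterIP: None    # headless: no virtual IP, per-pet DNS instead
  selector:
    app: web
  ports:
  - port: 6379
```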
jeremyong
Aug 22, 2016
Thanks for the reply. The types.go file was really what I was looking for.
Here are some motivating examples I could think of for public identity use cases:
- Game servers (clients need persistent sockets open or udp ports to access)
- Proxy servers (for files, media, etc)
- Audio/video/text chat servers (selective forwarding units, media control units, etc)
I thought about the solution you mentioned, but it only works for certain types of traffic. If you want a public stateful endpoint that serves other protocols (SCTP, DTLS, RTP, SRTP, STUN, etc.) you'll need a much smarter load balancer, or to write your own. Generally, the point of a "pet" in the examples I gave above is maintaining persistent connections, which most load balancers support only in a very limited sense (HTTP keep-alive).
Thanks
cammm
Aug 31, 2016
I have been struggling with how to arrange game servers in the Kubernetes way of doing things. Game servers are generally spun up and down as capacity requirements change over the course of a day and more gradually as concurrent users grow or shrink over time. Clients are matched into a server and passed connection details directly to the server or a specially designed proxy and will remain on that server for the life of the game session. Depending on the game that can be 10 minutes to several hours.
My hope was PetSets would be the perfect answer to that arrangement, as they can be uniquely identified but also scaled easily. However, without any way to route traffic from an external client to a pet directly, we are forced to write a custom load balancer, which is complicated and a potential point of failure.
Ideally we would want a pet to be assigned an external IP without having to create a service for each, or to use a static IP with a variable port similar to a NodePort (i.e. an external NodePort per pet) so we don't overuse IP addresses for pet instances. My current project would have thousands of external pets, so an IP for each starts getting expensive and unnecessary.
Thanks and hope this explanation is useful. Great work on the 1.3 release.
bprashanth
Aug 31, 2016
Member
If this is http you should just use an RC + ingress with sticky sessions
(https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/configuration.md#allowed-parameters-in-configuration-config-map, another example is the gist mentioned here #28660 (comment))
jeremyong
Aug 31, 2016
@cammm Just FYI, I have very similar problems to you and also hoped PetSets would be the answer. I ended up using hostNetwork: true in order to leverage the host's external IP directly. Doing this however basically interfered with all of Kubernetes' built in service resolution.
So in the end, I did the painful thing of ditching services altogether! (ugh!) Kubernetes REALLY wants you to not talk to individual pods from external clients/devices (and usually as HTTP requests), and so I have this thing which is basically a container starter, monitor, and such. This isn't so bad in and of itself, but it is a far cry from what I had hoped modern orchestration would afford me.
@bprashanth In my case, I don't want a load balancer that terminates http. I exchange UDP, SRTP, DTLS, custom protocols etc.
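The hostNetwork workaround described above looks roughly like this (image name and port are placeholders):

```yaml
# The pod shares the node's network namespace, so clients can reach it
# at the node's external IP directly; Service VIPs are bypassed (which
# is also why built-in service resolution breaks, as noted above).
apiVersion: v1
kind: Pod
metadata:
  name: game-server-0
spec:
  hostNetwork: true
  containers:
  - name: server
    image: example/game-server    # placeholder image
    ports:
    - containerPort: 7777
      protocol: UDP
```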
bprashanth
Aug 31, 2016
Member
Addressing a specific backend is protocol specific though; for example, the nginx ingress controller supports udp, but not udp stickiness (because how do you express this?). You can already use hostPort in your RC and leverage the public identity of your node. Is the request here to allocate a public DNS name per pet in cloud dns?
smarterclayton
Sep 1, 2016
Contributor
The UDP / custom protocol visible externally certainly needs a solution - we have no good solution today even without pet sets.
On Wed, Aug 31, 2016 at 7:02 PM, Prashanth B notifications@github.com wrote:
Addressing a specific backend is protocol specific though; for example, the nginx ingress controller supports udp, but not udp stickiness (because how do you express this?). You can already use hostPort in your RC and leverage the public identity of your node. Is the request here to allocate a public DNS name per pet in cloud dns?
Can we get this in 1.5 @thockin?
smarterclayton
Oct 10, 2016
Contributor
It's not currently on the minimum requirements list. If someone wants to take ownership of the issue and write up a proposal, we could at least commit to getting the proposal reviewed in 1.5, but there is no bandwidth for an implementation.
On Oct 9, 2016, at 3:28 PM, Chris Love notifications@github.com wrote:
Can we get this in 1.5?
chrislovecnm
Oct 10, 2016
Member
@smarterclayton this is not good ;( I would be happy to work on it, but I have zero clue where to even begin. @bprashanth should we talk about this at sig-apps? What is the best way to get more traction?
thockin
Oct 10, 2016
Member
I'm not sure anyone knows what "this" is, so 1.5 seems exceedingly unlikely. The requirements just are not clear yet. at least, not to me...
chrislovecnm
Oct 10, 2016
Member
I may be commenting on the wrong issue, Arg. Let me ping @bprashanth
thockin
Oct 10, 2016
Member
I don't have a great answer right now. Some environments, like GCE, just do not play nicely with that. Does a service per-pet not work, or is it just fugly and offensive to you?
On Sun, Oct 9, 2016 at 10:02 PM, Chris Love notifications@github.com wrote:
I may be commenting on the wrong issue, Arg. Let me ping @bprashanth
donovanhide
Oct 30, 2016
Hi, is this still unsolved? I have been working with peer to peer blockchain apps (ethereum and hyperledger fabric) for private evaluation purposes, where peer discovery greatly benefits from the predictable host name allocation in petsets. However, I need to expose some ports on an external ip. Is HAProxy still the recommendation?
@donovanhide no, this is not resolved....
krmayankk
Dec 1, 2016
Contributor
Does public ip here also mean a stable ip for a pet, i.e. if a pet dies it comes back up with the same ip it was already assigned? Is that being tracked in a different issue? We internally have a use case for this. @bprashanth @smarterclayton @thockin
slaskawi
Dec 9, 2016
Contributor
Hey guys!
I would like to ask what is the main motivation for having Public IPs for StatefulSets?
The main use case that I can see is that the client (which is outside of the Kubernetes cluster) makes a request (e.g. using TCP?) and needs to decide which replica receives it. Data Grid apps might be used as an example here. A client app "knows" which replica stored data for a given key.
If my understanding is correct, maybe we need a mechanism, which will allow client apps to decide where to forward data. With TLS/SNI enabled we could use "host_name" and for non-encrypted TCP connections, maybe we could use something like "HOST_ID" but the other way around? A client app could set it to inform the Load Balancer where to forward the request.
What do you think about this?
jeremyong
Dec 9, 2016
There are many server applications which should not sit behind a load balancer and which clients should connect to directly. The two most common examples I can think of are a game server or a media SFU/MCU. In such cases, clients may not be communicating via TCP at all but via UDP, RTP, or some other custom protocol which interacts badly with standard load balancing implementations. Without a true stable network identity, non-session-based protocols won't work with the scheme you proposed, I believe.
cammm
Dec 9, 2016
^exactly this
It needs to work with existing clients without modifications and without a load balancer involved, so just using IP + PORT with support for UDP and TCP. These media / game server examples mentioned in the thread all have existing implementations and expect to connect users to the same session on the same pet.
smarterclayton
Dec 9, 2016
Contributor
The naive solution here is stable external sub-service IPs, which requires a LB with range IP ingress or a VIP allocator / manager. It would be possible to build that integration today (VIP allocator) on a number of clouds - would someone who has a near term reqt for public IP be interested in driving that?
As a straw man: watch endpoints for a service, ensure a VIP is allocated for each unique slot, and update vips as the slots change to either point to the pod IP (assuming the VIP can reach the pod network) or set network forwarding rules so that packets destined for the VIP end up on the node IP.
On Dec 9, 2016, at 11:39 AM, Cameron Royal <notifications@github.com> wrote:
^exactly this
It needs to work with existing clients without modifications and without a load balancer involved, so just using IP + PORT with support for UDP and TCP. These media / game server examples mentioned in the thread all have existing implementations and expect to connect users to the same session on the same pet.
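The straw man above (watch endpoints, allocate a VIP per slot, update as slots change) can be sketched as pure reconcile logic. This is an illustrative sketch, not a real controller: `reconcile_vips` and its inputs are invented names, and the actual endpoint-watch and cloud-API plumbing is omitted.

```python
# Hypothetical VIP allocator core: map stable pet "slots" to VIPs from
# a preallocated pool, keeping existing assignments stable across
# reconciles so a surviving pet never changes its public IP.

def reconcile_vips(slots, vip_pool, current=None):
    """slots: ordered pet identities, e.g. ["web-0", "web-1"].
    vip_pool: preallocated VIPs available for assignment.
    current: slot -> VIP map from the previous reconcile, if any."""
    current = dict(current or {})
    # Keep assignments for slots that still exist; departed slots free their VIPs.
    assigned = {s: v for s, v in current.items() if s in slots}
    free = [v for v in vip_pool if v not in assigned.values()]
    # Allocate a VIP for each new slot, in order.
    for slot in slots:
        if slot not in assigned:
            if not free:
                raise RuntimeError("VIP pool exhausted")
            assigned[slot] = free.pop(0)
    return assigned
```

A real controller would run this on every endpoints change and then program the cloud forwarding rules (VIP -> pod IP or node IP) to match the returned map.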
slaskawi
Dec 13, 2016
Contributor
@smarterclayton I would be more than happy to experiment with it. Let me contact you offline and ask for some guidance.
CJKay
Mar 7, 2017
@slaskawi Have you had any luck? The game server/VOIP service situation is exactly the situation I've been trying to solve for several months now on GKE.
slaskawi
Mar 13, 2017
Contributor
I have created a design proposal for this functionality: kubernetes/community#446
All comments are more than welcome!
smifun
May 22, 2017
Yet another example of why this would be useful: cases where a client talks to just one of the servers in order to discover the remaining servers. This is the case for kafka/zookeeper.
kincl
Aug 3, 2017
The issue here for us was getting around the Service load balancer and being able to address a pod directly from external to the cluster.
We were able to achieve this by using the preserve source IP feature[0] that has been in 1.5+ combined with NodePort which only routes traffic to a pod located on the same node. This along with the downward API to give the pod the hostname of the node allows us to run Kafka with the brokers exposed outside the cluster.
One catch here is you have to know which node has the pod(s) running on it. Kafka takes care of this by directing clients to the brokers holding those topics. Another is that your StatefulSet is limited to the total number of schedulable nodes.
Hope this helps!
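The setup described above can be sketched like this (one Service per broker is assumed; names and ports are placeholders, and the annotation shown is the 1.5-era beta form of what later became `externalTrafficPolicy: Local`):

```yaml
# NodePort Service that preserves source IP and only routes traffic
# to pods on the node that received it.
apiVersion: v1
kind: Service
metadata:
  name: kafka-0                  # placeholder: one Service per broker
  annotations:
    service.beta.kubernetes.io/external-traffic: OnlyLocal
spec:
  type: NodePort
  selector:
    app: kafka
  ports:
  - port: 9092
    nodePort: 30092              # placeholder port
---
# Downward API fragment (from the broker pod's container spec): expose
# the node name so the broker can advertise <node>:<nodePort> to clients.
env:
- name: NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
```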
jpiper referenced this issue on Sep 28, 2017: Enable generating random ports with host port mapping #49792 (Closed)
pushed a commit to ii/kubernetes that referenced this issue on Nov 22, 2017
Fixed by #55329
bprashanth commented Jul 8, 2016
This has been asked for more than once by petset dogfooders in the context of running dbaas or WAN petset deployment. There are at least 2 ways to achieve it:
- Headless service (already a requirement for petset) of type={loadbalancer, nodeport}, resulting in each endpoint getting a public identity
- PetSet controller creates a service per pet
Public DNS per pet would probably work just as well, but the advantage with IPs is one can write a type=loadbalancer nginx backend.
@smarterclayton @thockin