kube-proxy iptables load-balancing probability is not equal #37932

Closed
JackTiger opened this Issue Dec 2, 2016 · 17 comments

JackTiger commented Dec 2, 2016

Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.7", GitCommit:"a2cba278cba1f6881bb0a7704d9cac6fca6ed435", GitTreeState:"clean", BuildDate:"2016-09-12T23:15:30Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.7", GitCommit:"a2cba278cba1f6881bb0a7704d9cac6fca6ed435", GitTreeState:"clean", BuildDate:"2016-09-12T23:08:43Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}

Environment:

Cloud provider or hardware configuration:
Amazon AWS
OS (e.g. from /etc/os-release):
CentOS Linux 7 (Core)
Kernel (e.g. uname -a):
Linux bastion 4.7.5-1.el7.elrepo.x86_64 #1 SMP Sat Sep 24 11:54:29 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
Install tools:
Custom install
Others:
What happened:

I've created a deployment (nginx) with 5 replicas like so:
kubectl run nginx --image=nginx --replicas=5

I've exposed a service like so:
kubectl expose deployment nginx --port=80 --target-port=80

On a worker node, I see this in iptables for this service:
-A KUBE-SVC-H2F4SOSDHAEHZFXQ -m comment --comment "default/nginx:" -m statistic --mode random --probability 0.20000000019 -j KUBE-SEP-W565FQEHFWB3IXKJ
-A KUBE-SVC-H2F4SOSDHAEHZFXQ -m comment --comment "default/nginx:" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-EZJQVRVACWHUOBBQ
-A KUBE-SVC-H2F4SOSDHAEHZFXQ -m comment --comment "default/nginx:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-FPX6KRWBZCCVJJV6
-A KUBE-SVC-H2F4SOSDHAEHZFXQ -m comment --comment "default/nginx:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-PHUVLZZC77CF4GSN
-A KUBE-SVC-H2F4SOSDHAEHZFXQ -m comment --comment "default/nginx:" -j KUBE-SEP-DCNJBZVVUJKLG55E

Those probabilities look really odd.

What you expected to happen:

All probabilities should be 0.2.

How to reproduce it (as minimally and precisely as possible):

See above.

Anything else we need to know:

Looks like it's line 953 in pkg/proxy/iptables/proxier.go.

It seems like there is a problem with the kube-proxy iptables rules.

// Now write loadbalancing & DNAT rules.
		n := len(endpointChains)
		for i, endpointChain := range endpointChains {
			// Balancing rules in the per-service chain.
			args := []string{
				"-A", string(svcChain),
				"-m", "comment", "--comment", svcName.String(),
			}
			if i < (n - 1) {
				// Each rule is a probabilistic match.
				args = append(args,
					"-m", "statistic",
					"--mode", "random",
					"--probability", fmt.Sprintf("%0.5f", 1.0/float64(n-i)))
			}

In the code above, I find that the probabilities are not equal, and they sum to more than 1. For example:

I have 5 pods behind one service, and when I use the iptables-save command to look at the iptables chains, the result is as below:

-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.20000000019 -j KUBE-SEP-E4QKA7SLJRFZZ2DD 
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-LZ7EGMG4DRXMY26H 
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-RKIFTWKKG3OHTTMI 
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-CGDKBCNM24SZWCMS
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -j KUBE-SEP-RI4SRNQQXWSTGE2Y

The per-rule probabilities are 1/5, 1/4, 1/3, 1/2, so when a request comes in, the rule for the KUBE-SEP-CGDKBCNM24SZWCMS pod has the largest per-rule probability. In my test I send requests to the server at the same time, and quite often that one pod handles 3 of them, so I cannot get every pod to handle exactly one task at a time.

MrHohn commented Dec 5, 2016

This is intended because these iptables rules will be examined from top to bottom.

Take your case as an example:

-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.20000000019 -j KUBE-SEP-E4QKA7SLJRFZZ2DD 
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-LZ7EGMG4DRXMY26H 
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-RKIFTWKKG3OHTTMI 
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-CGDKBCNM24SZWCMS
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -j KUBE-SEP-RI4SRNQQXWSTGE2Y

When a packet reaches the first rule, there are still 5 possible endpoints to choose from, so the probability should be set to 1/5 to achieve an equal split.
If the first endpoint is not chosen, there are 4 possible endpoints left to choose from, so the probability should now be 1/4, and so on.
In the end, the probability of reaching each of these endpoints is:

1st endpoint: 1/5
2nd endpoint: 4/5 * 1/4 = 1/5
3rd endpoint: 4/5 * 3/4 * 1/3 = 1/5
4th endpoint: 4/5 * 3/4 * 2/3 * 1/2 = 1/5
5th endpoint: 4/5 * 3/4 * 2/3 * 1/2 * 1 = 1/5
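
To see that these per-rule probabilities really do produce an even split, here is a small Go sketch (an illustration only, not part of this thread or of kube-proxy itself; the probabilities are the ones from the rules quoted above) that walks the chain top to bottom the way iptables does and tallies where each simulated request lands:

package main

import (
	"fmt"
	"math/rand"
)

func main() {
	// Per-rule probabilities as kube-proxy writes them for 5 endpoints:
	// 1/5, 1/4, 1/3, 1/2, plus an unconditional final rule.
	probs := []float64{1.0 / 5, 1.0 / 4, 1.0 / 3, 1.0 / 2, 1.0}

	counts := make([]int, len(probs))
	const trials = 1000000

	for t := 0; t < trials; t++ {
		// Walk the rules from top to bottom, as iptables does.
		for i, p := range probs {
			if rand.Float64() < p {
				counts[i]++
				break
			}
		}
	}

	for i, c := range counts {
		fmt.Printf("endpoint %d: %.3f\n", i+1, float64(c)/trials)
	}
	// Each endpoint ends up with roughly 0.200 of the requests.
}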

JackTiger commented Dec 6, 2016

Hi @MrHohn, thank you for your detailed answer. As you said, with 5 pods the probability for each pod should be 1/5. But in my actual test I run the same server in every pod to receive RESTful HTTP requests, and when I send requests from the client at the same time, I find that not all of the pods handle a request; from the logs, some requests are handled by the same pod. So I do not understand why, if the probabilities are equal, not every pod receives a request.

Thank you very much again!

MrHohn commented Dec 6, 2016

Hi @JackTiger, could you provide more details about:

  • How many clients did you run, and where were they running? (If unfairness really happened, I would suspect it is a result of connection tracking.)
  • How many requests did the clients send? (Although it may be obvious, unfairness can happen if you are sampling a relatively small number of requests.)
  • What kind of load distribution did you get? Does it follow a specific pattern, or is it just random?

JackTiger commented Dec 6, 2016

Hi, @MrHohn,

I use the curl command to send HTTP requests from the local host, as below:

curl 192.168.3.163:9090 & curl 192.168.3.163:9090 & curl 192.168.3.163:9090 & curl 192.168.3.163:9090 &

and my service description is like this:

root@SZV1000050172:/opt/bin/k8s/healthy# kubectl describe svc showreadiness
Name:			showreadiness
Namespace:		default
Labels:			app=showreadiness
Selector:		app=showreadiness
Type:			NodePort
IP:			192.168.3.163
Port:			showreadiness	9090/TCP
NodePort:		showreadiness	32090/TCP
Endpoints:		172.16.37.4:9090,172.16.37.5:9090,172.16.54.3:9090 + 2 more...
Session Affinity:	None
No events.

The service resource file I created is shown below:

root@SZV1000050172:/opt/bin/k8s/healthy# cat showreadiness-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: showreadiness
  labels:
    app: showreadiness
spec:
  type: NodePort
  sessionAffinity: None
  ports:
  - name: showreadiness
    port: 9090 # Service port (inside the cluster)
    targetPort: 9090 # Container port on the pod
    nodePort: 32090
    protocol: TCP 
  selector:
    app: showreadiness

I have tried this many times and the result is the same; from the logs:

2016-12-06T03:03:44.400828413Z 2016/12/06 03:03:44 I am serving traffic!!!
2016-12-06T03:03:44.402812113Z 2016/12/06 03:03:44 I am serving traffic!!!

One of the endpoints handles two requests at the same time. I am using the default (iptables) load-balancing rules.

MrHohn commented Dec 6, 2016

Correct me if I misunderstood the situation: the curl command sent out 4 requests to the service IP at roughly the same time, and two of them were served by one of the endpoints.

I would say that is normal, because we are not using round-robin load balancing here. The probability of 2 out of 4 requests being served by the same endpoint is not that low.

I suggest putting the curl command in a loop that repeats more than 1000 times and checking how many requests each endpoint serves. I believe the result will be much fairer.
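
For instance, a small Go client along these lines (a sketch only; the URL reuses the service IP and port from the earlier comment, and it assumes each backend returns something that identifies it, such as its hostname) shows the spread over a larger sample. It disables HTTP keep-alives because the iptables DNAT choice is made once per connection, so reusing a connection would keep hitting the same endpoint:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumed service address, taken from the comments above; adjust as needed.
	const url = "http://192.168.3.163:9090/"

	// Force a fresh connection (and thus a fresh endpoint choice) per request.
	client := &http.Client{Transport: &http.Transport{DisableKeepAlives: true}}

	counts := map[string]int{}
	for i := 0; i < 1000; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("request failed:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		counts[string(body)]++ // tally by response body, e.g. the serving pod's hostname
	}

	for who, n := range counts {
		fmt.Printf("%6d %s\n", n, who)
	}
}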

JackTiger commented Dec 6, 2016

@MrHohn OK, thanks. I will try your suggestion. By the way, if I want to use round-robin load balancing for my service, what should I do? Do you have any suggestions? I want every pod to handle a single task, and each task may take a long time to execute, so when several requests arrive at the same time I want them handled by more pods, not concentrated on only a few of them. Ideally the load balancing would use a round-robin method rather than a random one.

MrHohn commented Dec 6, 2016

I think the userspace proxy mode for kube-proxy does round robin when choosing backends. But since the iptables proxy mode should be faster and more reliable than the userspace proxy, I don't recommend switching back only because you want round robin.

I would suggest keeping this logic in the application layer. The server should know what its capacity is and not take more requests than it can handle.

MrHohn commented Dec 6, 2016

And you may be able to utilize the readiness probe feature. If you don't want any more requests forwarded to backends that are already serving enough traffic, mark them as not ready and they will be removed from the service endpoints; mark them as ready again when they are done and the endpoints will show up again.

This may not be a general solution, just an idea.
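
To make both suggestions concrete, here is a minimal Go sketch (purely an illustration, not something from this thread or from Kubernetes itself; the port, paths, and maxInFlight value are assumptions): the handler limits its own concurrency as suggested in the previous comment, and a /readyz endpoint returns 503 while the pod is at capacity, so a readinessProbe pointed at it (an httpGet probe against /readyz on the container port) would temporarily remove the pod from the Service endpoints:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// maxInFlight is how many long-running tasks one pod is willing to handle
// at a time (an assumed value, purely for illustration).
const maxInFlight = 1

// slots acts as a counting semaphore for in-flight work.
var slots = make(chan struct{}, maxInFlight)

// work simulates the long-running task and sheds load when at capacity,
// i.e. "don't take more requests than you can handle".
func work(w http.ResponseWriter, r *http.Request) {
	select {
	case slots <- struct{}{}:
		defer func() { <-slots }()
		time.Sleep(2 * time.Second) // stand-in for the real task
		fmt.Fprintln(w, "done")
	default:
		http.Error(w, "busy", http.StatusServiceUnavailable)
	}
}

// readyz reports readiness: while the pod is at capacity it returns 503,
// so a readinessProbe pointed at /readyz would take the pod out of the
// Service endpoints until a slot frees up.
func readyz(w http.ResponseWriter, r *http.Request) {
	if len(slots) >= maxInFlight {
		http.Error(w, "busy", http.StatusServiceUnavailable)
		return
	}
	fmt.Fprintln(w, "ok")
}

func main() {
	http.HandleFunc("/", work)
	http.HandleFunc("/readyz", readyz)
	http.ListenAndServe(":9090", nil)
}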

JackTiger commented Dec 7, 2016

OK, thanks. I am using the readiness probe feature in k8s now; it may be the only solution that meets my requirements. Alternatively, I could replace kube-proxy; I have seen people use haproxy in place of kube-proxy, since kube-proxy's load-balancing ability is limited. I will try that in the future.

Thanks very much again!

jsravn commented Dec 9, 2016

@JackTiger can you link the haproxy solution? kube-proxy, unfortunately, isn't really fit for my purposes (heavy use of persistent connections).

JackTiger commented Dec 9, 2016

@jsravn I am sorry, I have not found the link for the haproxy solution, but I will try that approach to verify its load-balancing ability in the future. It will take me a lot of time, because I am not very familiar with haproxy.

bmarks-mylo commented Dec 30, 2016

I am seeing something similar. I have 5 hostname pods (gcr.io/google_containers/serve_hostname:1.3) connected to a NodePort service with sessionAffinity: None. When I run:
for i in `seq 1 100`; do curl -s sandbox-hostnames-redacted.us-west-2.elb.amazonaws.com; echo; done | sort | uniq -c
I always get a single unique host. If I jump onto a browser and furiously refresh the URL, I can get at most two hosts. Running this command from several consoles doesn't produce more than one host either.

This is very troublesome on several fronts because 1. I expect my traffic to be balanced much more evenly than this, and 2. the trivial example I'm using has been suggested as a troubleshooting mechanism when testing session affinity problems (which is why I'm doing it) and could very easily indicate a false positive. See this comment by @thockin and others.

My configuration:

apiVersion: v1
kind: Service
metadata:
  name: hostnames
spec:
  type: NodePort
  ports:
    - port: 9376
      protocol: TCP
      nodePort: 31111
  selector:
    app: hostnames
  sessionAffinity: None
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hostnames
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: hostnames
    spec:
      containers:
        - name: hostnames
          image: gcr.io/google_containers/serve_hostname:1.3
          ports:
            - containerPort: 9376

and my pods:

$ kubectl get pods -o wide
NAME                                  READY     STATUS    RESTARTS   AGE       IP          NODE
...
hostnames-884590183-b8q0a             1/1       Running   0          28m       10.2.96.9   ip-10-0-0-201.us-west-2.compute.internal
hostnames-884590183-dq3j4             1/1       Running   0          14m       10.2.96.6   ip-10-0-0-201.us-west-2.compute.internal
hostnames-884590183-g8kq9             1/1       Running   0          28m       10.2.3.3    ip-10-0-0-200.us-west-2.compute.internal
hostnames-884590183-o6jv6             1/1       Running   0          14m       10.2.96.5   ip-10-0-0-201.us-west-2.compute.internal
hostnames-884590183-v4vhb             1/1       Running   0          14m       10.2.96.8   ip-10-0-0-201.us-west-2.compute.internal
...

thockin commented Dec 31, 2016

bmarks-mylo commented Jan 3, 2017

Well, this is interesting. Sure enough, from a worker node to the service IP or NodePort, it is distributing requests very evenly.

From outside into ELB:
$ curl --proxy redacted:80 -s sandbox-hostnames-redacted.us-west-2.elb.amazonaws.com
      hostnames-884590183-dq3j4

To service IP:
core@ip-10-0-0-201 ~ $ for i in `seq 1 100`; do curl -s 10.3.0.191:9376; echo; done | sort | uniq -c
     16 hostnames-884590183-b8q0a
     21 hostnames-884590183-dq3j4
     25 hostnames-884590183-g8kq9
     15 hostnames-884590183-o6jv6
     23 hostnames-884590183-v4vhb

To NodePort:
core@ip-10-0-0-201 ~ $ for i in `seq 1 100`; do curl -s 127.0.0.1:31111; echo; done | sort | uniq -c
     14 hostnames-884590183-b8q0a
     22 hostnames-884590183-dq3j4
     23 hostnames-884590183-g8kq9
     20 hostnames-884590183-o6jv6
     21 hostnames-884590183-v4vhb

Do you have any suggestions on troubleshooting why I'm seeing no distribution through the ELB?

thockin commented Jan 3, 2017

bmarks-mylo commented Jan 3, 2017

It looks like our corporate proxy strikes yet again. I tethered to my phone and sure enough it distributes among the 5 hosts just fine. Sorry to give you any grief over this.

thockin commented Jan 3, 2017

Not the first, won't be the last. :)

thockin closed this Jan 3, 2017
