Hi all,
We just set up a Kong cluster with two Kong nodes in our production environment. We didn't put a load balancer in front of Kong; instead, an Nginx service running on one of the Kong nodes proxies requests to both Kong nodes.
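For reference, a minimal fronting-Nginx configuration for this kind of setup looks roughly like the sketch below. The addresses, the proxy port 8000, and the tuning values are illustrative placeholders, not our real config:

```nginx
# Fronting Nginx on one Kong node, spreading traffic across both Kong nodes.
# All addresses, ports, and tuning values are placeholders.
upstream kong_nodes {
    server 10.0.0.1:8000 max_fails=3 fail_timeout=10s;
    server 10.0.0.2:8000 max_fails=3 fail_timeout=10s;
    keepalive 32;            # reuse connections to the Kong nodes
}

server {
    listen 80;
    server_name www.abc.com;

    location / {
        proxy_pass http://kong_nodes;
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # required for upstream keepalive
        proxy_set_header Host $host;
    }
}
```

Note that with `max_fails`/`fail_timeout`, the fronting Nginx itself will temporarily take a Kong node out of rotation after repeated failures, which matters when interpreting "no live upstreams" style errors.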
While load testing with the Apache ab tool, we found a lot of failed requests.
[work@DWD-BETA ~]$ab -n 1000 -c 100 "http://www.abc.com/api/"
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking www.abc.com (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software:
Server Hostname: www.abc.com
Server Port: 80
Document Path: /api
Document Length: 206 bytes
Concurrency Level: 100
Time taken for tests: 7.894 seconds
Complete requests: 1000
Failed requests: 599
(Connect: 0, Receive: 0, Length: 599, Exceptions: 0)
Write errors: 0
Non-2xx responses: 401
Total transferred: 599127 bytes
HTML transferred: 365933 bytes
Requests per second: 126.67 [#/sec] (mean)
Time per request: 789.428 [ms] (mean)
Time per request: 7.894 [ms] (mean, across all concurrent requests)
Transfer rate: 74.12 [Kbytes/sec] received
After checking the Kong proxy access logs, I found a lot of 503 errors. (Side note: ab counts a response as a "Length" failure whenever its body length differs from the first response received, so the 599 Length failures above are consistent with error pages being interleaved with the normal 206-byte responses.) Meanwhile, error.log recorded nothing during the benchmark; here are some error log entries that did occur under our real production traffic:
2018/12/24 03:50:52 [error] 27097#0: *14469262 upstream timed out (110: Connection timed out) while reading response header from upstream
2018/12/24 03:50:52 [warn] 27097#0: *14471407 upstream server temporarily disabled while reading response header from upstream
2018/12/24 03:50:52 [error] 27097#0: *14471407 no live upstreams while connecting to upstream
2018/12/24 07:41:56 [warn] 27097#0: *14611324 a client request body is buffered to a temporary file /data/kong/client_body_temp/0000000427
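The client_body_temp warning above just means a request body exceeded Nginx's in-memory buffer and was spilled to disk; it is usually harmless but adds I/O under load. Assuming Kong 0.14+ (which supports Nginx directive injection from kong.conf), it can be tuned without editing the generated nginx.conf; the value here is only an example:

```
nginx_proxy_client_body_buffer_size = 256k
```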
I did some optimizations to Kong's Nginx; the nginx.conf is shown below, but it didn't help.
Then I tried using a load balancer to proxy requests to the Kong nodes, but the problem still happened.
The problem disappeared when the load balancer proxied requests directly to the real backend servers.
Below is the kong.conf configuration:
[work@kong-node2 kong]$sed -n '/^#/!p' kong.conf | sed -rn '/^[[:space:]]+#/!p' | sed '/^$/d'
prefix = /data/kong/ # Working directory. Equivalent to Nginx's
proxy_access_log = logs/access.log # Path for proxy port request access
proxy_error_log = logs/error.log # Path for proxy port request error
admin_listen = xx.xx.xx.xx:8001, 127.0.0.1:8444 ssl
database = cassandra
cassandra_contact_points = xx.xx.xx.xx # A comma-separated list of contact
cassandra_port = 9042 # The port on which your nodes are listening
cassandra_keyspace = kong # The keyspace to use in your cluster.
cassandra_username = username # Username when using the
cassandra_password = password # Password when using the
cassandra_consistency = QUORUM # Consistency setting to use when reading/
db_update_frequency = 5 # Frequency (in seconds) at which to check for
db_update_propagation = 2 # Time (in seconds) taken for an entity in the
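To quantify the failures, counting status codes in the proxy access log is quicker than eyeballing it. A small sketch, assuming the default combined-style access-log format (status code in the 9th whitespace-separated field) and the log path implied by the prefix/proxy_access_log values above:

```shell
# Tally responses per HTTP status code in Kong's proxy access log.
# Assumes the default (combined-style) log format, where the status
# code is the 9th whitespace-separated field.
awk '{count[$9]++} END {for (s in count) print s, count[s]}' /data/kong/logs/access.log | sort
```

If the format was customized, the field index will differ; check one log line first.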
upstream timed out (110: Connection timed out) while reading response header from upstream
indicates that the timeout is coming from the upstream service, not Kong. You should check that the upstream service can handle the amount of traffic you're throwing at it. Depending on client configuration (particularly with the ab tool, which is very naive), it is easy to observe different performance envelopes when putting Kong in front of a poorly-scaling service, since Kong itself can handle a large number of simultaneous requests.
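One concrete knob on the Kong side: if the upstream legitimately needs longer than Kong's default 60-second timeouts, they can be raised per Service. A hypothetical Service object as a sketch (the timeout field names are standard Kong Service attributes; the name, host, and values are illustrative):

```json
{
  "name": "example-service",
  "host": "backend.internal",
  "port": 8080,
  "protocol": "http",
  "connect_timeout": 60000,
  "write_timeout": 60000,
  "read_timeout": 120000
}
```

That said, raising timeouts only hides the symptom if the backend simply cannot keep up with 100 concurrent connections.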
I'm also curious to understand why you put fastcgi config values into the Kong config.
Below is the Services configuration of the Kong API.
Below is the Routes configuration of the Kong API:
The Kong version is the latest (v1.14) and the Cassandra database is 3.11.3.
I have been troubleshooting this problem for a whole day, and there is no helpful information on Google.
Could anyone help me, please?
Thank you very much in advance, and forgive my bad English!