
Too many TIME_WAIT connections between nginx and Tomcat when the ajp module is in use. #5

Closed
HelloJamesLee opened this issue Nov 4, 2011 · 5 comments

Comments

@HelloJamesLee

Hi,

Comparing Nginx+ajp_module+Tomcat with Apache+mod_jk+Tomcat, I found two problems. What is the reason? Can you help?

(1) There are too many TIME_WAIT connections between nginx and Tomcat.
The TIME_WAIT/total connection ratio is 13890/16030.
With Apache+mod_jk+Tomcat, the TIME_WAIT/total ratio is 92/952.
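
For reference, counts like these can be reproduced by tallying socket states. Below is a small helper script (an illustrative sketch, not part of the original report; it assumes Linux, where /proc/net/tcp encodes TIME_WAIT as hex state 06):

```python
# Count TCP connections per state by parsing /proc/net/tcp (Linux only).
# State codes are hex strings: 01 = ESTABLISHED, 06 = TIME_WAIT, etc.

def count_tcp_states(path="/proc/net/tcp"):
    counts = {}
    with open(path) as f:
        next(f)  # skip the header line
        for line in f:
            state = line.split()[3]  # the "st" column
            counts[state] = counts.get(state, 0) + 1
    return counts

if __name__ == "__main__":
    counts = count_tcp_states()
    print("TIME_WAIT:", counts.get("06", 0))
    print("total:", sum(counts.values()))
```

Filtering the lines on the upstream ports (8001-8003 here) before counting would isolate the nginx-to-Tomcat connections specifically.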

(2) Tomcat's %CPU behind Nginx is higher than behind Apache; this is the main problem.
Behind Apache, Tomcat uses all of its maxThreads (512), but behind Nginx it uses only 125 threads, as measured via "ps -efL | grep catalina".

The Tomcat configuration is the same in both setups, and so is the concurrency. The ajp module configuration is as follows:
ajp_connect_timeout 10;
ajp_read_timeout 10;

upstream loadbalancer {
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;

    #keepalive 6400;
}
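
With the commented-out directive enabled, the upstream block looks like this (a sketch; the keepalive directive caps the number of idle upstream connections cached per worker process):

```nginx
upstream loadbalancer {
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;

    # Cache up to 6400 idle connections per worker for reuse.
    # If keepalive is absent or not working, every request opens
    # and closes its own upstream connection, leaving it in TIME_WAIT.
    keepalive 6400;
}
```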

Whether I enable keepalive in the upstream block or not, both problems persist.
The ajp_module version is 0.2.5, the latest release.

Can you give me some clues to resolve these problems? Thank you!

@yaoweibin
Owner

What's your Nginx version?


Weibin Yao

@HelloJamesLee
Author

My Nginx is 0.8.54.

Apache's mod_jk has a connection pool mechanism: mod_jk detects connection_pool_size automatically from the number of threads per web server process, and the connections between Apache and Tomcat are reused. That is why the TIME_WAIT count with Apache is lower.
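
For comparison, the mod_jk side is configured in workers.properties; a minimal illustrative fragment (worker names and the value here are hypothetical; connection_pool_size is the relevant setting, and it is auto-detected when left unset):

```properties
# workers.properties (mod_jk) -- illustrative fragment
worker.list=lb

worker.tomcat1.type=ajp13
worker.tomcat1.host=127.0.0.1
worker.tomcat1.port=8001
# If left unset, mod_jk sizes the pool from the web server's
# threads per process; pooled connections are reused instead
# of being opened and closed for every request.
worker.tomcat1.connection_pool_size=128

worker.lb.type=lb
worker.lb.balance_workers=tomcat1
```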

I guess the keepalive between nginx and Tomcat doesn't work very well: only a small number of connections are kept alive. Do you think so?

@yaoweibin
Owner

Can you show me the debug.log with several requests?

http://wiki.nginx.org/Debugging

@yaoweibin reopened this Nov 7, 2011
@wangbin579
Contributor

Try setting accept_mutex off;
maybe that will help you.
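
In nginx terms that suggestion looks like this (a sketch; accept_mutex defaults to on in the 0.8.x series):

```nginx
events {
    worker_connections 1024;
    # With accept_mutex on, worker processes take turns accepting
    # new connections; turning it off lets every worker accept,
    # which can spread load across more workers.
    accept_mutex off;
}
```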

@HelloJamesLee
Author

The reason:
On the test servers, the read event was triggered many times while the socket had no data to read.
The old keepalive module simply closed the idle keepalive socket in that case, which caused many unexpected connection closes.

Weibin has fixed this keepalive problem.
Thanks, Weibin!
