Symptoms
The tengine cluster occasionally shows unexpectedly high latency (greater than 1s). Packet capture analysis confirmed the time is spent on the tengine side. We also found that load is not evenly distributed across the nginx worker processes, and individual worker processes occasionally exceed 100% CPU utilization.
We tried enabling reuseport to resolve this. After the change, during the canary rollout, some services reported increased latency along with a small number of 499 requests.
After turning reuseport back off, the 499 requests went away.
Configuration
nginx configuration: receive buffer set to 8M; worker_connections 65535; worker_processes auto
Logs
No error logs. access.log shows upstream_response_time over 1s, but packet captures on the upstream service show the time is not spent in the upstream.
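For clarity, "enabling reuseport" here means adding the reuseport parameter to the listen directives, roughly as in the sketch below; the port and backlog values are assumed from the server block posted further down, and the comments describe standard nginx/tengine reuseport behavior rather than anything measured in this thread.

```nginx
# Sketch of the reuseport experiment, assuming the listen lines from the
# server block posted below (port 80, backlog=24576).
server {
    # With reuseport, each worker process gets its own listening socket and
    # the kernel hashes new connections onto one of them; a connection that
    # lands on a busy worker has to wait for that specific worker.
    listen 80 default backlog=24576 reuseport;
    listen [::]:80 default backlog=24576 reuseport;

    # Rolling back simply drops the reuseport parameter again, so all workers
    # share a single listening socket and any idle worker can accept:
    # listen 80 default backlog=24576;
    # listen [::]:80 default backlog=24576;
}
```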
If a connection is dispatched to a worker process whose load already exceeds 100%, it will be stuck inside that worker and its handling will time out. On the worker process with CPU > 100%, run pstack to see exactly which functions it is hung in.
Please provide the complete tengine main and listen configuration.
server configuration
server {
    listen 80 default backlog=24576;
    listen [::]:80 default backlog=24576;
    server_name _;
    client_body_buffer_size 1024k;
    client_header_buffer_size 1024k;
    large_client_header_buffers 8 1024k;

main configuration
user root;
worker_processes auto;
worker_cpu_affinity auto;
error_log /home/work/log/nginx/error.log;
pid /home/work/log/nginx/nginx.pid;
worker_rlimit_nofile 200000;
worker_shutdown_timeout 600;
events {
    use epoll;
    worker_connections 65535;
}

@lianglli
First, check whether the worker processes' CPU load is actually spread across cores, and confirm that the number of worker processes matches the number of CPUs:
$ pidstat | grep tengine
When a worker process is at 100% CPU, capture its pstack output to see which functions the CPU time is spent in:
$ sudo pstack <PID of the worker process whose CPU is maxed out>
Explicitly set:
events {
    multi_accept off;
    use epoll;
    accept_mutex on;
    worker_connections 65535;
}
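A commented version of that events block, explaining why each directive is being suggested here; the comments reflect standard nginx behavior rather than anything measured in this thread.

```nginx
events {
    # Accept only one new connection per event-loop wakeup, so a single busy
    # worker does not drain a whole burst of pending connections by itself.
    multi_accept off;

    use epoll;

    # Serialize accept() across workers: only one worker waits on the shared
    # listening socket at a time, which helps keep new connections away from
    # workers that are already busy (this applies when reuseport is off).
    accept_mutex on;

    worker_connections 65535;
}
```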