Go GC problem #5

Open
ma6174 opened this Issue Jul 27, 2014 · 5 comments


ma6174 commented Jul 27, 2014

Recently the site's traffic grew sharply, and with it came some baffling slow requests. The symptom: there are two services, A and B, and A fetches data from B. A's logs show quite a few second-level slow requests, and those logs suggest the requests are slow because B has become slow. But checking B's logs turns up only a handful of slow-request entries; at least for the slow requests we traced, B processed and returned data quickly, with no sign of the alleged slowness. This is interesting: A says B is slow, B says it is not. So is A itself slow, is B slow, or is the link between them slow?

First, rule out the link. A and B sit in the same LAN, and ping round-trip time between them is under 1 ms. We had previously seen packet loss slow the network down, but both machines showed very low loss rates, well within the normal range. That essentially rules out a network problem.

Because the existing logging was incomplete, the first step was to add logs recording timestamps around the key operations. Analysis then showed that A takes milliseconds from receiving a request to forwarding it to B, and B takes milliseconds from receiving a request to returning a response; that is, both sides process requests quickly. But a careful comparison of A's and B's timestamps revealed something striking: the time A sent a request and the time B received it differed by several seconds! The first thing to confirm in such a case is that the clocks on A's and B's machines agree, and they do. Could packet loss be the cause? Severe loss would trigger many retransmissions and could add significant latency, but as analyzed above, the network between the two machines is in excellent shape with no loss. Even so, TCP retransmits after waiting roughly 200 ms without a reply, so reaching a second-level delay would require a single request to be dropped five or more times, which would be a serious loss incident and very easy to detect.

At this point the analysis seemed stuck. Just then, service B logged a 500 whose error message was too many open files. This usually means B itself has exceeded its file-descriptor limit; in our setup the error tends to appear once the count passes 6000. The "files" here are not necessarily real files; they can be TCP connections, since on Linux everything is a file. So let's write a small monitor for B's descriptor count first. The count for a process can be observed with lsof -n -p pid | wc -l, where pid is the process ID. A shell loop to sample it:

#!/bin/sh
# Sample the service's open-fd count twice a second, with a timestamp.
while true
do
        printf '%s\t' "$(date)"   # `echo -n` is not portable under /bin/sh
        lsof -n -p "$(pidof service_name)" | wc -l
        sleep 0.5
done

Partial output:

Tue Jul 22 19:42:51 CST 2014    2715
Tue Jul 22 19:42:52 CST 2014    2713
Tue Jul 22 19:42:53 CST 2014    2711
Tue Jul 22 19:42:53 CST 2014    2711
Tue Jul 22 19:42:54 CST 2014    2708
Tue Jul 22 19:42:55 CST 2014    10177
Tue Jul 22 19:42:59 CST 2014    3103
Tue Jul 22 19:43:00 CST 2014    2722
Tue Jul 22 19:43:00 CST 2014    2719

From the output it is easy to see that at 19:42:55 the descriptor count jumped from 2708 to 10177 and quickly recovered. As noted above, once the count passes 6000 the too many open files error can occur, and here it reached 10k. Watching how often these fd spikes happen, they occurred roughly every 2-3 minutes during peak traffic. Could the spikes be what causes the slow requests? Going back to A's logs, slow requests did indeed coincide with the fd spikes. If so, can we conclude that B had too many requests at those moments, so new connections from A could not be established, producing slow requests? That is only a conjecture, and a bigger question still needs an answer: why does B's fd count spike in the first place?

Since B barely reads or writes local files, an fd spike must mean a sudden burst of incoming requests. Could a large fluctuation in A's request rate cause B's spikes? Counting A's requests per second showed no sudden surges. B sits behind a load balancer, so if server B1 suddenly received a flood of requests, server B2 should show the same fd spike at the same moment. B2 did show spikes at a similar frequency, but at different moments, offset by tens of seconds. From this we can basically conclude that something happens inside each B server at certain moments that drives the fd count up.

What happens around an fd spike? Digging further into B's logs brought a surprise: for a few seconds before each spike there are no log lines at all. Is the service stuck? And if it is stuck, can it still accept requests? To check, we used a simple curl loop:

#!/bin/sh
# Hit the service five times a second, logging when each request starts.
while true
do
        echo "$(date)"
        curl -s -o /dev/null http://127.0.0.1:8080
        sleep 0.2
done

Partial output:

Tue Jul 22 18:26:09 CST 2014
Tue Jul 22 18:26:09 CST 2014
Tue Jul 22 18:26:09 CST 2014
Tue Jul 22 18:26:09 CST 2014
Tue Jul 22 18:26:14 CST 2014
Tue Jul 22 18:26:14 CST 2014
Tue Jul 22 18:26:14 CST 2014

Here we see that curl got stuck at 18:26:09 and did not recover until 18:26:14, a stall of nearly 5 seconds! If curl can get stuck, requests from service A can get stuck too. And since B logged nothing during that window, B was effectively frozen and refusing service.

Running the curl loop and the fd monitor at the same time and comparing the results reveals something new (left: fd counts; right: curl log).

Tue Jul 22 19:44:52 CST 2014    2708        │Tue Jul 22 19:44:57 CST 2014
Tue Jul 22 19:44:52 CST 2014    2710        │Tue Jul 22 19:44:57 CST 2014
Tue Jul 22 19:44:53 CST 2014    2707        │Tue Jul 22 19:44:58 CST 2014
Tue Jul 22 19:44:54 CST 2014    2709        │Tue Jul 22 19:44:58 CST 2014
Tue Jul 22 19:44:54 CST 2014    2729        │Tue Jul 22 19:44:58 CST 2014
Tue Jul 22 19:44:55 CST 2014    2707        │Tue Jul 22 19:44:58 CST 2014
Tue Jul 22 19:44:56 CST 2014    2709        │Tue Jul 22 19:45:03 CST 2014
Tue Jul 22 19:44:56 CST 2014    2714        │Tue Jul 22 19:45:03 CST 2014
Tue Jul 22 19:44:57 CST 2014    2718        │Tue Jul 22 19:45:03 CST 2014
Tue Jul 22 19:44:58 CST 2014    2710        │Tue Jul 22 19:45:03 CST 2014
Tue Jul 22 19:44:58 CST 2014    2713        │Tue Jul 22 19:45:04 CST 2014
Tue Jul 22 19:44:59 CST 2014    2713        │Tue Jul 22 19:45:04 CST 2014
Tue Jul 22 19:45:02 CST 2014    10219       │Tue Jul 22 19:45:04 CST 2014
Tue Jul 22 19:45:03 CST 2014    2877        │Tue Jul 22 19:45:04 CST 2014
Tue Jul 22 19:45:04 CST 2014    2721        │Tue Jul 22 19:45:04 CST 2014
Tue Jul 22 19:45:05 CST 2014    2710        │Tue Jul 22 19:45:05 CST 2014
Tue Jul 22 19:45:05 CST 2014    2710        │Tue Jul 22 19:45:05 CST 2014
Tue Jul 22 19:45:06 CST 2014    2714        │Tue Jul 22 19:45:05 CST 2014
Tue Jul 22 19:45:07 CST 2014    2716        │Tue Jul 22 19:45:05 CST 2014

We can see curl got stuck at 19:44:58 and recovered at 19:45:03, while the descriptor count spiked at 19:45:02. We can be fairly confident that B froze at 19:44:58; requests from A kept arriving and piled up, and roughly 4-5 seconds later, when B recovered, the backlog of connections drove the descriptor count up.

So the fd spike is only a symptom; the direct cause is that service B freezes. Why does B freeze? From the analysis above, while B is frozen it:

  1. refuses to accept requests
  2. writes no log lines at all

If a single goroutine were stuck, the other goroutines would not necessarily be stuck; there would still be log output, and new requests could still be accepted. It looks like the entire process freezes. What happens during that window? Go has automatic garbage collection (GC), so we have good reason to suspect a GC pause.

How do we verify it is the GC? The simplest way is to look at the GC log. With Go versions before 1.2, start the program like this:

GOGCTRACE=1 /path/to/your/program

With Go 1.2 and later, enable the GC log like this:

GODEBUG=gctrace=1 /path/to/your/program

The program will then print lines like these:

gc1(1): 0+0+0 ms, 0 -> 0 MB 16 -> 18 (19-1) objects, 0(0) handoff, 0(0) steal, 0/0/0 yields
gc2(1): 0+0+0 ms, 0 -> 0 MB 29 -> 29 (30-1) objects, 0(0) handoff, 0(0) steal, 0/0/0 yields
gc3(1): 0+0+0 ms, 0 -> 0 MB 972 -> 747 (973-226) objects, 0(0) handoff, 0(0) steal, 0/0/0 yields
gc4(1): 0+0+0 ms, 0 -> 0 MB 1248 -> 904 (1474-570) objects, 0(0) handoff, 0(0) steal, 0/0/0 yields

The meaning of each field is documented in the runtime source code. What we care about here is when GC happens, how often, and how long each collection takes; its effect can be judged by how much memory, or how many objects, were reclaimed.

With the GC log enabled, the application log now contains entries like this:

2014/07/25 11:03:28 app log...
gc183(8): 18+4043+495 ms, 32426 -> 16950 MB 205909094 -> 3045275 (23371853344-23368808069) objects, 60(1982) handoff, 71(105762) steal, 564/267/314 yields
2014/07/25 11:03:33 app log...

Clearly, the several seconds from 11:03:28 to 11:03:33 were spent in GC. Now compare the earlier fd and curl test results for the same window:

Fri Jul 25 11:03:26 CST 2014    2463      │Fri Jul 25 11:03:27 CST 2014
Fri Jul 25 11:03:26 CST 2014    2461      │Fri Jul 25 11:03:27 CST 2014
Fri Jul 25 11:03:27 CST 2014    2460      │Fri Jul 25 11:03:27 CST 2014
Fri Jul 25 11:03:28 CST 2014    2459      │Fri Jul 25 11:03:28 CST 2014
Fri Jul 25 11:03:29 CST 2014    2462      │Fri Jul 25 11:03:28 CST 2014
Fri Jul 25 11:03:31 CST 2014    2462      │Fri Jul 25 11:03:28 CST 2014
Fri Jul 25 11:03:31 CST 2014    9738      │Fri Jul 25 11:03:28 CST 2014
Fri Jul 25 11:03:34 CST 2014    2501      │Fri Jul 25 11:03:33 CST 2014
Fri Jul 25 11:03:35 CST 2014    2500      │Fri Jul 25 11:03:33 CST 2014
Fri Jul 25 11:03:36 CST 2014    2492      │Fri Jul 25 11:03:34 CST 2014
Fri Jul 25 11:03:36 CST 2014    2493      │Fri Jul 25 11:03:34 CST 2014
Fri Jul 25 11:03:37 CST 2014    2494      │Fri Jul 25 11:03:34 CST 2014
Fri Jul 25 11:03:38 CST 2014    2490      │Fri Jul 25 11:03:35 CST 2014
Fri Jul 25 11:03:38 CST 2014    2490      │Fri Jul 25 11:03:35 CST 2014
Fri Jul 25 11:03:39 CST 2014    2490      │Fri Jul 25 11:03:35 CST 2014
Fri Jul 25 11:03:40 CST 2014    2489      │Fri Jul 25 11:03:35 CST 2014
Fri Jul 25 11:03:40 CST 2014    2493      │Fri Jul 25 11:03:36 CST 2014

The cause is now clear: the program runs a GC, curl gets stuck, and once the GC finishes the flood of queued requests drives the fd count up. The GC stalled the process for 5 seconds! For an online service that is unacceptable.

A long GC means the program has created a large number of objects and holds a lot of memory, so collection inevitably takes longer. Consistent with that, in the middle of the night when traffic is low, GC times are much shorter.

How do we stop long GC pauses from freezing the program?

A web search turned up someone who hit a similar situation: https://groups.google.com/forum/#!topic/golang-nuts/S9goEGuoMRM. The original post:

Hello everyone, 

Our business suffered from an annoying problem. We are developing an 
iMessage-like service in Go, the server can serves hundreds of 
thousands of concurrent TCP connection per process, and it's robust 
(be running for about a month), which is awesome. However, the process 
consumes 16GB memory quickly, since there are so many connections, 
there are also a lot of goroutines and buffered memories used. I 
extend the memory limit to 64GB by changing runtime/malloc.h and 
runtime/malloc.goc. It works, but brings a big problem too - The 
garbage collecting process is then extremely slow, it stops the world 
for about 10 seconds every 2 minutes, and brings me some problems 
which are very hard to trace, for example, when stoping the world, 
messages delivered may be lost. This is a disaster, since our service 
is a real-time service which requires delivering messages as fast as 
possible and there should be no stops and message lost at all. 

I'm planning to split the "big server process" to many "small 
processes" to avoid this problem (smaller memory footprint results to 
smaller time stop), and waiting for Go's new GC implementation. 

Or any suggestions for me to improve our service currently? I don't 
know when Go's new latency-free garbage collection will occur. 

Thanks. 

Roughly: he built a real-time, iMessage-like service in Go with heavy traffic and high memory use, and the service froze for 10 seconds every two minutes because of GC (worse than our case). For a real-time system that is unacceptable.

The thread has plenty of discussion. One suggestion was to upgrade the Go version, which he did, with a clear improvement: after upgrading, GC dropped to 1-2 seconds, though that is still fairly long:

Hello everyone, 

Thanks for all your help, I updated our Go version to: 

go version devel +852ee39cc8c4 Mon Nov 19 06:53:58 2012 +1100 

and rebuilt our servers, now GC duration reduced to 1~2 seconds, it's 
a big improvement! 
Thank contributors on the new GC! 

So upgrading the Go version is one option, since newer releases optimize the GC. Let's try it.

Upgrading from Go 1.1 to Go 1.2.1 basically did not help; it was even slightly worse. After upgrading to 1.3, to our delight, GC times shrank dramatically: a collection completes within tens of milliseconds, the fd spikes are gone, and curl no longer stalls:

gc43(8): 44+21+49805+104 us, 3925 -> 7850 MB, 50135750 (1781280527-1731144777) objects, 549097/394298/0 sweeps, 117(9232) handoff, 69(2683) steal, 969/439/2061 yields

The Go 1.3 release notes do indeed mention GC improvements:

Changes to the garbage collector

For a while now, the garbage collector has been precise when examining values in the heap; the Go 1.3 release adds equivalent precision to values on the stack. This means that a non-pointer Go value such as an integer will never be mistaken for a pointer and prevent unused memory from being reclaimed.

Starting with Go 1.3, the runtime assumes that values with pointer type contain pointers and other values do not. This assumption is fundamental to the precise behavior of both stack expansion and garbage collection. Programs that use package unsafe to store integers in pointer-typed values are illegal and will crash if the runtime detects the behavior. Programs that use package unsafe to store pointers in integer-typed values are also illegal but more difficult to diagnose during execution. Because the pointers are hidden from the runtime, a stack expansion or garbage collection may reclaim the memory they point at, creating dangling pointers.

Updating: Code that uses unsafe.Pointer to convert an integer-typed value held in memory into a pointer is illegal and must be rewritten. Such code can be identified by go vet.

Tens of milliseconds is acceptable. But if upgrading Go had not helped, what other measures are there?

  1. Deploy more instances to share the load

With more instances, each one handles fewer requests and holds fewer objects internally, so its GC time goes down.

  2. Introduce object pools

For frequently used, numerous objects, create a fixed set when the program starts; whenever the program needs such an object it takes one from the pool instead of allocating a new one. The total number of objects stays constant, which reduces how many the garbage collector has to deal with.
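Go 1.3, which we upgraded to, ships sync.Pool for exactly this pattern. A minimal sketch (the 64 KB buffer size and the handleRequest function are illustrative, not taken from the original service):

```go
package main

import (
	"fmt"
	"sync"
)

// bufPool hands out reusable 64 KB buffers instead of allocating a fresh
// one per request, keeping the number of live objects roughly constant
// and reducing GC work.
var bufPool = sync.Pool{
	New: func() interface{} { return make([]byte, 64*1024) },
}

func handleRequest() {
	buf := bufPool.Get().([]byte)
	defer bufPool.Put(buf) // return the buffer for reuse
	// ... use buf to read or copy request data ...
	_ = buf
}

func main() {
	for i := 0; i < 1000; i++ {
		handleRequest()
	}
	fmt.Println("done")
}
```

Note that sync.Pool may drop pooled objects across collections, so it bounds allocation churn rather than guaranteeing a fixed object count.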

  3. Raise the GC concurrency (untested)

Go's default GC concurrency is 8, see http://golang.org/src/pkg/runtime/malloc.h#L140; on a powerful machine, raising it should in theory speed collection up.

  4. On timeout, automatically retry another machine

Suppose a healthy B1 normally returns data within 100 ms. If a request has not returned after 150 ms, service A proactively drops the connection and retries against B2. The premise is that B1 and B2 are unlikely to be frozen at the same moment. The benefit is that even if B's GC pauses are long they do not hurt A, because A can pick a B replica that is not currently in GC.

@ma6174 ma6174 added the Go label Jul 27, 2014


zhwei commented Jul 29, 2014

The issues page was just redesigned; this one reads more like a blog post...


ma6174 commented Jul 30, 2014

@zhwei The main layout changed a bit: the title is on the left and larger now, with comments on the right; leaving a big blank area in the middle wouldn't look good..

@ma6174 ma6174 self-assigned this Aug 3, 2014


ryancheung commented Sep 1, 2015

Great write-up, a Go newbie learned a lot from it~


andyxning commented Jan 8, 2016

Great article, I'm chasing down a similar problem. :)


lidashuang commented Jan 8, 2016

Go 1.5 can keep STW pauses within 10 ms
