
glusterd2 and glustershd logging same logs continuously after volume stop #940

Closed
Madhu-1 opened this issue Jun 29, 2018 · 13 comments
Labels
GCS/1.0 (blocker for Gluster for Container Storage), priority: high

Comments

@Madhu-1
Member

Madhu-1 commented Jun 29, 2018

Steps to reproduce the issue:
1) create 2 volumes
2) start 2 volumes

[root@dhcp42-235 ~]# cli volume start test
Volume test started successfully

[root@dhcp42-235 ~]# cli volume start test1
Volume test1 started successfully

3) set the self-heal daemon option on both volumes

[root@dhcp42-235 ~]# cli volume set test1 afr.self-heal-daemon on --advanced
Options set successfully for test1 volume

[root@dhcp42-235 ~]# cli volume set test afr.self-heal-daemon on --advanced
Options set successfully for test volume

4) stop one volume

[root@dhcp42-235 ~]# cli volume stop test1
Volume test1 stopped successfully
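
For convenience, the whole sequence as a script. The volume create commands are a sketch (brick paths and the --replica flag are assumptions; the steps above only show start, set, and stop):

#!/bin/sh
# Reproduction sketch; the create lines are assumptions -- adjust
# brick paths and replica count to your setup.
cli volume create test 10.70.42.235:/bricks/test/b1 10.70.43.75:/bricks/test/b2 --replica 2
cli volume create test1 10.70.42.235:/bricks/test1/b1 10.70.43.75:/bricks/test1/b2 --replica 2

cli volume start test
cli volume start test1

cli volume set test1 afr.self-heal-daemon on --advanced
cli volume set test afr.self-heal-daemon on --advanced

# Stopping one volume triggers the repeating logs shown below.
cli volume stop test1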

Issue:
glusterd2 and glustershd keep generating the same log messages continuously (which may fill up disk space in the long run).

Expected behavior:
Each message should be logged only once.

Glusterd2 Logs:

INFO[2018-06-20 23:33:27.166550] client disconnected                           address="10.70.43.75:49129" server=sunrpc source="[server.go:114:sunrpc.(*SunRPC).pruneConn]"
INFO[2018-06-20 23:33:27.168623] client connected                              address="10.70.43.75:49128" server=sunrpc source="[server.go:158:sunrpc.(*SunRPC).acceptLoop]" transport=tcp
INFO[2018-06-20 23:33:27.169053] client disconnected                           address="10.70.43.75:49128" server=sunrpc source="[server.go:114:sunrpc.(*SunRPC).pruneConn]"
INFO[2018-06-20 23:33:30.066052] client connected                              address="10.70.42.235:49125" server=sunrpc source="[server.go:158:sunrpc.(*SunRPC).acceptLoop]" transport=tcp
INFO[2018-06-20 23:33:30.068299] client connected                              address="10.70.42.235:49124" server=sunrpc source="[server.go:158:sunrpc.(*SunRPC).acceptLoop]" transport=tcp
INFO[2018-06-20 23:33:30.072135] client disconnected                           address="10.70.42.235:49125" server=sunrpc source="[server.go:114:sunrpc.(*SunRPC).pruneConn]"
INFO[2018-06-20 23:33:30.072794] client disconnected                           address="10.70.42.235:49124" server=sunrpc source="[server.go:114:sunrpc.(*SunRPC).pruneConn]"
INFO[2018-06-20 23:33:30.178419] client connected                              address="10.70.43.75:49125" server=sunrpc source="[server.go:158:sunrpc.(*SunRPC).acceptLoop]" transport=tcp
INFO[2018-06-20 23:33:30.179933] client disconnected                           address="10.70.43.75:49125" server=sunrpc source="[server.go:114:sunrpc.(*SunRPC).pruneConn]"
INFO[2018-06-20 23:33:30.182261] client connected                              address="10.70.43.75:49124" server=sunrpc source="[server.go:158:sunrpc.(*SunRPC).acceptLoop]" transport=tcp
INFO[2018-06-20 23:33:30.183365] client disconnected                           address="10.70.43.75:49124" server=sunrpc source="[server.go:114:sunrpc.(*SunRPC).pruneConn]"
INFO[2018-06-20 23:33:33.083384] client connected                              address="10.70.42.235:49121" server=sunrpc source="[server.go:158:sunrpc.(*SunRPC).acceptLoop]" transport=tcp
INFO[2018-06-20 23:33:33.086366] client connected                              address="10.70.42.235:49120" server=sunrpc source="[server.go:158:sunrpc.(*SunRPC).acceptLoop]" transport=tcp
INFO[2018-06-20 23:33:33.087416] client disconnected                           address="10.70.42.235:49121" server=sunrpc source="[server.go:114:sunrpc.(*SunRPC).pruneConn]"
INFO[2018-06-20 23:33:33.088215] client disconnected                           address="10.70.42.235:49120" server=sunrpc source="[server.go:114:sunrpc.(*SunRPC).pruneConn]"
INFO[2018-06-20 23:33:33.192773] client connected                              address="10.70.43.75:49121" server=sunrpc source="[server.go:158:sunrpc.(*SunRPC).acceptLoop]" transport=tcp
INFO[2018-06-20 23:33:33.194866] client disconnected                           address="10.70.43.75:49121" server=sunrpc source="[server.go:114:sunrpc.(*SunRPC).pruneConn]"
INFO[2018-06-20 23:33:33.197085] client connected                              address="10.70.43.75:49120" server=sunrpc source="[server.go:158:sunrpc.(*SunRPC).acceptLoop]" transport=tcp
INFO[2018-06-20 23:33:33.198314] client disconnected                           address="10.70.43.75:49120" server=sunrpc source="[server.go:114:sunrpc.(*SunRPC).pruneConn]"
INFO[2018-06-20 23:33:36.104805] client connected                              address="10.70.42.235:49117" server=sunrpc source="[server.go:158:sunrpc.(*SunRPC).acceptLoop]" transport=tcp
INFO[2018-06-20 23:33:36.107378] client connected                              address="10.70.42.235:49116" server=sunrpc source="[server.go:158:sunrpc.(*SunRPC).acceptLoop]" transport=tcp
INFO[2018-06-20 23:33:36.107615] client disconnected                           address="10.70.42.235:49117" server=sunrpc source="[server.go:114:sunrpc.(*SunRPC).pruneConn]"
INFO[2018-06-20 23:33:36.107938] client disconnected                           address="10.70.42.235:49116" server=sunrpc source="[server.go:114:sunrpc.(*SunRPC).pruneConn]"
INFO[2018-06-20 23:33:36.210870] client connected                              address="10.70.43.75:49117" server=sunrpc source="[server.go:158:sunrpc.(*SunRPC).acceptLoop]" transport=tcp
INFO[2018-06-20 23:33:36.211714] client disconnected                           address="10.70.43.75:49117" server=sunrpc source="[server.go:114:sunrpc.(*SunRPC).pruneConn]"
INFO[2018-06-20 23:33:36.213954] client connected                              address="10.70.43.75:49116" server=sunrpc source="[server.go:158:sunrpc.(*SunRPC).acceptLoop]" transport=tcp
INFO[2018-06-20 23:33:36.214515] client disconnected                           address="10.70.43.75:49116" server=sunrpc source="[server.go:114:sunrpc.(*SunRPC).pruneConn]"
INFO[2018-06-20 23:33:39.117735] client connected                              address="10.70.42.235:49113" server=sunrpc source="[server.go:158:sunrpc.(*SunRPC).acceptLoop]" transport=tcp
INFO[2018-06-20 23:33:39.120257] client disconnected                           address="10.70.42.235:49113" server=sunrpc source="[server.go:114:sunrpc.(*SunRPC).pruneConn]"
INFO[2018-06-20 23:33:39.120439] client connected                              address="10.70.42.235:49112" server=sunrpc source

glustershd logs:

[root@dhcp42-235 ~]# tailf /usr/local/var/log/glusterd2/glusterfs/glustershd.log
[2018-06-20 18:10:34.781185] W [rpc-clnt.c:1737:rpc_clnt_submit] 0-test1-replicate-1-client-1: error returned while attempting to connect to host:(null), port:0
[2018-06-20 18:10:34.781246] W [rpc-clnt.c:1737:rpc_clnt_submit] 0-test1-replicate-0-client-1: error returned while attempting to connect to host:(null), port:0
[2018-06-20 18:10:36.787069] W [rpc-clnt.c:1737:rpc_clnt_submit] 0-test1-replicate-0-client-0: error returned while attempting to connect to host:(null), port:0
[2018-06-20 18:10:36.788329] W [rpc-clnt.c:1737:rpc_clnt_submit] 0-test1-replicate-0-client-0: error returned while attempting to connect to host:(null), port:0
[2018-06-20 18:10:36.790648] W [rpc-clnt.c:1737:rpc_clnt_submit] 0-test1-replicate-1-client-0: error returned while attempting to connect to host:(null), port:0
[2018-06-20 18:10:36.791807] W [rpc-clnt.c:1737:rpc_clnt_submit] 0-test1-replicate-1-client-0: error returned while attempting to connect to host:(null), port:0
[2018-06-20 18:10:37.796212] W [rpc-clnt.c:1737:rpc_clnt_submit] 0-test1-replicate-0-client-1: error returned while attempting to connect to host:(null), port:0
[2018-06-20 18:10:37.796291] W [rpc-clnt.c:1737:rpc_clnt_submit] 0-test1-replicate-1-client-1: error returned while attempting to connect to host:(null), port:0
[2018-06-20 18:10:37.796958] W [rpc-clnt.c:1737:rpc_clnt_submit] 0-test1-replicate-1-client-1: error returned while attempting to connect to host:(null), port:0
[2018-06-20 18:10:37.797028] W [rpc-clnt.c:1737:rpc_clnt_submit] 0-test1-replicate-0-client-1: error returned while attempting to connect to host:(null), port:0
[2018-06-20 18:10:39.801050] W [rpc-clnt.c:1737:rpc_clnt_submit] 0-test1-replicate-0-client-0: error returned while attempting to connect to host:(null), port:0
[2018-06-20 18:10:39.803514] W [rpc-clnt.c:1737:rpc_clnt_submit] 0-test1-replicate-0-client-0: error returned while attempting to connect to host:(null), port:0
[2018-06-20 18:10:39.803602] W [rpc-clnt.c:1737:rpc_clnt_submit] 0-test1-replicate-1-client-0: error returned while attempting to connect to host:(null), port:0
[2018-06-20 18:10:39.804705] W [rpc-clnt.c:1737:rpc_clnt_submit] 0-test1-replicate-1-client-0: error returned while attempting to connect to host:(null), port:0
^C[2018-06-20 18:10:40.807938] W [rpc-clnt.c:1737:rpc_clnt_submit] 0-test1-replicate-0-client-1: error returned while attempting to connect to host:(null), port:0
[2018-06-20 18:10:40.808002] W [rpc-clnt.c:1737:rpc_clnt_submit] 0-test1-replicate-1-client-1: error returned while attempting to connect to host:(null), por
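
The expected behaviour above (each message logged only once) would amount to suppressing or throttling repeats. glusterd2's INFO lines appear to come from logrus; a minimal, hypothetical throttling wrapper -- an illustrative sketch, not glusterd2's actual code -- could look like this:

package main

import (
	"sync"
	"time"

	log "github.com/sirupsen/logrus"
)

// throttledLogger emits a given message at most once per interval, so a
// tight reconnect loop produces one line instead of thousands. It keys on
// the message text only, deliberately ignoring volatile fields such as the
// client port, which changes on every retry.
type throttledLogger struct {
	mu       sync.Mutex
	interval time.Duration
	seen     map[string]time.Time
}

func newThrottledLogger(interval time.Duration) *throttledLogger {
	return &throttledLogger{interval: interval, seen: make(map[string]time.Time)}
}

func (t *throttledLogger) Info(msg string, fields log.Fields) {
	t.mu.Lock()
	now := time.Now()
	if last, ok := t.seen[msg]; ok && now.Sub(last) < t.interval {
		t.mu.Unlock()
		return // same message seen recently: suppress
	}
	t.seen[msg] = now
	t.mu.Unlock()
	log.WithFields(fields).Info(msg)
}

func main() {
	l := newThrottledLogger(time.Minute)
	// Simulate the ~3-second reconnect loop from the logs above: only the
	// first iteration emits a line; the rest are suppressed for a minute.
	for i := 0; i < 5; i++ {
		l.Info("client disconnected", log.Fields{"server": "sunrpc"})
		time.Sleep(3 * time.Second)
	}
}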
@Madhu-1
Member Author

Madhu-1 commented Jun 29, 2018

@prashanthpai can you mark this as a bug?

@atinmu atinmu added labels: bug, priority: high, GCS/1.0 (blocker for Gluster for Container Storage), Gluster 4.2 (Jun 29, 2018)
@minhdanh

@Madhu-1
I'm seeing the same error from glustershd, but I'm using glusterfs 4.0.2. Is this related? If not, where can I report the issue?
The logs from glustershd caused a disk space shortage for me.
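
As a stopgap until the repeated logging is addressed, aggressive rotation can keep the log from filling the disk. A minimal logrotate sketch, assuming the log path shown in this report (adjust to your install):

/usr/local/var/log/glusterd2/glusterfs/glustershd.log {
    size 50M
    rotate 4
    compress
    missingok
    notifempty
    # copytruncate because glustershd may not reopen the log file on rotation
    copytruncate
}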

@Madhu-1
Member Author

Madhu-1 commented Jul 23, 2018

@vpandey-RH can you have a look at this?

@vpandey-RH
Contributor

@Madhu-1 Let me check.

@minhdanh

minhdanh commented Jul 24, 2018

@vpandey-RH Any update on this?

@minhdanh

@vpandey-RH If my case is not related, could you please tell me where I can report my problem? Thanks.

@vpandey-RH
Contributor

@minhdanh Sorry for replying so late. It seems this is not a bug but expected behaviour. @prashanthpai will respond on this thread with the reason for this behaviour and preventive measures.

@minhdanh

Thank you. I hope to hear from you soon.

@prashanthpai
Contributor

@minhdanh In the version you're using (4.0.2), glusterd2 isn't in use. The right place to file that issue is Bugzilla. This document will help you file a bug report for your issue: https://docs.gluster.org/en/v3/Contributors-Guide/Bug-Reporting-Guidelines/

@vpandey-RH
Contributor

@Madhu-1 Should we close this issue?

@Madhu-1
Member Author

Madhu-1 commented Aug 17, 2018

@vpandey-RH I have tested this with the glusterfs master branch. If you are not able to reproduce the issue with the above-mentioned steps, we can close this.

@atinmu
Contributor

atinmu commented Oct 4, 2018

@vpandey-RH any reason why this is lingering around? :-) Were you able to test this out?

@atinmu
Contributor

atinmu commented Nov 30, 2018

Closing this as we've not seen this popping up in the latest testing results. Please feel free to reopen if the issue persists.
