
how to improve gelf-forwarder performance #4

Closed

macaty opened this issue Oct 27, 2021 · 4 comments

macaty commented Oct 27, 2021

gelf-forwarder QPS is 2064.63 while graylog-http-gelf QPS is 49607.89. How can I improve gelf-forwarder performance?

1. Install v0.4.1

Hardware:

  1. CPU: 16 cores
  2. Memory: 16 GB
  3. Disk: 500 GB

Software:

  1. graylog 4.2
  2. es 7.10
  3. mongo 4.2
  4. gelf-forwarder v0.4.1

  5. gelf-forwarder testing result
     source --> gelf-forwarder --> graylog UDP GELF
     QPS is 2064.63, with lots of HTTP 429 errors

  6. graylog testing result
     source --> graylog HTTP GELF
     QPS is 49607.89

./vegeta -cpus 16 attack -targets tatget.txt -body aa.json -timeout=20s -rate 50000 -duration=60s | tee results.bin | ./vegeta report
Requests      [total, rate, throughput]         687367, 49789.46, 49607.89
Duration      [total, attack, wait]             13.807s, 13.805s, 1.102ms
Latencies     [min, mean, 50, 90, 95, 99, max]  154.211µs, 178.865ms, 118.744ms, 405.329ms, 610.809ms, 925.361ms, 3.782s
Bytes In      [total, mean]                     0, 0.00
Bytes Out     [total, mean]                     77395395, 112.60
Success       [ratio]                           99.64%
Status Codes  [code:count]                      0:2452  202:684915
Error Set:
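For context, the -targets file passed to vegeta uses vegeta's standard targets format. The actual contents of tatget.txt weren't shared, so the following is only a hypothetical example; the host, port, and path are assumptions modeled on a Graylog-style GELF HTTP endpoint:

# hypothetical vegeta target: POST the GELF JSON body to the HTTP input
POST http://gelf-forwarder-host:12201/gelf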

macaty commented Oct 27, 2021

7. gelf-forwarder TCP output testing result
   source --> gelf-forwarder --> graylog TCP GELF

   QPS is 17898.86

[root@38d15 ~]# ./vegeta -cpus 16 attack -targets tatget.txt.bak -body pay.json -timeout=20s -rate 18000 -duration=60s | tee results.bin | ./vegeta report
Requests      [total, rate, throughput]         1079999, 17999.88, 17898.86
Duration      [total, attack, wait]             1m0s, 1m0s, 2.871ms
Latencies     [min, mean, 50, 90, 95, 99, max]  198.944µs, 7.645ms, 931.316µs, 18.815ms, 29.835ms, 105.36ms, 1.464s
Bytes In      [total, mean]                     0, 0.00
Bytes Out     [total, mean]                     1511714400, 1399.74
Success       [ratio]                           99.44%
Status Codes  [code:count]                      0:203  200:1073989  429:5807

eplightning (owner) commented:

Did you modify the channel-buffer-size option (CHANNEL_BUFFER_SIZE env var or via flag)?

Default is 100, which might cause a lot of HTTP 429 errors. You might want to increase it, at the cost of higher memory usage during stress.
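For illustration, a hypothetical invocation: the flag spelling is assumed to mirror the option name mentioned above, and 10000 is just an example value to tune against your memory budget:

# via flag (flag name assumed from the option name)
./gelf-forwarder --channel-buffer-size=10000

# or via environment variable
CHANNEL_BUFFER_SIZE=10000 ./gelf-forwarder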

I'll also consider adding an option to disable backpressure, which might remove the HTTP 429 errors at the cost of higher latencies during stress.

eplightning (owner) commented:

You can also now try out version v0.4.2, which introduces an option to disable backpressure, for example:

./gelf-forwarder --backpressure=0

BACKPRESSURE=0 ./gelf-forwarder

If your source doesn't support backpressure, then this is probably the best option.


macaty commented Nov 1, 2021

After adding channel-buffer-size=20000 and BACKPRESSURE=0, the result improved a little: QPS is 18303 and ES disk I/O is at 60%. It's time to upgrade the single-node ES to an ES cluster.
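For reference, a sketch of the combined invocation, using the flag and env var names from the comments above (exact spelling on your build may differ):

# disable backpressure and enlarge the channel buffer in one run
BACKPRESSURE=0 ./gelf-forwarder --channel-buffer-size=20000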

Thank you!
