
duration 420m streams messages on STDOUT #103

Closed
agenteo opened this issue Dec 11, 2014 · 7 comments

agenteo commented Dec 11, 2014

Using a duration higher than 420 starts displaying a stream of messages on STDOUT:

goroutine 114338 [select]:
created by github.com/tsenart/vegeta/lib.(*Attacker).Attack
        /Users/tomas/Code/go/src/github.com/tsenart/vegeta/lib/attack.go:154 +0x382

Is it just a warning? I did not go through with running the app. A duration of 240 doesn't cause that stream.

agenteo changed the title from "duration 420m breaks" to "duration 420m streams messages on STDOUT" on Dec 11, 2014
tsenart (Owner) commented Dec 12, 2014

With such high durations you're better off setting the -workers flag to something like 1000. By default, Vegeta allocates one goroutine per request, since it was never designed for extremely long load tests such as the one you're doing.
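For example, a sketch of such an invocation (the targets file and output names here are placeholders, not taken from this thread):

vegeta attack -targets=targets.txt -workers=1000 -duration=420m > results.bin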

Out of curiosity, what are you expecting to learn from a load test with these parameters?

agenteo (Author) commented Dec 12, 2014

Thanks, I'll try with -workers and get back to you.

Our app uses a library that calls a 3rd-party API; it keeps the response data in our app's memory and expires it after a certain number of hours. I wanted to monitor the refetching process under load. Ideally the library would store the data in a key-value cache that I could manually flush.

tsenart (Owner) commented Dec 15, 2014

@agenteo: Any interesting results to share?

agenteo (Author) commented Dec 15, 2014

Preamble: I am running this off a MacBook Pro 2.6GHz i7 on OS X 10.9.5, waiting for my devops to provide an EC2 instance.

I started the following attack:

vegeta attack -targets=vegeta_test_plan_dev.txt -workers=1000 -duration=360m -rate=20 > results_$(date +%Y%m%d_%H%M%S)_dev_loadtest.bin

The targets file has ~2000 URLs. My expectation was for the test to run for 6 hours. I left this running Friday afternoon.

Today (Monday) I did not see any goroutine output, but the script was still running. When I looked at my app I did not see traffic, so I Ctrl-C'd vegeta and looked at the report. It said it had been running for 63 hours:

Requests        [total]                         78784
Duration        [total, attack, wait]           63h4m54.147124256s, 63h4m54.100175493s, 46.948763ms
Latencies       [mean, 50, 95, 99, max]         6.117199907s, 93.874484ms, 592.261451ms, 61h59m16.388871571s, 61h59m16.388871571s
Bytes In        [total, mean]                   4942771740, 62738.27
Bytes Out       [total, mean]                   0, 0.00
Success         [ratio]                         96.84%
Status Codes    [code:count]                    200:76291  505:780  404:40  0:1673

Opening the report plot, I can see that after 3800 seconds the red error line takes over.

I had this happen once before; this time I had turned off HD power saving and Power Nap on my Mac. I have a hunch this might be OS X related, but I'll have to wait for the EC2 instance I asked for to confirm that.

I understand tests stretched this long weren't what vegeta was designed for, but I appreciate you following up on this one. Do you get any of this behavior when you load test for this long?

tsenart (Owner) commented Dec 15, 2014

What I can tell from the data you provided is that some requests never get a response. Vegeta doesn't time out requests by default, so in your case it will wait forever. It seems you would benefit from setting the -timeout option on your load test.
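For instance, a sketch of the same attack with a timeout added (the 10s value is purely illustrative, not a recommendation from this thread):

vegeta attack -targets=vegeta_test_plan_dev.txt -workers=1000 -rate=20 -duration=360m -timeout=10s > results.bin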

agenteo (Author) commented Dec 15, 2014

I see, I'll try that overnight and update this.

agenteo (Author) commented Dec 15, 2014

Actually, I think we can close this one: -workers fixes the output issue. I'll open a new one about long-running tests going to sleep. Thanks!

agenteo closed this as completed on Dec 15, 2014