diff --git a/.gitignore b/.gitignore index fefc817..0c06a8a 100644 --- a/.gitignore +++ b/.gitignore @@ -8,8 +8,9 @@ # Test binary, built with `go test -c` *.test -# Output of the go coverage tool, specifically when used with LiteIDE +# Output of the go coverage and profile tools *.out +*.prof # Dependency directories (remove the comment below to include it) # vendor/ @@ -24,3 +25,6 @@ heyyall-linux-amd64 heyyall-linux-arm internal/coverage.html internal/testserver/testserver + +// VSCode history +.history \ No newline at end of file diff --git a/README.md b/README.md index 42469f4..93fc948 100644 --- a/README.md +++ b/README.md @@ -92,68 +92,131 @@ Running `./heyyall -help` provides the following usage message: Usage: heyyall -config [flags...] Options: - -loglevel Logging level. Default is 'WARN' (2). 0 is DEBUG, 1 INFO, up to 4 FATAL - -detail Detail level of output report, 'short' or 'long'. Default is 'long' - -nf Normalization factor used to compress the output histogram by eliminating long tails. - Lower values provide a finer grained view of the data at the expense of dropping data - associated with the tail of the latency distribution. The latter is partly mitigated by - including a final histogram bin containing the number of observations between it and - the previous latency bin. While this doesn't show a detailed distribution of the tail, - it does indicate how many observations are included in the tail. 10 is generally a good - starting number but may vary depending on the actual latency distribution and range - of latency values. The default is 0 which signifies no normalization will be performed. - With very small latencies (microseconds) it's possible that smaller normalization values - could cause the application to panic. Increasing the normalization factor will eliminate - the issue. - -cpus Specifies how many CPUs to use for the test run. The default is 0 which specifies that - all CPUs should be used. + -loglevel Logging level. Default is 'WARN' (2). 0 is DEBUG, 1 INFO, up to 4 FATAL + -out Type of output report, 'text' or 'json'. Default is 'text' + -nf Normalization factor used to compress the output histogram by eliminating long tails. + Lower values provide a finer grained view of the data at the expense of dropping data + associated with the tail of the latency distribution. The latter is partly mitigated by + including a final histogram bin containing the number of observations between it and + the previous latency bin. While this doesn't show a detailed distribution of the tail, + it does indicate how many observations are included in the tail. 10 is generally a good + starting number but may vary depending on the actual latency distribution and range + of latency values. The default is 0 which signifies no normalization will be performed. + With very small latencies (microseconds) it's possible that smaller normalization values + could cause the application to panic. Increasing the normalization factor will eliminate + the issue. + -cpus Specifies how many CPUs to use for the test run. The default is 0 which specifies that + all CPUs should be used. -help This usage message + ``` -One command line flag above is worth a little more discussion, the `nf` or "Normalization Factor" flag. +A couple of these flags are worth discussiong in more detail. First, the `-out` flag. As stated in the usage text it is used to specify whether text or JSON output is desired. 
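+For example, a run can be switched from the default text report to JSON output just by changing the flag; the redirect and file name below are only illustrative:
+
+```
+./heyyall -config "testdata/oneEP1000Rqst.json" -out "json" > results.json
+```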
Text output is optimized to be human readable and it summarizes the low level details (e.g., full set of response latencies in a test run). JSON output is very detailed, can be voluminous, and is probably best consumed programatically if the text output is missing some desired detail. The `report.go` file in the `api` package contains the Go structs that control the JSON output. + +The following shows an example of a test run specifiying text output: + +``` text +Run Summary: + Total Rqsts: 2000 + Rqsts/sec: 269.9186 + Run Duration (secs): 7.4096 + + +Request Latency (secs): Min Median P75 P90 P95 P99 + 0.0061 0.0544 0.1591 0.2660 0.5053 4.9642 + +Request Latency Histogram (secs): + Latency Observations + [0.0110] 123 ❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱ + [0.0220] 426 ❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱ + [0.0330] 184 ❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱ + [0.0440] 157 ❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱ + [0.0550] 117 ❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱ + [0.0660] 86 ❱❱❱❱❱❱❱❱❱❱❱❱❱ + [0.0770] 70 ❱❱❱❱❱❱❱❱❱❱❱ + [0.0881] 38 ❱❱❱❱❱❱ + [0.0991] 37 ❱❱❱❱❱❱ + [0.1101] 71 ❱❱❱❱❱❱❱❱❱❱❱ + [0.1211] 47 ❱❱❱❱❱❱❱ + [5.2166] 644 ❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱ + + + +Endpoint Details(secs): + http://accountd.kube/users: + Requests Min Median P75 P90 P95 P99 + GET: 260 0.0086 0.0675 0.1670 0.2533 0.4255 4.9257 + + http://accountd.kube/users/1: + Requests Min Median P75 P90 P95 P99 + GET: 240 0.0077 0.0604 0.1585 0.2426 0.3642 4.4909 + + http://accountd.kube/users/2000: + Requests Min Median P75 P90 P95 P99 + DELETE: 1000 0.0061 0.0512 0.1573 0.3047 0.5632 4.9944 + GET: 500 0.0063 0.0496 0.1599 0.2578 0.4333 5.0647 + + + +Network Details (secs): + Min Median P75 P90 P95 P99 + DNS Lookup: 0.0000 0.0011 0.0019 0.0027 0.0029 0.0032 + TCP Conn Setup: 0.0000 0.0000 0.0010 0.0286 0.0926 0.1063 + TLS Handshake: 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 + Rqst Roundtrip: 0.0060 0.0498 0.1540 0.2425 0.4063 4.9641 +``` + +The other command line flag above is the `nf` or "Normalization Factor" flag. Some endpoints may exhibit widely varying response times, from as little as a few microseconds to over a second. This can lead to a relatively useless histogram being generated when the test run completes. Here's an example: ``` -./heyyall -config "testdata/oneEP1000Rqst.json" -loglevel 2 -detail "short" - -Response Time Histogram (seconds): - Latency Number of Observations - ------- ---------------------- - [ 0.508] 953 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> - [ 1.015] 10 > - [ 1.523] 34 >>>> - [ 2.030] 0 - [ 2.538] 0 - [ 3.045] 0 - [ 3.553] 0 - [ 4.060] 0 - [ 4.568] 0 - [ 5.076] 1 +./heyyall -config "testdata/oneEP1000Rqst.json" -loglevel 2 -out "text" + +... + +Request Latency Histogram (secs): + Latency Observations + [0.5064] 940 ❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱ + [1.0127] 50 ❱❱❱❱❱ + [1.5191] 0 + [2.0255] 0 + [2.5318] 0 + [3.0382] 0 + [3.5446] 0 + [4.0509] 0 + [4.5573] 3 + [5.0637] 7 ❱ + +... ``` -In this execution over 95% of the responses are in a single histogram bin. This isn't very helpful, hence the Normalization Factor. Specifying `-nf 10` has the following effect on the generated histogram: +In this execution 94% of the responses are in a single histogram bin. This isn't very helpful, hence the Normalization Factor. 
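+Under the hood the normalization factor simply caps the latency range that the histogram bins cover. The sketch below approximates what `generateHistogram` in `internal/responseHandler.go` does; it is illustrative only, and the function and variable names are made up for the example:
+
+``` go
+package main
+
+import (
+	"fmt"
+	"math"
+	"time"
+)
+
+// binWidth approximates how heyyall sizes its histogram bins. With nf == 0 the
+// bins span the full latency range; with nf > 1 the range is capped at
+// nf * the smallest observed latency.
+func binWidth(minLat, maxLat time.Duration, numObservations, nf int) time.Duration {
+	numBins := math.Ceil(math.Log2(float64(numObservations))) + 1 // Sturges' rule
+	upper := float64(maxLat)
+	if nf > 1 {
+		upper = math.Min(float64(maxLat), float64(nf)*float64(minLat))
+	}
+	return time.Duration(upper / numBins)
+}
+
+func main() {
+	// Illustrative values: min latency 20ms, max latency 5s, 1000 observations.
+	fmt.Println(binWidth(20*time.Millisecond, 5*time.Second, 1000, 0))  // ~455ms bins
+	fmt.Println(binWidth(20*time.Millisecond, 5*time.Second, 1000, 10)) // ~18ms bins
+}
+```
+
+Observations that fall beyond the capped range are rolled up into a single final bin keyed by the true maximum latency, which is why the last row of a normalized histogram jumps to a much larger latency value than the rows before it.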
Specifying `-nf 10` has the following effect on the generated histogram: ``` -./heyyall -config "testdata/oneEP1000Rqst.json" -loglevel 2 -detail "short" -nf 10 - -Response Time Histogram (seconds): - Latency Number of Observations - ------- ---------------------- - [ 0.023] 0 - [ 0.046] 24 >>>>>>> - [ 0.069] 105 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> - [ 0.092] 345 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> - [ 0.115] 256 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> - [ 0.138] 132 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> - [ 0.161] 56 >>>>>>>>>>>>>>>> - [ 0.184] 43 >>>>>>>>>>>> - [ 0.207] 10 >>> - [ 0.230] 7 >> - [ 5.136] 22 >>>>>> +./heyyall -config "testdata/oneEP1000Rqst.json" -loglevel 2 -out "text" -nf 10 + +... + +Request Latency Histogram (secs): + Latency Observations + [0.0189] 1 + [0.0378] 365 ❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱ + [0.0567] 397 ❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱ + [0.0756] 149 ❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱❱ + [0.0945] 51 ❱❱❱❱❱❱❱❱❱❱❱❱❱ + [0.1134] 10 ❱❱❱ + [0.1323] 0 + [0.1512] 0 + [0.1701] 0 + [0.1890] 0 + [5.1241] 27 ❱❱❱❱❱❱❱ + + +... ``` -Instead of 0.5 second bin widths, the widths are about 0.023 seconds. With the narrower widths come much more detail about the majority of the response latencies. However, there is a relatively long tail in both test executions. There are several responses with latencies over 1 second. These can be seen a little clearer in the first histogram. In the second histogram we lose this detail. All we see are that there were a total of 22 requests with latencies over 0.23 seconds. +Instead of 0.5 second bin widths, the widths are about 0.019 seconds. With the narrower widths come much more detail about the majority of the response latencies. However, there is a relatively long tail in both test executions. There are several responses with latencies over 5 seconds. These can be seen a little clearer in the first histogram. In the second histogram we lose this detail. All we see are that there were a total of 27 requests with latencies over 0.1134 seconds. Changing the Normalization Factor allows you to decide where in the range of response latencies you want to see the finer grained detail. It should be noted that with a narrower range of response latencies you may not need to specify the Normalization Factor at all. @@ -193,7 +256,7 @@ There are a few items of note: 4. `"KeyFile"` is optional and specifies a client's PEM encoded private key. It can be configured at both the global and Endpoint levels. If specified for an Endpoint it will override the global specification. 5. `"CertFile"` is optional and represent a client's PEM encoded public certificate. It can be configured at both the global and Endpoint levels. If specified for an Endpoint it will override the global specification. - +The `config.go` file in the `api` package contains the Go struct definitions for the JSON configuration. @@ -201,6 +264,8 @@ There are a few items of note: As mentioned above `heyyall` also supports client authentication and authorization via SSL on an HTTP request. The `"KeyFile"` and `"CertFile"` configuration fields provide the required information. These must both be PEM files. +The `internal/testhttpsserver` package contains the code for an HTTPS server that will authenticate and authorize a client certificate. 
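+For illustration only, the key and certificate are supplied through the configuration fields described above. The fragment below is hypothetical: the file names and values are placeholders, and `api/config.go` (along with the configs under `testdata`) is the authoritative reference for the exact field names and structure:
+
+``` json
+{
+    "MaxConcurrentRqsts": 2,
+    "NumRequests": 100,
+    "KeyFile": "client-key.pem",
+    "CertFile": "client-cert.pem",
+    "Endpoints": [
+        {"URL": "https://localhost:8443/users", "Method": "GET"}
+    ]
+}
+```
+
+As noted above, a `"KeyFile"`/`"CertFile"` pair specified for an individual Endpoint overrides the global values.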
The test server can be useful for testing `heyyall`'s HTTPS support. You will need certificate and key files for both the server and client. It is possible to use the same certs/keys for both client and server. + + # Runtime behavior  Unsurprisingly, the configuration affects the runtime behavior of the application. @@ -209,7 +274,7 @@ The design of the application calls for `RqstRate`, `MaxConcurrentRqsts`, and if The results of the calculations that allocate these resources across `Endpoints` can result in multiple, concurrent, requests being sent to a single `Endpoint`. If this occurs then the total number of requests allocated to a single `Endpoint` will be split among the concurrent `Endpoint` executions. The `RqstRate` is likewise split across all `Endpoints` as well as among concurrent executions to a single `Endpoint`. If specified, `RunDuration` is the same for all `Endpoints`. -As a result of the above, the actual values specified for `RqstRate`, `MaxConcurrentRqsts`, and `NumRequests` can turn out to be more along the lines of a suggestion rather than a strict specification. This is due to rounding errors resulting from non-integer values resulting from calculations that are performed to allocate the `RqstRate` and the other config values across the `Endpoints`. For example, if 3 `Endpoints` are specified and the value for `MaxConcurrentRqsts` is 4 the calculation of concurrent requests per `Endpoint` is 1.33.... These kinds of results will always be rounded up. So the `Endpoint`s will have 2 concurrent requests each for an overall `MaxConcurrentRqsts` of 6, not 4. Keep this in mind when specifying the values for `RqstRate`, `MaxConcurrentRqsts`, `NumRequests`, and the overall number of configured `Endpoints`. The application does print warning messages for every calculation that is rounded up. For example the log messages below show the rounding that occurred for 3 different endpoints: +As a result of the above, the actual values specified for `RqstRate`, `MaxConcurrentRqsts`, and `NumRequests` can turn out to be more of a guideline than a strict specification. This is due to the rounding of non-integer values produced by the calculations that allocate the `RqstRate` and the other config values across the `Endpoints`. For example, if 3 `Endpoints` are specified and the value for `MaxConcurrentRqsts` is 4, the calculation of concurrent requests per `Endpoint` is 1.33.... These kinds of results will always be rounded up. So the `Endpoint`s will have 2 concurrent requests each for an overall `MaxConcurrentRqsts` of 6, not 4. Keep this in mind when specifying the values for `RqstRate`, `MaxConcurrentRqsts`, `NumRequests`, and the overall number of configured `Endpoints`. The application does print warning messages for every calculation that is rounded up. For example, the log messages below show the rounding that occurred for 3 different endpoints: ``` May 12 15:09:19.000 WRN EP: http://accountd.kube/users: epConcurrency, 1, was rounded up. The calcuation result was 0.990000 @@ -223,7 +288,7 @@ To manage request rate the implementation uses Go's `time.Sleep()` function. As Sleep pauses the current goroutine for at least the duration d. -What this means is that `time.Sleep()` may and likely will sleep longer than specified. This behavior means `RunDuration` also turns out to be more of a suggestion rather than a strict specification. So the actual run time of a test execution will be a little longer than specified.
How much longer increases with the length of `RunDuration`. For example: +What this means is that `time.Sleep()` may and likely will sleep longer than specified. This behavior means `RunDuration` also turns out to be more of a guideline than a strict specification. So the actual run time of a test execution will be a little longer than specified. How much longer increases with the length of `RunDuration`. For example: ``` ./heyyall -config "testdata/threeEPs33Pct.json" @@ -251,7 +316,11 @@ WRN Requestor: error sending request error="Get \"https://prod.idrix.eu/secure/\ # Future plans -1. More metrics will be added to gain feature parity with `hey`. Mainly this means metrics for latency distribution (i.e., quantiles) and high level TCP/IP and HTTP related metrics like DNS lookup latencies and HTTP request write latencies. -3. Support for other configuration and output format types may be added, for example YAML and output in CSV format could be added. -4. Ability to specify the number of requests to be run at an endpoint level. If added this would be a strict specification in the sense that measures will be taken to ensure that the exact number of requests will be run and restrictions will be put in place to ensure related calculations don't have non-integer results. -5. Ability to script scenarios comprised of mutliple different requests to a single Endpoint. `heyyall` currently on supports a single request to a given endpoint. +1. Support for other configuration and output formats may be added, for example YAML configuration and CSV output. +2. Ability to specify the number of requests to be run at an endpoint level. If added, this would be a strict specification in the sense that measures will be taken to ensure that the exact number of requests will be run and restrictions will be put in place to ensure related calculations don't have non-integer results. +3. Ability to script scenarios comprised of multiple different requests to a single Endpoint. `heyyall` currently only supports a single request to a given endpoint. +4. Performance improvements may be needed. When compared to similar tools like `hey`, the request throughput of `heyyall` seems generally lower. It's not entirely clear that this is the case, but it needs more investigation. + +# Similar tools + +While I was familiar with tools like `hey` and `JMeter`, it turns out there is a vast universe of load generation tools out there. [Here's a great resource](https://github.com/denji/awesome-http-benchmark) that is up-to-date as of January 2020. Another tool that was brought to my attention is [Artillery](https://artillery.io/). \ No newline at end of file diff --git a/api/config.go b/api/config.go index 294a916..932eeda 100644 --- a/api/config.go +++ b/api/config.go @@ -69,10 +69,6 @@ type LoadTestConfig struct { // both RunDuration and NumRequests is an error. See RunDuration // above for a bit more info. NumRequests int - // OutputType specifies if the output will be written in JSON or - // CSV format. Acceptable values are "JSON" and "CSV". If not - // specified output will be in JSON format. - OutputType string // KeyFile is the name of a file, in PEM format, that contains an SSL private // key. It will only be used if it has a non-empty value. It can be overridden // at the Endpoint level. diff --git a/api/report.go b/api/report.go new file mode 100644 index 0000000..1b68977 --- /dev/null +++ b/api/report.go @@ -0,0 +1,85 @@ +// Copyright (c) 2020 Richard Youngkin. All rights reserved.
+// Use of this source code is governed by a MIT-style +// license that can be found in the LICENSE file. + +package api + +import "time" + +// RqstStats contains a set of common runtime stats reported at both the +// Summary and Endpoint level +type RqstStats struct { + // TimingResultsNanos contains the duration of each request. + TimingResultsNanos []time.Duration + // TotalRqsts is the overall number of requests made during the run + TotalRqsts int64 + // TotalRequestDurationNanos is the sum of all request run durations + TotalRequestDurationNanos time.Duration + // MaxRqstDurationNanos is the longest request duration + MaxRqstDurationNanos time.Duration + // NormalizedMaxRqstDurationNanos is the longest request duration rejecting outlier + // durations more than 'x' times the MinRqstDuration + NormalizedMaxRqstDurationNanos time.Duration + // MinRqstDurationNanos is the smallest request duration for an endpoint + MinRqstDurationNanos time.Duration + // AvgRqstDurationNanos is the average duration of a request for an endpoint + AvgRqstDurationNanos time.Duration +} + +// EndpointDetail is used to report an overview of the results of +// a load test run for a given endpoint. +type EndpointDetail struct { + // URL is the endpoint URL + URL string + // HTTPMethodStatusDist summarizes, by HTTP method, the number of times a + // given status was returned (e.g., 200, 201, 404, etc). More specifically, + // it is a map keyed by HTTP method containing a map keyed by HTTP status + // referencing the number of times that status was returned. + HTTPMethodStatusDist map[string]map[int]int + // HTTPMethodRqstStats provides summary request statistics by HTTP Method. It is + // map of RqstStats keyed by HTTP method. + HTTPMethodRqstStats map[string]*RqstStats +} + +// RunResults is used to report an overview of the results of a +// load test run +type RunResults struct { + // RunSummary is a roll-up of the detailed run results + RunSummary RunSummary + // EndpointSummary describes how often each endpoint was called. + // It is a map keyed by URL of a map keyed by HTTP verb with a value of + // number of requests. So it's a summary of how often each HTTP verb + // was called on each endpoint. 
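+	// For example: {"http://accountd.kube/users": {"GET": 260}}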
+ EndpointSummary map[string]map[string]int + // EndpointDetails is the per endpoint summary of results keyed by URL + EndpointDetails map[string]*EndpointDetail `json:",omitempty"` +} + +// RunSummary is a roll-up of the detailed run results +type RunSummary struct { + // RqstRatePerSec is the overall request rate per second + // rounded to the nearest integer + RqstRatePerSec float64 + // RunDurationNanos is the wall clock duration of the test + RunDurationNanos time.Duration + + // MaxRqstRatePerSec is the maximum request rate per second + // over 1/10th of the run duration or number of requests + //MaxRqstRatePerSec int + // MinRqstRatePerSec is the maximum request rate per second + // over 1/10th of the run duration or number of requests + //MinRqstRatePerSec int + + // RqstStats is a summary of runtime statistics + RqstStats RqstStats + // DNSLookupNanos records how long it took to resolve the hostname to an IP Address + DNSLookupNanos []time.Duration + // TCPConnSetupNanos records how long it took to setup the TCP connection + TCPConnSetupNanos []time.Duration + // RqstRoundTripNanos records duration from the time the TCP connection was setup + // until the response was received + RqstRoundTripNanos []time.Duration + // TLSHandshakeNanos records the time it took to complete the TLS negotiation with + // the server. It's only meaningful for HTTPS connections + TLSHandshakeNanos []time.Duration +} diff --git a/buildexes.sh b/buildexes.sh deleted file mode 100755 index 002f38e..0000000 --- a/buildexes.sh +++ /dev/null @@ -1,22 +0,0 @@ -#!/usr/bin/env bash - -platforms=("windows/amd64" "windows/386" "darwin/amd64" "linux/amd64" "linux/arm") - -for platform in "${platforms[@]}" -do - platform_split=(${platform//\// }) - GOOS=${platform_split[0]} - GOARCH=${platform_split[1]} - output_name=heyyall'-'$GOOS'-'$GOARCH - if [ $GOOS = "windows" ]; then - output_name+='.exe' - fi - - echo $GOOS $GOARCH $output_name - - env GOOS=$GOOS GOARCH=$GOARCH go build -o $output_name . - if [ $? -ne 0 ]; then - echo 'An error has occurred! Aborting the script execution...' - exit 1 - fi -done \ No newline at end of file diff --git a/heyyall.go b/heyyall.go index b370d1b..f950e20 100644 --- a/heyyall.go +++ b/heyyall.go @@ -15,6 +15,7 @@ import ( "os" "os/signal" "runtime" + "runtime/pprof" "syscall" "time" @@ -30,7 +31,7 @@ Usage: heyyall -config [flags...] Options: -loglevel Logging level. Default is 'WARN' (2). 0 is DEBUG, 1 INFO, up to 4 FATAL - -detail Detail level of output report, 'short' or 'long'. Default is 'long' + -out Type of output report, 'text' or 'json'. Default is 'text' -nf Normalization factor used to compress the output histogram by eliminating long tails. Lower values provide a finer grained view of the data at the expense of dropping data associated with the tail of the latency distribution. The latter is partly mitigated by @@ -49,13 +50,23 @@ Options: configFile := flag.String("config", "", "path and filename containing the runtime configuration") logLevel := flag.Int("loglevel", int(zerolog.WarnLevel), "log level, 0 for debug, 1 info, 2 warn, ...") - reportDetailFlag := flag.String("detail", "long", "what level of report detail is desired, 'short' or 'long'") + outputType := flag.String("out", "text", "what type of report is desired, 'text' or 'json'") normalizationFactor := flag.Int("nf", 0, "normalization factor used to compress the output histogram by eliminating long tails. If provided, the value must be at least 10. 
The default is 0 which signifies no normalization will be done") cpus := flag.Int("cpus", 0, "number of CPUs to use for the test run. Default is 0 which specifies all CPUs are to be used.") help := flag.Bool("help", false, "help will emit detailed usage instructions and exit") + cpuprofile := flag.String("cpuprofile", "", "write cpu profile to file") flag.Parse() + if *cpuprofile != "" { + f, err := os.Create(*cpuprofile) + if err != nil { + log.Fatal().Err(err).Msg("unable to create cpuprofile file") + } + pprof.StartCPUProfile(f) + defer pprof.StopCPUProfile() + } + if *help { fmt.Println(usage) return @@ -94,20 +105,18 @@ Options: responseC := make(chan internal.Response, config.MaxConcurrentRqsts) doneC := make(chan struct{}) - var reportDetail internal.ReportDetail = internal.Long - if *reportDetailFlag == "short" { - reportDetail = internal.Short + var reportDetail internal.OutputType = internal.JSON + if *outputType == "text" { + reportDetail = internal.Text } responseHandler := &internal.ResponseHandler{ - ReportDetail: reportDetail, - ResponseC: responseC, - DoneC: doneC, - NumRqsts: config.NumRequests, - NormFactor: *normalizationFactor, + OutputType: reportDetail, + ResponseC: responseC, + DoneC: doneC, + NumRqsts: config.NumRequests, + NormFactor: *normalizationFactor, } go responseHandler.Start() - // Give responseHandler a bit of time to start - time.Sleep(time.Millisecond * 20) var cert tls.Certificate if config.CertFile != "" && config.KeyFile != "" { diff --git a/internal/reporter.go b/internal/reporter.go new file mode 100644 index 0000000..73b45ac --- /dev/null +++ b/internal/reporter.go @@ -0,0 +1,212 @@ +// Copyright (c) 2020 Richard Youngkin. All rights reserved. +// Use of this source code is governed by a MIT-style +// license that can be found in the LICENSE file. + +package internal + +import ( + "fmt" + "math" + "os" + "sort" + "text/template" + "time" + + "github.com/rs/zerolog/log" + "github.com/youngkin/heyyall/api" +) + +// OutputType specifies the output formate of the final report. There are +// 2 values, 'text' and 'json'. 'text' will present a human readable form. +// 'json' will present the JSON structures that capture the detailed run +// stats. 
+type OutputType int + +const ( + // Text specifies only high level report stats will be produced + Text OutputType = iota + // JSON indicates detailed reporting stats will be produced + JSON +) + +var tmpltFuncs = template.FuncMap{ + "formatFloat": formatFloat, + "formatSeconds": formatSeconds, + "formatPercentile": formatPercentile, + "formatMethod": formatMethod, + "format100Million": format100Million, +} + +func formatFloat(f float64) string { + return fmt.Sprintf("%4.4f", f) +} + +func formatSeconds(d time.Duration) string { + return fmt.Sprintf("%04.4f", d.Seconds()) +} + +func formatPercentile(p int, d []time.Duration) string { + val := calcPercentiles(p, d) + return formatSeconds(val) +} + +func formatMethod(m string) string { + if len(m) == 6 { // length of 'DELETE' + return m + } + + if len(m) == 3 { + return fmt.Sprintf(" %s", m) + } + + return fmt.Sprintf(" %s", m) +} + +func format100Million(i int64) string { + return fmt.Sprintf("%9v", i) +} + +var runSummTmplt = ` +Run Summary: + Total Rqsts: {{ .RqstStats.TotalRqsts }} + Rqsts/sec: {{ formatFloat .RqstRatePerSec }} + Run Duration (secs): {{ formatSeconds .RunDurationNanos }} +` + +var rqstLatencyTmplt = ` +Request Latency (secs): Min Median P75 P90 P95 P99 + {{ formatPercentile 0 .TimingResultsNanos }} {{ formatPercentile 50 .TimingResultsNanos }} {{ formatPercentile 75 .TimingResultsNanos }} {{ formatPercentile 90 .TimingResultsNanos }} {{ formatPercentile 95 .TimingResultsNanos }} {{ formatPercentile 99 .TimingResultsNanos }} +` + +var netDetailsTmplt = ` +Network Details (secs): + Min Median P75 P90 P95 P99 + DNS Lookup: {{ formatPercentile 0 .DNSLookupNanos }} {{ formatPercentile 50 .DNSLookupNanos }} {{ formatPercentile 75 .DNSLookupNanos }} {{ formatPercentile 90 .DNSLookupNanos }} {{ formatPercentile 95 .DNSLookupNanos }} {{ formatPercentile 99 .DNSLookupNanos }} + TCP Conn Setup: {{ formatPercentile 0 .TCPConnSetupNanos }} {{ formatPercentile 50 .TCPConnSetupNanos }} {{ formatPercentile 75 .TCPConnSetupNanos }} {{ formatPercentile 90 .TCPConnSetupNanos }} {{ formatPercentile 95 .TCPConnSetupNanos }} {{ formatPercentile 99 .TCPConnSetupNanos }} + TLS Handshake: {{ formatPercentile 0 .TLSHandshakeNanos }} {{ formatPercentile 50 .TLSHandshakeNanos }} {{ formatPercentile 75 .TLSHandshakeNanos }} {{ formatPercentile 90 .TLSHandshakeNanos }} {{ formatPercentile 95 .TLSHandshakeNanos }} {{ formatPercentile 99 .TLSHandshakeNanos }} + Rqst Roundtrip: {{ formatPercentile 0 .RqstRoundTripNanos }} {{ formatPercentile 50 .RqstRoundTripNanos }} {{ formatPercentile 75 .RqstRoundTripNanos }} {{ formatPercentile 90 .RqstRoundTripNanos }} {{ formatPercentile 95 .RqstRoundTripNanos }} {{ formatPercentile 99 .RqstRoundTripNanos }} +` + +// Pass in a EndpointDetails keyed by URL and range over EndpointDetail +// HTTPMethodRqstStats (map[string]*RqstStats keyed by Method) +var endpointDetailsTmplt = ` +Endpoint Details(secs): {{ range $url, $epDetails := . 
}} + {{ $url }}: + Requests Min Median P75 P90 P95 P99 {{ range $method, $epDetail := .HTTPMethodRqstStats }} + {{ formatMethod $method }}: {{ format100Million .TotalRqsts }} {{ formatPercentile 0 .TimingResultsNanos }} {{ formatPercentile 50 .TimingResultsNanos }} {{ formatPercentile 75 .TimingResultsNanos }} {{ formatPercentile 90 .TimingResultsNanos }} {{ formatPercentile 95 .TimingResultsNanos }} {{ formatPercentile 99 .TimingResultsNanos }} {{ end }} + {{ end }} +` + +func printRunSummary(rs api.RunSummary) { + tmplt, err := template.New("runSummary").Funcs(tmpltFuncs).Parse(runSummTmplt) + if err != nil { + log.Error().Err(err).Msg("error parsing runResults template") + } + + err = tmplt.Execute(os.Stdout, rs) + if err != nil { + log.Error().Err(err).Msg("error executing runResults template") + } +} + +func printRqstLatency(rs api.RqstStats) { + tmplt, err := template.New("rqstLatency").Funcs(tmpltFuncs).Parse(rqstLatencyTmplt) + if err != nil { + log.Error().Err(err).Msg("error parsing rqstLatency template") + } + + err = tmplt.Execute(os.Stdout, rs) + if err != nil { + log.Error().Err(err).Msg("error executing rqstLatency template") + } +} + +func printNetworkDetails(rs api.RunSummary) { + tmplt, err := template.New("networkDetails").Funcs(tmpltFuncs).Parse(netDetailsTmplt) + if err != nil { + log.Error().Err(err).Msg("error parsing networkDetails template") + } + + err = tmplt.Execute(os.Stdout, rs) + if err != nil { + log.Error().Err(err).Msg("error executing networkDetails template") + } +} + +func printEndpointDetails(epd map[string]*api.EndpointDetail) { + tmplt, err := template.New("endpointDetail").Funcs(tmpltFuncs).Parse(endpointDetailsTmplt) + if err != nil { + log.Error().Err(err).Msg("error parsing endpoint detail template") + } + + err = tmplt.Execute(os.Stdout, epd) + if err != nil { + log.Error().Err(err).Msg("error executing endpoint detail template") + } +} + +func calcPercentiles(percentile int, results []time.Duration) time.Duration { + if len(results) == 0 { + return 0 + } + + if percentile == 0 { + return calcPMin(results) + } + + if percentile == 50 { + return calcPMedian(results) + } + + sort.Slice(results, func(i, j int) bool { return results[i] < results[j] }) + + // applying math.Ceil to the results of math.Ceil is required to round up + // to the next results cell when len(results) is a small number, e.g., like + // 2. Otherwise Median is greater than P99. 
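+	// In effect the index computed below is ceil(((len(results)-1) * percentile) / 100).
+	// With a very small result set (e.g., 2 observations) the Ceil calls ensure that
+	// P75 through P99 resolve to the last (largest) element rather than the first.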
+ p := math.Ceil(math.Ceil(float64((len(results)-1)*percentile)) / 100) + return results[int(p)] +} + +func calcPMin(results []time.Duration) time.Duration { + if len(results) == 0 { + return 0 + } + sort.Slice(results, func(i, j int) bool { return results[i] < results[j] }) + return results[0] +} + +func calcPMedian(results []time.Duration) time.Duration { + if len(results) == 0 { + return 0 + } + + sort.Slice(results, func(i, j int) bool { return results[i] < results[j] }) + + isEven := len(results)%2 == 0 + mNumber := len(results) / 2 + + if !isEven { + return results[mNumber] + } + return (results[mNumber-1] + results[mNumber]) / time.Duration(2) +} + +// func calcP90(results []time.Duration) time.Duration { +// if len(results) == 0 { +// return 0 +// } + +// sort.Slice(results, func(i, j int) bool { return results[i] < results[j] }) +// p90 := float64(len(results)-1) * 0.90 +// return results[int(p90)] +// } + +// func calcP99(results []time.Duration) time.Duration { +// if len(results) == 0 { +// return 0 +// } + +// sort.Slice(results, func(i, j int) bool { return results[i] < results[j] }) +// p99 := float64(len(results)-1) * 0.99 +// return results[int(p99)] +// } diff --git a/internal/requestor.go b/internal/requestor.go index 2e47f7a..7591c57 100644 --- a/internal/requestor.go +++ b/internal/requestor.go @@ -9,6 +9,7 @@ import ( "context" "crypto/tls" "net/http" + "net/http/httptrace" "time" "github.com/rs/zerolog/log" @@ -46,6 +47,32 @@ func (r Requestor) ProcessRqst(ep api.Endpoint, numRqsts int, runDur time.Durati return } + var dnsStart, dnsDone, connStart, connDone, gotResp, tlsStart, tlsDone time.Time + + trace := &httptrace.ClientTrace{ + DNSStart: func(_ httptrace.DNSStartInfo) { dnsStart = time.Now() }, + DNSDone: func(_ httptrace.DNSDoneInfo) { dnsDone = time.Now() }, + // ConnectStart: func(_, _ string) { + // if dnsDone.IsZero() { + // // connecting directly to IP + // dnsDone = time.Now() + // } + // }, + // ConnectDone: func(net, addr string, err error) { + // if err != nil { + // log.Fatal().Msgf("unable to connect to host %v: %v", addr, err) + // } + // connDone = time.Now() + + // }, + GetConn: func(_ string) { connStart = time.Now() }, + GotConn: func(_ httptrace.GotConnInfo) { connDone = time.Now() }, + GotFirstResponseByte: func() { gotResp = time.Now() }, + TLSHandshakeStart: func() { tlsStart = time.Now() }, + TLSHandshakeDone: func(_ tls.ConnectionState, _ error) { tlsDone = time.Now() }, + } + req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace)) + // At this point we know one of numRqsts or runDur is non-zero. 
Whichever one // is non-zero will be set to a super-high number to effectively disable its // test in the for-loop below @@ -105,9 +132,13 @@ func (r Requestor) ProcessRqst(ep api.Endpoint, numRqsts int, runDur time.Durati log.Debug().Msg("Requestor runDur expired, exiting") return case r.ResponseC <- Response{ - HTTPStatus: resp.StatusCode, - Endpoint: api.Endpoint{URL: ep.URL, Method: ep.Method}, - RequestDuration: time.Since(start), + HTTPStatus: resp.StatusCode, + Endpoint: api.Endpoint{URL: ep.URL, Method: ep.Method}, + RequestDuration: time.Since(start), + DNSLookupDuration: dnsDone.Sub(dnsStart), + TCPConnDuration: connDone.Sub(connStart), + RoundTripDuration: gotResp.Sub(connDone), + TLSHandshakeDuration: tlsDone.Sub(tlsStart), }: } diff --git a/internal/responseHandler.go b/internal/responseHandler.go index 68960e5..963b3e5 100644 --- a/internal/responseHandler.go +++ b/internal/responseHandler.go @@ -16,98 +16,26 @@ import ( "github.com/youngkin/heyyall/api" ) -// RqstStats contains a set of common runtime stats reported at both the -// Summary and Endpoint level -type RqstStats struct { - // TotalRqsts is the overall number of requests made during the run - TotalRqsts int64 - // TotalRequestDurationStr is the string version of TotalRequestDuration - TotalRequestDurationStr string - // MaxRqstDurationStr is a string representation of MaxRqstDuration - MaxRqstDurationStr string - // NormalizedMaxRqstDurationStr is a string representation of NormalizedMaxRqstDuration - NormalizedMaxRqstDurationStr string - // MinRqstDurationStr is a string representation of MinRqstDuration - MinRqstDurationStr string - // AvgRqstDurationStr is the average duration of a request in microseconds - AvgRqstDurationStr string - // TotalRequestDuration is the sum of all request run durations - TotalRequestDuration time.Duration - // MaxRqstDuration is the longest request duration - MaxRqstDuration time.Duration - // NormalizedMaxRqstDuration is the longest request duration rejecting outlier - // durations more than 'x' times the MinRqstDuration - NormalizedMaxRqstDuration time.Duration - // MinRqstDuration is the smallest request duration for an endpoint - MinRqstDuration time.Duration - // AvgRqstDuration is the average duration of a request for an endpoint - AvgRqstDuration time.Duration -} - -// EndpointDetail is used to report an overview of the results of -// a load test run for a given endpoint. -type EndpointDetail struct { - // URL is the endpoint URL - URL string - // HTTPMethodStatusDist summarizes, by HTTP method, the number of times a - // given status was returned (e.g., 200, 201, 404, etc). More specifically, - // it is a map keyed by HTTP method containing a map keyed by HTTP status - // referencing the number of times that status was returned. - HTTPMethodStatusDist map[string]map[int]int - // HTTPMethodRqstStats provides summary request statistics by HTTP Method. It is - // map of RqstStats keyed by HTTP method. - HTTPMethodRqstStats map[string]*RqstStats -} - -// RunResults is used to report an overview of the results of a -// load test run -type RunResults struct { - // RunSummary is a roll-up of the detailed run results - RunSummary RunSummary - // EndpointSummary describes how often each endpoint was called. - // It is a map keyed by URL of a map keyed by HTTP verb with a value of - // number of requests. So it's a summary of how often each HTTP verb - // was called on each endpoint. 
- EndpointSummary map[string]map[string]int - // EndpointDetails is the per endpoint summary of results keyed by URL - EndpointDetails map[string]*EndpointDetail `json:",omitempty"` -} - -// RunSummary is a roll-up of the detailed run results -type RunSummary struct { - // RqstRatePerSec is the overall request rate per second - // rounded to the nearest integer - RqstRatePerSec float64 - // RunDuration is the wall clock duration of the test - RunDuration time.Duration - // RunDurationStr is the string representation of RunDuration - RunDurationStr string - // ResponseDistribution is distribution of response times. There will be - // 11 bucket; 10 microseconds or less, between 10us and 100us, - // 100us and 1ms, 1ms to 10ms, 10ms to 100ms, 100ms to 1s, 1s to 1.1s, - // 1.1s to 1.5s, 1.5s to 1.8s, 1.8s to 2.5s, 2.5s and above - //ResponseDistribution map[float32]int - // HTTPStatusDistribution is the distribution of HTTP response statuses - //HTTPStatusDistribution map[string]int - // MaxRqstRatePerSec is the maximum request rate per second - // over 1/10th of the run duration or number of requests - //MaxRqstRatePerSec int - // MinRqstRatePerSec is the maximum request rate per second - // over 1/10th of the run duration or number of requests - //MinRqstRatePerSec int - // RqstStats is a summary of runtime statistics - RqstStats RqstStats +// Response contains information describing the results +// of a request to a specific endpoint +type Response struct { + HTTPStatus int + Endpoint api.Endpoint + RequestDuration time.Duration + DNSLookupDuration time.Duration + TCPConnDuration time.Duration + RoundTripDuration time.Duration + TLSHandshakeDuration time.Duration } // ResponseHandler is responsible for accepting, summarizing, and reporting // on the overall load test results. type ResponseHandler struct { - ReportDetail ReportDetail - ResponseC chan Response - DoneC chan struct{} - NumRqsts int - NormFactor int - timingResults []time.Duration + OutputType OutputType + ResponseC chan Response + DoneC chan struct{} + NumRqsts int + NormFactor int // histogram contains a count of observations that are <= to the value of the key. // The key is a number that represents response duration. 
histogram map[float64]int @@ -117,14 +45,14 @@ type ResponseHandler struct { func (rh *ResponseHandler) Start() { log.Debug().Msg("ResponseHandler starting") - rh.timingResults = make([]time.Duration, 0, int(math.Min(float64(rh.NumRqsts), float64(api.MaxRqsts)))) - epRunSummary := make(map[string]*EndpointDetail) - runSummary := RunSummary{RqstStats: RqstStats{MaxRqstDuration: time.Duration(-1), MinRqstDuration: time.Duration(math.MaxInt64)}} - runResults := RunResults{RunSummary: runSummary} + epRunSummary := make(map[string]*api.EndpointDetail) + runSummary := api.RunSummary{RqstStats: api.RqstStats{MaxRqstDurationNanos: time.Duration(-1), MinRqstDurationNanos: time.Duration(math.MaxInt64)}} + runResults := api.RunResults{RunSummary: runSummary} runResults.EndpointSummary = make(map[string]map[string]int) start := time.Now() var totalRunTime time.Duration + responses := make([]Response, 0, 10) for { select { @@ -132,87 +60,96 @@ func (rh *ResponseHandler) Start() { if !ok { defer close(rh.DoneC) log.Debug().Msg("ResponseHandler: Summarizing results and exiting") + + for _, r := range responses { + rh.accumulateResponseStats(r, &totalRunTime, &runResults, epRunSummary) + runResults.RunSummary.DNSLookupNanos = append(runResults.RunSummary.DNSLookupNanos, r.DNSLookupDuration) + runResults.RunSummary.TCPConnSetupNanos = append(runResults.RunSummary.TCPConnSetupNanos, r.TCPConnDuration) + runResults.RunSummary.RqstRoundTripNanos = append(runResults.RunSummary.RqstRoundTripNanos, r.RoundTripDuration) + runResults.RunSummary.TLSHandshakeNanos = append(runResults.RunSummary.TLSHandshakeNanos, r.TLSHandshakeDuration) + } + err := rh.finalizeResponseStats(start, &totalRunTime, &runResults, epRunSummary) if err != nil { log.Error().Err(err) return } - min, max := rh.generateHistogram(&runResults) + if rh.OutputType == Text { + fmt.Println("") + printRunSummary(runResults.RunSummary) + + fmt.Println("") + printRqstLatency(runResults.RunSummary.RqstStats) + + min, max := rh.generateHistogram(&runResults) + fmt.Printf("\nRequest Latency Histogram (secs):\n") + fmt.Println(rh.generateHistogramString(min, max)) + + fmt.Println("") + printEndpointDetails(runResults.EndpointDetails) + + fmt.Println("") + printNetworkDetails(runResults.RunSummary) + + return + } + // TODO: This needs to be uncommented to print JSON results. 
Also, reportDetail should probably + // TODO: change to reportType with values text or JSON rsjson, err := json.MarshalIndent(runResults, " ", " ") if err != nil { log.Error().Err(err).Msgf("error marshaling RunSummary into string: %+v.\n", runResults) return } - - if max != 0 { - fmt.Printf("\nResponse Time Histogram (seconds):\n") - fmt.Println(rh.generateHistogramString(min, max)) - } else { - fmt.Println("\nUnable to generate Response Time Histogram.") - log.Error().Msg("'max' histogram bin value was 0, no histogram can be created") - } - - fmt.Printf("\n\nRun Results:\n") fmt.Printf("%s\n", string(rsjson[2:len(rsjson)-1])) + return } - rh.accumulateResponseStats(resp, &totalRunTime, &runResults, epRunSummary) - + responses = append(responses, resp) } } } func (rh *ResponseHandler) finalizeResponseStats(start time.Time, totalRunTime *time.Duration, - runResults *RunResults, epRunSummary map[string]*EndpointDetail) error { - - runResults.RunSummary.RunDuration = time.Since(start) - runResults.RunSummary.RunDurationStr = runResults.RunSummary.RunDuration.String() - runResults.RunSummary.RqstStats.TotalRequestDurationStr = totalRunTime.String() - runResults.RunSummary.RqstStats.MaxRqstDurationStr = runResults.RunSummary.RqstStats.MaxRqstDuration.String() - runResults.RunSummary.RqstStats.MinRqstDurationStr = runResults.RunSummary.RqstStats.MinRqstDuration.String() - runResults.RunSummary.RqstStats.AvgRqstDuration = time.Duration(0) + runResults *api.RunResults, epRunSummary map[string]*api.EndpointDetail) error { + + runResults.RunSummary.RunDurationNanos = time.Since(start) + runResults.RunSummary.RqstStats.AvgRqstDurationNanos = time.Duration(0) if runResults.RunSummary.RqstStats.TotalRqsts > 0 { - runResults.RunSummary.RqstStats.AvgRqstDuration = *totalRunTime / time.Duration(runResults.RunSummary.RqstStats.TotalRqsts) + runResults.RunSummary.RqstStats.AvgRqstDurationNanos = *totalRunTime / time.Duration(runResults.RunSummary.RqstStats.TotalRqsts) } - runResults.RunSummary.RqstStats.AvgRqstDurationStr = runResults.RunSummary.RqstStats.AvgRqstDuration.String() - runResults.RunSummary.RqstRatePerSec = (float64(runResults.RunSummary.RqstStats.TotalRqsts) / float64(runResults.RunSummary.RunDuration)) * float64(time.Second) + runResults.RunSummary.RqstRatePerSec = (float64(runResults.RunSummary.RqstStats.TotalRqsts) / float64(runResults.RunSummary.RunDurationNanos)) * float64(time.Second) - if rh.ReportDetail == Long { - runResults.EndpointDetails = epRunSummary + runResults.EndpointDetails = epRunSummary - for _, epDetail := range epRunSummary { - for _, methodRqstStats := range epDetail.HTTPMethodRqstStats { - methodRqstStats.MaxRqstDurationStr = methodRqstStats.MaxRqstDuration.String() - methodRqstStats.MinRqstDurationStr = methodRqstStats.MinRqstDuration.String() - methodRqstStats.AvgRqstDurationStr = "0s" - if methodRqstStats.TotalRqsts > 0 { - methodRqstStats.AvgRqstDuration = (methodRqstStats.TotalRequestDuration / time.Duration(methodRqstStats.TotalRqsts)) - methodRqstStats.AvgRqstDurationStr = methodRqstStats.AvgRqstDuration.String() - } - methodRqstStats.TotalRequestDurationStr = methodRqstStats.TotalRequestDuration.String() - log.Debug().Msgf("EndpointSummary: %+v", epDetail) + for _, epDetail := range epRunSummary { + for _, methodRqstStats := range epDetail.HTTPMethodRqstStats { + if methodRqstStats.TotalRqsts > 0 { + methodRqstStats.AvgRqstDurationNanos = (methodRqstStats.TotalRequestDurationNanos / time.Duration(methodRqstStats.TotalRqsts)) } + 
log.Debug().Msgf("EndpointSummary: %+v", epDetail) } } return nil } -func (rh *ResponseHandler) accumulateResponseStats(resp Response, totalRunTime *time.Duration, runResults *RunResults, epRunSummary map[string]*EndpointDetail) { - rh.timingResults = append(rh.timingResults, resp.RequestDuration) +func (rh *ResponseHandler) accumulateResponseStats(resp Response, totalRunTime *time.Duration, + runResults *api.RunResults, epRunSummary map[string]*api.EndpointDetail) { + + runResults.RunSummary.RqstStats.TimingResultsNanos = append(runResults.RunSummary.RqstStats.TimingResultsNanos, resp.RequestDuration) runResults.RunSummary.RqstStats.TotalRqsts++ - runResults.RunSummary.RqstStats.TotalRequestDuration += resp.RequestDuration + runResults.RunSummary.RqstStats.TotalRequestDurationNanos += resp.RequestDuration *totalRunTime = *totalRunTime + resp.RequestDuration - if resp.RequestDuration > runResults.RunSummary.RqstStats.MaxRqstDuration { - runResults.RunSummary.RqstStats.MaxRqstDuration = resp.RequestDuration + if resp.RequestDuration > runResults.RunSummary.RqstStats.MaxRqstDurationNanos { + runResults.RunSummary.RqstStats.MaxRqstDurationNanos = resp.RequestDuration } - if resp.RequestDuration < runResults.RunSummary.RqstStats.MinRqstDuration { - runResults.RunSummary.RqstStats.MinRqstDuration = resp.RequestDuration + if resp.RequestDuration < runResults.RunSummary.RqstStats.MinRqstDurationNanos { + runResults.RunSummary.RqstStats.MinRqstDurationNanos = resp.RequestDuration } var epStatusCount map[string]int @@ -223,35 +160,36 @@ func (rh *ResponseHandler) accumulateResponseStats(resp Response, totalRunTime * } epStatusCount[resp.Endpoint.Method]++ - var epDetail *EndpointDetail + var epDetail *api.EndpointDetail epDetail, ok = epRunSummary[resp.Endpoint.URL] if !ok { - epDetail = &EndpointDetail{ + epDetail = &api.EndpointDetail{ URL: resp.Endpoint.URL, HTTPMethodStatusDist: make(map[string]map[int]int), - HTTPMethodRqstStats: make(map[string]*RqstStats), + HTTPMethodRqstStats: make(map[string]*api.RqstStats), } epRunSummary[resp.Endpoint.URL] = epDetail } methodRqstStats, ok := epDetail.HTTPMethodRqstStats[resp.Endpoint.Method] if !ok { - epDetail.HTTPMethodRqstStats[resp.Endpoint.Method] = &RqstStats{ - MaxRqstDuration: -1, - MinRqstDuration: time.Duration(math.MaxInt64), + epDetail.HTTPMethodRqstStats[resp.Endpoint.Method] = &api.RqstStats{ + MaxRqstDurationNanos: -1, + MinRqstDurationNanos: time.Duration(math.MaxInt64), } methodRqstStats = epDetail.HTTPMethodRqstStats[resp.Endpoint.Method] } methodRqstStats.TotalRqsts++ - methodRqstStats.TotalRequestDuration = methodRqstStats.TotalRequestDuration + resp.RequestDuration + methodRqstStats.TotalRequestDurationNanos = methodRqstStats.TotalRequestDurationNanos + resp.RequestDuration - if resp.RequestDuration > methodRqstStats.MaxRqstDuration { - methodRqstStats.MaxRqstDuration = resp.RequestDuration + if resp.RequestDuration > methodRqstStats.MaxRqstDurationNanos { + methodRqstStats.MaxRqstDurationNanos = resp.RequestDuration } - if resp.RequestDuration < methodRqstStats.MinRqstDuration { - methodRqstStats.MinRqstDuration = resp.RequestDuration + if resp.RequestDuration < methodRqstStats.MinRqstDurationNanos { + methodRqstStats.MinRqstDurationNanos = resp.RequestDuration } + methodRqstStats.TimingResultsNanos = append(methodRqstStats.TimingResultsNanos, resp.RequestDuration) _, ok = epDetail.HTTPMethodStatusDist[resp.Endpoint.Method] if !ok { @@ -266,16 +204,15 @@ func (rh *ResponseHandler) accumulateResponseStats(resp Response, totalRunTime 
* // taken from the result set, referencing the number of observations in the 'range' // of that number. It returns the min and max values for the histogram, i.e., the // min and max number of observations in the histogram. -func (rh *ResponseHandler) generateHistogram(runResults *RunResults) (minBinCount, maxBinCount int) { - numBins := calcNumBinsSturgesMethod(len(rh.timingResults)) - // numBins := calcNumBinsRiceMethod(len(rh.timingResults)) - runResults.RunSummary.RqstStats.NormalizedMaxRqstDuration = time.Duration(rh.NormFactor) * runResults.RunSummary.RqstStats.MinRqstDuration - runResults.RunSummary.RqstStats.NormalizedMaxRqstDurationStr = runResults.RunSummary.RqstStats.NormalizedMaxRqstDuration.String() +func (rh *ResponseHandler) generateHistogram(runResults *api.RunResults) (minBinCount, maxBinCount int) { + numBins := calcNumBinsSturgesMethod(len(runResults.RunSummary.RqstStats.TimingResultsNanos)) + // numBins := calcNumBinsRiceMethod(len(runResults.RunSummary.RqstStats.TimingResultsNanos)) + runResults.RunSummary.RqstStats.NormalizedMaxRqstDurationNanos = time.Duration(rh.NormFactor) * runResults.RunSummary.RqstStats.MinRqstDurationNanos - binWidth := float64(runResults.RunSummary.RqstStats.MaxRqstDuration) / float64(numBins) + binWidth := float64(runResults.RunSummary.RqstStats.MaxRqstDurationNanos) / float64(numBins) if rh.NormFactor > 1 { - maxNormDur := time.Duration(math.Min(float64(runResults.RunSummary.RqstStats.MaxRqstDuration), - float64(runResults.RunSummary.RqstStats.NormalizedMaxRqstDuration))) + maxNormDur := time.Duration(math.Min(float64(runResults.RunSummary.RqstStats.MaxRqstDurationNanos), + float64(runResults.RunSummary.RqstStats.NormalizedMaxRqstDurationNanos))) binWidth = float64(maxNormDur) / float64(numBins) } rh.histogram = make(map[float64]int, numBins) @@ -292,7 +229,7 @@ func (rh *ResponseHandler) generateHistogram(runResults *RunResults) (minBinCoun // that the observation gets assigned to the correct bin, i.e., the lowest bin value that is // >= to the observation. 'binValues' is a slice whose values are appended in ascending order, // so it is already sorted. - for _, observation := range rh.timingResults { + for _, observation := range runResults.RunSummary.RqstStats.TimingResultsNanos { // TODO: Might be able to get this to O(n*Log(n))) if did a binary search on binKeys as it's sorted for _, binVal := range binValues { if float64(observation) <= binVal { @@ -311,18 +248,18 @@ func (rh *ResponseHandler) generateHistogram(runResults *RunResults) (minBinCoun } } - if rh.NormFactor > 1 && runResults.RunSummary.RqstStats.NormalizedMaxRqstDuration < runResults.RunSummary.RqstStats.MaxRqstDuration { + if rh.NormFactor > 1 && runResults.RunSummary.RqstStats.NormalizedMaxRqstDurationNanos < runResults.RunSummary.RqstStats.MaxRqstDurationNanos { // If the histogram is being normalized, pick up all the observations greater than largest bin's key // into a single bin. This will show how many observations occurred between 'largestBinKey' and the // MaxRqstDuration. 
largestBinKey := binWidth * float64(numBins) var tailBinCount int - for _, observation := range rh.timingResults { + for _, observation := range runResults.RunSummary.RqstStats.TimingResultsNanos { if float64(observation) > largestBinKey { tailBinCount++ } } - rh.histogram[float64(runResults.RunSummary.RqstStats.MaxRqstDuration)] = tailBinCount + rh.histogram[float64(runResults.RunSummary.RqstStats.MaxRqstDurationNanos)] = tailBinCount maxBinCount = int(math.Max(float64(tailBinCount), float64(maxBinCount))) minBinCount = int(math.Min(float64(tailBinCount), float64(minBinCount))) } @@ -330,33 +267,13 @@ func (rh *ResponseHandler) generateHistogram(runResults *RunResults) (minBinCoun return minBinCount, maxBinCount } -func (rh *ResponseHandler) printHistogram(min, max int) { - barUnit := ">" - // barUnit := "■" - - keys := make([]float64, 0, len(rh.histogram)) - for k := range rh.histogram { - keys = append(keys, k) - } - sort.Float64s(keys) - - fmt.Printf("\tLatency\t\tNumber of Observations\n") - fmt.Printf("\t-------\t\t----------------------\n") - var sb strings.Builder - for _, key := range keys { - cnt := rh.histogram[key] - barLen := ((cnt * 100) + (max / 2)) / max - for i := 0; i < barLen; i++ { - sb.WriteString(barUnit) - } - fmt.Printf("\t[%6.3f]\t%7v\t%s\n", key/float64(time.Second), cnt, sb.String()) - sb.Reset() - } -} - func (rh *ResponseHandler) generateHistogramString(min, max int) string { - barUnit := ">" + // barUnit := ">" + barUnit := "❱" // barUnit := "■" + // barUnit := "➤" + // barUnit := "⭆" + // barUnit := '➯' keys := make([]float64, 0, len(rh.histogram)) for k := range rh.histogram { @@ -365,8 +282,8 @@ func (rh *ResponseHandler) generateHistogramString(min, max int) string { sort.Float64s(keys) var sb strings.Builder - sb.WriteString(fmt.Sprintf("\tLatency\t\tNumber of Observations\n")) - sb.WriteString(fmt.Sprintf("\t-------\t\t----------------------\n")) + sb.WriteString(fmt.Sprintf("\tLatency Observations\n")) + // sb.WriteString(fmt.Sprintf("\t-------- ----------------------\n")) for _, key := range keys { var sbBar strings.Builder cnt := rh.histogram[key] @@ -374,7 +291,7 @@ func (rh *ResponseHandler) generateHistogramString(min, max int) string { for i := 0; i < barLen; i++ { sbBar.WriteString(barUnit) } - sb.WriteString(fmt.Sprintf("\t[%6.3f]\t%7v\t%s\n", key/float64(time.Second), cnt, sbBar.String())) + sb.WriteString(fmt.Sprintf("\t[%4.4f] %7v\t%s\n", key/float64(time.Second), cnt, sbBar.String())) sbBar.Reset() } return sb.String() diff --git a/internal/responseHandler_test.go b/internal/responseHandler_test.go index f9f5864..4966a0c 100644 --- a/internal/responseHandler_test.go +++ b/internal/responseHandler_test.go @@ -91,18 +91,18 @@ func TestResponseStats(t *testing.T) { url1 := "http://someurl/1" url2 := "http://someurl/2" url3 := "http://someurl/3" - runResults := RunResults{ - RunSummary: RunSummary{ - RqstStats: RqstStats{ - MinRqstDuration: math.MaxInt64, - MaxRqstDuration: 0, + runResults := api.RunResults{ + RunSummary: api.RunSummary{ + RqstStats: api.RqstStats{ + MinRqstDurationNanos: math.MaxInt64, + MaxRqstDurationNanos: 0, }, }, EndpointSummary: make(map[string]map[string]int), } - epRunSummary := make(map[string]*EndpointDetail) + epRunSummary := make(map[string]*api.EndpointDetail) - rh := ResponseHandler{ReportDetail: Long} + rh := ResponseHandler{OutputType: JSON} // URL1 resp := Response{ @@ -207,7 +207,7 @@ func TestResponseStats(t *testing.T) { } expected := readGoldenFile(t, testName) - expectedJSON := RunResults{} + expectedJSON 
:= api.RunResults{} err = json.Unmarshal(expected, &expectedJSON) if err != nil { t.Errorf("error unmarshaling GoldenFile %s into RunSummary, Error: %s\n", expected, err) @@ -220,7 +220,7 @@ func TestResponseStats(t *testing.T) { t.Errorf("expected %d endpoints, got %d", len(expectedJSON.EndpointSummary), len(runResults.EndpointSummary)) } - if expectedJSON.RunSummary.RqstStats != runResults.RunSummary.RqstStats { + if !compareRqstStats(expectedJSON.RunSummary.RqstStats, runResults.RunSummary.RqstStats) { t.Errorf("expected %+v, got %+v", expectedJSON.RunSummary.RqstStats, runResults.RunSummary.RqstStats) } @@ -255,19 +255,23 @@ func TestResponseStats(t *testing.T) { runResults.EndpointSummary[url3][http.MethodDelete]) } - if *expectedJSON.EndpointDetails[url3].HTTPMethodRqstStats[http.MethodGet] != *runResults.EndpointDetails[url3].HTTPMethodRqstStats[http.MethodGet] { + if !compareRqstStats(*expectedJSON.EndpointDetails[url3].HTTPMethodRqstStats[http.MethodGet], + *runResults.EndpointDetails[url3].HTTPMethodRqstStats[http.MethodGet]) { t.Errorf("expected %+v for %s method %s, got %+v", expectedJSON.EndpointDetails[url3].HTTPMethodRqstStats[http.MethodGet], url3, http.MethodGet, runResults.EndpointDetails[url3].HTTPMethodRqstStats[http.MethodGet]) } - if *expectedJSON.EndpointDetails[url3].HTTPMethodRqstStats[http.MethodPut] != *runResults.EndpointDetails[url3].HTTPMethodRqstStats[http.MethodPut] { + if !compareRqstStats(*expectedJSON.EndpointDetails[url3].HTTPMethodRqstStats[http.MethodPut], + *runResults.EndpointDetails[url3].HTTPMethodRqstStats[http.MethodPut]) { t.Errorf("expected %+v for %s method %s, got %+v", expectedJSON.EndpointDetails[url3].HTTPMethodRqstStats[http.MethodPut], url3, http.MethodPut, runResults.EndpointDetails[url3].HTTPMethodRqstStats[http.MethodPut]) } - if *expectedJSON.EndpointDetails[url3].HTTPMethodRqstStats[http.MethodPost] != *runResults.EndpointDetails[url3].HTTPMethodRqstStats[http.MethodPost] { + if !compareRqstStats(*expectedJSON.EndpointDetails[url3].HTTPMethodRqstStats[http.MethodPost], + *runResults.EndpointDetails[url3].HTTPMethodRqstStats[http.MethodPost]) { t.Errorf("expected %+v for %s method %s, got %+v", expectedJSON.EndpointDetails[url3].HTTPMethodRqstStats[http.MethodPost], url3, http.MethodPost, runResults.EndpointDetails[url3].HTTPMethodRqstStats[http.MethodPost]) } - if *expectedJSON.EndpointDetails[url3].HTTPMethodRqstStats[http.MethodDelete] != *runResults.EndpointDetails[url3].HTTPMethodRqstStats[http.MethodDelete] { + if !compareRqstStats(*expectedJSON.EndpointDetails[url3].HTTPMethodRqstStats[http.MethodDelete], + *runResults.EndpointDetails[url3].HTTPMethodRqstStats[http.MethodDelete]) { t.Errorf("expected %+v for %s method %s, got %+v", expectedJSON.EndpointDetails[url3].HTTPMethodRqstStats[http.MethodDelete], url3, http.MethodDelete, runResults.EndpointDetails[url3].HTTPMethodRqstStats[http.MethodDelete]) } @@ -284,7 +288,7 @@ func TestGenHistogramSturges(t *testing.T) { expectedMinBinVal int expectedHist map[float64]int respHandler *ResponseHandler - runResults RunResults + runResults api.RunResults }{ { name: "No observations, nf = 0", @@ -292,10 +296,9 @@ func TestGenHistogramSturges(t *testing.T) { expectedMinBinVal: math.MaxInt32, expectedHist: map[float64]int{}, respHandler: &ResponseHandler{ - timingResults: []time.Duration{}, - NormFactor: 0, + NormFactor: 0, }, - runResults: RunResults{}, + runResults: api.RunResults{}, }, { name: "Observations: 1; nf = 0", @@ -303,14 +306,14 @@ func TestGenHistogramSturges(t *testing.T) 
{ expectedMinBinVal: 1, expectedHist: map[float64]int{1: 1}, respHandler: &ResponseHandler{ - timingResults: []time.Duration{time.Nanosecond * 1}, - NormFactor: 0, + NormFactor: 0, }, - runResults: RunResults{ - RunSummary: RunSummary{ - RqstStats: RqstStats{ - MinRqstDuration: time.Nanosecond * 1, - MaxRqstDuration: time.Nanosecond * 1, + runResults: api.RunResults{ + RunSummary: api.RunSummary{ + RqstStats: api.RqstStats{ + MinRqstDurationNanos: time.Nanosecond * 1, + MaxRqstDurationNanos: time.Nanosecond * 1, + TimingResultsNanos: []time.Duration{time.Nanosecond * 1}, }, }, }, @@ -321,14 +324,14 @@ func TestGenHistogramSturges(t *testing.T) { expectedMinBinVal: 0, expectedHist: map[float64]int{2: 0, 4: 2}, respHandler: &ResponseHandler{ - timingResults: []time.Duration{time.Nanosecond * 3, time.Nanosecond * 4}, - NormFactor: 0, + NormFactor: 0, }, - runResults: RunResults{ - RunSummary: RunSummary{ - RqstStats: RqstStats{ - MinRqstDuration: time.Nanosecond * 3, - MaxRqstDuration: time.Nanosecond * 4, + runResults: api.RunResults{ + RunSummary: api.RunSummary{ + RqstStats: api.RqstStats{ + MinRqstDurationNanos: time.Nanosecond * 3, + MaxRqstDurationNanos: time.Nanosecond * 4, + TimingResultsNanos: []time.Duration{time.Nanosecond * 3, time.Nanosecond * 4}, }, }, }, @@ -339,14 +342,14 @@ func TestGenHistogramSturges(t *testing.T) { expectedMinBinVal: 1, expectedHist: map[float64]int{2: 1, 4: 1}, respHandler: &ResponseHandler{ - timingResults: []time.Duration{time.Nanosecond * 2, time.Nanosecond * 4}, - NormFactor: 0, + NormFactor: 0, }, - runResults: RunResults{ - RunSummary: RunSummary{ - RqstStats: RqstStats{ - MinRqstDuration: time.Nanosecond * 2, - MaxRqstDuration: time.Nanosecond * 4, + runResults: api.RunResults{ + RunSummary: api.RunSummary{ + RqstStats: api.RqstStats{ + MinRqstDurationNanos: time.Nanosecond * 2, + MaxRqstDurationNanos: time.Nanosecond * 4, + TimingResultsNanos: []time.Duration{time.Nanosecond * 2, time.Nanosecond * 4}, }, }, }, @@ -357,14 +360,14 @@ func TestGenHistogramSturges(t *testing.T) { expectedMinBinVal: 1, expectedHist: map[float64]int{1.3333333333333333: 1, 2.6666666666666665: 1, 4: 2}, respHandler: &ResponseHandler{ - timingResults: []time.Duration{time.Nanosecond * 1, time.Nanosecond * 2, time.Nanosecond * 3, time.Nanosecond * 4}, - NormFactor: 0, + NormFactor: 0, }, - runResults: RunResults{ - RunSummary: RunSummary{ - RqstStats: RqstStats{ - MinRqstDuration: time.Nanosecond * 1, - MaxRqstDuration: time.Nanosecond * 4, + runResults: api.RunResults{ + RunSummary: api.RunSummary{ + RqstStats: api.RqstStats{ + MinRqstDurationNanos: time.Nanosecond * 1, + MaxRqstDurationNanos: time.Nanosecond * 4, + TimingResultsNanos: []time.Duration{time.Nanosecond * 1, time.Nanosecond * 2, time.Nanosecond * 3, time.Nanosecond * 4}, }, }, }, @@ -377,14 +380,14 @@ func TestGenHistogramSturges(t *testing.T) { expectedMinBinVal: 1, expectedHist: map[float64]int{1: 1, 2: 1, 3: 1, 4: 1}, respHandler: &ResponseHandler{ - timingResults: []time.Duration{time.Nanosecond * 1, time.Nanosecond * 2, time.Nanosecond * 3, time.Nanosecond * 4}, - NormFactor: 3, + NormFactor: 3, }, - runResults: RunResults{ - RunSummary: RunSummary{ - RqstStats: RqstStats{ - MinRqstDuration: time.Nanosecond * 1, - MaxRqstDuration: time.Nanosecond * 4, + runResults: api.RunResults{ + RunSummary: api.RunSummary{ + RqstStats: api.RqstStats{ + MinRqstDurationNanos: time.Nanosecond * 1, + MaxRqstDurationNanos: time.Nanosecond * 4, + TimingResultsNanos: []time.Duration{time.Nanosecond * 1, time.Nanosecond * 2, 
					time.Nanosecond * 3, time.Nanosecond * 4},
 				},
 			},
 		},
@@ -399,14 +402,14 @@ func TestGenHistogramSturges(t *testing.T) {
 			expectedMinBinVal: 1,
 			expectedHist:      map[float64]int{1.3333333333333333: 1, 2.6666666666666665: 1, 4: 2},
 			respHandler: &ResponseHandler{
-				timingResults: []time.Duration{time.Nanosecond * 1, time.Nanosecond * 2, time.Nanosecond * 3, time.Nanosecond * 4},
-				NormFactor:    10,
+				NormFactor: 10,
 			},
-			runResults: RunResults{
-				RunSummary: RunSummary{
-					RqstStats: RqstStats{
-						MinRqstDuration: time.Nanosecond * 1,
-						MaxRqstDuration: time.Nanosecond * 4,
+			runResults: api.RunResults{
+				RunSummary: api.RunSummary{
+					RqstStats: api.RqstStats{
+						MinRqstDurationNanos: time.Nanosecond * 1,
+						MaxRqstDurationNanos: time.Nanosecond * 4,
+						TimingResultsNanos:   []time.Duration{time.Nanosecond * 1, time.Nanosecond * 2, time.Nanosecond * 3, time.Nanosecond * 4},
 					},
 				},
 			},
@@ -418,15 +421,16 @@ func TestGenHistogramSturges(t *testing.T) {
 			expectedMinBinVal: 0,
 			expectedHist:      map[float64]int{50: 6, 100: 1, 150: 0, 200: 1},
 			respHandler: &ResponseHandler{
-				timingResults: []time.Duration{time.Nanosecond * 1, time.Nanosecond * 2, time.Nanosecond * 2, time.Nanosecond * 2,
-					time.Nanosecond * 3, time.Nanosecond * 10, time.Nanosecond * 100, time.Nanosecond * 200},
 				NormFactor: 0,
 			},
-			runResults: RunResults{
-				RunSummary: RunSummary{
-					RqstStats: RqstStats{
-						MinRqstDuration: time.Nanosecond * 1,
-						MaxRqstDuration: time.Nanosecond * 200,
+			runResults: api.RunResults{
+				RunSummary: api.RunSummary{
+					RqstStats: api.RqstStats{
+						MinRqstDurationNanos: time.Nanosecond * 1,
+						MaxRqstDurationNanos: time.Nanosecond * 200,
+						TimingResultsNanos: []time.Duration{time.Nanosecond * 1, time.Nanosecond * 2,
+							time.Nanosecond * 2, time.Nanosecond * 2, time.Nanosecond * 3, time.Nanosecond * 10,
+							time.Nanosecond * 100, time.Nanosecond * 200},
 					},
 				},
 			},
@@ -438,15 +442,16 @@ func TestGenHistogramSturges(t *testing.T) {
 			expectedMinBinVal: 0,
 			expectedHist:      map[float64]int{0.5: 0, 1: 1, 1.5: 0, 2: 3, 200: 4},
 			respHandler: &ResponseHandler{
-				timingResults: []time.Duration{time.Nanosecond * 1, time.Nanosecond * 2, time.Nanosecond * 2, time.Nanosecond * 2,
-					time.Nanosecond * 3, time.Nanosecond * 10, time.Nanosecond * 100, time.Nanosecond * 200},
 				NormFactor: 2,
 			},
-			runResults: RunResults{
-				RunSummary: RunSummary{
-					RqstStats: RqstStats{
-						MinRqstDuration: time.Nanosecond * 1,
-						MaxRqstDuration: time.Nanosecond * 200,
+			runResults: api.RunResults{
+				RunSummary: api.RunSummary{
+					RqstStats: api.RqstStats{
+						MinRqstDurationNanos: time.Nanosecond * 1,
+						MaxRqstDurationNanos: time.Nanosecond * 200,
+						TimingResultsNanos: []time.Duration{time.Nanosecond * 1, time.Nanosecond * 2, time.Nanosecond * 2,
+							time.Nanosecond * 2, time.Nanosecond * 3, time.Nanosecond * 10, time.Nanosecond * 100,
+							time.Nanosecond * 200},
 					},
 				},
 			},
@@ -540,3 +545,25 @@ func generateNormalDistribution(mean float64, stdDev int) float64 {
 	// }
 	// return x
 }
+
+func compareRqstStats(x, y api.RqstStats) bool {
+	if len(x.TimingResultsNanos) != len(y.TimingResultsNanos) {
+		return false
+	}
+	for i := 0; i < len(x.TimingResultsNanos); i++ {
+		if x.TimingResultsNanos[i] != y.TimingResultsNanos[i] {
+			return false
+		}
+	}
+
+	if x.AvgRqstDurationNanos == y.AvgRqstDurationNanos &&
+		x.MaxRqstDurationNanos == y.MaxRqstDurationNanos &&
+		x.MinRqstDurationNanos == y.MinRqstDurationNanos &&
+		x.NormalizedMaxRqstDurationNanos == y.NormalizedMaxRqstDurationNanos &&
+		x.TotalRequestDurationNanos == y.TotalRequestDurationNanos &&
+		x.TotalRqsts == y.TotalRqsts {
+		return true
+	}
+
+	return false
+}
diff --git a/internal/scheduler.go b/internal/scheduler.go
index b4eb9dc..b3fb9ab 100644
--- a/internal/scheduler.go
+++ b/internal/scheduler.go
@@ -144,7 +144,7 @@ func validateConfig(concurrency int, rate int, runDur time.Duration, numRqsts in
 	if runDur < 1 && len(eps) > numRqsts {
 		return fmt.Errorf("there are more endpoints, %d, than requests, %d", len(eps), numRqsts)
 	}
-	if concurrency%len(eps) != 0 {
+	if concurrency < len(eps) {
 		return fmt.Errorf("MaxConcurrentRqsts must be greater than the number of endpoints. MaxConcurrentRqsts is %d and there are %d endpoints",
 			concurrency, len(eps))
 	}
diff --git a/internal/scheduler_test.go b/internal/scheduler_test.go
index 017beb8..b5ac6e2 100644
--- a/internal/scheduler_test.go
+++ b/internal/scheduler_test.go
@@ -408,11 +408,11 @@ func TestValidation(t *testing.T) {
 			shouldFail: false,
 		},
 		{
-			name:        "FailPath - concurrency must be a multiple of len(eps) - otherwise some concurrency is lost",
+			name:        "FailPath - concurrency must be >= len(eps) - otherwise some concurrency is lost",
 			rqstRate:    goFastRate,
 			runDur:      "0s",
 			numRqsts:    99,
-			concurrency: 99,
+			concurrency: 1,
 			eps: []api.Endpoint{
 				{
 					URL: url1,
diff --git a/internal/structs.go b/internal/structs.go
deleted file mode 100644
index f2b6379..0000000
--- a/internal/structs.go
+++ /dev/null
@@ -1,38 +0,0 @@
-// Copyright (c) 2020 Richard Youngkin. All rights reserved.
-// Use of this source code is governed by a MIT-style
-// license that can be found in the LICENSE file.
-
-package internal
-
-import (
-	"time"
-
-	"github.com/youngkin/heyyall/api"
-)
-
-// ReportDetail specifies the level of detail of the final report. There are
-// 2 values, 'Short' and 'Long'. 'Short' will only report the high level stats.
-// 'Long' will report details for each endpoint in addition to the hibh
-// level stats.
-type ReportDetail int
-
-const (
-	// Short specifies only high level report stats will be produced
-	Short ReportDetail = iota
-	// Long indicates detailed reporting stats will be produced
-	Long
-)
-
-// Response contains information describing the results
-// of a request to a specific endpoint
-type Response struct {
-	HTTPStatus      int
-	Endpoint        api.Endpoint
-	RequestDuration time.Duration
-}
-
-// Request contains the information needed to execute a request
-// to an endpoint and return the response.
-type Request struct { - EP api.Endpoint -} diff --git a/internal/testdata/TestResponseStats.golden b/internal/testdata/TestResponseStats.golden index cff4ef9..15f9115 100644 --- a/internal/testdata/TestResponseStats.golden +++ b/internal/testdata/TestResponseStats.golden @@ -1,21 +1,33 @@ { "RunSummary": { - "RqstRatePerSec": 8594.350933131653, - "RunDuration": 1396266, - "RunDurationStr": "1.396266ms", + "RqstRatePerSec": 11098.358275352119, + "RunDurationNanos": 1081241, "RqstStats": { + "TimingResultsNanos": [ + 100000000, + 1000000000, + 500000000, + 250000000, + 250000000, + 250000000, + 750000000, + 250000000, + 750000000, + 1250000000, + 1750000000, + 900000000 + ], "TotalRqsts": 12, - "TotalRequestDurationStr": "8s", - "MaxRqstDurationStr": "1.75s", - "NormalizedMaxRqstDurationStr": "", - "MinRqstDurationStr": "100ms", - "AvgRqstDurationStr": "666.666666ms", - "TotalRequestDuration": 8000000000, - "MaxRqstDuration": 1750000000, - "NormalizedMaxRqstDuration": 0, - "MinRqstDuration": 100000000, - "AvgRqstDuration": 666666666 - } + "TotalRequestDurationNanos": 8000000000, + "MaxRqstDurationNanos": 1750000000, + "NormalizedMaxRqstDurationNanos": 0, + "MinRqstDurationNanos": 100000000, + "AvgRqstDurationNanos": 666666666 + }, + "DNSLookupNanos": null, + "TCPConnSetupNanos": null, + "RqstRoundTripNanos": null, + "TLSHandshakeNanos": null }, "EndpointSummary": { "http://someurl/1": { @@ -45,30 +57,27 @@ }, "HTTPMethodRqstStats": { "GET": { + "TimingResultsNanos": [ + 100000000 + ], "TotalRqsts": 1, - "TotalRequestDurationStr": "100ms", - "MaxRqstDurationStr": "100ms", - "NormalizedMaxRqstDurationStr": "", - "MinRqstDurationStr": "100ms", - "AvgRqstDurationStr": "100ms", - "TotalRequestDuration": 100000000, - "MaxRqstDuration": 100000000, - "NormalizedMaxRqstDuration": 0, - "MinRqstDuration": 100000000, - "AvgRqstDuration": 100000000 + "TotalRequestDurationNanos": 100000000, + "MaxRqstDurationNanos": 100000000, + "NormalizedMaxRqstDurationNanos": 0, + "MinRqstDurationNanos": 100000000, + "AvgRqstDurationNanos": 100000000 }, "PUT": { + "TimingResultsNanos": [ + 1000000000, + 500000000 + ], "TotalRqsts": 2, - "TotalRequestDurationStr": "1.5s", - "MaxRqstDurationStr": "1s", - "NormalizedMaxRqstDurationStr": "", - "MinRqstDurationStr": "500ms", - "AvgRqstDurationStr": "750ms", - "TotalRequestDuration": 1500000000, - "MaxRqstDuration": 1000000000, - "NormalizedMaxRqstDuration": 0, - "MinRqstDuration": 500000000, - "AvgRqstDuration": 750000000 + "TotalRequestDurationNanos": 1500000000, + "MaxRqstDurationNanos": 1000000000, + "NormalizedMaxRqstDurationNanos": 0, + "MinRqstDurationNanos": 500000000, + "AvgRqstDurationNanos": 750000000 } } }, @@ -81,17 +90,15 @@ }, "HTTPMethodRqstStats": { "POST": { + "TimingResultsNanos": [ + 250000000 + ], "TotalRqsts": 1, - "TotalRequestDurationStr": "250ms", - "MaxRqstDurationStr": "250ms", - "NormalizedMaxRqstDurationStr": "", - "MinRqstDurationStr": "250ms", - "AvgRqstDurationStr": "250ms", - "TotalRequestDuration": 250000000, - "MaxRqstDuration": 250000000, - "NormalizedMaxRqstDuration": 0, - "MinRqstDuration": 250000000, - "AvgRqstDuration": 250000000 + "TotalRequestDurationNanos": 250000000, + "MaxRqstDurationNanos": 250000000, + "NormalizedMaxRqstDurationNanos": 0, + "MinRqstDurationNanos": 250000000, + "AvgRqstDurationNanos": 250000000 } } }, @@ -113,56 +120,52 @@ }, "HTTPMethodRqstStats": { "DELETE": { + "TimingResultsNanos": [ + 900000000 + ], "TotalRqsts": 1, - "TotalRequestDurationStr": "900ms", - "MaxRqstDurationStr": "900ms", - 
"NormalizedMaxRqstDurationStr": "", - "MinRqstDurationStr": "900ms", - "AvgRqstDurationStr": "900ms", - "TotalRequestDuration": 900000000, - "MaxRqstDuration": 900000000, - "NormalizedMaxRqstDuration": 0, - "MinRqstDuration": 900000000, - "AvgRqstDuration": 900000000 + "TotalRequestDurationNanos": 900000000, + "MaxRqstDurationNanos": 900000000, + "NormalizedMaxRqstDurationNanos": 0, + "MinRqstDurationNanos": 900000000, + "AvgRqstDurationNanos": 900000000 }, "GET": { + "TimingResultsNanos": [ + 250000000, + 750000000 + ], "TotalRqsts": 2, - "TotalRequestDurationStr": "1s", - "MaxRqstDurationStr": "750ms", - "NormalizedMaxRqstDurationStr": "", - "MinRqstDurationStr": "250ms", - "AvgRqstDurationStr": "500ms", - "TotalRequestDuration": 1000000000, - "MaxRqstDuration": 750000000, - "NormalizedMaxRqstDuration": 0, - "MinRqstDuration": 250000000, - "AvgRqstDuration": 500000000 + "TotalRequestDurationNanos": 1000000000, + "MaxRqstDurationNanos": 750000000, + "NormalizedMaxRqstDurationNanos": 0, + "MinRqstDurationNanos": 250000000, + "AvgRqstDurationNanos": 500000000 }, "POST": { + "TimingResultsNanos": [ + 250000000 + ], "TotalRqsts": 1, - "TotalRequestDurationStr": "250ms", - "MaxRqstDurationStr": "250ms", - "NormalizedMaxRqstDurationStr": "", - "MinRqstDurationStr": "250ms", - "AvgRqstDurationStr": "250ms", - "TotalRequestDuration": 250000000, - "MaxRqstDuration": 250000000, - "NormalizedMaxRqstDuration": 0, - "MinRqstDuration": 250000000, - "AvgRqstDuration": 250000000 + "TotalRequestDurationNanos": 250000000, + "MaxRqstDurationNanos": 250000000, + "NormalizedMaxRqstDurationNanos": 0, + "MinRqstDurationNanos": 250000000, + "AvgRqstDurationNanos": 250000000 }, "PUT": { + "TimingResultsNanos": [ + 250000000, + 750000000, + 1250000000, + 1750000000 + ], "TotalRqsts": 4, - "TotalRequestDurationStr": "4s", - "MaxRqstDurationStr": "1.75s", - "NormalizedMaxRqstDurationStr": "", - "MinRqstDurationStr": "250ms", - "AvgRqstDurationStr": "1s", - "TotalRequestDuration": 4000000000, - "MaxRqstDuration": 1750000000, - "NormalizedMaxRqstDuration": 0, - "MinRqstDuration": 250000000, - "AvgRqstDuration": 1000000000 + "TotalRequestDurationNanos": 4000000000, + "MaxRqstDurationNanos": 1750000000, + "NormalizedMaxRqstDurationNanos": 0, + "MinRqstDurationNanos": 250000000, + "AvgRqstDurationNanos": 1000000000 } } } diff --git a/internal/testserver/main.go b/internal/testhttpsserver/main.go similarity index 100% rename from internal/testserver/main.go rename to internal/testhttpsserver/main.go diff --git a/internal/testhttpsserver/testserver b/internal/testhttpsserver/testserver new file mode 100755 index 0000000..12b7db6 Binary files /dev/null and b/internal/testhttpsserver/testserver differ diff --git a/internal/testserver/.session.vim b/internal/testserver/.session.vim deleted file mode 100644 index 9e3a723..0000000 --- a/internal/testserver/.session.vim +++ /dev/null @@ -1,56 +0,0 @@ -let SessionLoad = 1 -if &cp | set nocp | endif -let s:so_save = &so | let s:siso_save = &siso | set so=0 siso=0 -let v:this_session=expand(":p") -silent only -silent tabonly -cd ~/Software/repos/heyyall/internal/testserver -if expand('%') == '' && !&modified && line('$') <= 1 && getline(1) == '' - let s:wipebuf = bufnr('%') -endif -set shortmess=aoO -argglobal -%argdel -$argadd /private/etc/hosts -edit /private/etc/hosts -set splitbelow splitright -set nosplitbelow -set nosplitright -wincmd t -set winminheight=0 -set winheight=1 -set winminwidth=0 -set winwidth=1 -argglobal -setlocal fdm=manual -setlocal fde=0 -setlocal fmr={{{,}}} 
-setlocal fdi=# -setlocal fdl=0 -setlocal fml=1 -setlocal fdn=20 -setlocal fen -silent! normal! zE -let s:l = 8 - ((7 * winheight(0) + 28) / 57) -if s:l < 1 | let s:l = 1 | endif -exe s:l -normal! zt -8 -normal! 0 -tabnext 1 -badd +0 /private/etc/hosts -if exists('s:wipebuf') && len(win_findbuf(s:wipebuf)) == 0 - silent exe 'bwipe ' . s:wipebuf -endif -unlet! s:wipebuf -set winheight=1 winwidth=20 shortmess=filnxtToOS -set winminheight=1 winminwidth=1 -let s:sx = expand(":p:r")."x.vim" -if file_readable(s:sx) - exe "source " . fnameescape(s:sx) -endif -let &so = s:so_save | let &siso = s:siso_save -nohlsearch -doautoall SessionLoadPost -unlet SessionLoad -" vim: set ft=vim : diff --git a/internal/testserver/cert/private.key b/internal/testserver/cert/private.key deleted file mode 100644 index 26cfb69..0000000 --- a/internal/testserver/cert/private.key +++ /dev/null @@ -1,51 +0,0 @@ ------BEGIN RSA PRIVATE KEY----- -MIIJKAIBAAKCAgEAo9oCErTk0EDYU4a2VPR2++il/GxMfY0HQuYuxgXQQM4Vj4a0 -/Gv58P681JX8/NScwSlnI1XsAumGPvcit4y/W1UiFJC6Cpe1rvDl+Sze8p8aXEWL -OOgb/xQ0RHtDk8hy/uJzF6i0lk7LNKJ3pXApYjm4ZOd47ytmRzUT3UH3wmKBPgDC -WKkU0ihX6JnoYqeseFIbgqvVqkqN1X2OCO25HB8woqFK3/G3VI/X/33Dx32XH4Hd -5EHz2hQL+f3i3gDlbhXlJDJ4ZQK2gdiNX8o/1znHOQF+YCuUujBrpqtLY3JBUSAM -zX2jsVWisFev91RXSVcjIaTG3QUZgJrRgEpFgNoqCNGhSWqAL1mHA8q7s+vI7St2 -S3LOuXtLYsnx5AY3hqrFfEZWc5b3tAQTVn0jyws2rDiLApjCC4FPPwMnRt9GSE+Y -OpMf7F5gXuAaiGayguBatR8VG6iXttQ7r25APD+X3Di4G5MH7YLHHse0zHaPEBAN -jC7uQZkWXqYd/VsimVQ8EA5sYpzR6xc3GOWBpx2XDWkgWr0LHHGV7M3vA/CG/PWv -9Ltk2xRPoXE96IlKPM0uFe1QrJ5gkJDaao0ooqw6u7x7HfAP3r39mj2OlPmNKbqh -xgT8onLU2EiKAMtD+3pm27VeZAkJI9DvcAEb7bt4D33QPPt16dn4era9aMkCAwEA -AQKCAgEAhqj0DTW322N/pl3PWdC0POH+EI9c6c5Oym9sv3glqCz20UdPVSjUeHiS -7k/6ZDvEPIfvaL4DwnzzxKnNUXaOCdzUrnwnOe9m9Mis6HU1IccanfzVp8YyxBdx -wVpgtoMF525qUmZnTCSjorVniYK9sClp3xvRaxaK7zizA6NkoM/eEfwDvWPwZauc -W+CpePL+rsAkNedNKbAuEugmJDZjbLZBfVb7t0LrfcbeKO7OSRRxcAaGO0Lsyyyw -jxtWKUgpRqkd8fq8sZ9iKSK1RaVthE/K6JHOr/EBQWfsAUVEJw3OmoKxoux/7I1J -sI3QY39gYTkI+Wx3t0uqouQaC8p+hU9/VvSepoF8ILmW/5oM6htEwhJE+M61VqAc -KvvBrTH2h/2AcrOWyYomgXSkycgL5BYxwpIkg+1+oj2aanx9y6ehPdkjbTMHLcBD -McqPw0ekHPUhC6dFUdLHSqYWOLRnsREPIoWXqZRWG6EoL7IEXtWVCH4xqMTVpdFm -O8jkRwLYzogOUlzz+lHW2fowAraj+L3b6DZ+++luU6xZDoFnX0XANqrLnT1Mw8EQ -bQ/HpMpacs8u9gTcjjhf2U/PRrTGsycXRZqhD0M91gPWbo6JMTNPG9MNl19pTp+A -CscCi0TI0tU2uZJYUk50alUCmkPPxoKWSbuChtpWRif5jeDMf+UCggEBANK/9C+/ -ErJirlgTpX3Or96p5jSqAa4OY2+jG8an84kPGhhejpbXv7AjvUKAp1XiWA6e2WES -bi3Wy4TcPRtgDUjiQ6/eGws0/Z1EOYs1BDn19v4f5wXqWFWpse5d58VitrDfMGbh -Ke+38zIUvfj1t/gE5odrdECXTp5r+Etur25QcuxDu4FkH5euIQI4s2kUrsVR26M7 -AtNpSq4T4jbCsqaAdcDDvLTaRnP2hFELimur+UicYSrJNkGMS1Ysgyj+GPUOCNnM -Nf9XmCy4tnkQClCG7e+WH49RoFbdWAt4RQwIpPW/dQqjeY0Oz5LAo7HMb4P9mWvx -PtexfaxynFz+Zy8CggEBAMcIQLFQTTM7uiNE9tK/h3N9zf6owmGvMf8w9of0ms8I -sYcj4wPR5Ll6ySLpqpddmuu4gZSdgVp5ycYU43tmSFazia6B6unQHNZLUaAx/0eW -xYznfvbqz65kBiL7DypIPiZluWetIZOScY1GOsrW0mZPRcPryxUi3iCsx7pUH7AD -H3U7RiELBaBzs7gVJ3YcRurDbuAX5OS2Lj5vehkO6ZQi5sPnt7W7MZlTvNC0Yydj -opCZm/z7twlj73L68nhftu57ozwB6tB+gvNoCnT5NSqFvumnFuo+6UHBKeGpzHsM -x5Ym1r3imG9LIXNZktSB+Q0Gl3cVnfCtTAhCzz07MYcCggEAMwO0IDqoU/X/LeLT -lHiLqeKGjwj4DyH8f/GDr7rIAM1fC7cX3Pussv5zub1axDdeCWv6Qr0rXn04FpkZ -UZ8WmCXtLI0fDr9tBLyXEVNsCnu3phwi2BO5/kJth73DdMxIXNgp5z1p4VUt0Vmk -Are4KJlHFFC2e0wlA8Qu/lN0s8dVikt6//80horoApmnFDClfa4q9IA8VuCN60V0 -5LyMcjF2T4sSCtUraLaroNKiVx3x4dm4y5qZP5SuR9XOigW4FNmo2s/L4ltZwrmT -sgpn9MY0omI8kXy4y04ZGe2rCRaul64YrtKTgcmsBWIMPeW2uMSSdsaW569XNH8p -ynjqkQKCAQA+e2rDv2/c650twVKzKol9SjtG/Pe47uUFNfvPBo0q/ZGt2ShFZLkn 
-OVK3cR+q0Sn3Yj7bxu561szvMFORw7Rl84r/i62RpVHIPHDtl4SKltyBtZL4NRLp -rmD2zlYecfuA1mJ0F7f4ufqH3UpLr1Dx6WT/cqCYjA+rtlIlPo+MFA7mIKuNaAZm -Lqx2171BqPLidGP0WcvzuPWfiCOOhk3xwVssmSvlE1Uoy071Pgv6q563QmHj86ms -ewEK2ZkRDQtCpvHBvuBWf8DgZQMTYcC9Dqu2ckwRUZqsl9VsEIAvCP4HNz4m8mHk -XnOr4Kzlpb/nxO/75H9mtSCvXznsAQ81AoIBAHBqlzkDaUIydRp+fTGOPUIA3dVs -Dfy5kC5OzN0iAQpk8sHDIZ+Zrnu4O25rMzi+YwlklZd4FjGwJPaJ8jWRWyDJvVAe -HantDeQYdLis5enZ+Z8aCswHdXhPllqWSLt3q2EvbL0BmwZueJ4Re/ljqf2nBFRo -omPXjBA4yteGGFTwlOIg5ql8hsTQ8guW9fvXqFoL55YWAY2ltcb3hP2MuxOe8LR3 -z2qjRkf6rqMKDDpdYkBXT9SlJQa9PWIXfCjBiGQPN6Bok9VTZaZYRZs07od5VV/y -LENZqN5tCNjC/beMaJoCJRVlZjIxV6POrONAEmaUlbBmJAwqZhWFJOn7/f4= ------END RSA PRIVATE KEY----- diff --git a/internal/testserver/cert/public.crt b/internal/testserver/cert/public.crt deleted file mode 100644 index 4d541bc..0000000 --- a/internal/testserver/cert/public.crt +++ /dev/null @@ -1,33 +0,0 @@ ------BEGIN CERTIFICATE----- -MIIFpjCCA44CCQC3NyiGZhymBTANBgkqhkiG9w0BAQsFADCBlDELMAkGA1UEBhMC -VVMxETAPBgNVBAgMCENvbG9yYWRvMQ8wDQYDVQQHDAZEZW52ZXIxEzARBgNVBAoM -ClNvY2NlcnNvZnQxEDAOBgNVBAsMB1Rvb2xpbmcxEjAQBgNVBAMMCWxvY2FsaG9z -dDEmMCQGCSqGSIb3DQEJARYXcmljaC55b3VuZ2tpbkBnbWFpbC5jb20wHhcNMjAw -NTE3MjMwMzQyWhcNMjUwNTE2MjMwMzQyWjCBlDELMAkGA1UEBhMCVVMxETAPBgNV -BAgMCENvbG9yYWRvMQ8wDQYDVQQHDAZEZW52ZXIxEzARBgNVBAoMClNvY2NlcnNv -ZnQxEDAOBgNVBAsMB1Rvb2xpbmcxEjAQBgNVBAMMCWxvY2FsaG9zdDEmMCQGCSqG -SIb3DQEJARYXcmljaC55b3VuZ2tpbkBnbWFpbC5jb20wggIiMA0GCSqGSIb3DQEB -AQUAA4ICDwAwggIKAoICAQCj2gIStOTQQNhThrZU9Hb76KX8bEx9jQdC5i7GBdBA -zhWPhrT8a/nw/rzUlfz81JzBKWcjVewC6YY+9yK3jL9bVSIUkLoKl7Wu8OX5LN7y -nxpcRYs46Bv/FDREe0OTyHL+4nMXqLSWTss0onelcCliObhk53jvK2ZHNRPdQffC -YoE+AMJYqRTSKFfomehip6x4UhuCq9WqSo3VfY4I7bkcHzCioUrf8bdUj9f/fcPH -fZcfgd3kQfPaFAv5/eLeAOVuFeUkMnhlAraB2I1fyj/XOcc5AX5gK5S6MGumq0tj -ckFRIAzNfaOxVaKwV6/3VFdJVyMhpMbdBRmAmtGASkWA2ioI0aFJaoAvWYcDyruz -68jtK3ZLcs65e0tiyfHkBjeGqsV8RlZzlve0BBNWfSPLCzasOIsCmMILgU8/AydG -30ZIT5g6kx/sXmBe4BqIZrKC4Fq1HxUbqJe21DuvbkA8P5fcOLgbkwftgscex7TM -do8QEA2MLu5BmRZeph39WyKZVDwQDmxinNHrFzcY5YGnHZcNaSBavQsccZXsze8D -8Ib89a/0u2TbFE+hcT3oiUo8zS4V7VCsnmCQkNpqjSiirDq7vHsd8A/evf2aPY6U -+Y0puqHGBPyictTYSIoAy0P7embbtV5kCQkj0O9wARvtu3gPfdA8+3Xp2fh6tr1o -yQIDAQABMA0GCSqGSIb3DQEBCwUAA4ICAQBLSPePS0F5HBjQuC9gsqAuq+fMnC31 -22CVqOEq38dU1nIFW16Kzdh0oD/P8wGx56AJoBLko2LbRkI4KQ5UdwynXP8jfxN3 -XuTHqMNpSGPwUn6+vRaLrYUZHM9I9UzdDorWB6k3W4l2ArxW2aIhwNFX6hLnyCck -RnJlv1Jly+/KDuejFdvsLdWovLF0TRhpClAi4XvJIZ7eGNY3+yKRmR4o8LpnQZIu -fGPK+hwDDaPeswVXrmMtwIn6HECJUudwc/MMaS2+mHCO9UPBBJb+ebH98E8XypQ2 -Gh311tUrtIo2JfP0YMgjoVkWHsyHlED7VueiBctEr4ehAmukRCDWyg0z+9UQptkM -Xu2E3QnySrvZUAbU3a3fC2dpcm4NpMirWJ9G7BR5ai50+27Adrb/ECExO5hCAL44 -PF2cMMtBKVrbt+zSSbMjJESiHdeXBVspAQvw8pEvkjzzJoWgNHDl6VuHOI8KVhWI -N3r+cgUdZ1qpvFNoA5io+xlhX1FjsK9Ke1v+j2PuL38elGZWy8QKvTy/QGZqtOqY -S8esWPgpKxLfqT72UhxmJO9/CmNX0EmTF/9WUxXy5JVLHSfZ25bxk4i26OoR4K0P -VBes8OnkZQjeECgTff4ZKKhr8KOdFwp9wk77T2gqDwAxG/2Jxb50UbgbDY3amt75 -NFbFhAYfD/0c5g== ------END CERTIFICATE----- diff --git a/testdata/httpbinorg.json b/testdata/httpbinorg.json index 848f313..d48b33f 100644 --- a/testdata/httpbinorg.json +++ b/testdata/httpbinorg.json @@ -1,8 +1,8 @@ { "RqstRate": 0, - "MaxConcurrentRqsts": 10, + "MaxConcurrentRqsts": 50, "RunDuration": "0s", - "NumRequests": 10, + "NumRequests": 1000, "Endpoints": [ { "URL": "https://httpbin.org/anything", diff --git a/testdata/httpsEPCerts.json b/testdata/httpsEPCerts.json index 239ca26..9b20203 100644 --- a/testdata/httpsEPCerts.json +++ b/testdata/httpsEPCerts.json @@ -2,7 +2,7 @@ "RqstRate": 0, "MaxConcurrentRqsts": 2, "RunDuration": 
"0s", - "NumRequests": 2, + "NumRequests": 10, "CertFile": "/Users/rich_youngkin/certs/fullchain.pem", "KeyFile": "/Users/rich_youngkin/certs/privkey.pem", "Endpoints": [ @@ -19,7 +19,7 @@ "RqstPercent": 50, "CertFile": "/Users/rich_youngkin/Software/repos/heyyall/internal/testserver/cert/public.crt", "KeyFile": "/Users/rich_youngkin/Software/repos/heyyall/internal/testserver/cert/private.key" - } + } ] } \ No newline at end of file diff --git a/testdata/smallish.json b/testdata/smallish.json index 77b1bce..21fd41c 100644 --- a/testdata/smallish.json +++ b/testdata/smallish.json @@ -1,8 +1,8 @@ { "RqstRate": 10, - "MaxConcurrentRqsts": 5, + "MaxConcurrentRqsts": 20, "RunDuration": "0s", - "NumRequests": 4, + "NumRequests": 20, "OutputType": "JSON", "Endpoints": [ { diff --git a/testdata/threeEPs33Pct.json b/testdata/threeEPs33Pct.json index 61490c1..18915f9 100644 --- a/testdata/threeEPs33Pct.json +++ b/testdata/threeEPs33Pct.json @@ -1,29 +1,36 @@ { "RqstRate": 1000, - "MaxConcurrentRqsts": 3, + "MaxConcurrentRqsts": 200, "RunDuration": "0s", - "NumRequests": 1000, + "NumRequests": 2000, "OutputType": "JSON", "Endpoints": [ { "URL": "http://accountd.kube/users", "Method": "GET", "RqstBody": "", - "RqstPercent": 33, + "RqstPercent": 13, "NumRequests": 2 }, { "URL": "http://accountd.kube/users/1", "Method": "GET", "RqstBody": "", - "RqstPercent": 33, + "RqstPercent": 12, "NumRequests": 2 }, { - "URL": "http://accountd.kube/users/2", + "URL": "http://accountd.kube/users/2000", "Method": "GET", "RqstBody": "", - "RqstPercent": 34, + "RqstPercent": 25, + "NumRequests": 2 + }, + { + "URL": "http://accountd.kube/users/2000", + "Method": "DELETE", + "RqstBody": "", + "RqstPercent": 50, "NumRequests": 2 } ]