Benchmarking with 50 inputs leads to enormous RAM usage #325

Open
goncalotomas opened this issue Jul 17, 2020 · 7 comments

@goncalotomas

I tried out Benchee to measure the performance difference between pattern matching in function clauses and a more traditional approach. I built a dumb module for this, which you can see here.

Here is the run script I used:

Benchee.run(
  %{
    "bar" => fn input -> Foo.bar(input) end,
    "baz" => fn input -> Foo.baz(input) end,
  },
  inputs: %{
    "1" => "1option",
    "2" => "2option",
    "3" => "3option",
    "4" => "4option",
    "5" => "5option",
    "6" => "6option",
    "7" => "7option",
    "8" => "8option",
    "9" => "9option",
    "10" => "10option",
    "11" => "11option",
    "12" => "12option",
    "13" => "13option",
    "14" => "14option",
    "15" => "15option",
    "16" => "16option",
    "17" => "17option",
    "18" => "18option",
    "19" => "19option",
    "20" => "20option",
    "20" => "20option",
    "21" => "21option",
    "22" => "22option",
    "23" => "23option",
    "24" => "24option",
    "25" => "25option",
    "26" => "26option",
    "27" => "27option",
    "28" => "28option",
    "29" => "29option",
    "30" => "30option",
    "31" => "31option",
    "32" => "32option",
    "33" => "33option",
    "34" => "34option",
    "35" => "35option",
    "36" => "36option",
    "37" => "37option",
    "38" => "38option",
    "39" => "39option",
    "40" => "40option",
    "41" => "41option",
    "42" => "42option",
    "43" => "43option",
    "44" => "44option",
    "45" => "45option",
    "46" => "46option",
    "47" => "47option",
    "48" => "48option",
    "49" => "49option",
    "50" => "50option"
  },
  time: 2,
  warmup: 0,
  print: [fast_warning: false],
  formatters: [
    Benchee.Formatters.Console
  ]
)

I understand that I could have just called both functions with random inputs, but then I'd have to ensure both runs used the same PRNG seed. This was the dumbest (and quickest) way I found to set this up.
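
As an aside, the whole 50-entry inputs map above can be generated in one line; this sketch builds the identical map:

# Builds the same map as the literal above:
# %{"1" => "1option", "2" => "2option", ..., "50" => "50option"}
inputs = Map.new(1..50, fn i -> {"#{i}", "#{i}option"} end)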

This effectively creates 50 inputs for each function, and even with just 2 seconds of measurement and zero warmup it still generates a lot of data. The problem is that since the data isn't (as far as I can see) written to disk, RAM usage balloons after the last input is tested. The program hangs for a long time without output, hammering memory and making the OS itself a little jittery. Eventually, after waiting for about an hour, I do get the output and Benchee terminates peacefully. Here are some screenshots of what I saw on macOS Catalina:

[Seven screenshots taken on 2020-07-14, between 09:53 and 10:47, showing memory usage climbing on macOS]

I understand I could have written the script differently, but I have a feeling this might not be expected behavior.

@devonestes (Collaborator)

Yikes! It's definitely possible to write that benchmark in a way that doesn't cause this problem, but it also shouldn't be up to the user to find the right way to do that. I'll definitely take a peek at this, as it's not a good user experience and absolutely needs to be fixed.

Thanks for the great report with all the repro steps!

@PragTob (Member) commented Jul 18, 2020

👋

Thanks for the report!

Also, I wouldn't trust whatever output you received. Once the system starts swapping, performance will be abysmal for anything measured while it swapped.

I'm surprised it generated so much data, though... but I mean, it might. I don't think it's related to the 50 inputs specifically. You should see the same behavior (or so I believe) with just one input run for 100 seconds (or just one function with a 200-second run time). These functions are extremely fast, and we record every single measurement, never throwing any away.

What's more, in our post-processing/statistics we sort all of the run time values, which creates a copy of the entire list of run times.

I'm not sure we can viably do anything better here other than warning you, based on some metric, that this might get out of hand. We could implement dumping each job to disk and reloading it, but that would/should be an option/separate feature. It's also a lot of complexity and performance degradation for the "normal" use case.

With functions this fast I'd vote for massively reduced run times, which should still give you an ample sample size. Like... 0.1 seconds or something.
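
A sketch of that suggestion applied to the original script (Benchee's time option accepts fractional seconds):

Benchee.run(
  %{
    "bar" => fn input -> Foo.bar(input) end,
    "baz" => fn input -> Foo.baz(input) end
  },
  inputs: Map.new(1..50, fn i -> {"#{i}", "#{i}option"} end),
  # 0.1s of measurement per scenario still yields a large sample
  # for functions in the nanosecond-to-microsecond range
  time: 0.1,
  warmup: 0
)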

@goncalotomas (Author)

I wouldn't trust whatever output you received. Once the system starts swapping, performance will be abysmal for anything measured while it swapped.

I'm pretty sure the high memory usage only starts after the benchmarking completes, so I guess the measurements themselves might be OK.

You should see the same behavior (or so I believe) with just one input run for 100 seconds (or just one function with a 200-second run time).

That makes sense given what I've seen. I think I tried it, but did not record the end result.

I'm not sure we can viably do anything better here other than warning you, based on some metric, that this might get out of hand.

I think it's a valid question for any benchmarking tool. Perhaps a warning is appropriate, but I suspect it would lead users to work around it with something like :timer.sleep/1, which might ruin the results or hide performance differences.

These functions are extremely fast, and we record every single measurement, never throwing any away.

Implementation details aside, perhaps the best-effort solution would be to reduce the sample rate when these very fast functions are detected. For example, if the number of iterations exceeds a certain threshold and we know how much data each iteration generates, you could keep executing the function and incrementing the iteration count, but only record the rest of the metrics on every Nth execution.
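
A rough sketch of that idea; every name here is hypothetical and not a Benchee internal:

defmodule SampledCollector do
  # Hypothetical sketch of the "record only every Nth execution" idea.
  def collect(fun, iterations, sample_every) do
    Enum.reduce(1..iterations, [], fn i, samples ->
      # Always execute, so the workload stays the same...
      {time_us, _result} = :timer.tc(fun)
      # ...but only keep every `sample_every`-th timing.
      if rem(i, sample_every) == 0, do: [time_us | samples], else: samples
    end)
  end
end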

Regarding the log-to-disk feature, you could look at Basho Bench and how it logs to disk. It logs one file per operation (equivalent to one Benchee input, I think) plus a final summary file that aggregates data from all operations. I've attached an example of each in case it's useful.

basho-bench-single-operation-log.txt
basho-bench-summary-log.txt

@PragTob (Member) commented Jul 18, 2020

@goncalotomas

That the explosion comes after the benchmarks run is interesting behavior; maybe I misread the initial post 🤔 It makes me lean more towards the statistics/sorting part being the culprit. In that case we could offer simplified statistics or something.

Sampling the executions is a no-go for me. That just means we're executing them for no real benefit; we might as well turn the time down and execute fewer of them.

As for logging to disk, as I said, yes, that would work, and we even have the serialization/deserialization in place. It's a non-trivial amount of work with potential problems, though. For instance, if I'm not wrong, Benchee can currently run without ever needing write permission to disk (if you use the console output), which is nice. Plus, if the problem truly is our statistics computations, then writing to a file might not even change that much :|
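
If dumping to disk ever did become an opt-in feature, one minimal shape for it could be streaming fixed-width binaries instead of accumulating lists in memory (a hypothetical helper, not Benchee API, and it needs exactly the write permission mentioned above):

defmodule RunTimeLog do
  # Hypothetical sketch: append each measurement to a file as a
  # fixed-width 64-bit integer instead of keeping it in memory.
  def open(path), do: File.open!(path, [:write, :delayed_write])

  def record(device, run_time_ns), do: IO.binwrite(device, <<run_time_ns::64>>)

  def close(device), do: File.close(device)

  # Re-read all measurements for the statistics step.
  def load(path), do: for(<<t::64 <- File.read!(path)>>, do: t)
end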

I'd have to take a deeper look to be certain. Starting to throw warnings at an arbitrary (or somewhat empirically determined) count of samples per scenario would be a good and doable first step, though.

@devonestes (Collaborator)

So I'm finding that there's a pretty steady increase in memory usage during just the collection phase (~6.4 GB of memory use), and then we see the huge jump.

I'm thinking that in this case we've got two options, really:

  1. Warn users when they've collected a huge number of measurements during their benchmark run, telling them to change their benchmark, or
  2. Try and make it so folks don't end up collecting too many measurements in the first place.

By point 2, I'm thinking mostly about this old issue: #9

That would be a huge optimization for folks since it would decrease the runtime dramatically, especially for benchmarks like this where there is little deviation and we'd hit a really high confidence level really quickly. It would also have the side effect of using less memory for collecting the data and calculating the statistics.
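
To make the dynamic-stopping idea concrete, a back-of-the-envelope stopping rule could look like this (purely illustrative, not Benchee code; it assumes roughly normally distributed sample means, which real measurements often violate):

defmodule StopRule do
  # Illustrative only: consider a scenario "done" once the 95% confidence
  # interval's half-width is within 1% of the mean run time.
  def converged?(samples, rel_tolerance \\ 0.01)

  def converged?(samples, rel_tolerance) when length(samples) > 30 do
    n = length(samples)
    mean = Enum.sum(samples) / n
    variance = Enum.reduce(samples, 0.0, &(&2 + :math.pow(&1 - mean, 2))) / (n - 1)
    half_width = 1.96 * :math.sqrt(variance / n)
    half_width / mean < rel_tolerance
  end

  def converged?(_samples, _rel_tolerance), do: false
end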

I'm also thinking that since we calculate statistics in parallel, that might be contributing to the problem here a bit. In theory we could make that an option, but I'd prefer to avoid that sort of really low-level configuration.

What do y'all think?

@PragTob (Member) commented Jul 19, 2020

Ugh, good point about us calculating statistics in parallel :D With 2 jobs and 50 inputs, that means 100 sorts running in parallel here, which really explains this.

IMO both 1 and 2 would be good to implement. I'm not sure how to come up with a good threshold for the warning in 1. Option 2 is of course one of my most wished-for features; IIRC benchmark.js does (or can do) this, and I want us to be clearly the best 👼

It has some interesting implementation considerations, though, which we might discuss in the other issue. Like: how often do we check the confidence? After every invocation seems a bit much. My gut feeling is it should depend on the run time and the last computed confidence value. There's lots of fine-tuning there, and maybe an option to expose. We might also be able to just cheat and see what benchmark.js or others do :)
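
One cheap answer to the "how often" question (again purely illustrative) is to re-check only at doubling sample counts, so the statistics are recomputed O(log n) times instead of once per invocation:

defmodule CheckSchedule do
  # Re-check convergence only at sample counts 64, 128, 256, ...
  def check?(sample_count, next_check) when sample_count >= next_check,
    do: {true, next_check * 2}

  def check?(_sample_count, next_check), do: {false, next_check}
end

# CheckSchedule.check?(64, 64)   #=> {true, 128}
# CheckSchedule.check?(100, 128) #=> {false, 128}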

@PragTob (Member) commented Jul 19, 2020

Ah, by the way: great thought on fixing this by offering 2, @devonestes. It never occurred to me, but it seems great 🚀 ⭐ 💚
