Increased test run time from 2.5.0 to 2.5.1 #5571
Comments
I'm assuming the tests are not grouped by outcome, correct? The execution time is measured while the actual test method is running - the timer is stopped as soon as the test finishes running, so any additional apparent runtime includes the overhead of updating the Test Explorer UI. Perhaps we should consider updating the UI at each x% of tests executed, rather than after every test.
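The "update at each x% of tests executed" idea above could be sketched roughly as follows. This is a hypothetical illustration in Python, not Rubberduck's actual (C#) code; the class and callback names are made up for the example:

```python
class ThrottledProgressReporter:
    """Invoke a UI-update callback only every `step_percent` of total tests,
    instead of once per finished test."""

    def __init__(self, total_tests, update_ui, step_percent=5):
        self.total = total_tests
        self.update_ui = update_ui          # callback that refreshes the UI
        self.step = step_percent
        self.completed = 0
        self.last_reported = 0

    def test_finished(self, result):
        self.completed += 1
        percent = self.completed * 100 // self.total
        # Only touch the UI when we cross the next step boundary
        # (the final test always lands on the 100% boundary).
        if percent >= self.last_reported + self.step:
            self.last_reported = percent
            self.update_ui(percent)

# Simulate a 475-test run (the project size mentioned below):
updates = []
reporter = ThrottledProgressReporter(475, updates.append, step_percent=5)
for _ in range(475):
    reporter.test_finished("passed")
# The UI is refreshed ~20 times rather than 475 times.
```

Whether the measured speedup justifies the coarser progress display would depend on how expensive each refresh actually is in the Test Explorer.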
Yes, they are grouped by location in all cases. In general though, it seems that the Rubberduck UI slows down significantly with project size. For some things that is understandable - there is more to render - but there are also cases, such as the example with clicking the "Run" button, where I don't really see why that would be the case. I assume these things would improve immensely when (if) you spin out the language server into a separate process. I love Rubberduck to death, but I have to admit that it is a bit frustrating developing in a somewhat sluggish environment, and it would be great if it were possible to identify why it slows down in scenarios where one wouldn't (naively) expect it to.
In the new version I also have issues where Excel crashes while running unit tests. The general symptom is that the Test Explorer is running, but the UI isn't updating, and then after a while Excel crashes. It mostly happens if things have been running for a while; if I reopen it and run the tests immediately, it hasn't crashed yet. I experience Excel crashes once or twice a day, but since I have no way of knowing whether that is due to Excel or Rubberduck, I haven't really talked about it here (also, the logs don't say anything). However, since I started using the new version I get crashes while it runs tests, so there might be at least some connection. I'm not sure; Excel isn't exactly the world's most stable program. Do you know if there is any way to check what type of error occurred? For instance, maybe it's running out of memory.
For what it's worth, I noticed tests run smoothly only when grouping by category, not by outcome or by location.
Yeah, it sounds like we need to take a hard look at the TestExplorer updating its UI and the TestEngine dispatching results... The UI updates should not affect the measured test runtime in any significant way. There might be shenanigans to be had with batching updates, but that sounds like a lot of trouble...
Just as a further tip: if you set a filter that filters away all tests, the tests run the fastest. Afterwards, you can remove the filter and see the results. This also indicates that we really should batch updates to the UI.
That's what I did for the second of the new tests (the ones marked filtered). But yeah, that is what I will start doing ^^ |
The testing GUI is horrendously inefficient and slows down quickly as the number of tests grows. I would like to support the idea of adding a setting for the number of tests between GUI updates. For example, 175 fairly basic tests take 3.5-4 min with GUI updates vs. 2 sec with the GUI disabled. Attached is my test file.
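The "number of tests between GUI updates" setting suggested above amounts to batching result dispatch: collect finished-test results and hand them to the view in chunks, so the view re-renders once per batch instead of once per test. A minimal sketch (Python for illustration; the class name and `batch_size` setting are hypothetical, not an existing Rubberduck API):

```python
class BatchingResultDispatcher:
    """Buffer test results and pass them to the view in batches,
    triggering one re-render per batch instead of one per test."""

    def __init__(self, render, batch_size=25):
        self.render = render        # view callback taking a list of results
        self.batch_size = batch_size
        self._pending = []
        self.renders = 0

    def report(self, result):
        self._pending.append(result)
        if len(self._pending) >= self.batch_size:
            self.flush()

    def flush(self):
        # Called automatically when a batch fills up, and once manually
        # at the end of the run to deliver any remainder.
        if self._pending:
            self.render(list(self._pending))
            self._pending.clear()
            self.renders += 1

# Simulate the 175-test run from the comment above:
batches = []
dispatcher = BatchingResultDispatcher(render=batches.append, batch_size=25)
for i in range(175):
    dispatcher.report(f"test{i}: passed")
dispatcher.flush()  # deliver any leftover results at end of run
# 175 tests produce only 7 view renders instead of 175.
```

In a real WPF view the `render` callback would marshal to the UI thread once per batch, which is where the savings would come from.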
Rubberduck version information
Description
For the past couple of months I have noticed that my Rubberduck installation has been running slower and slower. These slowdowns are generally hard to both document and pin down, so I thought I'd at least provide some evidence. Initially I assumed it was simply my project increasing in complexity, but I tested that hypothesis by running my unit tests and comparing the results. At the moment I have:
v2.5.0: Runtime 2:06 - 2:12
v2.5.1: Runtime 3:26 - 3:48
I haven't been able to find a good way to measure the other slowdowns I am experiencing, but this is a place to start. I'll see if I can create an MWE at some point. Other symptoms mostly include the UI being slow, and when it hangs, it hangs long enough that Windows pushes the window into the background because it considers it unresponsive. For example: if I open the unit testing framework, run a parse, wait until Task Manager reports no activity in Excel, and then click the "Run" menu item, Rubberduck freezes for about 10-15 seconds before the dropdown menu becomes active and I can click something.
Logfile
RubberduckLog - 2.5.0.txt
RubberduckLog - 2.5.1.txt
Additional context
The project I am currently testing has 400 module and class files, 50 test modules and 475 tests. It's about 87k lines of code.
If I export the runtimes from the unit test system, then the tests always report running faster on 2.5.1 than on 2.5.0. From that report the new version should have been faster by 5 seconds, but it is in fact slower by over a minute, so maybe there is something in the test setup/teardown. I have also timed it with a stopwatch to check that the timing function isn't wrong, and I get the same times.
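The discrepancy above is consistent with the measurement model described earlier in the thread: the reported time covers only the test body, while per-test overhead (UI dispatch, setup/teardown) lands outside the timer. A small sketch of how the reported sum can diverge from wall-clock time (illustrative Python, with `time.sleep` standing in for the untimed per-test overhead):

```python
import time

def run_suite(tests, per_test_overhead):
    """Time each test body the way a test engine might, then compare the
    sum of reported times against the wall-clock time of the whole run."""
    reported = []
    wall_start = time.perf_counter()
    for test in tests:
        t0 = time.perf_counter()
        test()                          # only the test body is timed
        reported.append(time.perf_counter() - t0)
        time.sleep(per_test_overhead)   # untimed overhead per test
    wall = time.perf_counter() - wall_start
    return sum(reported), wall

fast_test = lambda: None
reported_total, wall_total = run_suite([fast_test] * 50, per_test_overhead=0.002)
# reported_total stays tiny, while wall_total grows with the overhead -
# so per-test reports can improve even as the overall run gets slower.
```

If the 2.5.0-to-2.5.1 regression is in that untimed overhead, the exported runtimes would show an improvement while the stopwatch shows a regression, exactly as reported.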