Possible to speed up gcovr on big projects? #36
Comments
Scalability has been raised for several projects in the past. I need a case study to focus the performance tuning for gcovr. Is the issue simply the number of gcno files? I might be able to generate a case study that illustrates this situation.
My assumption is that it's not just the number but also the size/complexity: if header1.hpp is included by file1.cpp, file2.cpp, and file3.cpp, can …
Perhaps you could make that sort of logical deduction, but that would …
On a project that I'm working on, the following gcovr command line takes 5 minutes:

gcovr --xml --root $MI_SOURCE_DIR --exclude=.*/tests/.* --exclude=.*/build/.* -o $MI_BUILD_DIR/coverage_report.xml $MI_SOURCE_DIR

I tried replacing it with the following:

gcov $(find . -name "*.gcda" -o -name "*.gcno") --branch-counts --branch-probabilities --preserve-paths
gcovr -g --xml --root $MI_SOURCE_DIR --exclude=.*/tests/.* --exclude=.*/build/.* -o $MI_BUILD_DIR/coverage_report.xml $MI_SOURCE_DIR

Here the gcov command takes about 25 seconds and gcovr about 35 seconds. In other words, it is a solid factor of five faster. In the first case gcovr runs gcov once per .gcda file (falling back to the .gcno); I checked, and in the worst case it runs gcov more than once per .gcda/.gcno file if it doesn't find the working directory on the first try. I believe running gcov once, or at least fewer times, presents a great opportunity for improvement, better than parallelizing the main loop as suggested in #3.
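For readability, here is the same two-step workaround as a commented script. Treat it as a sketch: the assumption that it is run from the build directory containing the .gcda/.gcno files is mine, and MI_SOURCE_DIR / MI_BUILD_DIR are the environment variables from the comment above.

```sh
#!/bin/sh
# Sketch of the two-step workaround described above, not gcovr's default behavior.
# Assumes: run from the build directory that holds the .gcda/.gcno files, and
# MI_SOURCE_DIR / MI_BUILD_DIR are set in the environment.

# 1. Run gcov once over all coverage data files; --preserve-paths keeps the
#    generated .gcov file names unambiguous so gcovr can match them later.
gcov $(find . -name "*.gcda" -o -name "*.gcno") \
     --branch-counts --branch-probabilities --preserve-paths

# 2. Tell gcovr (-g) to reuse the .gcov files generated above instead of
#    invoking gcov itself once per data file.
gcovr -g --xml --root "$MI_SOURCE_DIR" \
      --exclude='.*/tests/.*' --exclude='.*/build/.*' \
      -o "$MI_BUILD_DIR/coverage_report.xml" "$MI_SOURCE_DIR"
```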
Sorry, it turns out the benefit I posted was a misrepresentation. The second option takes about 3m20s, so the benefit is not even a factor of two. Also, I get exactly the same stats for files, classes, and lines, but for conditionals the raw numbers are much higher for the second approach: 39728/286544 (13.9%) vs 87858/692094 (12.7%).
GCOV.exe could easily be called in parallel, based on the number of CPU cores.
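For illustration, a minimal sketch of that idea using find and xargs; the -P value, the use of nproc, and running it from the build directory are my assumptions, not something gcovr did at the time:

```sh
# Sketch only: run one gcov process per CPU core over all coverage data files,
# then let gcovr pick up the generated .gcov files with -g as in the comment above.
# Assumes GNU xargs and nproc, and that the current directory is the build directory.
find . -name '*.gcda' -print0 |
  xargs -0 -n 1 -P "$(nproc)" \
    gcov --branch-counts --branch-probabilities --preserve-paths
```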
@tsondergaard Did you ever manage to figure out why the numbers for the conditionals are being messed up?
@itavero Unfortunately not.
I checked and, in our case, the report claims exactly twice as many branches as there actually are (compared with what the previous reports show). What we are doing is running all the different tests for our entire code base and then we run … If anyone has an idea about what might be causing this, I'd love to hear it.
Hi,
I've written a tool which uses gcovr to generate coverage reports for a big project. I am selectively instrumenting files based on a diff file users pass in (i.e. if the user changed file1, file2, and file3, then touch those files and do a make with gcov flags enabled). In some cases, especially when a user touches a commonly included header file, hundreds of gcno files are created (I presume wherever those header files are included, and possibly recursively?). In these cases, gcovr takes up to 3 hours to complete its analysis. I'm curious if there's a safe way (one that maintains coverage accuracy) for me to speed things up?
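One possible direction, sketched under assumptions: only run gcov for the data files that correspond to the changed sources from the diff, then let gcovr consume the pre-generated .gcov files with -g as in the earlier comment. The changed_files.txt name, its one-path-per-line format, and the build-directory layout are all hypothetical.

```sh
# Hypothetical sketch: restrict gcov to .gcda files whose names match the changed
# sources, then have gcovr reuse those .gcov files (-g) instead of running gcov
# over every data file. Assumes GNU xargs, that changed_files.txt lists one changed
# source path per line, and that this runs from the build directory with the .gcda files.
while read -r src; do
  base=$(basename "$src")
  base=${base%.*}                      # e.g. file1.cpp -> file1
  find . -name "${base}*.gcda" -print
done < changed_files.txt | sort -u |
  xargs -r gcov --branch-counts --branch-probabilities --preserve-paths

gcovr -g --xml --root "$MI_SOURCE_DIR" -o coverage_report.xml "$MI_SOURCE_DIR"
```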