
Support coverage builds #187

Open
P4Cu opened this issue Feb 21, 2017 · 7 comments

Comments

@P4Cu commented Feb 21, 2017

Currently, if we detect that there's a coverage flag, we fall back to a local build, but it should be possible to handle it over the distributed network if the *.gcno/*.gcda files were captured.

I could try to implement that, but I would love to hear expert opinions first.

  • Is it possible with the current architecture (without hacking the code and making it messier than it already is)?
  • Which code blocks would need to be modified?
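
A minimal sketch of what that detection might look like; the helper names here are illustrative, not icecc's actual API. GCC emits foo.gcno next to foo.o at compile time, while .gcda only appears when the instrumented program runs, so only the .gcno would need to travel back:

```cpp
#include <string>
#include <vector>

// Recognize the flags that turn on coverage instrumentation.
bool wantsCoverage(const std::vector<std::string> &args) {
    for (const std::string &a : args) {
        if (a == "-ftest-coverage" || a == "-fprofile-arcs" || a == "--coverage")
            return true;
    }
    return false;
}

// Derive the expected notes file from the object name (foo.o -> foo.gcno).
std::string gcnoFileFor(const std::string &objectFile) {
    std::string::size_type dot = objectFile.rfind('.');
    std::string stem = (dot == std::string::npos) ? objectFile
                                                  : objectFile.substr(0, dot);
    return stem + ".gcno";
}
```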
@HenryMiller1 (Collaborator)

It would be nice to have, but I do not know how much effort it might take.

Dwarf-fission support already brings back two files; you can probably use b3da9eb as a guide. I suspect that if you expand on that, you can make returning more files work.

Best of luck, I haven't actually thought about the problem much.

@jdrouhard (Collaborator)

While this is desirable, I'm afraid it would require some significant refactoring and some pretty major design decisions.

When I added debug fission support (to capture .dwo files), I had to hard-code the expectation of whether a .dwo file would exist to send back or not. It is not trivial to add more possible output files for a given command line without intimate knowledge of the specific tools involved, and I don't think that knowledge should be baked in.

If we were to generalize this concept by making the daemons able to communicate an arbitrary number of output files per compile job, we would need to seriously consider the mechanism for knowing how many to expect, how to find them once the compile job is complete, etc. I do not believe we can do this in a deterministic way, however.
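
Purely to illustrate the scope of that change, a generalized reply could carry an explicit list of output files; the structs below are hypothetical, not icecc's actual wire protocol:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// One output file produced by the remote compile.
struct OutputFile {
    std::string name;               // path relative to the job's output dir
    std::vector<std::uint8_t> data; // raw file contents
};

// Result of one compile job: the receiver loops over files.size() entries
// instead of hard-coding "one object file, maybe one .dwo".
struct CompileResultMsg {
    int exitCode;
    std::string stderrText;
    std::vector<OutputFile> files;
};
```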

I'm sorry to rain on your parade, but you're probably better off sticking with local builds for this. :-/

@P4Cu (Author) commented Feb 21, 2017

Thanks for the answer. However, I'm not giving up that fast ;-)
Two things come to my mind:

  1. Sending a package that would be extracted on the host (see the sketch after this list).
  2. The other idea is rather major, but worth considering: if you think we should keep things like -ftest-coverage out of ICECC, then how about a plugin system? That's a huge task, I know.
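
A rough sketch of the package idea, under the assumption that the remote compile runs in an empty scratch directory so that everything found there afterwards is compiler output (names are illustrative and error handling is elided):

```cpp
#include <dirent.h>
#include <string>
#include <vector>

// List every file the compiler created in the scratch directory.
std::vector<std::string> filesProducedIn(const std::string &scratchDir) {
    std::vector<std::string> produced;
    if (DIR *dir = opendir(scratchDir.c_str())) {
        while (struct dirent *entry = readdir(dir)) {
            std::string name = entry->d_name;
            if (name != "." && name != "..")
                produced.push_back(name); // all of these are compiler output
        }
        closedir(dir);
    }
    return produced;
}
// The daemon would archive these files into one package and the client
// would unpack them next to the requested .o file.
```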

@HenryMiller1 (Collaborator)

I think we are just suggesting that you make sending files back generic. I don't think johnmiked15 wants to see more places where we track which files we expect to be produced, and I think I agree with him: when I looked at the patch again, I saw a number of places crying out for a more generic solution, but what that might be I do not know. We already have the binary and the debug file, and you are adding two more. That is enough to call for a generic solution.

A package extracted on the host might work, if you can figure out which files to put in the package, preferably in a generic way so that we can easily handle the next file someone decides the compiler needs to produce. This is not easy to do in a way that is generic, deterministic, and correct, and I think I want all three. I'm fine with a rule like "the compiler will always output to directory X", but only if it covers all our current cases; I don't actually know what all the possible cases are.

I'm not against a plugin system, but I'm not sure how it would work, which makes it hard to say I'm for it. There are a lot of ways to do it that I would reject, but not all of them.
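
For what it's worth, one conceivable shape for such a plugin hook, sketched only to make the discussion concrete; nothing like this exists in icecc today:

```cpp
#include <string>
#include <vector>

// Hypothetical extension point: each plugin declares which command lines it
// understands and which extra files (besides the .o) they will produce.
class OutputFilePlugin {
public:
    virtual ~OutputFilePlugin() {}
    // True if this plugin recognizes flags in the given command line.
    virtual bool matches(const std::vector<std::string> &args) const = 0;
    // Extra files the compiler will emit alongside the object file.
    virtual std::vector<std::string>
    extraOutputs(const std::vector<std::string> &args,
                 const std::string &objectFile) const = 0;
};
```

The daemon would walk its registered plugins after each compile, collecting every file the matching plugins list.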

@svkf commented Oct 3, 2017

I have a version with a relatively simple model to extend. I ripped out all the split-dwarf special cases (except where it has to change things in the file); the code is much simpler now (using C++11 features), since it just deals with a list of file extensions:

  • -fprofile-arcs -ftest-coverage (.gcno)
  • -save-temps=obj (.i, .ii, .s) (just to see if it was easy... but it took 3 edits, 1 of which was just to test the API version)

You can specify any or all of those and get the right files back; a sketch of the extension-table idea follows.
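
Roughly like the following; the identifiers are guesses at the shape of the model, not the actual patch:

```cpp
#include <map>
#include <string>
#include <vector>

// Each recognized flag maps to the extensions of the extra files it makes
// the compiler emit alongside the object file.
static const std::map<std::string, std::vector<std::string>> kExtraOutputs = {
    { "-ftest-coverage", { ".gcno" } },
    { "-save-temps=obj", { ".i", ".ii", ".s" } },
    { "-gsplit-dwarf",   { ".dwo" } },
};

// Collect every extra extension implied by the command line; the daemon
// then sends back stem + ext for each one.
std::vector<std::string> extraExtensions(const std::vector<std::string> &args) {
    std::vector<std::string> exts;
    for (const std::string &a : args) {
        auto it = kExtraOutputs.find(a);
        if (it != kExtraOutputs.end())
            exts.insert(exts.end(), it->second.begin(), it->second.end());
    }
    return exts;
}
```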

Are there other multiple-file outputs that people would want for a large system build? I'd like to see whether those can follow the same pattern.

I'm not sure how to post it up, since I made a lot of other changes; I need to split those out and ideally add a build test.

@HenryMiller1 (Collaborator)

Good to see progress on this; I hope you can figure out the next steps.

@HenryMiller1 (Collaborator)

When working on this, try to maintain interoperability with older clients that only have the current .dwo support. Mixed networks make it a lot easier for network administrators to manage their systems.
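
One way to gate that, assuming a made-up protocol-version constant (icecc's real version handling may differ):

```cpp
const int PROTO_MULTI_FILE = 43; // hypothetical version that adds file lists

void sendResults(int peerProtocolVersion /*, job, files... */) {
    if (peerProtocolVersion >= PROTO_MULTI_FILE) {
        // New path: send the generic list of output files.
    } else {
        // Old path: send the object file, then the .dwo if one was promised,
        // exactly as current clients expect.
    }
}
```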
