Add line tracing for more fine granular spec selection #253

Open
mbj opened this issue Sep 16, 2014 · 6 comments
Comments

@mbj
Owner

mbj commented Sep 16, 2014

Mutant currently uses explicit convention based spec selection. There are no options to configure this behavior. The explicit approach is nice and works for most of the cases I cared about in the past.

But:

  • Legacy projects introducing mutant face a much steeper adoption curve
  • Even on non-legacy projects, the speedup from sub-setting the convention-selected tests would be major
  • Selective retargeting (kill expressions, another planned feature, currently stalled on missing development time) can accidentally pull in the whole "test universe"; line tracing as a selection source could greatly reduce that cost

This ticket was created to allow aggregation of thoughts.

@tjchambers
Contributor

I for one experienced this on a "legacy project" with specs that were not written in a granular and targeted fashion. The result was:

  • Why is the mutant not killed? (sometimes I did have targets, but too granular or not complete)
  • Why are the non-mutated pre-tests failing?
  • How do I handle tests that are not targeted (like mutating private methods, but testing public ones)?

However, in hindsight, structuring the tests in a hierarchical - aka "mutant-compatible" - fashion led to some real benefits, for me worth the cost:

  • Mutant ran faster
  • Tests were clearer and less nebulous as to what was being tested
  • The structure led to better gap identification - before AND after using mutant

Having said the above it would have been nice for me to have two things going in:

  • A VERY clear understanding of mutant's test selection approach (I think I get it now, but am still occasionally unclear)
  • A tool of sorts to "suggest" where mutant's spec selection could be improved by adjusting specs (like a list of methods that have little/no corresponding specs covering them). This is more of a "coverage" concern, so not a failing of mutant, but one that significantly impacts the effectiveness of utilizing a powerful tool like mutant.
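The suggested tool could be sketched roughly like this (a hypothetical illustration only; `uncovered`, and the string-based expressions, are stand-ins and not mutant's actual API):

```ruby
# Hypothetical sketch of a "suggest where selection could improve" tool:
# list subject expressions that no test expression starts with, i.e.
# methods with no corresponding specs under the naming convention.
def uncovered(subject_expressions, test_expressions)
  subject_expressions.reject do |subject|
    test_expressions.any? { |test| test.start_with?(subject) }
  end
end

subjects = ['Foo#bar', 'Foo#baz']
tests    = ['Foo#bar']
uncovered(subjects, tests) # => ["Foo#baz"]
```

A real version would derive the expressions from mutant's subjects and the integration's tests, but the gap report itself is just this set difference.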

My two cents.

@mbj
Owner Author

mbj commented Sep 16, 2014

@tjchambers The current logic is best explained with code.

```ruby
# Return tests for mutation
#
# @return [Array<Test>]
#
# @api private
#
def tests
  match_expressions.each do |match_expression|
    tests = config.integration.all_tests.select do |test|
      match_expression.prefix?(test.expression)
    end
    return tests if tests.any?
  end
  EMPTY_ARRAY
end
memoize :tests
# Prepare the subject for the insertion of mutation
```
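To make the prefix-based selection concrete, here is a simplified, self-contained sketch of the same logic; `Expression`, `Test`, and `select_tests` are illustrative stand-ins, not mutant's real classes:

```ruby
# Simplified stand-in for mutant's expression: an expression is a
# prefix of another if the other's syntax string starts with it.
Expression = Struct.new(:syntax) do
  def prefix?(other)
    other.syntax.start_with?(syntax)
  end
end

Test = Struct.new(:expression)

# Walk the match expressions in order; the first expression that
# selects any tests wins, mirroring `return tests if tests.any?` above.
def select_tests(match_expressions, all_tests)
  match_expressions.each do |match_expression|
    tests = all_tests.select { |test| match_expression.prefix?(test.expression) }
    return tests unless tests.empty?
  end
  []
end

all_tests = [
  Test.new(Expression.new('Foo#bar')),
  Test.new(Expression.new('Foo#baz')),
  Test.new(Expression.new('Other'))
]

selected = select_tests([Expression.new('Foo#bar'), Expression.new('Foo')], all_tests)
selected.map { |t| t.expression.syntax } # => ["Foo#bar"]
```

Note the first-match-wins behavior: the broader `Foo` expression is never consulted because `Foo#bar` already selected a test.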

Also, thanks for your post. It helps me see where to improve mutant and/or the documentation.

@mbj
Owner Author

mbj commented Sep 16, 2014

@tjchambers BTW I like the success story of "mutant forced better structure". This is one of the main reasons I made the selection so rigid internally: I wanted to force myself to think clearly about units. I need to allow selective relaxation in the future, to allow real private classes and such.

I think you are one of the few mutant users who use the tool enough (and talk to me) to give feedback, or a kind of success/challenge story. I'd be pleased if you could write down an outsider's guide. I generally suck at writing such documents.

@tjchambers
Contributor

I have been accumulating notes as I experience things. I need to be careful to critique mutant both from the newcomer perspective ("How do I use this tool and get benefits that outweigh the cost?") and from the seasoned user's ("What could improve to enhance the benefits or reduce the cost?"). The outsider's guide (good term, BTW) at this stage of people's usage is probably biased toward the former.

@mbj
Owner Author

mbj commented Sep 16, 2014

@tjchambers Yeah, I get the difficulties. But please do not hold back criticism; even if it might not be valid, I think I should have a chance to think about it.

@tjchambers
Contributor

In comments on #259, @dkubb referred to the selective re-targeting. My company's product includes a proprietary tool (ACE) based on a genetic algorithm that "discovers" the highest combination of alignment by a group around a set of statements on a topic. Like most genetic algorithms it uses mutation to swiftly identify the most aligned combinations, giving each a fitness score.

In my naive thinking, assuming one could retain/memorize the configuration and know when to re-evaluate it, it would be like giving a fitness score to a sequence of example-group spec tests in the context of a method. The fitness score would maximize for speed (shorter-timed tests) and kill rate. For a method there would be a discovery of the "fittest" sequence of tests that have killed mutations in that method in the past. Biasing towards prior kills and their associated speed of execution sounds like what is being considered. Of course, if specs changed (i.e. the signature of the _spec file changed), then the fitness could be re-evaluated. That way, perhaps, one would not have to tweak the tests to benefit from the speed. That is somewhat akin to what our commercial product does when exploring the "alignment space".

Once I have mutated a module, I would love to see subsequent executions first try the tests that previously killed mutations of a particular method, before moving on to a more "standard" approach. At least that's my naive perspective.
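That ordering idea can be sketched in a few lines (purely hypothetical; `TestRecord`, `fitness`, and `ordered_tests` are made-up names, and mutant keeps no such kill history today):

```ruby
# Hypothetical sketch: bias the per-method test sequence toward tests
# that previously killed its mutations, preferring faster tests among them.
TestRecord = Struct.new(:name, :runtime, :prior_kills)

# Higher fitness = more prior kills per second of runtime.
def fitness(record)
  record.prior_kills / record.runtime
end

# Proven, fast killers run first; tests with no kill history run last.
def ordered_tests(records)
  records.sort_by { |record| -fitness(record) }
end

history = [
  TestRecord.new('spec_a', 0.5, 3),  # 3 kills, fast  -> fitness 6.0
  TestRecord.new('spec_b', 2.0, 4),  # 4 kills, slow  -> fitness 2.0
  TestRecord.new('spec_c', 0.1, 0)   # never killed anything -> 0.0
]

ordered_tests(history).map(&:name) # => ["spec_a", "spec_b", "spec_c"]
```

Re-evaluating the scores whenever the spec file's signature changes, as suggested above, would just mean resetting the `prior_kills` counters for the affected tests.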
