
Substract current automaton size to Wp-method's maxDepth #32

Closed
cgvwzq opened this issue May 6, 2019 · 2 comments
cgvwzq commented May 6, 2019

I may have gotten it wrong, but after checking the original paper it seems that the "middle part" length is the difference between the implementation's bound on the number of states (m) and the current spec size (n), so when using the Wp-method as an equivalence query, maxDepth could be shrunk dynamically (by subtracting the current automaton.size()). Right?

Right now the length is fixed independently of the candidate automaton: https://github.com/LearnLib/automatalib/blob/master/util/src/main/java/net/automatalib/util/automata/conformance/WpMethodTestsIterator.java#L81

I just started looking at the code base, so I might be missing something. Should/can I update the equivalence oracle inside my learning loop to reduce the max depth?
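To illustrate why the depth matters: with an alphabet of size k, inserting all tuples of length 0 to d between the P and W sets means sum_{l=0}^{d} k^l middle parts, so a depth that doesn't shrink as n approaches m inflates the test suite. A rough sketch of that count (hypothetical helper, not AutomataLib code):

```java
public class MiddleParts {

    // Number of middle-part infixes of length 0..depth over an alphabet of
    // size k: sum_{l=0}^{depth} k^l. Shows the cost of a fixed depth.
    static long middlePartCount(int k, int depth) {
        long total = 0;
        long pow = 1; // k^0
        for (int l = 0; l <= depth; l++) {
            total += pow;
            pow *= k;
        }
        return total;
    }

    public static void main(String[] args) {
        // 2-symbol alphabet, depth 3: 1 + 2 + 4 + 8 = 15 infixes
        System.out.println(middlePartCount(2, 3));
        // Shrinking the depth to m - n = 1 leaves only 1 + 2 = 3
        System.out.println(middlePartCount(2, 1));
    }
}
```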


mtf90 commented May 7, 2019

You are correct: currently the parameter is independent of the hypothesis size and is used to insert all tuples of length l ∈ {0, ..., maxDepth} between elements of the P and W sets. I'll have to look at the paper to see what the authors' intention for this case is (I didn't originally implement it in AutomataLib and only know the Wp-method with maxDepth = 0).

A potential problem I see with shrinking the value is that a bad estimate will degrade performance in later equivalence checks: once n > m, you basically have no more look-ahead. With a fixed exploration depth (as it is now), you continue to look ahead even in later refinement cycles. Conversely, a value of maxDepth that is too low probably gives you the same problem in the earlier equivalence checks. Maybe we could use two parameters (one for m and one for the minimal exploration depth), and always choose max(depth, m - n), so that users have more fine-grained control over the behaviour. What do you think?
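The two-parameter scheme could be sketched like this (hypothetical helper for illustration; `minDepth` and `m` are the parameters described above, not an existing AutomataLib API):

```java
public class EffectiveDepth {

    // Sketch of the proposed scheme: m is the (optional) upper bound on the
    // target's state count, minDepth the guaranteed minimal look-ahead.
    static int effectiveDepth(int minDepth, int m, int hypothesisSize) {
        // While the hypothesis is small, explore up to m - n; once n >= m,
        // fall back to the user-chosen minimal depth.
        return Math.max(minDepth, m - hypothesisSize);
    }

    public static void main(String[] args) {
        System.out.println(effectiveDepth(1, 10, 4));  // early check: depth 6
        System.out.println(effectiveDepth(1, 10, 12)); // n > m: falls back to 1
    }
}
```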

For now, to simulate a shrinking exploration depth, you can simply instantiate a new equivalence oracle for every refinement loop, like this (untested):

DFALearner<I> learner = ...;
learner.startLearning();

int m = ...;
boolean finished = false;

while (!finished) {
    DFA<?, I> hyp = learner.getHypothesisModel();

    // shrink the look-ahead as the hypothesis grows
    DFAEquivalenceOracle<I> eqOracle = new DFAWpMethodEQOracle<>(mqOracle, m - hyp.size());
    DefaultQuery<I, Boolean> ce = eqOracle.findCounterExample(hyp, inputs);

    if (ce != null) {
        learner.refineHypothesis(ce);
    } else {
        finished = true;
    }
}


cgvwzq commented May 7, 2019

Thanks for the quick answer.

My main point is that if I know an upper bound on the size of the automaton I want to learn (m in the paper), a fixed value will require a bunch of unnecessary queries as n (the size of the current hypothesis) gets bigger. But I agree that a bad estimate (or no information) can also be a problem.

Your solution makes sense to me: m would be the upper bound (which can be optional), and depth the minimal exploration depth (used in the worst case, i.e. if no counterexamples are found before).
