//! An alternative solver based around the SLG algorithm, which
//! implements the well-formed semantics. This algorithm is very
//! closely based on the description found in the following paper,
//! which I will refer to in the comments as EWFS:
//!
//! > Efficient Top-Down Computation of Queries Under the Well-formed Semantics
//! > (Chen, Swift, and Warren; Journal of Logic Programming '95)
//!
//! However, to understand that paper, I would recommend first
//! starting with the following paper, which I will refer to in the
//! comments as NFTD:
//!
//! > A New Formulation of Tabled Resolution With Delay
//! > (Swift; EPIA '99)
//!
//! In addition, I incorporated extensions from the following papers,
//! which I will refer to as SA and RR respectively, which
//! describe how to introduce approximations when processing
//! subgoals and so forth:
//!
//! > Terminating Evaluation of Logic Programs with Finite Three-Valued Models
//! > Riguzzi and Swift; ACM Transactions on Computational Logic 2013
//! > (Introduces "subgoal abstraction", hence the name SA)
//! >
//! > Radial Restraint
//! > Grosof and Swift; 2013
//!
//! Another useful paper that gives a kind of high-level overview of
//! concepts at play is the following, which I will refer to as XSB:
//!
//! > XSB: Extending Prolog with Tabled Logic Programming
//! > (Swift and Warren; Theory and Practice of Logic Programming '10)
//!
//! While this code is adapted from the algorithms described in those
//! papers, it is not the same. For one thing, the approaches there
//! had to be extended to our context, and in particular to coping
//! with hereditary Harrop predicates and our version of unification
//! (which produces subgoals). I believe those to be largely faithful
//! extensions. However, there are some other places where I
//! intentionally diverged from the semantics as described in the
//! papers -- e.g., by more aggressively approximating -- which I
//! have marked with a DIVERGENCE comment. Those places may warrant
//! re-evaluation in the future.
//!
//! Glossary of other terms:
//!
//! - WAM: Warren abstract machine, an efficient way to evaluate Prolog programs.
//! See <http://wambook.sourceforge.net/>.
//! - HH: Hereditary Harrop predicates. What Chalk deals in.
//! Popularized by Lambda Prolog.
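As a loose illustration of the tabling idea these papers develop (a hypothetical, self-contained sketch; the `answers` function and its string goals are invented for illustration and are unrelated to the types defined below), answers to a goal are computed once, stored in a table keyed by the goal, and reused on later lookups:

```rust
use std::collections::HashMap;

// Hypothetical sketch of tabling: memoize answers per goal so an
// evaluation never re-derives (or loops forever on) the same subgoal.
fn answers(goal: &str, table: &mut HashMap<String, Vec<u32>>) -> Vec<u32> {
    if let Some(cached) = table.get(goal) {
        return cached.clone(); // table hit: reuse previously found answers
    }
    // "Evaluate" the goal; a real solver would resolve it against clauses.
    let computed = vec![1, 2, 3];
    table.insert(goal.to_string(), computed.clone());
    computed
}

fn main() {
    let mut table = HashMap::new();
    let first = answers("bar(?X)", &mut table);
    let second = answers("bar(?X)", &mut table); // served from the table
    assert_eq!(first, second);
    assert_eq!(table.len(), 1);
}
```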
use cast::{Cast, Caster};
use ir::*;
use ir::could_match::CouldMatch;
use solve::infer::{InferenceTable, unify::UnificationResult};
use solve::truncate::{truncate, Truncated};
use stacker;
use std::collections::{HashMap, HashSet};
use std::collections::hash_map::Entry;
use std::cmp::min;
use std::iter;
use std::mem;
use std::ops::{Index, IndexMut, Range};
use std::sync::Arc;
use std::usize;
mod aggregate;
pub mod on_demand;
mod resolvent;
mod test;
/// Finds all possible solutions to the given root goal in the context
/// of the given program, approximating if the size of solutions or
/// subqueries exceeds `max_size`.
///
/// If this returns `Ok`, a complete set of answers is returned, some
/// of which may be approximated. This can be converted into a
/// solution using the method `into_solution` on `SimplifiedAnswers`.
///
/// If this returns `Err`, then the success or failure of the program
/// could not be interpreted due to some execution error (typically
/// involving negation of a term with unresolved existential variables
/// -- that is, a non-ground term -- which is called "floundering").
pub fn solve_root_goal(
max_size: usize,
program: &Arc<ProgramEnvironment>,
root_goal: &UCanonicalGoal,
) -> Result<SimplifiedAnswers, ExplorationError> {
Forest::solve_root_goal(max_size, program, &root_goal)
}
/// The **FOREST** of evaluation tracks all the in-progress work.
/// Conceptually, it corresponds to the forest F described in NFTD;
/// however, we structure it more like the "table" described in EWFS.
/// In particular, we never materialize the forest and subgraphs
/// *directly*, instead keeping two bits of information:
///
/// - There is a **table** for each tree with root node `A :- A` in the forest.
/// This table is indexed by the (canonical) root node A. It contains
/// the answers found so far, as well as links to nodes from other trees in the
/// forest that are still waiting for answers.
/// - There is a **stack** of nodes `A :- G` from the forest. Roughly
/// speaking, this stack stores nodes in the forest which have not
/// yet been completely evaluated.
/// - Calling this a stack can be a bit misleading: although the
/// activity of the system is focused on the top of the stack, we
/// also will wind up doing things like producing a new answer
/// that feeds into a goal higher up the stack. For example, we might
/// have a stack like the following (where the stack grows down):
///
/// // foo(X) :- bar(X), baz(X).
/// // bar(X) :- ...
///
/// Here, we see that `foo(X)` is waiting on a result from `bar(X)`. Let's
/// say we just found an answer, `bar(1)`. In that case, we would feed that answer
/// to `foo`, causing us to push a new stack entry:
///
/// // foo(X) :- bar(X), baz(X).
/// // bar(X) :- ...
/// // foo(X) :- baz(1).
///
/// `bar(X)` and the node on top of it in the stack are not really
/// related. (Indeed, coping with this is actually the source of
/// some complexity in the machine itself.)
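The push described in the comment above can be sketched very literally (a hypothetical toy, using plain strings for stack entries rather than the real `StackEntry` type): when `bar` yields an answer, an entry derived from `foo` lands on top of `bar`, even though the two are logically unrelated.

```rust
// Toy model of the stack discipline: a lower entry (bar) produces an
// answer, and a new entry derived from a *higher* goal (foo) is pushed
// on top of it.
fn main() {
    let mut stack = vec![
        "foo(X) :- bar(X), baz(X).".to_string(),
        "bar(X) :- ...".to_string(),
    ];
    // bar yields the answer bar(1); feed it into foo and push the result.
    stack.push("foo(X) :- baz(1).".to_string());
    assert_eq!(stack.len(), 3);
    assert_eq!(stack.last().unwrap(), "foo(X) :- baz(1).");
}
```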
struct Forest {
infer: InferenceTable,
program: Arc<ProgramEnvironment>,
dfn: DepthFirstNumber,
tables: Tables,
stack: Stack,
max_size: usize,
}
/// A unit type used to indicate that we have fully explored a
/// particular pathway.
struct FullyExplored;
/// The various kinds of errors we can encounter during exploration.
/// Note that these do not indicate **failed results** -- i.e., traits
/// not implemented. They also do not indicate the "third value" in
/// the WFS semantics. Rather they indicate that we could not figure
/// out the result for a given predicate in the WFS semantics (i.e.,
/// we could not prove, disprove, nor even find a definitive undefined
/// result).
#[derive(Debug)]
pub enum ExplorationError {
/// Indicates that execution "floundered", meaning that it
/// encountered a negative goal with unresolved variables.
Floundered,
/// We do not tolerate overly large goals along negative paths
/// right now.
NegativeOverflow,
}
/// The result of exploration: either we fully explored some subtree,
/// populating the result tables with answers, or else we encountered
/// some kind of exploration error along the way.
type ExplorationResult = ::std::result::Result<FullyExplored, ExplorationError>;
/// See `Forest`.
#[derive(Default)]
struct Tables {
/// Maps from a canonical goal to the index of its table.
table_indices: HashMap<CanonicalGoal, TableIndex>,
/// Table: as described above, stores the key information for each
/// tree in the forest.
tables: Vec<Table>,
}
/// See `Forest`.
#[derive(Default)]
struct Stack {
/// Stack: as described above, stores the in-progress goals.
stack: Vec<StackEntry>,
}
index_struct! {
struct TableIndex {
value: usize,
}
}
/// The StackIndex identifies the position of a table's goal in the
/// stack of goals that are actively being processed. Note that once a
/// table is completely evaluated, it may be popped from the stack,
/// and hence no longer have a stack index.
index_struct! {
struct StackIndex {
value: usize,
}
}
/// The `DepthFirstNumber` (DFN) is a sequential number assigned to
/// each goal when it is first encountered. The naming (taken from
/// EWFS) refers to the idea that this number tracks the index of when
/// we encounter the goal during a depth-first traversal of the proof
/// tree.
#[derive(Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
struct DepthFirstNumber {
value: u64,
}
copy_fold!(DepthFirstNumber);
struct StackEntry {
/// The goal G from the stack entry `A :- G` represented here.
table: TableIndex,
/// The DFN of this computation.
dfn: DepthFirstNumber,
/// Tracks the dependencies of this stack entry on things beneath
/// it in the stack. This field is updated "periodically",
/// e.g. when a direct subgoal completes. Otherwise, the minimums
/// for the active computation are tracked in a local variable
/// that is threaded around.
///
/// Note that this field is an over-approximation. As described in
/// section 3.4.1 of EWFS, it actually stores the minimal
/// dependencies of this stack entry **and anything on top of it
/// in the stack**. In some cases, it can happen that this entry
/// on the stack does not depend on the things on top of it, in
/// which case the `link` is overapproximated -- this
/// overapproximation reflects the fact that, because of the
/// nature of a stack, we cannot in fact pop this entry until
/// those other entries are popped, even though there is no
/// *logical* dependency between us. This is the price we pay for
/// using such a simple data structure.
link: Minimums,
}
struct Table {
/// The goal this table is trying to solve (also the key to look
/// it up).
table_goal: CanonicalGoal,
/// Stores the answers that we have found thus far. For each head
/// goal, we store a set of "delayed literals" instances. So, if
/// the SLG algorithm would have computed (e.g.) `A :- ~B |` and
/// `A :- ~C |` as answers, this would be stored as a `A -> {{B},
/// {C}}` map entry.
answers: HashMap<CanonicalConstrainedSubst, DelayedLiteralSets>,
/// Stack entries waiting to hear about POSITIVE results from this
/// table. This occurs when you have something like `foo(X) :-
/// bar(X)`.
positives: Vec<CanonicalPendingExClause>,
/// Stack entries waiting to hear about NEGATIVE results from this
/// table. This occurs when you have something like `foo(X) :- not
/// bar(X)`.
negatives: Vec<CanonicalPendingExClause>,
/// Stores the index of this table on the stack. This is only
/// `Some` until the table has been COMPLETELY EVALUATED -- i.e.,
/// all possible answers have been found -- at which point it is
/// set to `None`.
depth: Option<StackIndex>,
}
/// A truth value in the WFS.
#[derive(Copy, Clone, Debug)]
enum TruthValue {
/// Contains a finite proof.
True,
/// Contains no proof or an infinite proof.
False,
/// Participates in a negative cycle.
///
/// Consider this: are `a` and `b` true if `a :- not b. b :- not a.`?
Unknown,
}
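To make the `Unknown` case concrete, here is a hypothetical, self-contained sketch of three-valued connectives in the well-founded semantics (the `Tv` type and its methods are invented for illustration; they are not the `TruthValue` used by this solver):

```rust
// Hypothetical three-valued logic: Unknown is fixed under negation,
// and conjunction lets False dominate, then Unknown.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
enum Tv { True, False, Unknown }

impl Tv {
    // Negation: not(Unknown) is still Unknown.
    fn not(self) -> Tv {
        match self {
            Tv::True => Tv::False,
            Tv::False => Tv::True,
            Tv::Unknown => Tv::Unknown,
        }
    }
    // Conjunction: False wins, then Unknown, else True.
    fn and(self, other: Tv) -> Tv {
        match (self, other) {
            (Tv::False, _) | (_, Tv::False) => Tv::False,
            (Tv::Unknown, _) | (_, Tv::Unknown) => Tv::Unknown,
            _ => Tv::True,
        }
    }
}

fn main() {
    // In `a :- not b. b :- not a.` no consistent True/False assignment
    // exists, so both a and b come out Unknown.
    assert_eq!(Tv::Unknown.not(), Tv::Unknown);
    assert_eq!(Tv::True.and(Tv::Unknown), Tv::Unknown);
    assert_eq!(Tv::False.and(Tv::Unknown), Tv::False);
}
```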
/// A link between two tables, indicating that when an answer is
/// produced by one table, it should be fed into another table.
/// For example, if we have a clause like
///
/// ```notrust
/// foo(?X) :- bar(?X), baz(?X)
/// ```
///
/// then `foo` might insert a `PendingExClause` into the table for
/// `bar`, indicating that each value of `?X` could lead to an answer
/// for `foo` (if `baz(?X)` is true).
#[derive(Clone, Debug)]
struct PendingExClause {
/// The `goal_depth` in the stack of `foo`, the blocked goal.
/// Note that `foo` must always be in the stack, since it is
/// actively awaiting an answer.
goal_depth: StackIndex,
/// Answer substitution that the pending ex-clause carries along
/// with it. Maps from the free variables in the table goal to the
/// final values they wind up with.
subst: Substitution,
/// The goal `bar(?X)` that `foo(?X)` was trying to solve;
/// typically equal to the table-goal in which this pending
/// ex-clause is contained, modulo the ordering of variables. This
/// is not *always* true, however, because the table goal may have
/// been truncated.
selected_goal: InEnvironment<Goal>,
/// Any delayed literals in the ex-clause we were solving
/// when we blocked on `bar`.
delayed_literals: Vec<DelayedLiteral>,
/// Constraints accumulated thus far in the ex-clause we were solving.
constraints: Vec<InEnvironment<Constraint>>,
/// Further subgoals, like `baz(?X)`, that must be solved after `foo`.
subgoals: Vec<Literal>,
}
struct_fold!(PendingExClause {
goal_depth,
subst,
selected_goal,
delayed_literals,
constraints,
subgoals,
});
/// The paper describes these as `A :- D | G`.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
struct ExClause {
/// The substitution which, applied to the goal of our table,
/// would yield A.
subst: Substitution,
/// Delayed literals: things that we depend on negatively,
/// but which have not yet been fully evaluated.
delayed_literals: Vec<DelayedLiteral>,
/// Region constraints we have accumulated.
constraints: Vec<InEnvironment<Constraint>>,
/// Subgoals: literals that must be proven
subgoals: Vec<Literal>,
}
struct_fold!(ExClause {
subst,
delayed_literals,
constraints,
subgoals,
});
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct SimplifiedAnswers {
pub answers: Vec<SimplifiedAnswer>,
}
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct SimplifiedAnswer {
/// A fully instantiated version of the goal for which the query
/// is true (including region constraints).
pub subst: CanonicalConstrainedSubst,
/// If this flag is set, then the answer could be neither proven
/// nor disproven. In general, the existence of a non-empty set of
/// delayed literals simply means the answer's status is UNKNOWN,
/// either because the size of the answer exceeded `max_size` or
/// because of a negative loop (e.g., `P :- not { P }`).
pub ambiguous: bool,
}
#[derive(Clone, Debug)]
enum DelayedLiteralSets {
/// Corresponds to a single, empty set.
None,
/// Some (non-zero) number of non-empty sets.
Some(HashSet<DelayedLiteralSet>),
}
/// A set of delayed literals. The vector in this struct must
/// be sorted, ensuring that we don't have to worry about permutations.
///
/// (One might expect delayed literals to always be ground, since
/// non-ground negative literals result in floundered
/// executions. However, due to the approximations introduced via RR
/// to ensure termination, it *is* in fact possible for delayed goals
/// to contain free variables. For example, what could happen is that
/// we get back an approximated answer with `Goal::CannotProve` as a
/// delayed literal, which in turn forces its subgoal to be delayed,
/// and so forth. Therefore, we store canonicalized goals.)
#[derive(Clone, Debug, Default, PartialEq, Eq, Hash)]
struct DelayedLiteralSet {
delayed_literals: Vec<DelayedLiteral>,
}
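The sorting invariant mentioned above can be illustrated in miniature (a hypothetical sketch using `u32` stand-ins for `DelayedLiteral`): keeping the vector sorted makes set equality an ordinary `Vec` comparison, independent of insertion order.

```rust
// Canonicalize a "set" stored as a Vec by sorting (and deduplicating),
// so two permutations of the same elements compare equal.
fn canonical_set(mut literals: Vec<u32>) -> Vec<u32> {
    literals.sort();
    literals.dedup();
    literals
}

fn main() {
    assert_eq!(canonical_set(vec![3, 1, 2]), canonical_set(vec![2, 3, 1]));
    assert_eq!(canonical_set(vec![3, 1, 2]), vec![1, 2, 3]);
}
```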
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
enum DelayedLiteral {
/// Something which can never be proven nor disproven. Inserted
/// when truncation triggers; doesn't arise normally.
CannotProve(()),
/// We are blocked on a negative literal `~G`, where `G` is the
/// goal of the given table. Because negative goals must always be
/// ground, we don't need any other information.
Negative(TableIndex),
/// We are blocked on a positive literal `Li`; we found a
/// **conditional** answer (the `CanonicalConstrainedSubst`) within the
/// given table, but we have to come back later and see whether
/// that answer turns out to be true.
Positive(TableIndex, CanonicalConstrainedSubst),
}
enum_fold!(DelayedLiteral[] { CannotProve(a), Negative(a), Positive(a, b) });
/// Either `A` or `~A`, where `A` is a `Env |- Goal`.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
enum Literal {
Positive(InEnvironment<Goal>),
Negative(InEnvironment<Goal>),
}
enum_fold!(Literal[] { Positive(a), Negative(a) });
/// The `Minimums` structure is used to track the dependencies between
/// some item E on the evaluation stack. In particular, it tracks
/// cases where the success of E depends (or may depend) on items
/// deeper in the stack than E (i.e., with lower DFNs).
///
/// `positive` tracks the lowest index on the stack to which we had a
/// POSITIVE dependency (e.g. `foo(X) :- bar(X)`) -- meaning that in
/// order for E to succeed, the dependency must succeed. It is
/// initialized with the index of the predicate on the stack. So
/// imagine we have a stack like this:
///
/// // 0 foo(X) <-- bottom of stack
/// // 1 bar(X)
/// // 2 baz(X) <-- top of stack
///
/// In this case, `positive` would be initially 0, 1, and 2 for `foo`,
/// `bar`, and `baz` respectively. This reflects the fact that the
/// answers for `foo(X)` depend on the answers for `foo(X)`. =)
///
/// Now imagine that we had a clause `baz(X) :- foo(X)`, inducing a
/// cycle. In this case, we would update `positive` for `baz(X)` to be
/// 0, reflecting the fact that its answers depend on the answers for
/// `foo(X)`. Similarly, the minimum for `bar` would (eventually) be
/// updated, since it too transitively depends on `foo`. `foo` is
/// unaffected.
///
/// `negative` tracks the lowest index on the stack to which we had a
/// NEGATIVE dependency (e.g., `foo(X) :- not { bar(X) }`) -- meaning
/// that for E to succeed, the dependency must fail. This is initially
/// `usize::MAX`, reflecting the fact that the answers for `foo(X)` do
/// not depend on `not(foo(X))`. When negative cycles are encountered,
/// however, this value must be updated.
#[derive(Copy, Clone, Debug)]
struct Minimums {
positive: DepthFirstNumber,
negative: DepthFirstNumber,
}
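The stack walkthrough above can be sketched as follows (a hypothetical, self-contained model using bare `u64` DFNs and an invented `take_minimums` helper; the real merging logic lives elsewhere in this module): when a dependency is discovered, the caller's minimums are updated by taking the minimum DFN on each axis.

```rust
use std::cmp::min;

// Hypothetical stand-in for the Minimums struct, tracking the lowest
// DFN reached via positive and negative dependencies.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
struct Mins { positive: u64, negative: u64 }

impl Mins {
    // Fold another set of minimums into this one, axis by axis.
    fn take_minimums(&mut self, other: &Mins) {
        self.positive = min(self.positive, other.positive);
        self.negative = min(self.negative, other.negative);
    }
}

fn main() {
    // Stack: 0 foo, 1 bar, 2 baz. Initially baz depends only on itself
    // positively, and on nothing negatively (usize::MAX analogue).
    let mut baz = Mins { positive: 2, negative: u64::MAX };
    // A clause `baz(X) :- foo(X)` induces a positive dependency on foo (DFN 0).
    baz.take_minimums(&Mins { positive: 0, negative: u64::MAX });
    assert_eq!(baz.positive, 0);
    assert_eq!(baz.negative, u64::MAX);
}
```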
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
enum Polarity {
Positive,
Negative,
}
#[derive(Copy, Clone, Debug)]
enum Satisfiable<T> {
Yes(T),
No,
}
type CanonicalConstrainedSubst = Canonical<ConstrainedSubst>;
type CanonicalGoal = Canonical<InEnvironment<Goal>>;
type CanonicalPendingExClause = Canonical<PendingExClause>;
type UCanonicalGoal = UCanonical<InEnvironment<Goal>>;
impl Forest {
fn solve_root_goal(
max_size: usize,
program: &Arc<ProgramEnvironment>,
root_goal: &UCanonicalGoal,
) -> Result<SimplifiedAnswers, ExplorationError> {
let program = program.clone();
let mut forest = Forest {
infer: InferenceTable::new(),
dfn: DepthFirstNumber::MIN,
program: program.clone(),
tables: Tables::default(),
stack: Stack::default(),
max_size,
};
let root_goal = forest.infer.instantiate_universes(root_goal);
let (root_table, root_table_depth) = forest.push_new_table(&root_goal, None, None);
let mut minimums = forest.stack[root_table_depth].link;
let subst = forest.infer.fresh_subst(&root_goal.binders);
let instantiated_goal = root_goal.substitute(&subst);
forest.subgoal(root_table_depth, instantiated_goal, subst, &mut minimums)?;
Simplification::simplify(&mut forest.tables);
Ok(forest.tables[root_table].export_answers())
}
/// Pushes a new goal onto the stack, creating a table entry in the process.
fn push_new_table(
&mut self,
goal: &CanonicalGoal,
positive_pending: Option<CanonicalPendingExClause>,
negative_pending: Option<CanonicalPendingExClause>,
) -> (TableIndex, StackIndex) {
let depth = self.stack.next_index();
let dfn = self.dfn.next();
let table = self.tables.insert(goal, depth);
debug!(
"push_new_table: depth {:?} is table {:?} with goal {:?}",
depth, table, goal
);
self.tables[table].positives.extend(positive_pending);
self.tables[table].negatives.extend(negative_pending);
self.stack.push(table, dfn);
(table, depth)
}
/// Creates an inference snapshot and executes `op`, rolling back
/// the snapshot afterwards. This is generally safe to use in any
/// context where we are exploring (hence the return type),
/// since -- due to the nature of the EWFS algorithm -- any result
/// that may escape a stack frame (e.g., by being stored in a
/// table) is canonicalized first.
fn snapshotted<F>(&mut self, op: F) -> ExplorationResult
where
F: FnOnce(&mut Self) -> ExplorationResult,
{
let snapshot = self.infer.snapshot();
let result = op(self);
self.infer.rollback_to(snapshot);
result
}
/// This is SLG_SUBGOAL from EWFS. It is invoked when a new goal
/// has been freshly pushed. We do a slight tweak to account for
/// HH vs domain goals.
fn subgoal(
&mut self,
goal_depth: StackIndex,
goal: InEnvironment<Goal>,
subst: Substitution,
minimums: &mut Minimums,
) -> ExplorationResult {
debug_heading!(
"subgoal(goal_depth={:?}, goal={:?}, minimums={:?})",
goal_depth,
goal,
minimums
);
// We want to consider two cases:
//
// - The goal is a domain goal. In that case, we will make N alternatives,
// one for each clause we can find.
// - The goal is some other kind of HH goal. In that case, we will break it
// down into a product of literals, and create 1 alternative.
let InEnvironment { environment, goal } = goal;
match goal {
Goal::Leaf(LeafGoal::DomainGoal(domain_goal)) => {
let domain_goal = InEnvironment::new(&environment, domain_goal);
let clauses = clauses(&self.program, &domain_goal);
for clause in clauses {
self.snapshotted(|this| {
debug!("program clause = {:#?}", clause);
match resolvent::resolvent_clause(
&mut this.infer,
&domain_goal,
&subst,
&clause.implication,
) {
Satisfiable::No => Ok(FullyExplored),
Satisfiable::Yes(resolvent) => {
this.new_clause(goal_depth, resolvent, minimums)
}
}
})?;
}
}
_ => {
// `canonical_goal` is an HH goal. We can simplify it
// into a series of *literals*, all of which must be
// true. Thus, in EWFS terms, we are effectively
// creating a single child of the `A :- A` goal that
// is like `A :- B, C, D` where B, C, and D are the
// simplified subgoals. You can think of this as
// applying built-in "meta program clauses" that
// reduce HH goals into Domain goals.
let hh_goal = InEnvironment::new(&environment, goal);
let ex_clause = match simplify_hh_goal(&mut self.infer, subst, hh_goal) {
Satisfiable::Yes(ex_clause) => ex_clause,
Satisfiable::No => return Ok(FullyExplored), // no way to solve
};
self.new_clause(goal_depth, ex_clause, minimums)?;
}
}
debug!(
"subgoal: goal_depth={:?} minimums={:?}",
goal_depth, minimums
);
self.complete(goal_depth, minimums)
}
/// Pop off the next subgoal from `ex_clause` and try to solve
/// it. Invoked when we have either just started a fresh goal (and
/// selected a program clause) or when we have a new answer to a
/// blocked goal that has just been incorporated.
fn new_clause(
&mut self,
goal_depth: StackIndex,
mut ex_clause: ExClause, // Contains both A and G together.
minimums: &mut Minimums,
) -> ExplorationResult {
info_heading!(
"new_clause(goal_depth={:?}, ex_clause={:?}, minimums={:?}",
goal_depth,
self.infer.normalize_deep(&ex_clause),
minimums
);
maybe_grow_stack(|| {
self.snapshotted(|this| {
match ex_clause.subgoals.pop() {
// No goals left to prove: this is an answer.
None => this.answer(goal_depth, ex_clause, minimums),
// Positive goal.
Some(Literal::Positive(selected_goal)) => {
this.positive(goal_depth, ex_clause, selected_goal, minimums)
}
// Negative goal. EWFS checks for whether `selected_goal`
// is ground here. We push this check into `negative`.
Some(Literal::Negative(selected_goal)) => {
this.negative(goal_depth, ex_clause, selected_goal, minimums)
}
}
})
})
}
/// Try to solve a positive selected literal.
fn positive(
&mut self,
goal_depth: StackIndex,
ex_clause: ExClause,
selected_goal: InEnvironment<Goal>,
minimums: &mut Minimums,
) -> ExplorationResult {
debug_heading!(
"positive(goal_depth={:?}, ex_clause={:?}, selected_goal={:?}, minimums={:?})",
goal_depth,
self.infer.normalize_deep(&ex_clause),
selected_goal,
minimums
);
// Subgoal abstraction: Rather than looking up the table for
// `selected_goal` directly, first apply the truncation
// function. This may introduce fresh variables, making the
// goal that we are looking up more general, and forcing us to
// reuse an existing table. For example, if we had a selected
// goal of
//
// // Vec<Vec<Vec<Vec<i32>>>>: Sized
//
// we might now produce a truncated goal of
//
// // Vec<Vec<?T>>: Sized
//
// Obviously, the answer we are looking for -- if it exists -- will be
// found amongst the answers of this new, truncated goal.
//
// Subtle point: Note that the **selected goal** remains
// unchanged and will be carried over into the "pending
// clause" for the positive link on the new subgoal. This
// means that if our new, truncated subgoal produces
// irrelevant answers (e.g., `Vec<Vec<u32>>: Sized`), they
// will fail to unify with our selected goal, producing no
// resolvent.
let Truncated {
overflow: _,
value: truncated_subgoal,
} = truncate(&mut self.infer, self.max_size, &selected_goal);
// Check if we need to create a new table and (if so) stop.
let subgoal_table = match self.select_goal(
goal_depth,
Polarity::Positive,
&ex_clause,
&selected_goal,
truncated_subgoal,
minimums,
)? {
Some(t) => t,
None => return Ok(FullyExplored),
};
let pending_ex_clause = self.pending_ex_clause(goal_depth, &ex_clause, &selected_goal);
// A table for this entry already exists. We want to take
// whatever answers we can find in the table -- bearing in
// mind that the table may still be in the process of being
// evaluated!
if let Some(subgoal_depth) = self.tables[subgoal_table].depth {
// If the table is not completely evaluated, then there is
// a cycle. We'll still use whatever answers have been
// found so far, but we'll also register ourselves to
// receive any new answers that will come later.
self.tables[subgoal_table]
.positives
.push(pending_ex_clause.clone());
self.update_lookup(goal_depth, subgoal_depth, Polarity::Positive, minimums);
}
// Process the answers that have already been found one by
// one.
let new_ex_clauses: Vec<_> = {
let infer = &mut self.infer;
let subgoal_table_goal = &self.tables[subgoal_table].table_goal;
self.tables[subgoal_table]
.answers
.iter()
.filter_map(|(answer_subst, answer_delayed_literals)| {
// This is a bit subtle: when we incorporate the
// cached answer, we always use the
// `pending_ex_clause` as the starting point
// rather than `ex_clause`. This is because the
// pending ex-clause is canonicalized, and hence
// we wind up generating fresh copies of all the
// inference variables involved. In the case where
// there is more than one pending answer, this is
// important, because otherwise we can wind up
// "cross-contaminating" between answers here.
//
// Imagine that our ex-clause is `?T: Sour`, and
// we have two tabled answers `Lemon: Sour` and
// `Vinegar: Sour` (this scenario btw is tested by
// `cached_answers1` and friends). When we
// incorporate the first cached answer, we wind up
// unifying `?T` with `Lemon` -- but we just store
// that in a vector, we don't immediately do anything with it.
//
// Then we try to incorporate the next cached answer,
// `Vinegar` -- but now we already have `?T = Lemon` as a constraint,
// so that (incorrectly) fails.
//
// There are other ways to fix this:
//
// - We might clone the answer set and then use
// snapshots as we process each cached
// answer. This is effectively what happens when
// handling program clauses, except that in that
// case no clone is needed, because the program
// clauses are not stored in a structure that is
// being updated during computation (unlike the
// answer set).
//
// - We might alternatively clone our inference
// table for each cached answer, so that `?T`
// has distinct bindings on the different
// paths. We would need to move `infer` from a
// field to a parameter then.
//
// In both cases, we could incorporate persistent
// data structures to make those clones cheaper.
Self::incorporate_cached_answer(
infer,
&pending_ex_clause,
subgoal_table,
subgoal_table_goal,
answer_subst,
answer_delayed_literals,
).yes()
})
.collect()
};
for (new_goal_depth, new_ex_clause) in new_ex_clauses {
assert_eq!(new_goal_depth, goal_depth);
let new_ex_clause = self.truncate_returned(new_ex_clause);
self.new_clause(goal_depth, new_ex_clause, minimums)?;
}
Ok(FullyExplored)
}
/// Checks to see if a table exists for `instantiated_subgoal`. If
/// so, returns `Ok(Some(index))` with its index. If not, creates the new
/// table and starts solving it.
///
/// If a new table is created, it will have a link to the
/// ex-clause (`ex_clause`) being solved. This link will carry the
/// current selected goal (`selected_goal`). Typically,
/// `selected_goal` and `instantiated_subgoal` are variants of one
/// another, but due to subgoal abstraction (truncation) this may
/// not *necessarily* be the case. See the callers of this
/// function for detailed comments.
///
/// # Parameters
///
/// - `goal_depth`: depth of current goal that we are solving in the stack
/// - `polarity`: is the selected literal positive or negative
/// - `ex_clause`: current X-clause we are solving
/// - `selected_goal`: goal of current selected literal (unaltered by abstraction)
/// - `instantiated_subgoal`: abstracted version of selected goal used for table lookup
/// - `minimums`: minimums for current stack frame
fn select_goal(
&mut self,
goal_depth: StackIndex,
polarity: Polarity,
ex_clause: &ExClause,
selected_goal: &InEnvironment<Goal>,
instantiated_subgoal: InEnvironment<Goal>,
minimums: &mut Minimums,
) -> Result<Option<TableIndex>, ExplorationError> {
debug_heading!(
"select_goal(goal_depth={:?}, \
polarity={:?}, \
selected_goal={:?}, \
instantiated_subgoal={:?}, \
minimums={:?})",
goal_depth,
polarity,
selected_goal,
instantiated_subgoal,
minimums
);
let (canonical_subgoal, subst) = self.infer
.canonicalize(&instantiated_subgoal)
.into_quantified_and_subst();
debug!("selected_goal: canonical_subgoal={:?}", canonical_subgoal);
// Check if we have an existing table. If so, return its index.
if let Some(subgoal_table) = self.tables.index_of(&canonical_subgoal) {
return Ok(Some(subgoal_table));
}
// Otherwise, create the new table, listing the current goal
// as being pending. Then try to solve this new table.
let pending_ex_clause = self.pending_ex_clause(goal_depth, ex_clause, selected_goal);
let (positive_link, negative_link) = match polarity {
Polarity::Positive => (Some(pending_ex_clause), None),
Polarity::Negative => (None, Some(pending_ex_clause)),
};
let (subgoal_table, subgoal_depth) =
self.push_new_table(&canonical_subgoal, positive_link, negative_link);
let mut subgoal_minimums = self.stack.top().link;
self.subgoal(
subgoal_depth,
instantiated_subgoal,
subst,
&mut subgoal_minimums,
)?;
self.update_solution(
goal_depth,
subgoal_table,
polarity,
minimums,
&mut subgoal_minimums,
);
Ok(None)
}

/// Creates a `PendingExClause` representing the current node in the forest.
///
/// # Parameters
///
/// - `goal_depth` -- the depth of the suspended goal in the stack
/// - `ex_clause` -- the thing we are trying to prove (`A |- G` in EWFS),
/// but with the selected literal popped
/// - `selected_goal` -- the selected literal. This could be either positive
/// or negative depending on context.
fn pending_ex_clause(
&mut self,
goal_depth: StackIndex,
ex_clause: &ExClause,
selected_goal: &InEnvironment<Goal>,
) -> CanonicalPendingExClause {
let parts = (
&ex_clause.subst,
&selected_goal,
&ex_clause.delayed_literals,
&ex_clause.constraints,
&ex_clause.subgoals,
);
let canonical_parts = self.infer.canonicalize(&parts).quantified;
canonical_parts.map(
|(subst, selected_goal, delayed_literals, constraints, subgoals)| PendingExClause {
goal_depth,
subst,
selected_goal,
delayed_literals,
constraints,
subgoals,
},
)
}

fn negative(
&mut self,
goal_depth: StackIndex,
ex_clause: ExClause,
selected_goal: InEnvironment<Goal>,
minimums: &mut Minimums,
) -> ExplorationResult {
debug_heading!(
"negative(goal_depth={:?}, ex_clause={:?}, selected_goal={:?}, minimums={:?})",
goal_depth,
self.infer.normalize_deep(&ex_clause),
selected_goal,
minimums
);
// First, we have to check that the selected negative literal
// is ground, and invert any universally quantified variables.
//
// DIVERGENCE -- In the RR paper, to ensure completeness, they
// permit non-ground negative literals, but only consider
// them to succeed when the target table has no answers at
// all. This is equivalent to inverting those free existentials
// into universals, as discussed in the comments of
// `invert`. This is clearly *sound*, but the completeness is
// a subtle point. In particular, it can cause **us** to reach
// false conclusions, because e.g. given a program like
// (selected left-to-right):
//
// not { ?T: Copy }, ?T = Vec<u32>
//
// we would select `not { ?T: Copy }` first. For this goal to
// succeed we would require that -- effectively -- `forall<T>
// { not { T: Copy } }`, which clearly doesn't hold. (In the
// terms of RR, we would require that the table for `?T: Copy`
// has failed before we can continue.)
//
// In the RR paper, this is acceptable because they assume all
// of their input programs are both **normal** (negative
// literals are selected after positive ones) and **safe**
// (all free variables in negative literals occur in positive
// literals). It is plausible for us to guarantee "normal"
// form, since we can reorder clauses as needed. I suspect we
// can guarantee safety too, but that requires more thought.
//
// For now, we opt for the safer route of treating such
// executions as floundering, because I think our use of
// negative goals is sufficiently limited that we can get away with
// it. The practical effect is that we will judge more
// executions as floundering than we ought to (i.e., where we
// could instead generate an (imprecise) result). As you can
// see a bit later, we also diverge in some other aspects that
// affect completeness when it comes to subgoal abstraction.
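//
// For contrast, a "safe" and "normal" ordering of the same
// program would be (selected left-to-right):
//
//     ?T = Vec<u32>, not { ?T: Copy }
//
// Here the positive literal is selected first, binding `?T`
// to `Vec<u32>`; by the time the negative literal is
// selected it is ground, so `invert` succeeds and we never
// flounder.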
let inverted_subgoal = match self.infer.invert(&selected_goal) {
Some(g) => g,
None => {
return Err(ExplorationError::Floundered);
}
};
// DIVERGENCE
//
// If the negative subgoal has grown so large that we would have
// to truncate it, we currently just abort the computation
// entirely. This is not necessary -- the SA paper makes no
// such distinction, for example, and applies truncation equally
// for positive/negative literals. However, there are some complications
// that arise that I do not wish to deal with right now.
//
// Let's work through an example to show you what I
// mean. Imagine that this (negative) literal is selected;
// hence `selected_goal` will just be the inner part:
//
// // not { Vec<Vec<Vec<Vec<i32>>>>: Sized }
// // ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
// // `selected_goal`
//
// (In this case, the `inverted_subgoal` would be the same,
// since there are no free universal variables.)
//
// If truncation **doesn't apply**, we would go and lookup the
// table for the selected goal (`Vec<Vec<..>>: Sized`) and see
// whether it has any answers. If it does, and they are
// definite, then this negative literal is false. We don't
// really care even how many answers there are and so forth
// (if the goal is ground, as in this case, there can be at