
Show differences from existing methods #3

Open
MichaelJFishman opened this issue Aug 25, 2022 · 7 comments
Labels
writing The writing needs to be updated

Comments

@MichaelJFishman
Collaborator

Some reviews asked how this differs from existing methods. Explain the differences.

@MichaelJFishman MichaelJFishman self-assigned this Aug 25, 2022
@camall3n camall3n added the writing The writing needs to be updated label Aug 30, 2022
@camall3n
Collaborator

camall3n commented Aug 30, 2022

@MichaelJFishman Is the intent to distinguish scoping from these others, or to improve clarity by explaining scoping better in terms of existing techniques?

If it's the former, we should make an additional issue for the latter.

@MichaelJFishman
Collaborator Author

MichaelJFishman commented Oct 2, 2022

@camall3n

At least the latter. Review 2.4 says:

> I think the suggested technique could be presented as an actual abstraction, combined with pruning of operators irrelevant in the abstraction: the suggested technique combines a projection (not in the sense of the paper) with a domain abstraction (to a single value). (Some operators would need to be removed and others would need to be "combined", more on this below.) This would have the great advantage that all of the properties discussed in Sections 3.1 and 3.2 would follow automatically from the transformation being a homomorphism. One would then "only" need to argue why the transformation additionally allows plan reconstruction from the transformed to the original task. I recommend reading Sections 2 and 3 of "Merge-and-Shrink: A Compositional Theory of Transformations of Factored Transition Systems" by Sievers & Helmert (JAIR 2021). While that work deals with transformations of LTS rather than tasks, I think the concept is applicable to FDR tasks. Note that they also discuss how transformations can prune states or operators (Section 8). Furthermore, the concept of "label reduction" (Section 7) should capture the "combination of operators" discussed here.

Edit: This comment is more relevant to #15.

@MichaelJFishman
Collaborator Author

MichaelJFishman commented Oct 4, 2022

Specific methods to relate to:

@MichaelJFishman
Collaborator Author

Rewording:

Use standard verbiage to make our method clearer, and to compare and contrast it with existing methods.

@MichaelJFishman
Collaborator Author

Expressing PPP using merge-and-shrink is a mess

It seems that expressing PPP as a composition of shrink, prune, label-reduction, and label-pruning transformations will be a mess. The shrink transformation throws away information we need at one of the label-pruning steps of PPP. In particular:

Part of a PPP transformation is, in order:

  1. Ignore effects on vshrink. (This is part of a shrink transformation.)
  2. Merge operators that are effect-equivalent. (A label-reduction transformation.)
  3. Delete merged operators that condition on vshrink. (This relies on pre-shrink-transformation data!)

The problem is that for FTS, we’d use the shrink transformation to ignore effects on vshrink, but the shrink transformation will remove any conditioning on vshrink.

We could

  1. Apply shrink to FTS X0 to get system X1
  2. Collect labels that are now effect-equivalent in X1
  3. Map each of these equivalence classes in X1 back to the pre-shrink labels in X0
  4. Reduce those pre-shrink labels to get X2
  5. Delete any label from this reduced label set (in X2) that conditions on vshrink, in order to get FTS X3
  6. Apply a shrink transformation to X3 to get a new FTS X4

But this seems messier than our existing description.

Mermaid graphs of this pipeline. Solid lines connect each FTS, its transformation, and the output FTS. Dashed lines indicate that a transformation's definition is based on the source FTS.

```mermaid
graph LR

X0
X1
X2
X3
X4

F0(("shrink"))
F1(("label reduce"))
F2(("label prune"))
F3(("shrink"))

X0 --> F0 --> X1
X1 -.-> F1
X0 --> F1 --> X2
X2 --> F2 --> X3
X3 --> F3 --> X4
```
```mermaid
graph LR

X0
X1
X2
X3
X4

F0[/"shrink vshrink"/]
F1[/"reduce labels based on the effect-equivalence of their images in X1"/]
F2[/"prune labels which condition on vshrink"/]
F3[/"shrink vshrink"/]

X0 --> F0 --> X1
X1 -.-> F1
X0 --> F1 --> X2
X2 --> F2 --> X3
X3 --> F3 --> X4
```
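The six steps can also be sketched in code. This is a hypothetical illustration using the same `(preconditions, effects)` operator encoding; the names (`shrink_op`, `ppp_pipeline`) are ours, and the choice of class representative in steps 3–4 is deliberately naive to show where the messiness lives:

```python
from collections import defaultdict

def shrink_op(op, v):
    """Drop every mention of variable v from an operator's pre/eff dicts."""
    pre, eff = op
    return ({k: x for k, x in pre.items() if k != v},
            {k: x for k, x in eff.items() if k != v})

def ppp_pipeline(ops, v):
    # 1. Apply shrink to X0 to get X1.
    x1 = {name: shrink_op(op, v) for name, op in ops.items()}
    # 2. Collect labels that are effect-equivalent in X1.
    classes = defaultdict(list)
    for name, (pre, eff) in x1.items():
        classes[(frozenset(pre.items()), frozenset(eff.items()))].append(name)
    # 3-4. Map each class back to the pre-shrink labels in X0 and reduce
    #      it to a single, naively chosen representative (keeping X0 data).
    x2 = {members[0]: ops[members[0]] for members in classes.values()}
    # 5. Prune reduced labels that condition on v, giving X3.
    x3 = {n: op for n, op in x2.items() if v not in op[0]}
    # 6. Apply shrink again, giving X4.
    return {n: shrink_op(op, v) for n, op in x3.items()}

ops = {
    "a": ({"vshrink": 0, "p": 1}, {"q": 1}),  # conditions on vshrink
    "b": ({"p": 1}, {"q": 1}),                # effect-equivalent to "a" post-shrink
    "c": ({"p": 0}, {"q": 1}),
}
# "a" is picked as the representative of {a, b} and then pruned in step 5,
# so the whole class vanishes even though "b" never conditioned on vshrink.
print(sorted(ppp_pipeline(ops, "vshrink")))  # ['c']
```

Whether step 5 should consult the whole equivalence class, rather than one representative, is exactly the pre-shrink information that a plain shrink transformation discards.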

Moving forwards

There may be another step left in PPP after the above. I'm not sure, and I don't know if we need to find out.

I think a minimal path is:

  1. In the paper, say:

     A PPP transformation is similar to a composition of the prune, shrink, label-reduction, and label-pruning transformations of Sievers & Helmert. We could describe our transformation this way, but the following expression is simpler. See the supplement for a partial expression of PPP in merge-and-shrink.

  2. Put a slightly cleaned-up version of the above in the supplement.
  3. (Tangent) Edit the paper to use partial functions instead of mapping some elements to null.
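The tangent in step 3 is a small representational change: a label mapping given as a total function into values-or-null, versus a partial function whose domain simply omits the pruned labels. A toy sketch (our names, nothing from the paper):

```python
# Total function with a null sentinel: pruned labels map to None.
total = {"op1": "op1_reduced", "op2": None, "op3": "op3_reduced"}

# Partial function: pruned labels are simply absent from the domain.
partial = {k: v for k, v in total.items() if v is not None}

assert "op2" not in partial            # pruning = domain restriction
assert partial["op1"] == "op1_reduced"
```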

@camall3n
Collaborator

What does vshrink mean?

@MichaelJFishman
Collaborator Author

MichaelJFishman commented Dec 30, 2022 via email
