PDDL samples demonstrating features of the VS Code PDDL Extension
Planning Domain Definition Language (PDDL) samples demonstrating the features of the VS Code PDDL extension.
- Install Visual Studio Code, the free and lightweight editor for developers
- Install the PDDL Extension
- Clone this repository and open the folder in Visual Studio Code
- Open the Test Explorer (View > Open View ... > Test)
- Unfold the folders in the Test Explorer and run the samples by right-clicking on them...
Start from empty files by using the domain and problem snippets.
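For orientation, here is a hypothetical minimal domain/problem pair, roughly what the snippets expand to (all names are illustrative; in practice the domain and problem live in two separate files):

```pddl
;; A hypothetical minimal domain, similar to what the domain snippet produces
(define (domain sample)
    (:requirements :strips :typing)
    (:types thing)
    (:predicates
        (todo ?t - thing)
        (done ?t - thing))
    (:action finish
        :parameters (?t - thing)
        :precondition (todo ?t)
        :effect (and (done ?t) (not (todo ?t)))
    )
)

;; ...and a matching problem, similar to what the problem snippet produces
(define (problem sample-1)
    (:domain sample)
    (:objects t1 - thing)
    (:init (todo t1))
    (:goal (done t1))
)
```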
This is a sample showing airport ground operations planning. It has an intentional bug. Can you find it?
Run the 1plane problem. The test case shows as failed. Export the plan to 1plane.pddl and save the file to disk. Select both 1plane.pddl and 1plane-expected.pddl in the File Explorer and select Compare selected. See the difference? The aircraft is not getting refueled.
Now go back to the plan visualization and click on the action that is missing a pre-condition. If you select the correct one, you will see a hint.
Fix the bug and re-run the test case - it should pass now.
Export the plan to a .plan file, keeping the proposed default name. Run Tasks > Run Task... > validate (with report) and observe that a .tex file gets created. Install a LaTeX preview extension to open the preview.
This domain is interesting because it generates a small search space, so the planner does not run out of memory even when you ask it to search for a more optimal plan (e.g. by using the -n flag, if your planner supports it).
Open the PDDL files of one of the test cases and press Alt + P. When prompted, select specific options... and then type -n as the command-line option (or the equivalent in your planner).
The plan visualization shows multiple plans; you can select the plan you want (the bar visualizes each plan's metric) and compare them easily.
This classic planning benchmark domain is implemented here using the problem templating approach. See the .ptest.json file for the definition of the generated test cases. Each test case is generated from a .json file defining the initial and goal states.
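To illustrate, a test case manifest along these lines wires a templated problem to its data file (the field names below are an assumption based on the extension's problem templating support; consult the .ptest.json files in this repository for the actual schema):

```json
{
    "defaultDomain": "domain.pddl",
    "cases": [
        {
            "label": "case1",
            "problem": "problem.pddl",
            "preProcess": {
                "kind": "jinja2",
                "data": "case1.json"
            }
        }
    ]
}
```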
This demonstrates a regression test suite.
This domain demonstrates problem file templating and the programmatic generation of a scalability test suite.
Run the generate_tests.py script (requires Python 3.5+ to be installed) and refresh the Test Explorer. You will see a list of Scalability tests for your planner. Run the suite to see how your planner struggles (or does not :-]). See how concisely the .json files capture the test case definitions. Check the logic in the problem.pddl template.
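A generator script along these lines can emit one .json case definition per problem size (a hedged sketch only; the repository's generate_tests.py differs in detail, and the "packages" data shape is a hypothetical example):

```python
import json
from pathlib import Path

def generate_cases(sizes, out_dir="cases"):
    """Write one JSON case definition per problem size; return the paths written."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    paths = []
    for n in sizes:
        case = {
            # hypothetical template data: n packages to deliver
            "packages": [f"pkg{i}" for i in range(1, n + 1)],
        }
        path = out / f"case_{n}.json"
        path.write_text(json.dumps(case, indent=2))
        paths.append(path)
    return paths
```

Each emitted file can then be referenced from the .ptest.json manifest, so refreshing the Test Explorer picks up the whole generated suite.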
For any more advanced data transformation during problem file generation, refer to the ScriptedTemplate sample. It shows how the same problem.pddl template may be populated from a static .json file or, by contrast, from data dynamically queried/transformed by a custom Python script.
The transform.py script takes the arguments supplied via the .ptest.json test case manifest (i.e. the numbers 1, 2 and 3) and sums them up before outputting the result to the PDDL problem. The transform.py script (or, for that matter, any custom program you may write) takes the templated problem from its standard input stream and outputs the rendered template to the standard output. VS Code just orchestrates the data flow during PDDL domain authoring and testing.
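A minimal sketch of such a stdin-to-stdout filter (the {{total}} placeholder syntax is an assumption for illustration; the repository's transform.py will differ):

```python
import sys

def render(template: str, numbers: list) -> str:
    """Replace a hypothetical {{total}} placeholder with the sum of the inputs."""
    return template.replace("{{total}}", str(sum(numbers)))

if __name__ == "__main__" and len(sys.argv) > 1:
    # the arguments come from the .ptest.json manifest;
    # the templated problem arrives on standard input
    values = [int(arg) for arg in sys.argv[1:]]
    sys.stdout.write(render(sys.stdin.read(), values))
```

Because the transformation is a plain pipe (arguments in, text in, text out), the same script works unchanged under the Test Explorer and on the command line.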
When such a solution is deployed (e.g. as a planning service), it is easy to wrap transform.py in a Python Flask service.
This is an important pattern: it externalizes calculations or decision making from the planning domain when it is more efficient (or less complex) to perform them outside the planning problem. This helps make the planning process faster.