What is Decidable about Gradual Types?

Zeina Migeed, Jens Palsberg

This artifact is for our POPL 2020 submission.

Step 1A: Introduction

In this section, we will introduce some of the files in the root directory and discuss their functionality.

  • Lang.hs: Contains our definition of the Gradually Typed Lambda Calculus.

  • TypeCheck.hs: Contains our typechecker for the Gradually Typed Lambda Calculus.

  • Constraint.hs: Defines our constraint system and contains our constraint generation algorithm.

  • Finiteness.hs: Defines our finiteness checker algorithm.

  • Maximality.hs: Contains the three checks about maximal migrations: Singleton, TopChoice, and Maximality.

  • Formula.hs, NPHard.hs: Define the mapping from formulas to programs.
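To give a rough feel for what these definitions involve, here is a minimal sketch of a gradually typed lambda calculus AST in Haskell. The type and constructor names are hypothetical; the artifact's real definitions live in Lang.hs and may differ in detail.

```haskell
-- Hypothetical sketch of a gradually typed lambda calculus AST.
-- TDyn plays the role of the unknown type * (dyn).
data Type = TDyn | TInt | TBool | TArr Type Type
  deriving (Eq, Show)

data Expr = Var String
          | Lam String Type Expr
          | App Expr Expr
  deriving (Eq, Show)

-- The paper's running example λx : * . x (succ x),
-- with succ treated as a free variable:
example1 :: Expr
example1 = Lam "x" TDyn (App (Var "x") (App (Var "succ") (Var "x")))
```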

Step 1B: Run all the tests

  • If you downloaded the VM, open the terminal and run: cd Maximal-Migration
  • If you downloaded the repo, then go to the root directory.
  • Run stack build
  • Run stack test/Test.hs

This will run all algorithms on the examples mentioned in the paper so we can verify their correctness. Performance is evaluated with a separate command. One general note about these tests: the symbol * in the artifact stands for dyn. So λx : * . x (succ x) is the same as λx : dyn . x (succ x) and also the same as λx . x (succ x) (because * stands for the unknown type). Throughout the artifact, we use the first representation only, but the other two representations appear in the paper. We will verify the results in Figures 4, 5, 6, and 7, as well as two examples mentioned in Section 7.

Step 2: Verify Figure 4

First, we want to verify the results which appear in Figure 4 in the paper. The figure summarizes four kinds of checks on 12 benchmarks.

In the terminal, you will now see the output for each of the four checks on the 12 benchmarks in Figure 4, in the order they appear in the paper (except for the last check, which we discuss below). Below is some sample output for each check. In all the results below, True corresponds to ✓ and False corresponds to ✕ in Figure 4.

Singleton check

sees that λx : * . x (succ x) is singleton? = True
sees that λx : * . x (succ (x True)) is singleton? = False
sees that λx : * . + (x 4) (x True) is singleton? = False

Top choice check

 sees that λx : * . x (succ x) has a top choice? =  True
 sees that λx : * . x (succ (x True)) has a top choice? =  True
 sees that λx : * . + (x 4) (x True) has a top choice? =  True

Finiteness check

  sees that λx : * . x (succ x) is finite? = True
  sees that λx : * . x (succ (x True)) is finite? = True
  sees that λx : * . + (x 4) (x True) is finite? = True

Maximality check is divided into two categories in our artifact: one for small programs (going up to level 5 in the lattice) and one for large programs (going up to level 4). We will describe how to change this level in Step 5. You will see this output in the terminal:

Maximality check (benchmarks 1-10) The output below this heading corresponds to benchmarks 1-10 in Fig 4.

Maximality check (benchmarks 11, 12 & NPHard examples) The output consists of four benchmarks. The first two are benchmarks 11 and 12 in the figure, while the last two are the benchmark E_{f_2} from Section 6.3 of the paper and the mapping generated from f_8 in Section 7. We can see that E_{f_2} has a maximal migration and that the mapping from f_8 does not, since f_8 is unsatisfiable.

Step 3: Verify Figure 6

We now verify the results in Figure 6.

In the terminal, you will see text that says Show migrations (fig 6), which displays the actual migration for each benchmark in the table if it exists, and outputs Nothing otherwise. This check is divided into three categories. We explain them and show sample output.

Show migrations (fig 6) benchmarks 1-9

sees that λx : * . x (succ x) has a maximal migration λx : * . x (succ x)

Here, we have benchmarks 1-9, which run our semi-algorithm. Note that if you pass a program with no maximal migration to this check, it will run forever.

Show migrations (fig 6) benchmark 10 This check contains benchmark 10 and will only run the maximality algorithm up to level 5 (see Step 5 for more details).

Show migrations (fig 6) 11,12 and f8

 sees that (λx : * . λy : * . y (x (λa : * . a)) (x (λb : * . λc : * . b))) (λd : * . d d) has a maximal migration Nothing

Next, the above check will run benchmarks 11 and 12, as well as the program generated by f_8, up to level 4 of the lattice.

Step 4: NP Hardness examples

We want to verify that the NP-hardness example seen in Section 6.3 has a maximal migration.

For this, read the part of the output which says:

NP hard example E_f. Below that line, you will see the program and the corresponding maximal migration. For this, we have implemented a mapping function make_mapping in NPHard.hs, which takes boolean formulas to programs. A check is called on that program to verify that the migration is indeed maximal.
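To illustrate the kind of input make_mapping consumes, the following is a self-contained sketch of a propositional-formula type together with a brute-force satisfiability check. The names (Formula, sat, and so on) are illustrative only; the artifact's actual formula representation lives in Formula.hs and NPHard.hs.

```haskell
import Data.List (nub)

-- Illustrative sketch only: a tiny propositional-formula type and a
-- brute-force satisfiability check.
data Formula = V Int
             | Not Formula
             | And Formula Formula
             | Or Formula Formula

-- variables occurring in a formula
vars :: Formula -> [Int]
vars (V i)     = [i]
vars (Not f)   = vars f
vars (And a b) = vars a ++ vars b
vars (Or a b)  = vars a ++ vars b

-- evaluate under an assignment of variables to booleans
eval :: [(Int, Bool)] -> Formula -> Bool
eval env (V i)     = maybe False id (lookup i env)
eval env (Not f)   = not (eval env f)
eval env (And a b) = eval env a && eval env b
eval env (Or a b)  = eval env a || eval env b

-- try every assignment of the formula's variables
sat :: Formula -> Bool
sat f = any (\env -> eval env f) envs
  where
    vs   = nub (vars f)
    envs = foldr (\v acc -> [(v, b) : e | b <- [True, False], e <- acc]) [[]] vs
```

An unsatisfiable formula here plays the role of f_8: its image under the mapping has no maximal migration.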

Step 5A: Exploring the lattice (optional)

To run the maximality algorithm up to a given level in the lattice, uncomment the line of code that says -- test_mult_migrate in test/Test.hs, then rerun the command: stack test/Test.hs

In Test.hs, go to the line of code that says test_mult_migrate :: Spec, where you will see a sample test: example 3 lam [lam1, lam2]. This means: run the program lam up to level 3 of the lattice and output the maximal migrations; [lam1, lam2] are the maximal migrations for this program.

When you run this test, the following output will be displayed:

  sees that at level 3 λx : * . (λy : * . x) x x has migrations [λx : * . (λy : int . x) x x,λx : * . (λy : bool . x) x x]

Sample programs to run tests on can be found in src/Examples.hs.
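The idea of exploring the lattice level by level can be sketched in a few lines. The following is a toy model, not the artifact's code: each step replaces one occurrence of the dynamic type with a slightly more precise one, and level n collects everything reachable in at most n steps.

```haskell
-- Toy model of level-by-level lattice exploration
-- (illustrative names, not the artifact's API).
data Ty = TyDyn | TyInt | TyBool | TyArr Ty Ty
  deriving (Eq, Show)

-- one-step refinements: replace a single occurrence of the
-- dynamic type with Int, Bool, or Dyn -> Dyn
step :: Ty -> [Ty]
step TyDyn       = [TyInt, TyBool, TyArr TyDyn TyDyn]
step TyInt       = []
step TyBool      = []
step (TyArr a b) = [TyArr a' b | a' <- step a] ++ [TyArr a b' | b' <- step b]

-- every type reachable in at most n refinement steps
levels :: Int -> Ty -> [Ty]
levels 0 t = [t]
levels n t = t : concatMap (levels (n - 1)) (step t)
```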

Note that this check uses the function

findAllMaximalMigrationsN :: Int -> Expr -> Env -> [Expr]

which, given a level, an expression, and an environment, returns the set of maximal migrations found up to that level.

However, we also have several other functions to explore the lattice. In Maximality.hs, we have:

closestMaximalMigration_n :: Expr -> Int ->  Env -> Maybe Expr

which returns only the first migration found while exploring the lattice. If no migration is found up to level n, we return Nothing.

topchoice :: Expr -> Env -> [Expr]

is the function used for our performance evaluation. It first checks whether the lattice is finite and, if so, retrieves all maximal migrations; otherwise, it returns an empty list.
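The difference between collecting all migrations up to a level and returning only the first one found can be illustrated abstractly. The helpers below are hypothetical and operate on a plain level-indexed search space rather than the artifact's Expr type.

```haskell
-- Hypothetical helpers over a level-indexed search space, contrasting
-- "collect everything up to level n" with "first hit only".
allUpTo :: Int -> (a -> Bool) -> [[a]] -> [a]
allUpTo n p = filter p . concat . take n

firstUpTo :: Int -> (a -> Bool) -> [[a]] -> Maybe a
firstUpTo n p space = case allUpTo n p space of
  []      -> Nothing
  (x : _) -> Just x
```

Here allUpTo mirrors the shape of findAllMaximalMigrationsN and firstUpTo mirrors closestMaximalMigration_n, with a predicate standing in for the maximality check.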

Step 5B: λx : * . λy : * . y x x

Here, we will see a specific example of exploring different levels of the lattice. In Figure 6, we notice that for the program

λx : * . λy : * . y x x, 

the closest maximal migration is the following:

λx : * . λy : int -> bool -> int . y x x

This is because our maximality check finds the closest maximal migration in the lattice. We want to see that the migration in the table is indeed a valid migration and can be found using our algorithm. Consider the check:

find specific maximal migration

sees that λx : int . λy : int -> int -> int . y x x is a maximal migration for λx : * . λy : * . y x x

Here, we run the migration algorithm up to level 5, collect all maximal migrations, and check that the migration shown in the table is valid. We do not display all migrations in this case for space reasons, but Step 5A shows how to find migrations at different levels in the lattice.

Step 6: Verify Figure 7

You will see the check Typecheck (fig 7). This lists all maximal migrations of the benchmarks and their corresponding types. Figure 7 lists the programs before migration and the types after migration. However, we have already verified that we find the maximal migrations correctly, so it remains to check that the types of those migrations correspond to those in Figure 7.
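Gradual typecheckers like the one in TypeCheck.hs typically rely on the standard consistency relation, under which the unknown type is consistent with every type. A self-contained sketch (with hypothetical type names; the artifact's real checker is in TypeCheck.hs) looks like this:

```haskell
-- Sketch of the standard consistency relation of gradual typing.
data Ty = TyDyn | TyInt | TyBool | TyArr Ty Ty
  deriving (Eq, Show)

-- the unknown type is consistent with everything; arrow types are
-- consistent componentwise; base types only with themselves
consistent :: Ty -> Ty -> Bool
consistent TyDyn _ = True
consistent _ TyDyn = True
consistent (TyArr a b) (TyArr c d) = consistent a c && consistent b d
consistent t u = t == u
```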

Step 7: Verify Figure 5

To run the performance tests in Figure 5, run the command:

stack test/Performance.hs

You can also generate an html report with regression tests and time density tests for each benchmark and each check by running the command:

stack test/Performance.hs --output Report.html

In both cases, the following will be displayed on the screen:

> benchmarking Singleton /1
time                 319.9 ns   (317.3 ns .. 323.6 ns)
                     0.999 R²   (0.999 R² .. 0.999 R²)
mean                 327.4 ns   (324.7 ns .. 330.1 ns)
std dev              9.265 ns   (7.906 ns .. 10.93 ns)
variance introduced by outliers: 41% (moderately inflated)

The above output means we are running the Singleton check on benchmark 1. We can see the mean, std dev, and other statistics. Our 12 benchmarks are numbered 1 to 12 in the order they appear in Figure 4. We also have the two additional benchmarks from Section 7: the SAT example is benchmark 13, and the program generated from f_8 is benchmark 14. Singleton, Finiteness, and TopChoice run on the base 12 benchmarks, and Maximality runs on all 14 benchmarks.

Performance tests are separated into groups. Let us identify these groups.

  • Singleton Runs all singleton checks on our 12 benchmarks

  • Finiteness Runs finiteness checks on the 9 (smaller) benchmarks 1-8 and 10

  • FLarge Runs finiteness checks on the 3 (large) benchmarks 9, 11 and 12

  • TopChoice Runs topchoice checks on the first 9 (smaller) benchmarks

  • TCLarge Runs topchoice checks on the last 3 (large) benchmarks

  • Maximality Runs maximality checks on the first 9 (smaller) benchmarks upto level 5 of the lattice

  • MLarge Runs maximality checks on the last 3 (larger) benchmarks (9,11,12). Benchmark 9 runs to level 5 and benchmarks 11 and 12 run to level 4.

  • MappingSAT Measures the maximality check on the NPhardness SAT program E_f (benchmark 13)

  • MappingUNSAT Measures the maximality check on the NPhardness UNSAT program generated from f_8 (benchmark 14)

Separating larger benchmarks into groups allows us to run specific groups for a longer time, so that we run each function at least 100 times.

You can run each group individually to see the performance and generate a separate html report (this can be opened in firefox on the VM). A report also lets you see how many times each benchmark was run and provides additional statistical information.

To run a separate group, use this command:

stack test/Performance.hs --output report.html groupName

Without report: stack test/Performance.hs groupName

For the group MLarge, we recommend running it for a longer time to ensure that each benchmark runs at least 100 times. For this, use the command

stack test/Performance.hs --output report.html -L 1000 MLarge

Note that for performance, instead of using the function closestMaximalMigration_n for maximality, we use hardcoded versions. This makes all functions used in our evaluation functions of one argument, which makes the code more readable. Please see Performance.hs for more details.

