paper revisions

darcs-hash:20070228112805-974a0-a1f99b4d104b3995330bf26c536f31b34c59e5ab.gz
1 parent ce201b9 commit 4abfa25542974f5cb1e19fad6a318713f89fe60c eb committed Feb 28, 2007
@@ -95,11 +95,12 @@ @inproceedings{ fta
year = "2000",
url = "citeseer.ist.psu.edu/geuvers01constructive.html" }
-@misc{mckinna-expr,
- title = {A type-correct, stack-safe, provably correct, expression compiler in Epigram},
+@article{mckinna-expr,
+ title = {A type-correct, stack-safe, provably correct, expression compiler in {Epigram}},
author = {James McKinna and Joel Wright},
- year = 2006,
- howpublished = {Submitted for publication}
+ year = 2007,
+ journal = {Journal of Functional Programming},
+ note = {To appear}
}
@article{pugh-omega,
@@ -116,10 +117,14 @@ @misc{four-colour
year = 2005,
howpublished = {\url{http://research.microsoft.com/~gonthier/4colproof.pdf}}}
-@misc{sparkle,
+@inproceedings{sparkle,
author = "Maarten {de Mol} and Marko {van Eekelen} and Rinus Plasmeijer",
title = "Theorem Proving for Functional Programmers",
url = "citeseer.ist.psu.edu/654423.html",
+    booktitle = {Proc. Implementation of Functional Languages (IFL 2001)},
+ publisher = {Springer},
+ series = {LNCS},
+ volume = {2312},
year = 2002
}
@@ -17,4 +17,11 @@ ivor.dvi: $(SOURCES)
-latex ivor
-latex ivor
+package: ifl06.tar
+
+ifl06.tar: ivor.dvi ivor.ps ivor.pdf .PHONY
+ mkdir -p ifl06
+ cp ../bib/*.bib *.tex *.ltx *.bib *.ps *.pdf *.dvi *.cls ifl06
+	tar cvf ifl06.tar ifl06
+
.PHONY:
@@ -3,10 +3,12 @@ \section{Related Work}
The ability to extend a theorem prover with user defined tactics has
its roots in Robin Milner's LCF~\cite{lcf-milner}. This introduced the
programming language ML to allow users to write tactics; we follow the
-LCF approach in exposing the tactic engine as an API. However, unlike
-other systems, we have not treated the theorem prover as an end in
-itself, but intend to expose the technology to any Haskell application
-which may need it. The implementation of \Ivor{} is based on the
+LCF approach in exposing the tactic engine as an API.
+%However, unlike
+%other systems, we have not treated the theorem prover as an end in
+%itself, but intend to expose the technology to any Haskell application
+%which may need it.
+The implementation of \Ivor{} is based on the
presentation of \Oleg{} in Conor McBride's
thesis~\cite{mcbride-thesis}. We use implementation
techniques from \cite{not-a-number} for dealing with variables and
@@ -27,19 +29,22 @@ \section{Related Work}
Other theorem provers such as \Coq{}~\cite{coq-manual},
\Agda{}~\cite{agda} and Isabelle~\cite{isabelle} have varying degrees
-of extensibility. The interface design largely follows that of
-\Coq{}. \Coq{} includes a high level domain specific language for
+of extensibility.
+%The interface design largely follows that of
+%\Coq{}.
+\Coq{} includes a high level domain specific language for
combining tactics and creating new tactics, along the lines of the
tactic combinators presented in section \ref{combinators}. This
language is ideal for many purposes, such as our \hdecl{contradiction}
tactic, but more complex examples such as \hdecl{buildTerm} would
-require extending \Coq{} itself. One advantage of using a
-DSEL~\cite{hudak-edsl} as provided by \Ivor{} is that it gives complete
+require extending \Coq{} itself. Using a
+DSEL~\cite{hudak-edsl} as provided by \Ivor{} gives complete
flexibility in the construction of tactics, and allows a close
-relationship between the tactics and the Haskell-implemented
-structures on which they operate (e.g. \hdecl{Raw}). In future, it may
-be worth exploring automatic translation between \Ivor{} and other
-theorem provers.
+relationship between the tactics and the structures on which they
+operate (e.g. \hdecl{Raw}).
+%In future, it may
+%be worth exploring automatic translation between \Ivor{} and other
+%theorem provers.
Isabelle~\cite{isabelle} is a
generic theorem prover, in that it includes a large body of object
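The point of the DSEL argument in the hunk above is that tactics are ordinary host-language values, so new combinators need no extension of the prover itself. The following is a minimal sketch only: the type and combinator names are hypothetical and are not \Ivor{}'s actual API.

-- Hedged sketch: hypothetical tactic DSEL combinators, not Ivor's real API.
type Goal   = String                       -- stand-in for a real goal/context type
type Tactic = Goal -> Either String [Goal] -- a tactic fails or returns subgoals

-- Run t1, then run t2 on every subgoal it leaves.
thenT :: Tactic -> Tactic -> Tactic
thenT t1 t2 g = do gs <- t1 g
                   concat <$> mapM t2 gs

-- Try t; if it fails, succeed without changing the goal.
tryT :: Tactic -> Tactic
tryT t g = either (const (Right [g])) Right (t g)

Combinators of this kind correspond to those in section \ref{combinators}; because they are plain Haskell functions, tactics such as \hdecl{buildTerm} can work directly over the Haskell structures they manipulate (e.g. \hdecl{Raw}).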
@@ -62,9 +67,8 @@ \section{Related Work}
\section{Conclusions and Further Work}
-We have seen an overview of the \Ivor{} library, including basic
-tactics for building proofs similar to the tactics available in other
-proof assistants such as \Coq{}. By exposing the tactic API and
+We have seen an overview of the \Ivor{} library, including the term
+and tactic language. By exposing the tactic API and
providing an interface for term construction and evaluation, we are
able to embed theorem proving technology in a Haskell
application. This in itself is not a new idea, having first been seen
@@ -76,8 +80,8 @@ \section{Conclusions and Further Work}
section \ref{example2}.
The library includes several features we have not been able to discuss
-here. Dependently typed pattern matching
-
+here, e.g. dependently typed pattern matching~\cite{coquand-pm}, which
+gives a better notation for \remph{programming} as well as proof.
There is experimental support for multi-stage programming with
dependent types, exploited in~\cite{dtpmsp-gpce}. The term language
can be extended with primitive types and operations, e.g. integers and
@@ -102,10 +106,11 @@ \section{Conclusions and Further Work}
our research into Hume~\cite{Hume-GPCE}, a resource aware functional
language. We are investigating the use of dependent types in
representing and verifying resource bounded functional
-programs~\cite{dt-framework}. For this, automatic generation of
+programs~\cite{dt-framework}.
+For this, automatic generation of
injectivity and disjointness lemmas for constructors will be
-essential~\cite{concon}, as well as an elimination with a
-motive~\cite{elim-motive} tactic. Future versions will include
+essential~\cite{concon}.
+Future versions will include
optimisations from \cite{brady-thesis} and some support for compiling
$\source$ terms; this would not only improve the efficiency of the
library (and in particular its use for evaluating certified code)
@@ -14,11 +14,14 @@ \subsection{The Core Calculus}
ETT~\cite{epireloaded}. This language, which I call
$\source$~\cite{brady-thesis}, is an enriched lambda calculus, with
the usual reduction rules, and properties of subject reduction, Church
-Rosser, and uniqueness of types up to conversion. The strong
+Rosser, and uniqueness of types up to conversion.
+More details on programming in $\source$ are given
+in~\cite{brady-thesis,dt-framework}.
+The strong
normalisation property (i.e. that evaluation always terminates) is
guaranteed by allowing only primitive recursion over strictly positive
inductive datatypes. The syntax of terms ($\vt$) and binders ($\vb$)
-in this language is as follows:
+in this language is:
\DM{
\begin{array}{rll@{\hg}|rll}
@@ -41,10 +44,8 @@ \subsection{The Core Calculus}
abbreviate it to \mbox{$\vS\to\vT$} if $\vx$ is not free in
$\vT$. This is both for readability and a notation more consistent
with traditional functional programming languages.
-
Universe levels on types (e.g. $\Type_0$ for values, $\Type_1$ for
-types, etc.) may be left implicit and inferred
-by the machine as in~\cite{implicit-pollack}.
+types, etc.) may be inferred as in~\cite{implicit-pollack}.
Contexts ($\Gamma$) are collections of binders.
%% defined inductively; the empty context
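As a concrete illustration of the inferred universe levels (a sketch in generic notation, assuming the usual cumulative hierarchy rather than quoting the paper's own rules):

\[
\mathsf{Nat} : \mathsf{Type}_0 \qquad
\mathsf{Type}_0 : \mathsf{Type}_1 \qquad
\cdots \qquad
\mathsf{Type}_n : \mathsf{Type}_{n+1}
\]

A source term can then simply say $\mathsf{Type}$ and the machine assigns consistent level subscripts, as in~\cite{implicit-pollack}.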
@@ -58,11 +59,10 @@ \subsection{The Core Calculus}
%% {\Gamma;\vb\proves\RW{valid}}
%% }
-The typing rules are given below. These depend on a conversion
+The typing rules, given below, depend on a conversion
relation \mbox{$\Gamma\proves\vx\conv\vy$}, which holds if and only if
$\vx$ and $\vy$ have a common reduct. This requires the typechecker to
-perform some evaluation --- in order to find a common redex, types are
-reduced to normal form --- so it is
+normalise terms in order to find a common reduct, so it is
important for decidability of typechecking that the language is
strongly normalising.
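For reference, the conversion relation enters typechecking through the standard conversion rule, sketched here in generic notation (the paper's own presentation of the rules follows in the source):

\[
\frac{\Gamma \vdash t : A \qquad \Gamma \vdash A \simeq B \qquad \Gamma \vdash B : \mathsf{Type}_n}
     {\Gamma \vdash t : B}
\]

Checking the premise $\Gamma \vdash A \simeq B$ is what forces the typechecker to normalise terms, hence the need for strong normalisation.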
@@ -157,7 +157,7 @@ \subsection{Inductive Families}
}
}
-$\source$ is a dependently typed calculus, meaning that types can be
+Types can be
parametrised over values. Using this, we can declare the type of
vectors (lists with length), where the empty list is statically known
to have length zero, and the non empty list is statically known to
@@ -181,11 +181,10 @@ \subsection{Elimination Rules}
\label{elimops}
When we declare an inductive family $\dD$, we give the constructors
-which explain how to build objects in that family. Along with this,
-\Ivor{} generates an \demph{elimination operator} $\delim$ (the
-type of which we call the \demph{elimination rule}) and corresponding
-reductions. The operator implements the reduction and recursion
-behaviour of terms in the family --- it is effectively a fold
+which explain how to build objects in that family.
+\Ivor{} generates from this an \demph{elimination operator} $\delim$ and corresponding
+reductions, which together implement the reduction and recursion
+behaviour of terms in the family --- the elimination operator is in effect a fold
operator. The method for constructing elimination operators
automatically is well documented, in particular
by~\cite{dybjer94,luo94,mcbride-thesis}. For $\Vect$, \Ivor{} generates the
@@ -218,7 +217,7 @@ \subsection{Elimination Rules}
}
}
-The arguments to the elimination operator are the \demph{parameters}
+The arguments are the \demph{parameters}
and \demph{indices} ($\vA$ and $\vn$ here), the \demph{target} (the
object being eliminated; $\vv$ here), the \demph{motive} (a function
which computes the return type of the elimination; $\vP$ here) and the
@@ -228,12 +227,13 @@ \subsection{Elimination Rules}
is not passed to the methods as an argument, but $\vn$ does vary, so
is passed. A more detailed explanation of this distinction can be
found in~\cite{luo94,brady-thesis}.
-
A case analysis operator $\dcase$ is obtained similarly, but without
-the induction hypotheses. These operators are the only means to
-analyse a data structure and the only operators which can make
-recursive calls. This, along with the restriction that data types must
-be strictly positive, ensures that evaluation always terminates.
+the induction hypotheses.
+
+%These operators are the only means to
+%analyse a data structure and the only operators which can make
+%recursive calls. This, along with the restriction that data types must
+%be strictly positive, ensures that evaluation always terminates.
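To make the fold reading of elimination concrete, here is a rough Haskell analogue (an illustrative encoding of my own, not the paper's $\source$ notation; the real $\delim$ takes a dependent motive, which the result type r only approximates):

{-# LANGUAGE GADTs, DataKinds, KindSignatures, RankNTypes #-}

data Nat = Z | S Nat

-- Length-indexed vectors: the index records the length in the type.
data Vect (n :: Nat) a where
  VNil  :: Vect 'Z a
  VCons :: a -> Vect n a -> Vect ('S n) a

-- Fold playing the role of the elimination operator: the VCons method
-- receives the result of the recursive call, i.e. the induction hypothesis.
vectElim :: r -> (a -> r -> r) -> Vect n a -> r
vectElim mNil mCons VNil         = mNil
vectElim mNil mCons (VCons x xs) = mCons x (vectElim mNil mCons xs)

-- Case analysis: the same shape but without the induction hypothesis,
-- so no recursive call is made.
vectCase :: r -> (forall m. a -> Vect m a -> r) -> Vect n a -> r
vectCase mNil _     VNil         = mNil
vectCase _    mCons (VCons x xs) = mCons x xs

For example, vectElim 0 (\_ r -> r + 1) computes the length of a vector by recursion over its structure.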
\subsection{The Development Calculus}
@@ -86,7 +86,7 @@ \subsection{A Propositional Logic Theorem Prover}
}
Then $\interp{\neg\ve}\:=\:\interp{\ve}\to\False$. The automatically
-derived elimination rule has the following type, showing that a value
+derived elimination rule shows that a value
of \remph{any} type can be created from a proof of the empty type:
\DM{
@@ -153,13 +153,14 @@ \subsection{\Funl{}, a Functional Language with a Built-in Theorem Prover}
representation and correctness proofs in that language. By using the
same framework for both, it is a small step from implementing the
language to implementing a theorem prover for showing properties of
-programs in the language, making use of the theorem prover developed
+programs, making use of the theorem prover developed
in sec. \ref{example1}.
An implementation is available from
\url{http://www.cs.st-andrews.ac.uk/~eb/Funl/}; in this section I will
sketch some of the important details of this implementation. Like the
propositional logic theorem prover, much of the detail is in the
-parsing and pretty printing of terms and propositions.
+parsing and pretty printing of terms and propositions, relying on
+\Ivor{} for typechecking and evaluation.
\subsubsection{Programs and Properties}
@@ -8,7 +8,7 @@ \section{Introduction}
Type theory based theorem provers such as \Coq{}~\cite{coq-manual} and
\Agda{}~\cite{agda} have been used as tools for verification of programs
(e.g.~\cite{leroy-compiler,why-tool,mckinna-expr}), extraction of
-correct programs from proofs (e.g.~\cite{extraction-coq,large-extraction})
+correct programs from proofs (e.g.~\cite{extraction-coq})
and formal proofs of mathematical properties
(e.g.~\cite{fta,four-colour}). However, these tools are designed with a
human operator in mind; the interface is textual, which makes it
@@ -57,17 +57,17 @@ \subsection{Motivating Examples}
\begin{description}
\item[Programming Languages]
-Dependent type theory can be used as the internal representation for a
+Dependent type theory is a possible internal representation for a
functional programming language.
%The core language of the Glasgow
%Haskell Compiler is \SystemF{}~\cite{core} --- dependent type theory
%generalises this by allowing types to be parametrised over values.
-One advantage of a pure functional language is that correctness
-properties of programs in the language can be proven by equational
-reasoning. Some tools exist to assist with this,
-e.g. Sparkle~\cite{sparkle} for the Clean language~\cite{clean}, or
+Correctness
+properties of programs in purely functional languages can be proven by
+equational reasoning,
+e.g. with Sparkle~\cite{sparkle} for the Clean language~\cite{clean}, or
Cover~\cite{cover} for translating Haskell into
-\Agda{}~\cite{agda}. However a problem with such tools is that they
+\Agda{}~\cite{agda}. However, these tools
separate the language implementation from the theorem prover --- every
language feature must be translated into the theorem prover's
representation, and any time the language implementation is changed,
@@ -77,21 +77,22 @@ \subsection{Motivating Examples}
language with a built-in theorem prover, with a common representation
for both.
-\item[Verified DSL Implementation]
-We have previously used \Ivor{} to demonstrate the implementation of
-a verified domain specific language~\cite{dtpmsp-gpce}. The idea is to
-represent the abstract syntax tree of a DSL program as a dependent
-type, and write an interpreter which guarantees that invariant
-properties of the program are maintained. Using staging
-annotations~\cite{multi-taha}, such an interpreter can be specialised
-to a translator. We are continuing to explore these techniques in the
-context of resource aware programming~\cite{dt-framework}.
+\item[Verified DSL Implementation]
+
+We have previously implemented a verified domain specific
+language~\cite{dtpmsp-gpce} with \Ivor{}. The abstract syntax tree of
+a program is a dependent data structure, and the type system
+guarantees that invariant properties of the program are maintained
+during evaluation. Using staging annotations~\cite{multi-taha}, such
+an interpreter can be specialised to a translator. We are continuing
+to explore these techniques in the context of resource aware
+programming~\cite{dt-framework}.
\item[Formal Systems]
A formal system can be modelled in dependent type theory, and
-properties of the system and derivations within the system can be
-proved. A simple example is propositional logic --- the connectives
+derivations within the system can be constructed and checked.
+A simple example is propositional logic --- the connectives
$\land$, $\lor$ and $\to$ are represented as types, and a theorem
prover is used to prove logical formulae. Having an implementation of
type theory and an interactive theorem prover accessible as an API
@@ -112,8 +113,8 @@ \subsection{Motivating Examples}
implementation of such tactics as easy as possible.
In \Ivor{}'s dependent type system, types may be predicated on
-arbitrary values. This means that programs and properties can be
-expressed within the same self-contained system --- properties can be
+arbitrary values. Programs and properties can be
+expressed within the same self-contained system --- properties are
proved by construction, at the same time as the program is
written. The tactic language can thus be used not only for
constructing proofs but also for interactive program development.
@@ -67,10 +67,10 @@
\section*{Acknowledgements}
-This work is generously supported by EPSRC grant EP/C001346/1.
My thanks to Kevin Hammond and James McKinna for their comments on an
earlier draft of this paper, and to the anonymous reviewers for their
helpful comments.
+This work is generously supported by EPSRC grant EP/C001346/1.
\bibliographystyle{abbrv}
\begin{small}