
Merge branch 'master' of github.com:junniest/bach_test_repo

2 parents 9a68a82 + 2fafe08 commit 87d67e4fa3a4d962eedf9d064c29d68a958175aa @ashinkarov ashinkarov committed Mar 27, 2012
Showing with 13 additions and 13 deletions.
  1. +11 −11 paper/dynamic-extensions.tex
  2. +1 −1 paper/intro.tex
  3. +1 −1 paper/parser-model.tex
22 paper/dynamic-extensions.tex
@@ -1,7 +1,7 @@
\section{\label{sec:dynext}Transformation system}
In this section we are going to describe a syntax of the rules of
-the transformation system and demonstrate the way to prove a correctness
+the transformation system and demonstrate a way to prove the correctness
of the transformation.
\subsection{Match Syntax}
@@ -52,26 +52,26 @@ \subsection{Match Syntax}
Now we can demonstrate a simple substitution example on the language
defined in Fig.~\ref{fig:grammar}. Assume that function \verb|replace|
-is defined in $T$ with three arguments and it replaces in any list
-of pseudo-tokens occurrence each of the second argument with the third,
+is defined in $T$ with three arguments and in any list of pseudo-tokens
+it replaces each occurrence of the second argument with the third,
and what we need is to call a function called \verb|bar| with an
-argument being a summed-up arguments of function called \verb|foo|.
+argument being the summed-up arguments of a function called \verb|foo|.
In that case the following match would perform such a substitution.
\begin{verbatim}
match [\fun_call] v = foo ( \expr \( , \expr \) \* )
-> [\fun_call] cons bar (replace (tail v) \, \+)
\end{verbatim}
\subsection{Definition of matches}
-Match rules can be defined in the arbitrary places of a program
+Match rules can be defined in arbitrary places of a program
and the rule activates immediately after the definition was parsed.
We do however differentiate between the global matches and context
matches. In our case the context is created by a \verb|stmt_block|
production. In that case, all the matches declared within the
\verb|stmt_block| production are valid only within this particular
production. When the production is finished, the matches declared
within the production would be removed. Declaration of the context
-is up to the parser, it may define it in any which way by calling
+is up to the parser, it may define it in any way by calling
two interface functions. The context definition can be omitted in
which case all the matches would be global.
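The substitution performed by the match rule above can be sketched in Python. Pseudo-tokens are modelled here as plain strings, and `replace`, `tail`, and the matched input are illustrative assumptions about the behaviour described in the diff, not the paper's actual implementation:

```python
# Illustrative sketch: pseudo-tokens modelled as strings.
# replace(tokens, old, new) substitutes every occurrence of `old`
# with `new` in a list of pseudo-tokens, as described for T.
def replace(tokens, old, new):
    return [new if t == old else t for t in tokens]

def tail(tokens):
    return tokens[1:]

# Matched input: foo ( 1 , 2 , 3 )  -- arguments separated by ','.
matched = ["foo", "(", "1", ",", "2", ",", "3", ")"]

# Rewrite: call bar with the summed-up arguments of foo, i.e.
# cons "bar" onto the tail of the match with ',' replaced by '+'.
result = ["bar"] + replace(tail(matched), ",", "+")
print(result)  # ['bar', '(', '1', '+', '2', '+', '3', ')']
```

This mirrors the rule's right-hand side `cons bar (replace (tail v) \, \+)`: drop the leading `foo`, turn the comma-separated argument list into a sum, and prepend the new callee.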
@@ -87,7 +87,7 @@ \subsection{Definition of matches}
\end{verbatim}
Here we can see that the last regular expression includes the
previous as \verb|\primary_expr| is also an \verb|\expr| and
-so on. But introducing priorities, one may still have a different
+so on. But by introducing priorities one may still have a different
behaviour in each case.
Nested context matches overload the outer matches, in a same
@@ -102,7 +102,7 @@ \subsection{Definition of matches}
\subsection{$T$ language and correctness}
In order to perform a transformation of the matched list we define a
-minimalistic functional language called $T$ in order to demonstrate
+minimalistic functional language called $T$ to demonstrate
the approach. The core definition of $T$ is given by Fig.~\ref{fig:t}.
\begin{figure}
@@ -127,13 +127,13 @@ \subsection{$T$ language and correctness}
list of pseudo-tokens applying recursion, head, tail and cons
constructs. In order to stop the recursion we also introduce
arithmetic operations on integers. In order to perform partial
-evaluation, we need to have an interface to the value of the
+evaluation, we need to have an interface to get the value of a
pseudo-token. For that reason we introduce function \verb|value|
which is applicable to the pseudo-tokens which have a constant
integer value (in our example it is a \verb|\number| pseudo-token).
In order to construct an object from integer, we are using
\verb|\number[42]| syntax. The \verb|value| function operates
-on integers only for the simplicity of the model only, the basic
+on integers only for the simplicity of the model, but the basic
types can be extended in future.
\subsubsection{Type system}
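The core $T$ constructs named in this hunk (cons, head, tail, `value`, and the `\number[42]` construction) can be sketched as follows; the `(kind, text)` pair representation of a pseudo-token is a hypothetical encoding chosen for illustration:

```python
# Minimal sketch of the T primitives, assuming a pseudo-token
# is a (kind, text) pair -- this representation is an assumption,
# not taken from the paper.
def cons(tok, tokens):
    return [tok] + tokens

def head(tokens):
    return tokens[0]

def tail(tokens):
    return tokens[1:]

def number(n):
    # Corresponds to the \number[42] construction syntax.
    return ("number", str(n))

def value(tok):
    # `value` is only defined on pseudo-tokens carrying a
    # constant integer -- in the running example, \number tokens.
    kind, text = tok
    if kind != "number":
        raise TypeError("value is defined on \\number tokens only")
    return int(text)

toks = cons(number(42), [number(7)])
print(value(head(toks)) + value(head(tail(toks))))  # 49
```

The integer arithmetic gives the recursion over token lists a base case, matching the motivation stated above for introducing arithmetic into $T$.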
@@ -154,7 +154,7 @@ \subsubsection{Type system}
However from the regular expression we have additional information
about the structure of this list. In order to perform a type
inference, we observe that regular languages, hence regular
-expression bring a number of set operations, which is the key
+expressions, bring a number of set operations, which is the key
driving force of the inference. First of all, it is easy to
define a subset relationship on two regular expressions
$r_1 \sqsubseteq r_2$. As we know, we can always build a DFA for
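The subset relation $r_1 \sqsubseteq r_2$ can be decided on the DFAs the text alludes to: $L(A_1) \subseteq L(A_2)$ iff no reachable state of the product automaton is accepting in $A_1$ but rejecting in $A_2$. A minimal sketch, assuming complete DFAs encoded as (start, accepting-set, transition-dict) triples (an encoding chosen here for illustration):

```python
# L(A1) subset-of L(A2) iff no reachable product state is
# accepting in A1 but not in A2.
from collections import deque

def subset(dfa1, dfa2, alphabet):
    start1, acc1, delta1 = dfa1
    start2, acc2, delta2 = dfa2
    seen = {(start1, start2)}
    queue = deque(seen)
    while queue:
        s1, s2 = queue.popleft()
        if s1 in acc1 and s2 not in acc2:
            return False  # witness word accepted by A1, rejected by A2
        for a in alphabet:
            nxt = (delta1[(s1, a)], delta2[(s2, a)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# a* vs (a|b)* over {a, b}: a* is a subset, the converse is not.
a_star = (0, {0}, {(0, "a"): 0, (0, "b"): 1, (1, "a"): 1, (1, "b"): 1})
ab_star = (0, {0}, {(0, "a"): 0, (0, "b"): 0})
print(subset(a_star, ab_star, "ab"))   # True
print(subset(ab_star, a_star, "ab"))   # False
```

This is the standard product construction; it is one concrete way to realise the set operations on regular expressions that the inference described above relies on.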
2 paper/intro.tex
@@ -16,7 +16,7 @@ \section{\label{sec:intro}Introduction}
aspect to the proper self-modifying language is an ability to
change a grammar on the fly. If so, one can say that the
reasonable approach might be to create a cross compiler which
-would transform a desired syntax into the syntax recognized by
+would transform the desired syntax into the syntax recognized by
some standard compiler. The problem with this approach is that most of
the base-line languages come with a syntax that is very difficult
to parse using an automatic tool. As an example consider the
2 paper/parser-model.tex
@@ -31,7 +31,7 @@ \section{\label{sec:parser}Parser model}
\end{figure}
\noindent
-As the transformation system is build as an extension to the parser,
+As the transformation system is built as an extension to the parser,
it expects a certain behaviour of the parser. Further down
we list a set of properties we require to be present in the
implementation of the parser.
