Introduction to Extended Affix Grammars

The parser generator schema

Parser generators are quite well known: they process a context-free grammar specification to generate a parser.

The generated parser processes an arbitrary input text and decides whether this text is a sentence of the language described by the context-free grammar the parser was generated from. If so, it usually also produces a parse tree representing the input structurally.

[Figure: parser generator schema]

The compiler generator schema

A compiler generator, however, consists of a parser generator accompanied by an evaluator generator. The parse tree built by the generated parser is passed to the generated evaluator, which performs semantic checks and generates the output in the specified target language.

[Figure: compiler generator schema]

However, what is the input to a compiler generator?

It must in part be a context-free grammar specification. But this is not sufficient! Almost all real compilers are written for languages (like all general-purpose programming languages) which are not context-free; they are context-sensitive.

The specification for a compiler generator must belong to the class of context-sensitive grammars to be powerful enough to define a context-sensitive language.

A context-sensitive language example

A quite famous example of a context-sensitive language that is not context-free is

{ aⁿbⁿcⁿ | n ≥ 1 } = { abc, aabbcc, aaabbbccc, … }

E.g., words of this language are "abc", "aabbcc", "aaabbbccc" and so on. The numbers of "a", "b" and "c" characters are always equal in a word that belongs to that language.
This cannot be specified by a context-free grammar!
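
To make this counting requirement tangible before we turn to grammars, here is a minimal membership check written in Python. The language choice and the function name is_anbncn are ours, purely for illustration; they are not part of any grammar formalism used on this page:

def is_anbncn(word: str) -> bool:
    # A word belongs to { a^n b^n c^n | n >= 1 } exactly if it consists of
    # n "a"s, then n "b"s, then n "c"s, with n at least 1.
    n = len(word) // 3
    return n >= 1 and word == "a" * n + "b" * n + "c" * n

assert is_anbncn("abc")
assert is_anbncn("aabbcc")
assert not is_anbncn("aabbc")    # unequal counts
assert not is_anbncn("abcabc")   # wrong character order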

First attempt using a context-free grammar

Let’s try to describe this language using a context-free grammar. We will use a formalism similar to BNF.

S: A B C. 
A: "a" A | . 
B: "b" B | . 
C: "c" C | . 

Possible word derivations using that grammar are

S →* aaabbbccc
S →* aaabbc

Both words are part of the language defined by our context-free grammar. However, the second word is not part of our context-sensitive language { aⁿbⁿcⁿ | n ≥ 1 }: the numbers of "a", "b" and "c" characters are not equal! The language defined by this context-free grammar has "more" words than the context-sensitive one we are looking for.
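
This over-generation can also be observed by implementing the context-free grammar directly, for instance as a tiny recursive-descent style recognizer. A sketch in Python follows; the helper names skip_rule and accepts are made up for this illustration:

def skip_rule(word, pos, ch):
    # Implements X: ch X | .  by greedily consuming ch characters
    while pos < len(word) and word[pos] == ch:
        pos += 1
    return pos

def accepts(word):
    # Implements S: A B C.
    pos = skip_rule(word, 0, "a")    # A
    pos = skip_rule(word, pos, "b")  # B
    pos = skip_rule(word, pos, "c")  # C
    return pos == len(word)

assert accepts("aaabbbccc")
assert accepts("aaabbc")    # accepted, although the counts differ
assert not accepts("abca")  # rejected, wrong character order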

So, the context-free grammar from above is not sufficient to describe our language. We somehow need to "count" the "a", "b" and "c" characters and compare the counts.

Short excursion: Unary Numeral System

The Unary Numeral System is probably the oldest numeral system of all. Every child starts counting with its fingers, its first unary numeral system. The first humans probably used it as well. And in Germany, beers in the pub are still counted that way.

Here is an example using tally marks and a second using "i" for counting:

0:             0: 
1: |           1: i
4: ||||        4: iiii

This kind of counting can be expressed by a context-free grammar:

N → 'i' N .
N → .

The nonterminal N derives either a leading 'i' followed by a recursive application of the same nonterminal N, or the empty alternative. The final dot just marks the end of a rule, which is especially helpful for spotting empty alternatives.

This grammar defines the infinite language of all counts written as sequences of 'i's. Formally:

L(G_N) = { ε, i, ii, iii, … } = { iⁿ | n ≥ 0 }
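
Expressed in code, a unary number is nothing more than a repeated 'i'. The following Python lines (the helper names to_unary and from_unary are invented here) show the correspondence between a count and its tally word:

def to_unary(n: int) -> str:
    # n is written as n tally marks
    return "i" * n

def from_unary(word: str) -> int:
    # every 'i' counts one; the empty word represents zero
    return len(word)

assert to_unary(4) == "iiii"
assert from_unary("iiii") == 4
assert from_unary("") == 0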

We will come back to that later on.

Counting as part of a context-free grammar

But back to our context-sensitive example aⁿbⁿcⁿ. As we have figured out, we need to count the "a", "b" and "c" characters and compare their counts for equality.

Let’s first do it naively by setting up a context-free grammar for our language which encodes the number of "a", "b" and "c" characters to be recognized as part of the nonterminal names. The number of characters is expressed as the corresponding number of "i"s.

We would start with S, the start symbol of the grammar.

S → Ai   Bi   Ci .
S → Aii  Bii  Cii .
S → Aiii Biii Ciii .
...

We would need to add one such rule for every natural number, where the number of "i"s behind the A, B and C nonterminal references corresponds to that natural number.

The same for the A rules:

A    → .
Ai   → "a" A .
Aii  → "a" Ai . 
Aiii → "a" Aii .
...

Informally, the number of "i"s on the left-hand side of a grammar rule corresponds to the number of "a" characters recognized once all nonterminals are resolved. The rules for B and C are built in the same way.

Yes, this leads to an infinite number of rules, so it cannot be written down! But this infinite grammar exactly defines the language in question.
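
Although the whole grammar cannot be written down, any finite portion of it can be produced mechanically. A small Python sketch (the function name rules_up_to is ours) that emits the S and A rules up to a given bound, in the notation used above:

def rules_up_to(n_max):
    # The S rules: one for every natural number from 1 to n_max
    lines = [f'S → A{"i" * n} B{"i" * n} C{"i" * n} .' for n in range(1, n_max + 1)]
    # The counting rules for A; B and C work the same way
    lines.append("A → .")
    for n in range(1, n_max + 1):
        lines.append(f'A{"i" * n} → "a" A{"i" * (n - 1)} .')
    return lines

print("\n".join(rules_up_to(3)))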

Nevertheless, let’s verify this on an example by writing down the grammar derivation steps for a particular sentence of our language. The example sentence is

aabbcc

We start at the grammar’s start symbol and try to derive the example sentence. The grammar rule applied in each derivation step is given in square brackets behind it:

S   → Aii Bii Cii   [S → Aii Bii Cii .]
    → aAi Bii Cii   [Aii → "a" Ai .]
    → aaA Bii Cii   [Ai → "a" A .]
    → aa Bii Cii    [A → .]
    → aa bBi Cii    [Bii → "b" Bi .]
    → aa bbB Cii    [Bi → "b" B .]
    → aa bb Cii     [B → .]
    → aa bb cCi     [Cii → "c" Ci .]
    → aa bb ccC     [Ci → "c" C .]
    → aa bb cc      [C → .]

This shows that the sentence aabbcc can be derived from the start symbol S, and therefore this sentence is part of the language defined by the infinite grammar from above.
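
The same verification can be replayed mechanically. The sketch below (the function names derive_rep and in_language are invented for this page) treats the "i"-count in a nonterminal name as a plain number and applies the indexed rules from above step by step:

def derive_rep(word, pos, ch, n):
    # Applies the rules X<n> → ch X<n-1> .  down to  X → . ,
    # consuming exactly n occurrences of ch (or failing)
    for _ in range(n):
        if pos < len(word) and word[pos] == ch:
            pos += 1
        else:
            return None   # the derivation gets stuck
    return pos

def in_language(word):
    # S → A<n> B<n> C<n> . : try every possible index n
    for n in range(1, len(word) + 1):
        pos = derive_rep(word, 0, "a", n)
        if pos is not None:
            pos = derive_rep(word, pos, "b", n)
        if pos is not None:
            pos = derive_rep(word, pos, "c", n)
        if pos == len(word):
            return True
    return False

assert in_language("aabbcc")       # the derivation above, replayed
assert not in_language("aaabbc")   # no index n fits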

Get rid of the infinity

Still, the grammar is infinite. It cannot be written down. How to overcome this?

to be continued…​