goply

A pure Go lexer and parser generator library.

goply stands for go parser lex yacc.

This project was inspired by rply and David Beazley's PLY, both of which are excellent parser and lexer generator libraries for Python.

For those of you familiar with the *nix tools lex and yacc: lex is the lexer generator and yacc is the parser generator, so having a parser in front of them in the acronym probably doesn't make sense. I liked the name goply and stuck with it anyway, redefining the p from python to parser. (Strictly speaking, yacc itself is the parser generator, so the expansion still doesn't quite add up; ignore it ;)

Making a lexer

Making a lexer with goply is as simple as it gets; only a single line is required:

lex := goply.NewLexer(false)

The above line creates a lenient lexer, meaning that if a symbol cannot be matched to any rule it is simply ignored. The strict version would instead fail with an error in this situation.
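
Presumably, then, passing true creates the strict variant:

lex := goply.NewLexer(true) // strict: unmatched input fails instead of being skipped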

Now for the lexer to actually work, lexical rules need to be added. A lexical rule is a mapping from a token type to a regular expression (regex). The token type identifies the class of the match that will be performed using the regex.

Say we want the lexer to recognise numbers; we can add the following rule:

lex.AddRule("<number>", "[0-9]+")

The first argument is the token type and the second argument is the regex. Note that we could have also done the following:

lex.AddRule("NUMBER", "[0-9]+")

or even the following:

lex.AddRule("INTEGER","[0-9]+")

Hence, the first argument only gives a unique name to the class of input the regex will try to match; in this case it's an integer value.

These token types will actually be used by the parser to denote terminal tokens in the grammar.

Now, to actually get the tokens from the source:

tokens, err := lex.GetTokens(source)

where source is a string containing the source contents.
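
Putting it together, here is a minimal end-to-end sketch (assuming the import path github.com/nayas360/goply and that GetTokens returns a slice of tokens that can be printed with %+v; check the package source for the exact Token definition):

package main

import (
	"fmt"

	"github.com/nayas360/goply"
)

func main() {
	// create a lenient lexer and teach it to recognise numbers
	lex := goply.NewLexer(false)
	lex.AddRule("<number>", "[0-9]+")

	tokens, err := lex.GetTokens("12 34 56")
	if err != nil {
		panic(err)
	}
	for _, tok := range tokens {
		fmt.Printf("%+v\n", tok) // the exact fields depend on goply's Token type
	}
}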

Lexer Gotchas

  • The lexer uses the regexp package from the Go standard library.
  • The lexer tries rules in the order they were added: the first rule first, then the second, and so on.

Say you are developing a programming language that has a keyword called var as well as variable names, so you add the lexer rules as follows:

lex.AddRule("<identifier>","[A-Za-z_][A-Za-z0-9_]+") // your typical variable names
lex.AddRule("<var_keyword>","var") // var keyword

The lexer would end up never producing a <var_keyword> token at all, because var is also a valid <identifier>!

So you must add general-case rules like <identifier> only after you have added rules for all the specific cases, as shown below:
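
lex.AddRule("<var_keyword>", "var") // specific case first
lex.AddRule("<identifier>", "[A-Za-z_][A-Za-z0-9_]*") // general case last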

  • The lexer will ignore a redeclaration of the same regex rule with a different token type.

  • The lexer will discard all previous declarations if a redeclaration is made with the same token type.

  • The lexer will panic if it cannot compile the given regular expression.
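
For example, a hypothetical sequence of declarations illustrating the three behaviours above:

lex.AddRule("<number>", "[0-9]+")
lex.AddRule("<integer>", "[0-9]+")     // ignored: same regex as <number>, different token type
lex.AddRule("<number>", "[1-9][0-9]*") // replaces the earlier <number> declaration
lex.AddRule("<bad>", "[0-9")           // panics: the regex does not compile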

Making a parser

The parser is yet to be implemented; see issue #2.

License

This project is licensed under the MIT License - see LICENSE for a copy.
