- V2
- Introduction
- Tutorial
- Tag syntax
- Overview
- Grammar syntax
- Capturing
- Streaming
- Lexing
- Options
- Examples
- Performance
- Concurrency
- Error reporting
- Limitations
- EBNF
- Syntax/Railroad Diagrams
This is an alpha of version 2 of Participle. It is still subject to change but should be mostly stable at this point.
See the Change Log for details.
Note: semantic versioning API guarantees do not apply to the experimental packages - the API may break between minor point releases.
It can be installed with:
$ go get github.com/alecthomas/participle/v2@latest
The latest version from v0 can be installed via:
$ go get github.com/alecthomas/participle@latest
The goal of this package is to provide a simple, idiomatic and elegant way of defining parsers in Go.
Participle's method of defining grammars should be familiar to any Go
programmer who has used the encoding/json
package: struct field tags define
what and how input is mapped to those same fields. This is not unusual for Go
encoders, but is unusual for a parser.
A tutorial is available, walking through the creation of an .ini parser.
Participle supports two forms of struct tag grammar syntax.
The easiest to read is when the grammar uses the entire struct tag content, eg.
Field string `@Ident @("," Ident)*`
However, this does not coexist well with other tags such as JSON, etc. and
may cause issues with linters. If this is an issue then you can use the
parser:""
tag format. In this case single quotes can be used to quote
literals making the tags somewhat easier to write, eg.
Field string `parser:"@Ident (',' Ident)*" json:"field"`
A grammar is an annotated Go structure used to both define the parser grammar, and be the AST output by the parser. As an example, following is the final INI parser from the tutorial.
type INI struct {
  Properties []*Property `@@*`
  Sections   []*Section  `@@*`
}

type Section struct {
  Identifier string      `"[" @Ident "]"`
  Properties []*Property `@@*`
}

type Property struct {
  Key   string `@Ident "="`
  Value *Value `@@`
}

type Value struct {
  String *string  ` @String`
  Number *float64 `| @Float`
}
Note: Participle also supports named struct tags (eg.
Hello string `parser:"@Ident"`
).
A parser is constructed from a grammar and a lexer:
parser, err := participle.Build(&INI{})
Once constructed, the parser is applied to input to produce an AST:
ast := &INI{}
err := parser.ParseString("", "size = 10", ast)
// ast == &INI{
//   Properties: []*Property{
//     {Key: "size", Value: &Value{Number: &10}},
//   },
// }
Participle grammars are defined as tagged Go structures. Participle will first look for tags in the form parser:"...". It will then fall back to using the entire tag body.
The grammar format is:
- `@<expr>`: Capture expression into the field.
- `@@`: Recursively capture using the field's own type.
- `<identifier>`: Match named lexer token.
- `( ... )`: Group.
- `"..."` or `'...'`: Match the literal (note that the lexer must emit tokens matching this literal exactly).
- `"...":<identifier>`: Match the literal, specifying the exact lexer token type to match.
- `<expr> <expr> ...`: Match expressions.
- `<expr> | <expr> | ...`: Match one of the alternatives. Each alternative is tried in order, with backtracking.
- `~<expr>`: Match any token that is not the start of the expression (eg: `@~";"` matches anything but the `;` character into the field; see the short example after the notes below).
- `(?= ... )`: Positive lookahead group - requires the contents to match further input, without consuming it.
- `(?! ... )`: Negative lookahead group - requires the contents not to match further input, without consuming it.
The following modifiers can be used after any expression:
- `*`: Expression can match zero or more times.
- `+`: Expression must match one or more times.
- `?`: Expression can match zero or once.
- `!`: Require a non-empty match (this is useful with a sequence of optional matches, eg. `("a"? "b"? "c"?)!`).
Notes:
- Each struct is a single production, with each field applied in sequence.
- `@<expr>` is the mechanism for capturing matches into the field.
- If a struct field is not keyed with "parser", the entire struct tag will be used as the grammar fragment. This allows the grammar syntax to remain clear and simple to maintain.
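For example, here is a short illustrative fragment (an assumption, not taken from the original text) combining capturing, negation and repetition to collect every token up to a terminating ";":

type Statement struct {
  // Capture each token that is not ";" until the terminating ";" is reached.
  Tokens []string `( @~";" )+ ";"`
}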
Prefixing any expression in the grammar with @
will capture matching values
for that expression into the corresponding field.
For example:
// The grammar definition.
type Grammar struct {
  Hello string `@Ident`
}

// The source text to parse.
source := "world"

// After parsing, the resulting AST.
result == &Grammar{
  Hello: "world",
}
For slice and string fields, each instance of @
will accumulate into the
field (including repeated patterns). Accumulation into other types is not
supported.
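For example, a small sketch (not from the original text) that accumulates a comma-separated list of identifiers into a slice:

type IdentList struct {
  // Each @Ident capture appends to Items.
  Items []string `@Ident ( "," @Ident )*`
}

Parsing "a, b, c" would leave Items containing "a", "b" and "c".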
For integer and floating point types, a successful capture will be parsed
with strconv.ParseInt()
and strconv.ParseFloat()
respectively.
A successful capture match into a bool
field will set the field to true.
Tokens can also be captured directly into fields of type lexer.Token and []lexer.Token.
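A minimal sketch (an assumption, not from the original text) of capturing raw tokens:

type RawIdents struct {
  First lexer.Token   `@Ident`
  Rest  []lexer.Token `( @Ident )*`
}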
Custom control of how values are captured into fields can be achieved by a field type implementing the Capture interface (Capture(values []string) error).
Additionally, any field implementing the encoding.TextUnmarshaler
interface
will be capturable too. One caveat is that UnmarshalText()
will be called once
for each captured token, so eg. @(Ident Ident Ident)
will be called three times.
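As a rough sketch (the Upper type and grammar here are assumptions, not from the original text), a field type implementing the interface receives each captured token via UnmarshalText:

type Upper string

// UnmarshalText is invoked once per captured token.
func (u *Upper) UnmarshalText(text []byte) error {
  *u = Upper(strings.ToUpper(string(text)))
  return nil
}

type Greeting struct {
  Name Upper `"hello" @Ident`
}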
By default a boolean field is used to indicate that a match occurred, which turns out to be much more useful and common in Participle than parsing true or false literals. For example, parsing a variable declaration with a trailing optional syntax:
type Var struct {
  Name     string `"var" @Ident`
  Type     string `":" @Ident`
  Optional bool   `@"?"?`
}
In practice this gives more useful ASTs. If bool were parsed literally then you'd need some alternate type for Optional, such as string or a custom type.
To capture literal boolean values such as true or false, implement the Capture interface like so:
type Boolean bool

func (b *Boolean) Capture(values []string) error {
  *b = values[0] == "true"
  return nil
}

type Value struct {
  Float  *float64 ` @Float`
  Int    *int     `| @Int`
  String *string  `| @String`
  Bool   *Boolean `| @("true" | "false")`
}
Participle supports streaming parsing. Simply pass a channel of your grammar into Parse*(). The grammar will be repeatedly parsed and sent to the channel. Note that the Parse*() call will not return until parsing completes, so it should generally be started in a goroutine.
type token struct {
  Str string ` @Ident`
  Num int    `| @Int`
}

parser, err := participle.Build(&token{})

tokens := make(chan *token, 128)
err = parser.ParseString("", `hello 10 11 12 world`, tokens)
for token := range tokens {
  fmt.Printf("%#v\n", token)
}
Participle relies on distinct lexing and parsing phases. The lexer takes raw bytes and produces tokens which the parser consumes. The parser transforms these tokens into Go values.
The default lexer, if one is not explicitly configured, is based on the Go
text/scanner
package and thus produces tokens for C/Go-like source code. This
is surprisingly useful, but if you do require more control over lexing the
builtin participle/lexer/stateful
lexer should
cover most other cases. If that in turn is not flexible enough, you can
implement your own lexer.
Configure your parser with a lexer using the participle.Lexer()
option.
To use your own Lexer you will need to implement two interfaces: Definition (and optionally StringsDefinition and BytesDefinition) and Lexer.
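A minimal sketch, assuming def is a value implementing the lexer Definition interface described above (MyGrammar is a placeholder grammar type):

// Build the parser with an explicit lexer definition instead of the default.
parser, err := participle.Build(&MyGrammar{},
  participle.Lexer(def),
)

The GraphQL example later in this document shows the same option used with a stateful lexer definition.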
In addition to the default lexer, Participle includes an optional
stateful/modal lexer which provides powerful yet convenient
construction of most lexers. (Notably, indentation based lexers cannot
be expressed using the stateful
lexer -- for discussion of how these
lexers can be implemented, see #20).
It is sometimes the case that a simple lexer cannot fully express the tokens required by a parser. The canonical example of this is interpolated strings within a larger language. eg.
let a = "hello ${name + ", ${last + "!"}"}"
This is impossible to tokenise with a normal lexer due to the arbitrarily deep nesting of expressions.
To support this case Participle's lexer is now stateful by default.
The lexer is a state machine defined by a map of rules keyed by the state name. Each rule within the state includes the name of the produced token, the regex to match, and an optional operation to apply when the rule matches.
As a convenience, any Rule
starting with a lowercase letter will be elided from output.
Lexing starts in the Root
group. Each rule is matched in order, with the first
successful match producing a lexeme. If the matching rule has an associated Action
it will be executed.
A state change can be introduced with the Action Push(state). Pop() will return to the previous state. To reuse rules from another state, use Include(state).
A special named rule Return()
can also be used as the final rule in a state
to always return to the previous state.
As a special case, regexes containing backrefs in the form \N
(where N
is
a digit) will match the corresponding capture group from the immediate parent
group. This can be used to parse, among other things, heredocs. See the
tests
for an example of this, among others.
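As a hedged illustration of this (a sketch based on the description above, not copied from the tests), a heredoc-style lexer might push a state whose terminating rule uses \1 to refer to the tag captured by the parent rule:

var heredocLexer = stateful.Must(Rules{
  "Root": {
    {"Heredoc", `<<(\w+)`, Push("Heredoc")},
    {"Whitespace", `\s+`, nil},
    {"Ident", `\w+`, nil},
  },
  "Heredoc": {
    // \1 matches the tag captured by the Heredoc rule in the parent state.
    {"End", `\b\1\b`, Pop()},
    {"Whitespace", `\s+`, nil},
    {"Word", `\w+`, nil},
  },
})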
Here's a cut down example of the string interpolation described above. Refer to the stateful example for the corresponding parser.
var lexer = stateful.Must(Rules{
  "Root": {
    {`String`, `"`, Push("String")},
  },
  "String": {
    {"Escaped", `\\.`, nil},
    {"StringEnd", `"`, Pop()},
    {"Expr", `\${`, Push("Expr")},
    {"Char", `[^$"\\]+`, nil},
  },
  "Expr": {
    Include("Root"),
    {`whitespace`, `\s+`, nil},
    {`Oper`, `[-+/*%]`, nil},
    {"Ident", `\w+`, nil},
    {"ExprEnd", `}`, Pop()},
  },
})
Other than the default and stateful lexers, it's easy to define your own stateless lexer using the stateful.MustSimple() and stateful.NewSimple() methods. These methods accept a slice of stateful.Rule{} objects consisting of a key and a regex-style pattern.
The stateful lexer replaced the old regex lexer.
For example, the lexer for a form of BASIC:
var basicLexer = stateful.MustSimple([]stateful.Rule{
  {"Comment", `(?i)rem[^\n]*`, nil},
  {"String", `"(\\"|[^"])*"`, nil},
  {"Number", `[-+]?(\d*\.)?\d+`, nil},
  {"Ident", `[a-zA-Z_]\w*`, nil},
  {"Punct", `[-[!@#$%^&*()+_={}\|:;"'<,>.?/]|]`, nil},
  {"EOL", `[\n\r]+`, nil},
  {"whitespace", `[ \t]+`, nil},
})
Participle v2 now has experimental support for generating code to perform
lexing. Use participle/experimental/codegen.GenerateLexer()
to compile a
stateful
lexer to Go code.
This will generally provide around a 10x improvement in lexing performance while producing O(1) garbage.
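As a very rough sketch of how this might be driven (the import path and the exact signature of GenerateLexer are assumptions here, so check the experimental package before relying on them):

// Assumed usage: write a generated lexer for myLexer (a *stateful.Definition)
// into lexer_gen.go in package "mylang".
f, err := os.Create("lexer_gen.go")
if err != nil {
  panic(err)
}
defer f.Close()
if err := codegen.GenerateLexer(f, "mylang", myLexer); err != nil {
  panic(err)
}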
The Parser's behaviour can be configured via Options.
There are several examples included:
| Example | Description |
|---|---|
| BASIC | A lexer, parser and interpreter for a rudimentary dialect of BASIC. |
| EBNF | Parser for the form of EBNF used by Go. |
| Expr | A basic mathematical expression parser and evaluator. |
| GraphQL | Lexer+parser for GraphQL schemas. |
| HCL | A parser for the HashiCorp Configuration Language. |
| INI | An INI file parser. |
| Protobuf | A full Protobuf version 2 and 3 parser. |
| SQL | A very rudimentary SQL SELECT parser. |
| Stateful | A basic example of a stateful lexer and corresponding parser. |
| Thrift | A full Thrift parser. |
| TOML | A TOML parser. |
Included below is a full GraphQL lexer and parser:
package main

import (
  "fmt"
  "os"

  "github.com/alecthomas/kong"
  "github.com/alecthomas/repr"

  "github.com/alecthomas/participle/v2"
  "github.com/alecthomas/participle/v2/lexer/stateful"
)

type File struct {
  Entries []*Entry `@@*`
}

type Entry struct {
  Type   *Type   ` @@`
  Schema *Schema `| @@`
  Enum   *Enum   `| @@`
  Scalar string  `| "scalar" @Ident`
}

type Enum struct {
  Name  string   `"enum" @Ident`
  Cases []string `"{" @Ident* "}"`
}

type Schema struct {
  Fields []*Field `"schema" "{" @@* "}"`
}

type Type struct {
  Name       string   `"type" @Ident`
  Implements string   `( "implements" @Ident )?`
  Fields     []*Field `"{" @@* "}"`
}

type Field struct {
  Name       string      `@Ident`
  Arguments  []*Argument `( "(" ( @@ ( "," @@ )* )? ")" )?`
  Type       *TypeRef    `":" @@`
  Annotation string      `( "@" @Ident )?`
}

type Argument struct {
  Name    string   `@Ident`
  Type    *TypeRef `":" @@`
  Default *Value   `( "=" @@ )`
}

type TypeRef struct {
  Array       *TypeRef `( "[" @@ "]"`
  Type        string   ` | @Ident )`
  NonNullable bool     `( @"!" )?`
}

type Value struct {
  Symbol string `@Ident`
}

var (
  graphQLLexer = stateful.MustSimple([]stateful.Rule{
    {"Comment", `(?:#|//)[^\n]*\n?`, nil},
    {"Ident", `[a-zA-Z]\w*`, nil},
    {"Number", `(?:\d*\.)?\d+`, nil},
    {"Punct", `[-[!@#$%^&*()+_={}\|:;"'<,>.?/]|]`, nil},
    {"Whitespace", `[ \t\n\r]+`, nil},
  })

  parser = participle.MustBuild(&File{},
    participle.Lexer(graphQLLexer),
    participle.Elide("Comment", "Whitespace"),
    participle.UseLookahead(2),
  )
)
var cli struct {
  EBNF  bool     `help:"Dump EBNF."`
  Files []string `arg:"" optional:"" type:"existingfile" help:"GraphQL schema files to parse."`
}
func main() {
  ctx := kong.Parse(&cli)
  if cli.EBNF {
    fmt.Println(parser.String())
    ctx.Exit(0)
  }
  for _, file := range cli.Files {
    ast := &File{}
    r, err := os.Open(file)
    ctx.FatalIfErrorf(err)
    err = parser.Parse(file, r, ast)
    r.Close()
    repr.Println(ast)
    ctx.FatalIfErrorf(err)
  }
}
One of the included examples is a complete Thrift parser (shell-style comments are not supported). This gives a convenient baseline for comparing to the PEG based pigeon, which is the parser used by go-thrift. Additionally, the pigeon parser is utilising a generated parser, while the participle parser is built at run time.
You can run the benchmarks yourself, but here's the output on my machine:
BenchmarkParticipleThrift-12 5941 201242 ns/op 178088 B/op 2390 allocs/op
BenchmarkGoThriftParser-12 3196 379226 ns/op 157560 B/op 2644 allocs/op
On a real-life codebase of 47K lines of Thrift, Participle takes 200ms and go-thrift takes 630ms, which aligns quite closely with the benchmarks.
A compiled Parser
instance can be used concurrently. A LexerDefinition
can be used concurrently. A Lexer
instance cannot be used concurrently.
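For example, here is a sketch (reusing the INI parser built earlier; sources is a hypothetical slice of input strings) of one compiled parser shared across goroutines:

// Each goroutine gets its own AST value; only the compiled parser is shared.
var wg sync.WaitGroup
for _, src := range sources {
  src := src
  wg.Add(1)
  go func() {
    defer wg.Done()
    ast := &INI{}
    if err := parser.ParseString("", src, ast); err != nil {
      log.Println(err)
    }
  }()
}
wg.Wait()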
There are a few areas where Participle can provide useful feedback to users of your parser.
- Errors returned by Parser.Parse*() will be of type Error. This will contain positional information where available.
- Participle will make a best effort to return as much of the AST up to the error location as possible.
- Any node in the AST containing a field `Pos lexer.Position` will be automatically populated from the nearest matching token.
- Any node in the AST containing a field `EndPos lexer.Position` will be automatically populated from the token at the end of the node.
- Any node in the AST containing a field `Tokens []lexer.Token` will be automatically populated with all tokens captured by the node, including elided tokens (see the sketch below).
These related pieces of information can be combined to provide fairly comprehensive error reporting.
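For example, a sketch (adapting the INI Property node from earlier in this document) of the automatically populated fields described above:

type Property struct {
  Pos    lexer.Position // populated from the nearest matching token
  EndPos lexer.Position // populated from the token at the end of the node
  Tokens []lexer.Token  // all tokens captured by the node, including elided tokens

  Key   string `@Ident "="`
  Value *Value `@@`
}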
Internally, Participle is a recursive descent parser with backtracking (see UseLookahead(K)).
Among other things, this means that it does not support left recursion. Left recursion must be eliminated by restructuring your grammar, for example as sketched below.
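A left-recursive rule such as Expr = Expr "+" Term | Term cannot be written directly, but the same language can be expressed with repetition. A sketch (not from the original text):

type Expr struct {
  Head *Term     `@@`
  Tail []*OpTerm `@@*` // repetition replaces the left-recursive alternative
}

type OpTerm struct {
  Op   string `@"+"`
  Term *Term  `@@`
}

type Term struct {
  Number int `@Int`
}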
The old EBNF lexer was removed in a major refactoring at 362b26 -- if you have an EBNF grammar you need to implement, you can either translate it into regex-style stateful.Rule{} syntax or implement your own EBNF lexer -- you might be able to use the old EBNF lexer as a starting point.
Participle supports outputting an EBNF grammar from a Participle parser. Once the parser is constructed simply call String().
Participle also includes a parser for this form of EBNF (naturally).
eg. the GraphQL example above produces the following EBNF:
File = Entry* .
Entry = Type | Schema | Enum | "scalar" ident .
Type = "type" ident ("implements" ident)? "{" Field* "}" .
Field = ident ("(" (Argument ("," Argument)*)? ")")? ":" TypeRef ("@" ident)? .
Argument = ident ":" TypeRef ("=" Value)? .
TypeRef = "[" TypeRef "]" | ident "!"? .
Value = ident .
Schema = "schema" "{" Field* "}" .
Enum = "enum" ident "{" ident* "}" .
Participle includes a command-line utility to take an EBNF representation of a Participle grammar (as returned by Parser.String()) and produce a Railroad Diagram using tabatkins/railroad-diagrams.
Here's what the GraphQL grammar looks like: