Support Lexing with custom (non-RegExp) Token Patterns.
fixes #331
Shahar Soel committed Dec 24, 2016
1 parent 55c7db9 commit 72b2c8b
Showing 10 changed files with 335 additions and 42 deletions.
76 changes: 76 additions & 0 deletions docs/custom_token_patterns.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,76 @@
## Custom Token Patterns

See the [**runnable example**](../examples/lexer/custom_patterns/custom_patterns.js) for a quick start.

### Background
Normally a Token's pattern is defined using a JavaScript regular expression:

```JavaScript
let IntegerToken = createToken({name: "IntegerToken", pattern: /\d+/})
```

However, in some circumstances a custom pattern matching implementation may be required:
perhaps a special Token that cannot easily be defined using a regular expression, or
a workaround for performance problems in a specific regular expression engine, for example:

* [WebKit/Safari multiple orders of magnitude performance degradation for specific regExp patterns](https://bugs.webkit.org/show_bug.cgi?id=152578) 😞


### Usage
A custom pattern must conform to the API of the [RegExp.prototype.exec](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RegExp/exec)
function. Additionally, it must only match from the **start** of the input. In RegExp semantics this means
that any custom pattern implementation should behave as if the [start of input anchor](http://www.rexegg.com/regex-anchors.html#caret)
had been used.
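For instance, an existing unanchored RegExp can be adapted with a small wrapper. The `anchorAtStart` helper below is hypothetical (it is not part of Chevrotain's API), sketched only to illustrate what "behave as if anchored" means:

```javascript
// Hypothetical helper (not part of Chevrotain's API): wraps a plain RegExp
// so it only reports matches that begin at offset 0, i.e. as if the
// start of input anchor ("^") had been used.
function anchorAtStart(regExp) {
    return function exec(text) {
        let match = regExp.exec(text)
        // discard matches that start anywhere but the beginning of the input
        return match !== null && match.index === 0 ? match : null
    }
}

let matchFloat = anchorAtStart(/\d+\.\d+/)
matchFloat("3.14 rest")[0] // "3.14"
matchFloat("abc 3.14")     // null (a match exists, but not at offset 0)
```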


The basic syntax for supplying a custom pattern is defined by the [ICustomPattern](http://sap.github.io/chevrotain/documentation/0_20_0/interfaces/icustompattern.html) interface.
Example:

```JavaScript
function matchInteger(text) {
    let i = 0
    let charCode = text.charCodeAt(i)
    // 48 and 57 are the char codes of "0" and "9" respectively
    while (charCode >= 48 && charCode <= 57) {
        i++
        charCode = text.charCodeAt(i)
    }

    // No match, must return null to conform with the RegExp.prototype.exec signature
    if (i === 0) {
        return null
    }
    else {
        let matchedString = text.substring(0, i)
        // According to the RegExp.prototype.exec API the first item in the returned array must be the whole matched string.
        return [matchedString]
    }
}

let IntegerToken = createToken({
    name: "IntegerToken",
    pattern: {
        exec: matchInteger,
        containsLineTerminator: false
    }
})
```

The **containsLineTerminator** property is used by the lexer to correctly compute line/column numbers.
If the custom pattern could possibly match a line terminator, this property must be set to "true".
Most Tokens can never contain a line terminator, so the property is optional (false by default), which enables a shorter syntax:

```JavaScript
let IntegerToken = createToken({
    name: "IntegerToken",
    pattern: {
        exec: matchInteger
    }
})
```
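For contrast, here is a sketch of a custom matcher that can consume line terminators and therefore must declare **containsLineTerminator** (the `matchWhitespace` name is illustrative, not part of the library):

```javascript
// Illustrative sketch: a matcher for whitespace runs, which may include
// "\n" or "\r", so its Token definition must set containsLineTerminator.
function matchWhitespace(text) {
    let i = 0
    let ch = text.charAt(i)
    while (ch === " " || ch === "\t" || ch === "\n" || ch === "\r") {
        i++
        ch = text.charAt(i)
    }
    // conform to RegExp.prototype.exec: null when nothing was matched,
    // otherwise an array whose first item is the whole matched string
    return i === 0 ? null : [text.substring(0, i)]
}
```

Such a matcher would then be supplied as `pattern: {exec: matchWhitespace, containsLineTerminator: true}`.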

Using an Object literal with only a single property is still a little verbose, so an even more concise syntax is also supported:
```JavaScript
let IntegerToken = createToken({name: "IntegerToken", pattern: matchInteger})
```
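All three forms can be reduced to a single shape. The `normalizePattern` function below is an illustrative sketch (not a public API), mirroring the dispatch this commit adds to the lexer:

```javascript
// Illustrative sketch (not a public API): reducing the three accepted
// pattern forms to a single {exec} shape.
function normalizePattern(pattern) {
    if (pattern instanceof RegExp) {
        // a real implementation must also enforce start-of-input anchoring
        return {exec: (text) => pattern.exec(text)}
    }
    else if (typeof pattern === "function") {
        // a bare matcher function (CustomPatternMatcherFunc)
        return {exec: pattern}
    }
    else if (pattern && typeof pattern.exec === "function") {
        // a full ICustomPattern object, used as-is
        return pattern
    }
    throw Error("unsupported pattern type")
}
```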




1 change: 1 addition & 0 deletions examples/lexer/README.md
@@ -6,6 +6,7 @@ A few simple examples of using the Chevrotain Lexer to resolve some common lexin
* [Keywords vs Identifiers](https://github.com/SAP/Chevrotain/blob/master/examples/lexer/keywords_vs_identifiers/keywords_vs_identifiers.js)
* [Token Groups](https://github.com/SAP/Chevrotain/blob/master/examples/lexer/token_groups/token_groups.js)
* [Lexer with Multiple Modes](https://github.com/SAP/Chevrotain/blob/master/examples/lexer/multi_mode_lexer/multi_mode_lexer.js)
* [Custom Token Pattern implementations](https://github.com/SAP/Chevrotain/blob/master/examples/lexer/custom_patterns/custom_patterns.js)


To run all the lexer examples' tests:
62 changes: 62 additions & 0 deletions examples/lexer/custom_patterns/custom_patterns.js
@@ -0,0 +1,62 @@
/**
 * This example demonstrates the usage of custom token patterns.
 * Custom token patterns allow implementing token matchers using arbitrary JavaScript code
 * instead of being limited to only using regular expressions.
 *
 * For additional details see the docs:
 * https://github.com/SAP/chevrotain/blob/master/docs/custom_token_patterns.md
 */
let chevrotain = require("chevrotain")
let createToken = chevrotain.createToken
let Lexer = chevrotain.Lexer


// First, let's define our custom pattern for matching an Integer Literal.
// This function's signature matches the RegExp.prototype.exec function.
// https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RegExp/exec
function matchInteger(text) {
    let i = 0
    let charCode = text.charCodeAt(i)
    // 48 and 57 are the char codes of "0" and "9" respectively
    while (charCode >= 48 && charCode <= 57) {
        i++
        charCode = text.charCodeAt(i)
    }

    // No match, must return null to conform with the RegExp.prototype.exec signature
    if (i === 0) {
        return null
    }
    else {
        let matchedString = text.substring(0, i)
        // According to the RegExp.prototype.exec API the first item in the returned array must be the whole matched string.
        return [matchedString]
    }
}

// Now we can simply replace the regExp pattern with our custom pattern.
// Consult the Docs (linked above) for additional syntax variants.
let IntegerLiteral = createToken({name: "IntegerLiteral", pattern: matchInteger})
let Comma = createToken({name: "Comma", pattern: /,/})
let Whitespace = createToken({name: "Whitespace", pattern: /\s+/, group: Lexer.SKIPPED})

let customPatternLexer = new Lexer(
    [
        Whitespace,
        Comma,
        IntegerLiteral
    ])

module.exports = {

    IntegerLiteral: IntegerLiteral,
    Comma: Comma,

    tokenize: function (text) {
        let lexResult = customPatternLexer.tokenize(text)

        if (lexResult.errors.length >= 1) {
            throw new Error("sad sad panda lexing errors detected")
        }
        return lexResult
    }
}
23 changes: 23 additions & 0 deletions examples/lexer/custom_patterns/custom_patterns_spec.js
@@ -0,0 +1,23 @@
let assert = require("assert")
let expect = require("chai").expect
let customPatternExample = require("./custom_patterns")

let tokenize = customPatternExample.tokenize
let Comma = customPatternExample.Comma
let IntegerLiteral = customPatternExample.IntegerLiteral

describe('The Chevrotain Lexer ability to use custom pattern implementations.', () => {

    it('Can Lex a simple input using a Custom Integer Literal pattern', () => {
        let text = `1 , 2 , 3`
        let lexResult = tokenize(text)

        assert.equal(lexResult.errors.length, 0)
        assert.equal(lexResult.tokens.length, 5)

        expect(lexResult.tokens[0]).to.be.an.instanceof(IntegerLiteral)
        expect(lexResult.tokens[1]).to.be.an.instanceof(Comma)
        expect(lexResult.tokens[2]).to.be.an.instanceof(IntegerLiteral)
        expect(lexResult.tokens[3]).to.be.an.instanceof(Comma)
        expect(lexResult.tokens[4]).to.be.an.instanceof(IntegerLiteral)
    })
})
1 change: 1 addition & 0 deletions readme.md
@@ -38,6 +38,7 @@ any code generation phase.
* [Multiple Lexer Modes][lexer_modes] depending on the context.
* [Tokens Grouping][lexer_groups].
* [Different Token types for balancing performance, memory usage and ease of use](docs/token_types.md).
* [Custom Token patterns (non-RegExp) support](docs/custom_token_patterns.md)
* **No code generation** The Lexer does not require any code generation phase.

3. [**High Performance**][benchmark].
80 changes: 56 additions & 24 deletions src/scan/lexer.ts
@@ -1,10 +1,12 @@
import {Token, tokenName, ISimpleTokenOrIToken} from "./tokens_public"
import {TokenConstructor, ILexerDefinitionError, LexerDefinitionErrorType, Lexer, IMultiModeLexerDefinition} from "./lexer_public"
import {Token, tokenName, ISimpleTokenOrIToken, CustomPatternMatcherFunc} from "./tokens_public"
import {
TokenConstructor, ILexerDefinitionError, LexerDefinitionErrorType, Lexer, IMultiModeLexerDefinition,
IRegExpExec
} from "./lexer_public"
import {
reject,
indexOf,
map,
zipObject,
isString,
isUndefined,
reduce,
@@ -19,7 +21,8 @@ import {
uniq,
every,
keys,
isArray
isArray,
isFunction
} from "../utils/utils"
import {isLazyTokenType, isSimpleTokenType} from "./tokens"

@@ -28,7 +31,7 @@ export const DEFAULT_MODE = "defaultMode"
export const MODES = "modes"

export interface IAnalyzeResult {
allPatterns:RegExp[]
allPatterns:IRegExpExec[]
patternIdxToClass:Function[]
patternIdxToGroup:any[]
patternIdxToLongerAltIdx:number[]
@@ -38,22 +41,36 @@ export interface IAnalyzeResult {
emptyGroups:{ [groupName:string]:Token[] }
}

const CONTAINS_LINE_TERMINATOR = "containsLineTerminator"

export function analyzeTokenClasses(tokenClasses:TokenConstructor[]):IAnalyzeResult {

let onlyRelevantClasses = reject(tokenClasses, (currClass) => {
return currClass[PATTERN] === Lexer.NA
})

let allTransformedPatterns = map(onlyRelevantClasses, (currClass) => {
return addStartOfInput(currClass[PATTERN])
})
let currPattern = currClass[PATTERN]

let allPatternsToClass = zipObject(<any>allTransformedPatterns, onlyRelevantClasses)
if (isRegExp(currPattern)) {
return addStartOfInput(currPattern)
}
// CustomPatternMatcherFunc - custom patterns do not require any transformation, only wrapping in a RegExp-like object
else if (isFunction(currPattern)) {
return {exec: currPattern}
}
// ICustomPattern
else if (has(currPattern, "exec")) {
return currPattern
}
else {
throw Error("non exhaustive match")
}

let patternIdxToClass:any = map(allTransformedPatterns, (pattern) => {
return allPatternsToClass[pattern.toString()]
})

let patternIdxToClass = onlyRelevantClasses

let patternIdxToGroup = map(onlyRelevantClasses, (clazz:any) => {
let groupName = clazz.GROUP
if (groupName === Lexer.SKIPPED) {
@@ -84,8 +101,16 @@ export function analyzeTokenClasses(tokenClasses:TokenConstructor[]):IAnalyzeRes
let patternIdxToPopMode = map(onlyRelevantClasses, (clazz:any) => has(clazz, "POP_MODE"))

let patternIdxToCanLineTerminator = map(allTransformedPatterns, (pattern:RegExp) => {
// TODO: unicode escapes of line terminators too?
return /\\n|\\r|\\s/g.test(pattern.source)
if (isRegExp(pattern)) {
// TODO: unicode escapes of line terminators too?
return /\\n|\\r|\\s/g.test(pattern.source)
}
else {
if (has(pattern, CONTAINS_LINE_TERMINATOR)) {
return pattern[CONTAINS_LINE_TERMINATOR]
}
return false
}
})

let emptyGroups = reduce(onlyRelevantClasses, (acc, clazz:any) => {
@@ -112,18 +137,13 @@ export function validatePatterns(tokenClasses:TokenConstructor[], validModesName
let errors = []

let missingResult = findMissingPatterns(tokenClasses)
let validTokenClasses = missingResult.valid
errors = errors.concat(missingResult.errors)

let invalidResult = findInvalidPatterns(validTokenClasses)
validTokenClasses = invalidResult.valid
let invalidResult = findInvalidPatterns(missingResult.valid)
let validTokenClasses = invalidResult.valid
errors = errors.concat(invalidResult.errors)

errors = errors.concat(findEndOfInputAnchor(validTokenClasses))

errors = errors.concat(findUnsupportedFlags(validTokenClasses))

errors = errors.concat(findDuplicatePatterns(validTokenClasses))
errors = errors.concat(validateRegExpPattern(validTokenClasses))

errors = errors.concat(findInvalidGroupType(validTokenClasses))

@@ -132,6 +152,19 @@
return errors
}

function validateRegExpPattern(tokenClasses:TokenConstructor[]):ILexerDefinitionError[] {
let errors = []
let withRegExpPatterns = filter(tokenClasses, (currTokClass) => isRegExp(currTokClass[PATTERN]))

errors = errors.concat(findEndOfInputAnchor(withRegExpPatterns))

errors = errors.concat(findUnsupportedFlags(withRegExpPatterns))

errors = errors.concat(findDuplicatePatterns(withRegExpPatterns))

return errors
}

export interface ILexerFilterResult {
errors:ILexerDefinitionError[]
valid:TokenConstructor[]
@@ -157,12 +190,13 @@ export function findMissingPatterns(tokenClasses:TokenConstructor[]):ILexerFilte
export function findInvalidPatterns(tokenClasses:TokenConstructor[]):ILexerFilterResult {
let tokenClassesWithInvalidPattern = filter(tokenClasses, (currClass) => {
let pattern = currClass[PATTERN]
return !isRegExp(pattern)
return !isRegExp(pattern) && !isFunction(pattern) && !has(pattern, "exec")
})

let errors = map(tokenClassesWithInvalidPattern, (currClass) => {
return {
message: "Token class: ->" + tokenName(currClass) + "<- static 'PATTERN' can only be a RegExp",
message: "Token class: ->" + tokenName(currClass) + "<- static 'PATTERN' can only be a RegExp, a" +
" Function matching the {CustomPatternMatcherFunc} type or an Object matching the {ICustomPattern} interface.",
type: LexerDefinitionErrorType.INVALID_PATTERN,
tokenClasses: [currClass]
}
@@ -361,8 +395,6 @@ export function performRuntimeChecks(lexerDefinition:IMultiModeLexerDefinition):
})
}
})

// lexerDefinition.modes[currModeName] = reject<Function>(currModeValue, (currTokClass) => isUndefined(currTokClass))
})
}

