From 6d7984ff723f2e699eede7d85df29ef2c03955f2 Mon Sep 17 00:00:00 2001
From: Michael Williamson
Date: Sat, 25 Aug 2012 17:01:03 +0100
Subject: [PATCH] Use tokenise in README consistently

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 97691e7..8d9395b 100644
--- a/README.md
+++ b/README.md
@@ -119,8 +119,8 @@ The main advantage of using `lop.Token` is that you can then use the rules `lop.
 To parse an array of tokens, you can call the method `parseTokens` on `lop.Parser`, passing in the parsing rule and the array of tokens. For instance, assuming we already have a `tokenise` function (the one above would do fine):
 
 ```javascript
-function parseSentence(inputString) {
-    var tokens = tokenise(inputString);
+function parseSentence(source) {
+    var tokens = tokenise(source);
     var parser = new lop.Parser();
     var parseResult = parser.parseTokens(sentenceRule, tokens);
     if (!parseResult.isSuccess()) {
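
For reference, the hunk above cuts off at the `isSuccess()` check. A minimal sketch of how the renamed function might continue, assuming the parse result exposes a `value()` method (not shown in this patch) and that `tokenise`, `sentenceRule`, and `lop` are defined as earlier in the README:

```javascript
// Sketch only: parseResult.value() is an assumption not confirmed by this
// patch; tokenise, sentenceRule and lop are assumed to be in scope as
// defined earlier in the README.
function parseSentence(source) {
    var tokens = tokenise(source);
    var parser = new lop.Parser();
    var parseResult = parser.parseTokens(sentenceRule, tokens);
    if (!parseResult.isSuccess()) {
        // Hypothetical error handling; the real README body is outside the hunk.
        throw new Error("Failed to parse: " + source);
    }
    return parseResult.value();
}
```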