Commit 695507b
[README.md] typo fixes
chrissimpkins committed Feb 2, 2018
1 parent e3d2062 commit 695507b
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions README.md
@@ -113,7 +113,7 @@ for i, tok := range Tokens {
}
```

-Now that we have defined a set of three tokens (WILD, SPACE, BANG), lets create
+Now that we have defined a set of three tokens (WILD, SPACE, BANG), let's create
a token object:

```go
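// A hedged sketch of a "token object" constructor; the original listing is
// not shown here. It wraps each regular-expression match in a token carrying
// its id, matched text, and position. Assumed: the import aliases
// lex (github.com/timtadh/lexmachine) and machines
// (github.com/timtadh/lexmachine/machines), plus a TokenIds map[string]int
// built from the Tokens slice in the hunk above.
func token(name string) lex.Action {
	return func(s *lex.Scanner, m *machines.Match) (interface{}, error) {
		// s.Token packages the token id, the matched bytes, and the match
		// metadata (line and column offsets) into a *lex.Token.
		return s.Token(TokenIds[name], string(m.Bytes), m), nil
	}
}
```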
@@ -440,7 +440,7 @@ lexer.Add(
Lexmachine (like most lexical analysis frameworks) uses [Regular
Expressions](https://en.wikipedia.org/wiki/Regular_expression) to specify the
-*patterns* to match when spitting the string up into categorized *tokens.*
+*patterns* to match when splitting the string up into categorized *tokens.*
For a more advanced introduction to regular expressions engines see Russ Cox's
[articles](https://swtch.com/~rsc/regexp/). To learn more about how regular
expressions are used to *tokenize* string take a look at Alex Aiken's [video
@@ -453,7 +453,7 @@ up of *characters* such as `a` or `b`, characters with special meanings (such as
`.` which matches any character), and operators. The regular expression `abc`
matches exactly one string `abc`.
-### Charater Expressions
+### Character Expressions
In lexmachine most characters (eg. `a`, `b` or `#`) represent themselves. Some
have special meanings (as detailed below in operators). However, all characters
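The patched passages describe lexmachine's core flow: regular-expression patterns are registered with `lexer.Add`, and each match is turned into a categorized token. Below is a minimal, self-contained sketch of that flow, assuming only the documented `NewLexer`, `Add`, `Compile`, `Scanner`, and `Next` API; the three patterns and the integer token ids are illustrative stand-ins for the README's `Tokens`/`TokenIds` setup.

```go
package main

import (
	"fmt"

	lex "github.com/timtadh/lexmachine"
	"github.com/timtadh/lexmachine/machines"
)

// Illustrative token ids; the README builds these from a Tokens slice instead.
const (
	WILD  = iota // the `*` wildcard
	SPACE        // runs of spaces
	BANG         // the `!` character
)

// token returns an action that emits a token with the given id for a match.
func token(id int) lex.Action {
	return func(s *lex.Scanner, m *machines.Match) (interface{}, error) {
		return s.Token(id, string(m.Bytes), m), nil
	}
}

func main() {
	lexer := lex.NewLexer()

	// Each Add call pairs a pattern with an action. Most characters in a
	// pattern stand for themselves; `*`, `.` and similar are operators, so
	// the literal asterisk below is escaped.
	lexer.Add([]byte(`\*`), token(WILD))
	lexer.Add([]byte(` +`), token(SPACE))
	lexer.Add([]byte(`!`), token(BANG))

	// Compile builds the matching automaton; all patterns must be added first.
	if err := lexer.Compile(); err != nil {
		panic(err)
	}

	scanner, err := lexer.Scanner([]byte("* !"))
	if err != nil {
		panic(err)
	}
	for tok, err, eof := scanner.Next(); !eof; tok, err, eof = scanner.Next() {
		if err != nil {
			panic(err)
		}
		fmt.Println(tok) // each tok is a *lex.Token with id, text, and position
	}
}
```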
