Commits on May 29, 2015
  1. Merge pull request #74 from jwhitlock/68_fix_open

    Use to read README.rst
Commits on May 16, 2015
  1. Use to read README.rst

    jwhitlock authored
    Fixes #68
Commits on Apr 13, 2015
  1. Drop Python 3.1 support.

    Osmose authored
Commits on Mar 8, 2015
  1. Remove what turns out to be a useless __slots__ definition from TokenMatcher, add a useful one to Token, and confirm with a test that __slots__ does not behave in CPython according to the language docs.
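The __slots__ surprise referenced above can be illustrated with a well-known CPython subtlety (a general sketch, not the project's actual test): a declaration on a base class is silently undone by any subclass that forgets its own.

```python
class Base:
    __slots__ = ("type",)   # no per-instance __dict__ on Base

class Child(Base):          # forgot to declare __slots__ = ()
    pass

assert not hasattr(Base(), "__dict__")   # slots took effect on the base
assert hasattr(Child(), "__dict__")      # the child regained a dict anyway
```

So a __slots__ definition is only "useful" if every subclass cooperates, which is worth confirming with a test rather than trusting the docs.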
  2. Merge experimental token-based parsing, via TokenGrammar class, for those operating on pre-lexed streams of tokens.
Commits on Mar 3, 2015
  1. Simplify design of token-based parsing.

    This removes the need for special token syntax in the grammar DSL: we just reuse the string literals, which would otherwise be useless when matching token lists. It should also speed things up, since we no longer need to dispatch at parse time between basestrings and lists.
    To do:
    * Token doesn't belong in parsimonious.expressions.
    * Needs better tests: an eq_ in test_parse_success, for starters.
    * Documentation in readme
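The design above can be sketched with stand-in classes (hypothetical names, not the real TokenGrammar internals): when the input is a pre-lexed token stream, a grammar string literal matches against a token's type, so the DSL needs no special token syntax.

```python
class Token:
    """A minimal token carrying only the attribute matching cares about."""
    def __init__(self, type_):
        self.type = type_

def match_literal(literal, tokens, pos):
    """Match a grammar string literal against a pre-lexed token list.
    Returns the new position on success, None on failure."""
    if pos < len(tokens) and tokens[pos].type == literal:
        return pos + 1      # a literal consumes exactly one token
    return None

stream = [Token("if"), Token("name")]
assert match_literal("if", stream, 0) == 1       # literal == token type
assert match_literal("else", stream, 0) is None  # type mismatch
```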
  2. Clean up Token.

    * Don't subclass StrAndRepr; Token has no __unicode__ attr, so there's no point.
    * Move it to the bottom of the file, since it's a fairly niche affordance.
    * Document it.
Commits on Oct 8, 2014
  1. Speed up grammar compilation by 100x. Close #62. Bump version to 0.6.2.

    Avoid redundant work in the reference resolver. Lift the set of exprs that have already been resolved up beyond the scope of just _resolve_refs(). This avoids forgetting all that experience on each loop around the list comp of rules in visit_rules(). For grammars with hundreds of rules, this can be the difference between 13s and 95ms compilation times.
    Keeping an explicit set of done exprs means we no longer need the members=() trick. Perhaps the trick would still save a few calls, but it's worth giving those up to keep the code simple.
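The shape of the fix can be sketched generically (hypothetical names, not the resolver's real code): lift the set of already-resolved expressions above the per-rule loop, so each reference is resolved once for the whole grammar instead of once per rule that reaches it.

```python
calls = []   # records how often real resolution work happens

def resolve_all(rules, lookup):
    """rules: {rule_name: [reference names]}; lookup: {name: definition}."""
    done = {}    # shared across ALL rules -- the key change

    def resolve(name):
        if name not in done:          # skip work already done elsewhere
            calls.append(name)
            done[name] = lookup[name]
        return done[name]

    return {rule: [resolve(ref) for ref in refs]
            for rule, refs in rules.items()}

resolved = resolve_all({"a": ["x", "y"], "b": ["x"]}, {"x": 1, "y": 2})
assert resolved == {"a": [1, 2], "b": [1]}
assert sorted(calls) == ["x", "y"]    # "x" was resolved once, not twice
```

With a fresh `done` set per rule, "x" would be resolved again for "b"; shared across the loop, the work is linear in the number of distinct references.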
Commits on Sep 24, 2014
  1. Fix bug which made the default rule of a grammar invalid when it contained a forward reference. Bump version to 0.6.1. Fix #61.

    Previously, the default rule was ending up a LazyReference, and so Grammar.parse() would explode.
Commits on Sep 9, 2014
  1. Add `unwrapped_exceptions` attribute.

    This helps us do further validity checks *during* visitation, like for out-of-order regex ranges ([c-a]) in DXR.
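The pattern can be sketched with a stand-in visitor (not the library's actual class): exception types listed in `unwrapped_exceptions` propagate out of visitation untouched, while everything else is wrapped.

```python
class VisitationError(Exception):
    pass

class BadRegexRange(Exception):   # hypothetical validity check, e.g. [c-a]
    pass

class Visitor:
    unwrapped_exceptions = (BadRegexRange,)   # let these escape as-is

    def visit(self, node):
        try:
            return node()                     # toy "visit": call the node
        except self.unwrapped_exceptions:
            raise                             # propagate untouched
        except Exception as exc:
            raise VisitationError(exc)        # everything else is wrapped
```

A caller can then catch `BadRegexRange` directly, with its original message and traceback, instead of digging it out of a `VisitationError`.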
  2. Remove unread var "e".

Commits on Jul 30, 2014
  1. Make UndefinedLabel no longer a kind of VisitationError.

    This way, we can reuse it for when people try to parse using AbstractGrammars.
    It being a VisitationError never made any sense anyway. It didn't even call the VisitationError constructor or comply with its invariants.
Commits on Jul 15, 2014
  1. Make Compound public, in case anyone wants to make custom Expressions based on it.

    To do: make all Expressions worth subclassing public.
  2. Make _match() public. Close #52.

    It should be this way not only so it can be called from custom rules but also to make the calls to other Expressions' _match methods from Compound expressions legit.
  3. Move custom rules to a `custom` kwarg.

    Custom rules are enough of a niche thing that it's not worth making future kwargs so hard to add.
Commits on Jul 14, 2014
  1. Bring readme up to date.

  2. Replace GrammarFromDocstrings with @rule.

    Everything works fine except subclassing; I haven't figured that out yet.
    @rule is better. It's more explicit, it's the same number of lines, it avoids visitors having to subclass something else, and it lets us still have visitor docstrings that actually document things.
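The @rule idea can be sketched like this (a hypothetical decorator, not parsimonious's exact implementation): the grammar fragment rides on the method as an attribute, leaving the docstring free to document.

```python
def rule(fragment):
    """Attach a grammar fragment to a visitor method explicitly."""
    def decorate(method):
        method.rule = fragment      # stash the rule on the method itself
        return method
    return decorate

class Visitor:
    @rule('greeting = "hi" name')
    def visit_greeting(self, node):
        """Free to actually document the method now."""
        return node

# Rules can then be collected from the class without touching docstrings:
rules = {name: func.rule for name, func in vars(Visitor).items()
         if hasattr(func, "rule")}
assert rules == {"visit_greeting": 'greeting = "hi" name'}
```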
Commits on Jul 13, 2014
  1. Make Grammars extensible through custom Python code using "custom rules". Close #47.

    * Add `expression()` factory for turning simple lambdas into Expressions, mostly for use internally by Grammars in converting custom rules.
    * Add support for "custom rules" in Grammars. These provide a hook for simple custom parsing, spelled as Python lambdas. If you want to abuse the packrat cache, you can even be stateful! (If this becomes useful, we'll add another arg for passing a proper blackboard around.) For heavy-duty needs, you can put in Compound Expressions with LazyReferences as subexpressions, and the Grammar will hook them up for optimal efficiency--no trips through the Grammar's hash at parse time.
    * Allow grammars without a default rule (in cases where there are no string rules), which leads to also allowing empty grammars. Maybe someone generating grammars dynamically will find that useful.
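The `expression()` idea can be sketched with stand-in classes (hypothetical signatures, not the library's real API): a plain callable is wrapped so it quacks like an Expression with a match method, letting ordinary Python code act as a rule.

```python
class LambdaExpression:
    """Wrap a plain callable so it behaves like an Expression."""
    def __init__(self, func, name):
        self.func = func
        self.name = name

    def match(self, text, pos=0):
        """Return the new position on a match, or None on failure."""
        return self.func(text, pos)

def expression(func, name):
    """Factory turning a simple lambda into an Expression-like object."""
    return LambdaExpression(func, name)

# A custom rule spelled as a lambda: match the literal "ab".
starts_ab = expression(
    lambda text, pos: pos + 2 if text.startswith("ab", pos) else None,
    "starts_ab",
)
assert starts_ab.match("xab", 1) == 3
assert starts_ab.match("xy") is None
```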
Commits on Jul 12, 2014
  1. Stop relying on the object identity of multiple instantiations of ().

    I don't think that's guaranteed across Python implementations.
Commits on Jul 11, 2014
  1. Improve some Grammar docs.

  2. Implement grammar-from-docstrings visitors. Close #46. Consolidate public API in __init__.

    There's no nice way to start parsing from a non-default rule using the NodeVisitor.parse() shortcut, but if you need to do that, I suppose it's time to pay for what you eat.
Commits on Jul 10, 2014
  1. Add `parse()` and `match()` convenience methods to `NodeVisitor`, and add the concept of a default grammar for a visitor.

    This makes the common case of parsing a string and applying exactly one visitor to the AST shorter and simpler.
    It is unlikely that a visitor would ever be used with more than one grammar, so I expect all visitors to have an easily chosen default grammar.
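The shortcut can be sketched with toy stand-ins (not parsimonious's real Grammar or NodeVisitor): the visitor carries a default grammar, so `visitor.parse(text)` replaces `visitor.visit(grammar.parse(text))`.

```python
class Grammar:
    """Toy stand-in: 'parsing' just tags the text with a rule name."""
    def __init__(self, rule):
        self.rule = rule
    def parse(self, text):
        return (self.rule, text)

class NodeVisitor:
    grammar = None      # subclasses set a default grammar

    def visit(self, node):
        rule, text = node
        return getattr(self, "visit_" + rule)(text)

    def parse(self, text):
        # The shortcut: parse with the default grammar, then visit.
        return self.visit(self.grammar.parse(text))

class Shouter(NodeVisitor):
    grammar = Grammar("word")
    def visit_word(self, text):
        return text.upper()

assert Shouter().parse("hi") == "HI"
```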
  2. Improve NodeVisitor documentation.

    Still not complete, but it's better.
Commits on Jun 12, 2014
  1. Show the production visit_or_term unpacks.

    Marcell suggested this while learning the lib.
Commits on Jun 11, 2014
  1. Make clear that the grammar is a string, not some magical pile of operator overloading.

    Marcell got the wrong impression.