pyre2: Python RE2 wrapper for linear-time regular expressions
pyre2 is a Python extension that wraps Google's RE2 regular expression library. The RE2 engine compiles (strictly) regular expressions to deterministic finite automata, which guarantees linear-time behavior.
Intended as a drop-in replacement for re. Unicode is supported by encoding to UTF-8, and bytes strings are treated as UTF-8 when the UNICODE flag is given. For best performance, work with UTF-8 encoded bytes strings.
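As a quick sketch of the intended usage (the fallback import is the idiom recommended later in this README; the sample pattern and strings are my own, not from the project's tests):

```python
try:
    import re2 as re  # linear-time RE2 engine, when available
except ImportError:
    import re  # fall back to the built-in backtracking engine

# Plain bytes patterns and subject strings avoid any encode/decode overhead:
words = re.findall(rb'\w+', b'linear time matching')
print(words)  # [b'linear', b'time', b'matching']
```

Because the module mirrors the re API, the rest of the calling code needs no changes.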
Normal usage for Linux/Mac/Windows:
$ pip install pyre2
Requirements for building the C++ extension from the repo source:
- A build environment with a C++ compiler (e.g., sudo apt-get install build-essential)
- Build tools and libraries: RE2, pybind11, and cmake installed in the build environment
  - On Ubuntu/Debian: sudo apt-get install build-essential cmake ninja-build python3-dev cython3 pybind11-dev libre2-dev
  - On Gentoo, install dev-util/cmake, dev-python/pybind11, and dev-libs/re2
  - For a venv you can install the pybind11, cmake, and cython packages from PyPI
On MacOS, use the brew package manager:
$ brew install -s re2 pybind11
On Windows, use the vcpkg package manager:
$ vcpkg install re2:x64-windows pybind11:x64-windows
You can pass cmake environment variables to alter the build type, specify the cmake generator, or pass a toolchain file (the latter is required on Windows). For example:
$ CMAKE_GENERATOR="Unix Makefiles" CMAKE_TOOLCHAIN_FILE=clang_toolchain.cmake tox -e deploy
For development, get the source:
$ git clone git://github.com/andreasvc/pyre2.git
$ cd pyre2
$ make install
The stated goal of this module is to be a drop-in replacement for re, i.e.:

try:
    import re2 as re
except ImportError:
    import re
That being said, there are features of the re module that this module may never have; these will be handled through fallback to the original re module:
- lookahead assertions
- backreferences (\\n in search pattern)
- \W and \S not supported inside character classes
On the other hand, Unicode character classes are supported (e.g., \p{Greek}).
Syntax reference: https://github.com/google/re2/wiki/Syntax
However, there are times when you may want to be notified of a failover. The
set_fallback_notification determines the behavior in these cases:
try:
    import re2 as re
except ImportError:
    import re
else:
    re.set_fallback_notification(re.FALLBACK_WARNING)
set_fallback_notification takes three values: re.FALLBACK_QUIETLY (the default, no notification), re.FALLBACK_WARNING (raise a warning), and re.FALLBACK_EXCEPTION (raise an exception).
Documentation: consult the docstrings in the source code, or interactively through ipython or pydoc re2, etc.
Unicode strings are fully supported, but note that RE2 works with UTF-8 encoded strings under the hood, which means that unicode strings need to be encoded and decoded back and forth. There are two important factors:
- whether a unicode pattern and search string is used (will be encoded to UTF-8 internally)
- the UNICODE flag: whether operators such as \w recognize Unicode characters
To avoid the overhead of encoding and decoding to UTF-8, it is possible to pass UTF-8 encoded bytes strings directly, but still treat them as unicode strings by passing the UNICODE flag:

In : re2.findall(u'\w'.encode('utf8'), u'Mötley Crüe'.encode('utf8'), flags=re2.UNICODE)
Out: ['M', '\xc3\xb6', 't', 'l', 'e', 'y', 'C', 'r', '\xc3\xbc', 'e']
In : re2.findall(u'\w'.encode('utf8'), u'Mötley Crüe'.encode('utf8'))
Out: ['M', 't', 'l', 'e', 'y', 'C', 'r', 'e']
However, note that the indices in Match objects will refer to the bytes string. The indices of the match in the unicode string could be computed by decoding/encoding, but this is done automatically and more efficiently if you pass the unicode string:

>>> re2.search(u'ü'.encode('utf8'), u'Mötley Crüe'.encode('utf8'), flags=re2.UNICODE)
<re2.Match object; span=(10, 12), match='\xc3\xbc'>
>>> re2.search(u'ü', u'Mötley Crüe', flags=re2.UNICODE)
<re2.Match object; span=(9, 10), match=u'\xfc'>
Finally, if you want to match bytes without regard for Unicode characters, pass bytes strings and leave out the UNICODE flag (this will cause Latin 1 encoding to be used with RE2 under the hood):

>>> re2.findall(br'.', b'\x80\x81\x82')
['\x80', '\x81', '\x82']
Performance is of course the point of this module, so it better perform well.
Regular expressions vary widely in complexity, and the salient feature of this module is that it behaves well asymptotically. That being said, for very simple substitutions I've found that occasionally Python's built-in re module is actually slightly faster. However, when the re module gets slow, it gets really slow, while this module maintains its linear-time behavior.
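As an illustration (my own, not from the benchmarks below) of the kind of pattern where a backtracking engine degrades: the classic 'a?' * n + 'a' * n pattern matched against 'a' * n takes time exponential in n with Python's re, but linear time under RE2. The sketch falls back to the built-in re if pyre2 is not installed, with n kept small so it finishes quickly either way:

```python
import time

try:
    import re2 as regex_mod  # RE2: matching time stays linear as n grows
except ImportError:
    import re as regex_mod   # backtracking: each +1 to n roughly doubles the time

n = 15  # small on purpose; try raising this with each engine
pattern = 'a?' * n + 'a' * n
text = 'a' * n

start = time.perf_counter()
match = regex_mod.match(pattern, text)
elapsed = time.perf_counter() - start
print(match is not None, f'{elapsed:.4f}s')
```

The match always succeeds (every 'a?' matches empty); only the time to find it differs between engines.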
In the below example, I'm running the tests against 8MB of text from the colossal Wikipedia XML file. I'm running them multiple times, being careful to use the timeit module for accurate measurements. To see more details, please see the performance script.
|Test|Description|# total runs|`re` time(s)|`re2` time(s)|% `re` time|`regex` time(s)|% `regex` time|
|---|---|---|---|---|---|---|---|
|Findall URI\|Email|Find list of '([a-zA-Z][a-zA-Z0-9]*)://([^ /]+)(/[^ ]*)?\|([^ @]+)@([^ @]+)'|2|6.262|0.131|2.08%|5.119|2.55%|
|Replace WikiLinks|This test replaces links of the form [[Obama\|Barack_Obama]] to Obama.|100|4.374|0.815|18.63%|1.176|69.33%|
|Remove WikiLinks|This test splits the data by the <page> tag.|100|4.153|0.225|5.43%|0.537|42.01%|
Feel free to add more speed tests to the bottom of the script and send a pull request my way!
The tests show the following differences with Python's re module:
- The $ operator in Python's re matches twice if the string ends with \n. This can be simulated using \n?$, except when doing substitutions.
- The pyre2 module and Python's re may behave differently with nested groups. See tests/test_emptygroups.txt for examples.
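The first difference can be demonstrated with the built-in re module alone (a small sketch of mine, not taken from the test suite):

```python
import re

# Python's re lets $ match both before a string-final \n and at the very end,
# so a zero-width substitution anchored at $ fires twice:
assert re.sub('$', '#', 'abc\n') == 'abc#\n#'

# For searching, an engine whose $ matches only at the true end of the
# string can emulate this behaviour with \n?$:
assert re.search(r'abc\n?$', 'abc\n') is not None
```

As noted above, the \n?$ workaround does not carry over to substitutions, since the optional \n is then consumed by the match.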
Please report any further issues you find with this module.
If you would like to help, one thing that would be very useful is writing comprehensive tests for this. It's actually really easy:
- Come up with regular expression problems using the regular python 're' module.
- Write a session in Python interactive-interpreter (traceback) format.
- Replace your import re with import re2 as re.
- Save it as test_<name>.txt in the tests directory. You can comment on it however you like and indent the code with 4 spaces.
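For example, a (hypothetical) tests/test_digits.txt following that format could contain:

```
Matching runs of digits:

    >>> import re2 as re
    >>> re.findall(r'\d+', 'a1b22c333')
    ['1', '22', '333']
```

The free-form text acts as a comment, and the indented session is what gets checked.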
This code builds on the following projects (in chronological order):