This repository has been archived by the owner. It is now read-only.
Python package for parsing very large XML files


xmlr


If you find this repo and are considering using it, consider using xmltodict instead, since it is more feature-complete, better tested, and much more actively maintained!

It can be problematic to handle large XML files (>> 10 MB), and using Python's xml module directly leads to huge memory overhead. Most often, these large XML files are pure data files, storing highly structured data that has no intrinsic need to be stored as XML.

This package provides iterative methods for dealing with such files, reading the XML documents into a Python dict representation instead, according to the methodology specified on the page Converting Between XML and JSON. xmlr is inspired by the solutions described in the blog posts High-performance XML parsing in Python with lxml and Parsing large XML files, serially, in Python, enabling very large documents to be parsed without overtaxing memory.
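
The technique from those blog posts can be sketched with the standard library alone (this is an illustration of the general approach, not xmlr's actual implementation): process each element as its end tag is encountered, then clear it so that memory usage stays bounded regardless of document size.

```python
# Sketch of memory-bounded iterative XML parsing, assuming an XML
# document made of repeated <Record> elements. Uses only the stdlib.
import io
import xml.etree.ElementTree as etree

xml_data = b"""<Records>
  <Record><id>1</id></Record>
  <Record><id>2</id></Record>
</Records>"""

ids = []
for event, elem in etree.iterparse(io.BytesIO(xml_data), events=('end',)):
    if elem.tag == 'Record':
        ids.append(elem.find('id').text)
        elem.clear()  # drop the element's children so memory stays low

print(ids)  # ['1', '2']
```

With a real multi-gigabyte file, the file path would be passed to iterparse directly instead of an in-memory buffer.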

This package generally provides a one-way trip: after parsing, there is not necessarily a bijective relation with the XML source.

Installation

pip install xmlr

Usage

To parse an entire document, use the xmlparse method:

from xmlr import xmlparse

doc = xmlparse('very_large_doc.xml')

There is also an iterator, xmliter, that yields elements of a specified tag as they are parsed from the document:

from xmlr import xmliter

for d in xmliter('very_large_record.xml', 'Record'):
    print(d)

The desired parser can also be specified. Available methods are:

  • ELEMENTTREE - Using xml.etree.ElementTree as backend.
  • C_ELEMENTTREE - Using xml.etree.cElementTree as backend.
  • LXML_ELEMENTTREE - Using lxml.etree as backend. Requires installation of the lxml package.

These can then be used like this:

from xmlr import xmliter, XMLParsingMethods

for d in xmliter('very_large_record.xml', 'Record', parsing_method=XMLParsingMethods.LXML_ELEMENTTREE):
    print(d)

No type conversion is currently performed. A value in the output dictionary can have the type dict (a subdocument), list (an array of similar documents), str (a leaf value), or None (an empty XML leaf tag). All keys are of type str.
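
As an illustration of these conventions (a hypothetical converter, not xmlr's code), repeated sibling tags collapse into a list, a leaf element becomes a str, and an empty tag becomes None:

```python
# Hand-rolled XML-to-dict conversion following the conventions above.
import xml.etree.ElementTree as etree

def to_dict(elem):
    children = list(elem)
    if not children:
        return elem.text  # str for a leaf value, None for an empty tag
    out = {}
    for child in children:
        value = to_dict(child)
        if child.tag in out:
            # Repeated tag: promote the existing entry to a list.
            if not isinstance(out[child.tag], list):
                out[child.tag] = [out[child.tag]]
            out[child.tag].append(value)
        else:
            out[child.tag] = value
    return out

root = etree.fromstring('<doc><a>1</a><a>2</a><b/><c>x</c></doc>')
print(to_dict(root))
# {'a': ['1', '2'], 'b': None, 'c': 'x'}
```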

Tests

Tests are run with pytest:

$ py.test tests/
============================= test session starts ==============================
platform linux2 -- Python 2.7.6, pytest-2.9.1, py-1.4.31, pluggy-0.3.1
rootdir: /home/hbldh/Repos/xmlr, inifile:
collected 50 items

tests/test_iter.py ...........................
tests/test_methods.py ..
tests/test_parsing.py .....................

========================== 50 passed in 0.50 seconds ===========================

The tests fetch some XML documents from the W3Schools XML tutorials and also use a bundled, slimmed-down version of the U.S. copyright renewal records document available for download.
