
Saxerator

Saxerator is a SAX-based XML-to-Hash parser designed for parsing very large files into manageable chunks. Rather than dealing directly with SAX callback methods, Saxerator gives you Enumerable access to chunks of an XML document. This approach is ideal for large XML files containing a collection of elements that you can process independently.

Each XML chunk is parsed into a JSON-like Ruby Hash structure for consumption.


You can parse any valid XML in three simple steps.

  1. Initialize the parser
  2. Tell it which tag you care about
  3. Perform your work in an 'each' block, or using any Enumerable method
parser = Saxerator.parser(File.new("rss.xml"))

parser.for_tag(:item).each do |item|
  # where the xml contains <item><title>...</title><author>...</author></item>
  # item will look like {'title' => '...', 'author' => '...'}
  puts "#{item['title']}: #{item['author']}"
end

# a String is returned here since the given element contains only character data
puts "First title: #{parser.for_tag(:title).first}"

Attributes are stored as part of the Hash or String object they relate to.

# author is a String here, but also responds to .attributes
primary_authors = parser.for_tag(:author).select { |author| author.attributes['type'] == 'primary' }

You can combine predicates to isolate just the tags you want.

parser.for_tag(:name).each { |x| all_the_names_in_a_document << x }
parser.for_tag(:name).at_depth(2).each { |x| names_nested_under_document_root << x }
parser.for_tag(:name).within(:author).each { |x| author_names << x }

Known Issues

  • JRuby closes the file stream at the end of parsing, so to perform multiple parsing passes over a file you will need to instantiate a new parser with a new File object.


Why the name 'Saxerator'?

It's a combination of SAX + Enumerator.

Why use Saxerator over regular SAX parsing?

Much of the SAX parsing code I've written over the years has fallen into a pattern that Saxerator encapsulates: marshal a chunk of an XML document into an object, operate on that object, then move on to the next chunk. Saxerator alleviates the pain of marshalling and allows you to focus solely on operating on the document chunk.

Why not DOM parsing?

DOM parsers load the entire document into memory. Saxerator only holds a single chunk in memory at a time. If your document is very large, this can be an important consideration.


Saxerator was inspired by Nori and Gregory Brown's Practicing Ruby.

Legal Stuff

Copyright © Bradley Schaefer. MIT License (see LICENSE file).
