Open-source tools for working with BIBFRAME (by default BIBFRAME Lite) and, more generally, Library Linked Data.


Requires Python 3.4 or more recent (also tested with PyPy3.5 v5.7). To install dependencies:

pip install -r requirements.txt

Then install pybibframe:

python setup.py install


Converting MARC/XML to RDF or Versa output (command line)

Note: Versa is a model for Web resources and relationships. Think of it as an evolution of Resource Description Framework (RDF) that's at once simpler and more expressive. It's the default internal representation for pybibframe, though regular RDF is an optional output.

marc2bf records.mrx

Reads MARC/XML from the file records.mrx and outputs a Versa representation of the resulting BIBFRAME records in JSON format. You can send that output to a file as well:

marc2bf -o resources.versa.json records.mrx

The Versa representation is the primary format for ongoing, pipeline processing.
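Because the Versa output is plain JSON, it can be post-processed with ordinary JSON tooling. A minimal sketch, assuming the output is a JSON array of link quads of the form [origin, relationship, target, attributes] (verify against your actual output; the sample data below is hypothetical):

```python
import json

def links_by_relationship(versa_json_text, rel):
    """Filter Versa link quads by relationship IRI.

    Assumes each link is a 4-item list: [origin, relationship, target, attributes].
    """
    links = json.loads(versa_json_text)
    return [link for link in links if link[1] == rel]

# Hypothetical sample in the assumed quad shape:
sample = json.dumps([
    ["http://example.org/work/1", "http://www.w3.org/2000/01/rdf-schema#label",
     "Das Innere des Glaspalastes in London", {}],
    ["http://example.org/work/1", "http://bibfra.me/vocab/lite/creator",
     "http://example.org/agent/7", {}],
])

labels = links_by_relationship(sample, "http://www.w3.org/2000/01/rdf-schema#label")
```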

If you want an RDF/Turtle representation of this file you can do:

marc2bf -o resources.versa.json --rdfttl resources.ttl records.mrx

If you want an RDF/XML representation of this file you can do:

marc2bf -o resources.versa.json --rdfxml resources.rdf records.mrx

These two options do build the full RDF model in memory, so they can slow things down quite a bit.

You can get the source MARC/XML from standard input:

curl | marc2bf

In this case a record is pulled from the Web, in particular Library of Congress Online Catalog / LCCN Permalink. Another example, Das Innere des Glaspalastes in London:

curl | marc2bf

You can process more than one MARC/XML file at a time by listing them on the command line:

marc2bf records1.mrx records2.mrx records3.mrx

Or by using wildcards:

marc2bf records?.mrx

PyBibframe is highly configurable and extensible. You can specify plug-ins from the command line. You need to specify the Python module from which the plugins can be imported and a configuration file specifying how the plugins are to be used. For example, to use the linkreport plugin that comes with PyBibframe you can do:

marc2bf -c config1.json --mod=bibframe.plugin records.mrx

Where the contents of config1.json might be:

    "plugins": [
            {"id": "",
             "lookup": {
                 "": "",
                 "": ""

Which in this case will add RDFS label statements for Works and Instances to the output.
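Before handing a configuration file to marc2bf, it can be worth checking its basic shape. A minimal validation sketch (the "plugins", "id", and "lookup" keys follow the example above; everything else here, including the example plugin IRI, is my own illustration):

```python
import json

def validate_config(text):
    """Parse a marc2bf JSON config and do basic shape checks on the plugins list."""
    config = json.loads(text)
    plugins = config.get("plugins", [])
    if not isinstance(plugins, list):
        raise ValueError("'plugins' must be a list")
    for plugin in plugins:
        if "id" not in plugin:
            raise ValueError("each plugin entry needs an 'id'")
    return config

# Hypothetical plugin id for illustration:
config = validate_config('{"plugins": [{"id": "http://example.org/plugin", "lookup": {}}]}')
```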

Converting MARC/XML to RDF or Versa output (API)

The bibframe.reader.bfconvert function can be used as an API to run the conversion.

>>> from bibframe.reader import bfconvert
>>> inputs = open('records.mrx', 'r')
>>> out = open('resources.versa.json', 'w')
>>> bfconvert(inputs=inputs, entbase='', out=out)
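For batch conversion via the API you can loop over input files and derive output names. A hedged sketch; the output-naming convention and the helper names are my own, not part of the pybibframe API:

```python
from pathlib import Path
# from bibframe.reader import bfconvert  # available once pybibframe is installed

def versa_output_name(marcxml_path):
    """Map e.g. records.mrx -> records.versa.json (an illustrative convention)."""
    return str(Path(marcxml_path).with_suffix(".versa.json"))

def convert_all(paths, convert=None):
    """Run a conversion function (e.g. bfconvert) over each MARC/XML file."""
    for path in paths:
        outpath = versa_output_name(path)
        if convert is not None:
            with open(path) as inp, open(outpath, "w") as out:
                convert(inputs=inp, out=out)
        yield outpath

# Pass convert=bfconvert to actually run the conversion:
outputs = list(convert_all(["records1.mrx", "records2.mrx"]))
```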


  • marcspecials-vocab: List of vocabulary (base) IRIs to qualify relationships and resource types generated from processing the special MARC fields 006, 007, 008 and the leader.


'transforms': {
    'bib': '',
}

See also

Some open-source tools for working with BIBFRAME.

Note: yaz-marcdump is very useful to have around (you can use it, for example, to convert other MARC formats to MARC/XML).

Download it, unpack it, then do:

$ ./configure --prefix=$HOME/.local
$ make && make install

If you're on a Debian-based Linux distribution, you might find these installation notes useful.

MarcEdit can also convert to MARC/XML. Just install it, select "MARC Tools" from the menu, choose your input file, specify an output file, and pick the conversion you need, e.g. "MARC21->MARC21XML" for MARC to MARC/XML. Note that a UTF-8 output option is also available.



  • Possible Python injection attack via configs (even strictly in JSON). Make sure you check for tainting.
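One simple mitigation sketch: whitelist the top-level keys you expect and reject string values that look like executable Python. This is purely illustrative, not a complete defense, and the allowed-key set below is an assumption:

```python
import json

ALLOWED_KEYS = {"plugins", "transforms", "marcspecials-vocab"}  # assumed key set
SUSPICIOUS = ("__", "import ", "eval(", "exec(")  # crude taint markers

def check_config(text):
    """Reject configs with unexpected top-level keys or suspicious string values."""
    config = json.loads(text)
    for key in config:
        if key not in ALLOWED_KEYS:
            raise ValueError("unexpected config key: %r" % key)

    def walk(value):
        # Recursively scan all string values in the config tree.
        if isinstance(value, str):
            if any(marker in value for marker in SUSPICIOUS):
                raise ValueError("suspicious value: %r" % value)
        elif isinstance(value, dict):
            for v in value.values():
                walk(v)
        elif isinstance(value, list):
            for v in value:
                walk(v)

    walk(config)
    return config

ok = check_config('{"plugins": [{"id": "http://example.org/p"}]}')
```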


pybibframe development, led by Zepheira, has been supported in part by the Library of Congress and BIBFLOW (an IMLS project of the UC Davis library), with thanks to contributions and refinements to the default transformation recipes made by librarians participating in Zepheira's Linked Data and BIBFRAME Practical Practitioner Training program.