DBpedia Information Extraction Framework
Get in touch with DBpedia: https://wiki.dbpedia.org/join/get-in-touch
Slack: join the #dev-team slack channel within the DBpedia Slack workspace - the main point for development updates and discussions
- About DBpedia
- Getting Started
- The DBpedia Extraction Framework
- Contribution Guidelines
DBpedia is a crowd-sourced community effort to extract structured information from Wikipedia and make this information available on the Web. DBpedia allows you to ask sophisticated queries against Wikipedia, and to link the different data sets on the Web to Wikipedia data. We hope that this work will make it easier for the huge amount of information in Wikipedia to be used in some new interesting ways. Furthermore, it might inspire new mechanisms for navigating, linking, and improving the encyclopedia itself.
To check out the projects of DBpedia, visit the official DBpedia website.
The Easy Way - Execution using the MARVIN release bot
Running the extraction framework is a relatively complex task, documented in detail in the advanced QuickStart guide. To run the extraction process the same way the DBpedia core team does, use the MARVIN release bot. The MARVIN bot automates the overall extraction process, from downloading the ontology, the mappings, and the Wikipedia dumps, to extracting and post-processing the data.
```shell
git clone https://git.informatik.uni-leipzig.de/dbpedia-assoc/marvin-config
cd marvin-config
./setup-or-reset-dief.sh
# test run of the Romanian extraction, very small
./marvin_extraction_run.sh test
# around 4-7 days
./marvin_extraction_run.sh generic
```
If you plan to work on improving the codebase of the framework, you will need to run the extraction framework on its own, as described in the QuickStart guide. This is highly recommended, since during this process you will learn a lot about the extraction framework.
Extractors are the core of the extraction framework. So far, many extractors have been developed to extract particular information from different Wikimedia projects. To learn more, check the New Extractors guide, which explains the process of writing a new extractor.
Check the Debugging Guide and learn how to debug the extraction framework.
Execution using Apache Spark
In order to speed up the extraction process, the extraction framework has been adapted to run on Apache Spark. Currently, more than half of the extractors can be executed using Spark. Extraction using Spark works slightly differently and requires a different execution setup. Check the QuickStart guide on how to run the extraction using Apache Spark.
Note: if possible, new extractors should be implemented using Apache Spark. To learn more, check the New Extractors guide, which explains the process of writing a new extractor.
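The reason extraction parallelizes so well on Spark is that each wiki page is processed independently, so a per-page extraction function can simply be mapped over a distributed collection of pages. The sketch below illustrates the idea with a plain Scala `Seq` standing in for a Spark RDD/Dataset; all names and the toy link extractor are illustrative only, not the framework's actual API.

```scala
// Illustrative stand-in for a wiki page; in the real Spark setup the
// collection of pages would be an RDD/Dataset, not a Seq.
case class WikiPage(title: String, wikiText: String)

// A per-page extraction function should be a pure, serializable function
// so that Spark can ship it to the executors.
def extractLinks(page: WikiPage): Seq[(String, String)] = {
  // naive wiki-link matcher: captures the target of [[...]] links
  val Link = """\[\[([^\]|]+)""".r
  Link.findAllMatchIn(page.wikiText)
      .map(m => page.title -> m.group(1))
      .toSeq
}

val pages = Seq(
  WikiPage("Berlin", "[[Germany]] is home to [[Berlin Cathedral]]."),
  WikiPage("Leipzig", "Leipzig lies in [[Saxony]].")
)

// Locally this is a plain flatMap; on Spark it would be something like
// sc.parallelize(pages).flatMap(extractLinks).
val links = pages.flatMap(extractLinks)
links.foreach(println)
```

Because the per-page function carries no shared mutable state, the same code shape works whether the pages live in a local collection or are partitioned across a cluster.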
The DBpedia Extraction Framework
The DBpedia community uses a flexible and extensible framework to extract different kinds of structured information from Wikipedia. The DBpedia extraction framework is written using Scala 2.8. The framework is available from the DBpedia Github repository (GNU GPL License). The change log may reveal more recent developments. More recent configuration options can be found here: https://github.com/dbpedia/extraction-framework/wiki
The DBpedia extraction framework is structured into the following modules:
- Core Module : Contains the core components of the framework.
- Dump extraction Module : Contains the DBpedia dump extraction application.
- Source : The Source package provides an abstraction over a source of Media Wiki pages.
- WikiParser : The Wiki Parser package specifies a parser, which transforms a MediaWiki page source into an Abstract Syntax Tree (AST).
- Extractor : An Extractor is a mapping from a page node to a graph of statements about it.
- Destination : The Destination package provides an abstraction over a destination of RDF statements.
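To make the Source → WikiParser → Extractor → Destination pipeline concrete, here is a heavily simplified sketch of how the pieces fit together. The real types in org.dbpedia.extraction.* are far richer; every name below (PageNode, Quad, LabelExtractor, ConsoleDestination) is a simplified stand-in for illustration, not the framework's actual API.

```scala
// A parsed wiki page, standing in for the framework's AST page node.
case class PageNode(title: String, properties: Map[String, String])

// An RDF-like statement (subject, predicate, object).
case class Quad(subject: String, predicate: String, obj: String)

// An extractor is a mapping from a page node to a graph of statements.
trait Extractor {
  def extract(page: PageNode): Seq[Quad]
}

// Toy extractor: emit a label statement for every page.
object LabelExtractor extends Extractor {
  def extract(page: PageNode): Seq[Quad] =
    Seq(Quad(s"dbr:${page.title}", "rdfs:label", page.title))
}

// A destination consumes the extracted statements.
trait Destination {
  def write(quads: Seq[Quad]): Unit
}

// Toy destination: print N-Triples-like lines to the console.
object ConsoleDestination extends Destination {
  def write(quads: Seq[Quad]): Unit =
    quads.foreach(q => println(s"${q.subject} ${q.predicate} ${q.obj} ."))
}

// Wiring the pieces together for one page:
val page = PageNode("Berlin", Map("country" -> "Germany"))
ConsoleDestination.write(LabelExtractor.extract(page))
```

Separating the extractor (pure page-to-statements mapping) from the destination (output handling) is what lets the framework combine many extractors and swap output formats independently.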
In addition to the core components, a number of utility packages offer essential functionality used by the extraction code:
- Ontology : Classes used to represent an ontology. Methods for both reading and writing ontologies are provided. All classes are located in the namespace org.dbpedia.extraction.ontology
- DataParser : Parsers to extract data from nodes in the abstract syntax tree. All classes are located in the namespace org.dbpedia.extraction.dataparser
- Util : Various utility classes. All classes are located in the namespace org.dbpedia.extraction.util
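The job of a data parser is to turn the raw string value of an AST node into a typed value. The sketch below shows the idea for integers with thousands separators, such as infobox population values; it is a simplified illustration only, not the actual code in org.dbpedia.extraction.dataparser.

```scala
// Hypothetical, simplified data parser: extract the first integer
// (with optional thousands separators) from a node's text.
object IntegerParser {
  // unanchored so the number may appear anywhere in the node text
  private val Pattern = """(-?[\d,]+)""".r.unanchored

  def parse(nodeText: String): Option[Long] =
    nodeText match {
      case Pattern(digits) =>
        try Some(digits.replace(",", "").toLong)
        catch { case _: NumberFormatException => None }
      case _ => None
    }
}

// e.g. parsing an infobox population value:
IntegerParser.parse("3,769,495 inhabitants")  // Some(3769495)
IntegerParser.parse("unknown")                // None
```

Returning Option rather than throwing lets extractors skip pages whose infobox values do not parse, instead of aborting the extraction run.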
Dump extraction Module
More recent configuration options can be found here: https://github.com/dbpedia/extraction-framework/wiki/Extraction-Instructions.
To learn more about the extraction framework, click here
If you want to work on one of the issues, assign yourself to it or at least leave a comment that you are working on it and how.
If you have an idea for a new feature, make an issue first, assign yourself to it, then start working.
Please make sure you have read the Developer's Certificate of Origin, further down on this page!
- Fork the main extraction-framework repository on GitHub.
- Clone this fork onto your machine (git clone <your_repo_url_on_github>).
- Switch to the dev branch (git checkout dev).
- From the latest revision of the dev branch, make a new development branch. Name the branch something meaningful, for example fixRestApiParams (git checkout dev -b fixRestApiParams).
- Make changes and commit them to this branch.
- Please commit regularly in small batches of things "that go together" (for example, changing a constructor and all the instance creating calls). Putting a huge batch of changes in one commit is bad for code reviews.
- In the commit messages, summarize the commit in the first line using not more than 70 characters. Leave one line blank and describe the details in the following lines, preferably in bullet points, like in 7776e31....
- When you are done with a bugfix or feature, rebase your branch onto the latest dev branch (git pull --rebase git://github.com/dbpedia/extraction-framework.git). Resolve possible conflicts and commit.
- Push your branch to GitHub (git push origin fixRestApiParams).
- Send a pull request from your branch into the dev branch of the main extraction-framework repository.
- In the description, reference the associated commit (for example, "Fixes #123 by ..." for issue number 123).
- Your changes will be reviewed and discussed on GitHub.
- In addition, Travis-CI will test if the merged version passes the build.
- If there are further changes you need to make, because Travis said the build fails or because somebody caught something you overlooked, go back to item 4. Stay on the same branch (if it is still related to the same issue). GitHub will add the new commits to the same pull request.
- When everything is fine, your changes will be merged into extraction-framework/dev. Finally, the dev branch together with your improvements will be merged into the master branch.
Please keep in mind:
- Try not to modify the indentation. If you want to re-format, use a separate "formatting" commit in which no functionality changes are made.
- Never rebase the master onto a development branch (i.e. never rebase onto extraction-framework/master while on a development branch). Only rebase your branch onto the dev branch, and only if nobody has already pulled from the development branch!
- If you already pushed a branch to GitHub, later rebased the master onto this branch and then tried to push again, GitHub won't let you, saying "To prevent you from losing history, non-fast-forward updates were rejected". If (and only if) you are sure that nobody has already pulled from this branch, add --force to the push command.
"Don’t rebase branches you have shared with another developer."
"Rebase is awesome, I use rebase exclusively for everything local. Never for anything that I've already pushed."
"Never ever rebase a branch that you pushed, or that you pulled from another person"
- In general, we prefer Scala over Java.
- Guides to set up your development environment for IntelliJ IDEA or Eclipse.
- Get help with the Maven build or another form of installation.
- Download some data to work with.
- How to run from Scala/Java or from a JAR.
- Having different troubles? Check the troubleshooting page or post on https://forum.dbpedia.org.
Important: Developer's Certificate of Origin
The source code is under the terms of the GNU General Public License, version 2.