Text Extraction API for SilverStripe CMS (mostly used with 'fulltextsearch' module)


Text Extraction Module



Provides an extraction API for file content, which can hook into different extractor engines based on availability and the parsed file format. The output is always a string: the file content.

Via the FileTextExtractable extension, this logic can be used to cache the extracted content on a DataObject subclass (usually File).

Note: Previously part of the sphinx module.


Supported Formats

  • HTML (built-in)
  • PDF (with XPDF or Solr)
  • Microsoft Word, Excel, Powerpoint (Solr)
  • OpenOffice (Solr)
  • CSV (Solr)
  • RTF (Solr)
  • EPub (Solr)


Installation

The recommended installation is through Composer. Add the following to your composer.json:

	"require": {
		"silverstripe/textextraction": "*"

The module depends on the Guzzle HTTP library, which is automatically installed by Composer. Alternatively, install Guzzle through PEAR and ensure it's in your include_path.
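The Composer steps above can also be run from the command line; `composer require` updates composer.json and installs the package in one step:

```shell
# Install the module and its dependencies (Guzzle) in one step
composer require "silverstripe/textextraction:*"
```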



Configuration

By default, only extraction from HTML documents is supported. No configuration is required for that, unless you want to make the content available through your DataObject subclass. In this case, add the following to mysite/_config.php:

DataObject::add_extension('File', 'FileTextExtractable');


XPDF

PDFs require special handling, for example through the XPDF command-line utility pdftotext. Follow the XPDF installation instructions; its presence will be detected automatically. You can optionally set the binary path in mysite/_config/config.yml:

	PDFTextExtractor:
	  binary_location: /my/path/pdftotext
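To confirm the binary is installed and working before relying on auto-detection, you can run it directly (assuming pdftotext is on your PATH and /my/path/myfile.pdf is a readable PDF; both are placeholders):

```shell
# Locate the binary, then extract a PDF's text to stdout as a sanity check
which pdftotext
pdftotext /my/path/myfile.pdf -
```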

Apache Solr

Apache Solr is a fulltext search engine, and is often used alongside this module for that purpose. More importantly here, it has bindings to Apache Tika through the ExtractingRequestHandler interface. This allows Solr to inspect the contents of various file formats, such as Office documents and PDF files. The textextraction module retrieves the output of this service, rather than altering the index. With the raw text output, you can decide to store it in a database column for fulltext search in your database driver, or even pass it back to Solr as part of a full index update.

In order to use Solr, you need to configure a URL for it (in mysite/_config/config.yml):

	SolrCellTextExtractor:
	  base_url: 'http://localhost:8983/solr/update/extract'

Note that in case you're using multiple cores, you'll need to add the core name to the URL (e.g. 'http://localhost:8983/solr/PageSolrIndex/update/extract'). The "fulltextsearch" module uses multiple cores by default, and comes prepackaged with a Solr server. It's a stripped-down version of Solr; follow that module's README on how to add Apache Tika text extraction capabilities.
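You can test the extract endpoint independently of the module. Solr's ExtractingRequestHandler accepts a multipart file upload, and with extractOnly=true it returns the extracted text instead of indexing it (this sketch assumes Solr is running at localhost:8983 with the extract handler enabled, and document.pdf is a placeholder):

```shell
# Ask Solr/Tika to extract text from a PDF without indexing it
curl 'http://localhost:8983/solr/update/extract?extractOnly=true&extractFormat=text' \
	-F 'myfile=@document.pdf'
```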

You need to ensure that some indexable property on your object returns the contents, either by directly accessing FileTextExtractable->extractFileAsText(), or by writing your own method around FileTextExtractor->getContent() (see "Usage" below). The property should be listed in your SolrIndex subclass, e.g. as follows:

class MyDocument extends DataObject {
	static $db = array('Path' => 'Text');
	function getContent() {
		$extractor = FileTextExtractor::for_file($this->Path);
		// for_file() returns false if no extractor supports the file format
		return $extractor ? $extractor->getContent($this->Path) : null;
	}
}

class MyDocumentSolrIndex extends SolrIndex {
	function init() {
		$this->addClass('MyDocument');
		$this->addFulltextField('Content', 'HTMLText');
	}
}

Note: This isn't a terribly efficient way to process large amounts of files, since each HTTP request is run synchronously.


Usage

Manual extraction:

$myFile = '/my/path/myfile.pdf';
$extractor = FileTextExtractor::for_file($myFile);
$content = $extractor ? $extractor->getContent($myFile) : null;

Extraction with FileTextExtractable extension applied:

$myFileObj = File::get()->First();
$content = $myFileObj->extractFileAsText();