- To run absolutely everything, you will need to:
- Install requirements
- Install Elasticsearch
- Install Cassandra
- Install harvesters
- Install RabbitMQ (optional)
- If you only want to run harvesters locally, you do not need to install RabbitMQ
- Create and enter a virtual environment for scrapi, then go to the top-level project directory. From there, run
$ pip install -r requirements.txt
Or, if you'd like some nicer testing and debugging utilities in addition to the core requirements, run
$ pip install -r dev-requirements.txt
This will also install the core requirements like normal.
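If you haven't set up the virtual environment yet, a minimal sketch using the standard virtualenv tool looks like this (the environment name is just an example):

```bash
# create and activate a virtual environment for scrapi (name is illustrative)
virtualenv scrapi-env
source scrapi-env/bin/activate

# install the core requirements from the top-level project directory
pip install -r requirements.txt
```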
Elasticsearch is required only if "elasticsearch" is specified in your settings, or if RECORD_HTTP_TRANSACTIONS is set to True.
Note: Elasticsearch requires JDK 7.
On Mac OSX, you can install Elasticsearch with Homebrew:
$ brew install elasticsearch
On Ubuntu:
- Download and install the Public Signing Key.
$ wget -qO - https://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -
- Add the Elasticsearch repository to your /etc/apt/sources.list.
$ sudo add-apt-repository "deb http://packages.elasticsearch.org/elasticsearch/1.4/debian stable main"
- Install the package.
$ sudo apt-get update
$ sudo apt-get install elasticsearch
#### Running
```bash
$ elasticsearch
```
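Once Elasticsearch is running, you can confirm it is reachable by querying its HTTP API (the default port is 9200):

```bash
# should return a small JSON document with cluster and version info
curl -XGET 'http://localhost:9200/'
```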
Cassandra is required only if "cassandra" is specified in your settings, or if RECORD_HTTP_TRANSACTIONS is set to True.
Note: Cassandra requires JDK 7.
On Mac OSX, you can install Cassandra with Homebrew:
$ brew install cassandra
On Ubuntu:
- Check which version of Java is installed by running the following command:
$ java -version
Use the latest version of Oracle Java 7 on all nodes.
- Add the DataStax Community repository to /etc/apt/sources.list.d/cassandra.sources.list.
$ echo "deb http://debian.datastax.com/community stable main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list
- Add the DataStax repository key to your aptitude trusted keys.
$ curl -L http://debian.datastax.com/debian/repo_key | sudo apt-key add -
- Install the package.
$ sudo apt-get update
$ sudo apt-get install cassandra
#### Running
$ cassandra
Or, if you'd like Cassandra to run in the foreground, bound to your current session, run:
$ cassandra -f
and you should be good to go.
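To double-check that Cassandra is accepting connections, you can open a CQL shell against the local node (cqlsh ships with the Cassandra package):

```bash
# connects to the local node by default; exit with Ctrl-D or "exit"
cqlsh
```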
(Note: if you're developing locally, you do not have to run RabbitMQ!)
On Mac OSX:
$ brew install rabbitmq
On Ubuntu:
$ sudo apt-get install rabbitmq-server
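Installing the package does not necessarily leave the broker running; if you need to start it yourself, the stock commands look like this (service management varies by platform):

```bash
# run the broker in the foreground
rabbitmq-server

# or, on Ubuntu, manage it as a service
sudo service rabbitmq-server start
```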
You will need to have a local copy of the settings. Copy local-dist.py into your own version of local.py:
$ cp scrapi/settings/local-dist.py scrapi/settings/local.py
If you installed Cassandra and Elasticsearch earlier, you will want to add the following configuration to your local.py:
RECORD_HTTP_TRANSACTIONS = True # Only if cassandra is installed
NORMALIZED_PROCESSING = ['cassandra', 'elasticsearch']
RAW_PROCESSING = ['cassandra']
Otherwise, you will want to make sure your local.py has the following configuration:
RECORD_HTTP_TRANSACTIONS = False
NORMALIZED_PROCESSING = ['storage']
RAW_PROCESSING = ['storage']
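As a sanity check, you can print which processing backends your configuration resolves to; this assumes scrapi.settings picks up the overrides from your local.py:

```bash
# print the active processing backends (assumes local.py is in place)
python -c "from scrapi import settings; print(settings.NORMALIZED_PROCESSING, settings.RAW_PROCESSING)"
```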
This will save all harvested/normalized files to the directory archive/<source>/<document identifier>.
Note: Be careful with this; if you harvest too many documents with the storage module enabled, you could start experiencing inode errors.
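If you want to keep an eye on inode usage while harvesting with the storage module, df can report inodes rather than blocks:

```bash
# show inode usage per filesystem; watch the IUse% column
df -i
```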
If you'd like to be able to run all harvesters, you'll need to register for a PLOS API key.
Add the following line to your local.py file:
PLOS_API_KEY = 'your-api-key-here'
- From the top-level project directory, run
$ invoke beat
to start the scheduler, and
$ invoke worker
to start the worker.
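Both processes need to be running for scheduled harvesting to happen, so during development it's easiest to give each one its own terminal:

```bash
# terminal 1: start the scheduler
invoke beat

# terminal 2: start a worker to pick up the scheduled jobs
invoke worker
```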
Run all harvesters with
$ invoke harvesters
or just one with
$ invoke harvester harvester-name
For local development, running the mit harvester is recommended.
Note: harvester-name is the same as the defined harvester "short name".
Invoke a harvester for a certain start date with the --start or -s argument. Invoke a harvester for a certain end date with the --end or -e argument.
For example, to run a harvester between the dates of March 14th and March 16th 2015, run:
$ invoke harvester harvester-name --start 2015-03-14 --end 2015-03-16
Either --start or --end can also be used on their own. Not supplying arguments will default to starting the number of days specified in settings.DAYS_BACK and ending on the current date.
If --end is given with no --start, start will default to the number of days specified in settings.DAYS_BACK before the given end date.
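For example, using the mit harvester mentioned above, either bound can be supplied on its own (the dates here are just illustrative):

```bash
# harvest from March 14th 2015 up to the current date
invoke harvester mit --start 2015-03-14

# harvest the DAYS_BACK days leading up to March 16th 2015
invoke harvester mit --end 2015-03-16
```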
Writing a harvester for inclusion with scrAPI? If the provider makes their metadata available using the OAI-PMH standard, then autooai is a utility that will do most of the work for you.
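A quick way to check whether a provider speaks OAI-PMH is to hit its endpoint with the standard Identify and ListRecords verbs; the base URL here is a placeholder:

```bash
# basic repository information
curl 'http://example.org/oai?verb=Identify'

# a first page of Dublin Core records
curl 'http://example.org/oai?verb=ListRecords&metadataPrefix=oai_dc'
```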
To configure scrapi to work in a local OSF dev environment:
- Ensure 'elasticsearch' is in the NORMALIZED_PROCESSING list in scrapi/settings/local.py
- Run at least one harvester
- Configure the share_v2 alias
- Generate the provider map
Multiple SHARE indices may be used by the OSF. By default, OSF uses the share_v2 index. Activate this alias by running:
$ inv alias share share_v2
Note that aliases must be activated before the provider map is generated.
$ inv alias share share_v2
$ inv provider_map
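You can verify that the alias points where you expect with Elasticsearch's alias API:

```bash
# list all aliases and the indices they point to
curl -XGET 'localhost:9200/_aliases?pretty'
```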
To remove both the share and share_v2 indices from elasticsearch:
$ curl -XDELETE 'localhost:9200/share*'
- To run the tests for the project, just type
$ invoke test
and all of the tests in the 'tests/' directory will be run.
If you're using anaconda on your system, using pip to install all requirements from scratch from requirements.txt and dev-requirements.txt may result in an ImportError when invoking tests or harvesters.
Example:
ImportError: dlopen(/Users/username/.virtualenvs/scrapi2/lib/python2.7/site-packages/lxml/etree.so, 2): Library not loaded: libxml2.2.dylib
Referenced from: /Users/username/.virtualenvs/scrapi2/lib/python2.7/site-packages/lxml/etree.so
Reason: Incompatible library version: etree.so requires version 12.0.0 or later, but libxml2.2.dylib provides version 10.0.0
To fix:
- run pip uninstall lxml
- remove the anaconda/bin from your system path in your bash_profile
- reinstall requirements as usual
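Putting those steps together, the fix might look roughly like this (the anaconda path is illustrative and depends on where it was installed):

```bash
# remove the lxml build that was linked against anaconda's libxml2
pip uninstall lxml

# edit ~/.bash_profile and remove the anaconda bin directory from PATH,
# e.g. a line like: export PATH="$HOME/anaconda/bin:$PATH"
# then reload the profile
source ~/.bash_profile

# reinstall the requirements so lxml is rebuilt against the system libxml2
pip install -r requirements.txt
pip install -r dev-requirements.txt
```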
Answer found in this Stack Overflow question and answer.