Simple CLI tools to load a subset of Wikidata into Elasticsearch. Part of the Heritage Connector project.
There are a couple of reasons you may not want to use the Wikidata Query Service or APIs directly when running searches programmatically:
- time constraints/large volumes: APIs are rate-limited, and you can only do one text search per SPARQL query
- better search: using Elasticsearch allows for more flexible and powerful text search capabilities.* We're using our own Elasticsearch instance to do nearest neighbour search on embeddings, too.
* CirrusSearch is a MediaWiki extension that enables direct search on Wikidata using Elasticsearch; use it if you require powerful search and are happy with the rate limit.
Install from pypi: `pip install elastic_wikidata`
Or install from a local copy of the repo by running `pip install -e .` from its root folder.
elastic-wikidata needs your Elasticsearch credentials, including ELASTICSEARCH_PASSWORD, to connect to your ES instance. You can set these in one of three ways:
- Using environment variables (a sketch is shown below this list).
- Using config.ini: pass the `-c` parameter followed by a path to an ini file containing your Elasticsearch credentials.
- Passing each variable in at runtime using options.
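As a rough sketch, the environment-variable route could look like the following. Only ELASTICSEARCH_PASSWORD is named in this README; the value shown is a placeholder, and any other connection details your cluster needs would be set the same way.

```bash
# Set credentials in the shell before calling ew.
# The value is a placeholder -- substitute your own Elasticsearch password.
export ELASTICSEARCH_PASSWORD='changeme'

# Any other connection details required by your cluster (host URL, username)
# can be exported the same way, or supplied via config.ini / CLI options.
```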
Once installed, the package is accessible through the keyword `ew`. A call is structured as follows:
ew <task> <options>
Task is either `dump` (load from a Wikidata dump file) or `query` (load from a SPARQL query); both are described below.
A full list of options can be found with `ew --help`, but the following are likely to be useful:
- `--index`/`-i`: the index name to push to. If not specified at runtime, elastic-wikidata will prompt for it.
- `--limit`/`-l`: limit the number of records pushed into ES. You might want to use this for a small trial run before importing the whole thing.
- `--properties`/`-prop`: a whitespace-separated list of properties to include in the ES index, e.g. 'p31 p21', or the path to a text file containing newline-separated properties.
- `--language`/`-lang`: Wikimedia language code. Only one is supported at this time.
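For illustration, a small trial run might combine these options like so (the file path and index name are hypothetical, and the `dump` task syntax is shown in the next section):

```bash
# Hypothetical trial run: push the first 1000 records from a local dump
# into an index called wikidata-test, keeping only the P31 and P21 claims.
ew dump -p humans.ndjson -i wikidata-test -l 1000 -prop 'p31 p21' -lang en
```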
Loading from Wikidata dump (.ndjson)
ew dump -p <path_to_json> <other_options>
This is useful if you want to create one or more large subsets of Wikidata in different Elasticsearch indexes (millions of entities).
Time estimate: Loading all ~8 million humans into an AWS Elasticsearch index took me about 20 minutes. Creating the humans subset with wikibase-dump-filter took about 3 hours, following its instructions for parallelising.
- Download the complete Wikidata dump (latest-all.json.gz from the Wikimedia dumps site at dumps.wikimedia.org). This is a large file: 87GB as of July 2020.
- Use maxlath's wikibase-dump-filter to create a subset of the Wikidata dump. Note: don't use the `--simplify` flag when creating the subset; elastic-wikidata will take care of simplification.
- Run `ew dump` with the `-p` flag pointing to the JSON subset. You might want to test it with a limit first (using the `--limit`/`-l` option described above). A sketch of the whole workflow follows this list.
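Put together, the dump workflow might look roughly like this. The dump URL is the standard Wikimedia dumps location and the `--claim P31:Q5` filter comes from wikibase-dump-filter's own documentation, but treat the exact commands, file names and index name as illustrative rather than prescriptive.

```bash
# 1. Download the full Wikidata dump (tens of GB).
wget https://dumps.wikimedia.org/wikidatawiki/entities/latest-all.json.gz

# 2. Create a subset -- here, all humans (P31 = Q5) -- without --simplify.
zcat latest-all.json.gz | wikibase-dump-filter --claim P31:Q5 > humans.ndjson

# 3. Trial a limited load into Elasticsearch, then rerun without the limit.
ew dump -p humans.ndjson -i wikidata-humans -l 1000
```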
Loading from SPARQL query
ew query -p <path_to_sparql_query> <other_options>
For smaller collections of Wikidata entities it might be easier to populate an Elasticsearch index directly from a SPARQL query rather than downloading the whole Wikidata dump to take a subset.
ew query automatically paginates SPARQL queries so that a heavy query like 'return all the humans' doesn't result in a timeout error.
Time estimate: Loading 10,000 entities from Wikidata into an AWS-hosted Elasticsearch index took me about 6 minutes.
- Write a SPARQL query and save it to a text/.rq file (an illustrative sketch follows this list).
- Run `ew query` with the `-p` option pointing to the file containing the SPARQL query. Optionally add a `--page_size` for the SPARQL query.
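As a sketch, the query route could look like the following. The query text, the `?item` variable name, the page size and the index name are all assumptions for illustration; check the repo's example query for the exact form elastic-wikidata expects.

```bash
# Save an illustrative SPARQL query (all humans) to a file.
# The query and the ?item variable name are assumptions, not the package's
# documented requirements.
cat > humans.rq <<'EOF'
SELECT ?item WHERE { ?item wdt:P31 wd:Q5 . }
EOF

# Load the results into an index, paginating the SPARQL query in batches of 100.
ew query -p humans.rq -i wikidata-humans --page_size 100
```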
Temporary side effects
As of version 0.3.1, refreshing the search index is disabled for the duration of the load by default, as recommended by Elasticsearch. Refresh is re-enabled to the default interval of `1s` after the load is complete. To disable this behaviour, use the relevant flag listed in `ew --help`.
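If you want to confirm that the refresh interval has been restored after a load, you can read the index settings straight from Elasticsearch; the host and index name below are placeholders.

```bash
# Check the refresh_interval setting on the target index after a load completes.
curl -s 'http://localhost:9200/wikidata-humans/_settings?filter_path=**.refresh_interval'
```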