The CKAN DataStore provides a database for structured storage of data together with a powerful Web-accessible Data API, all seamlessly integrated into the CKAN interface and authorization system.
Note
The DataStore requires PostgreSQL 9.0 or later.
Warning
Follow the steps below carefully and double-check that the settings are correct. Incorrect settings could lead to serious security issues.
Earlier versions of the DataStore required a custom setup of ElasticSearch and Nginx. This is no longer the case: the DataStore now uses the relational database management system PostgreSQL. However, you should set up a separate database for the DataStore and create a read-only user to make your CKAN and DataStore installation safe.
In your config file ensure that the datastore extension is enabled:
ckan.plugins = datastore
Also ensure that the ckan.datastore.write_url and ckan.datastore.read_url variables are set:
ckan.datastore.write_url = postgresql://writeuser:pass@localhost/datastore
ckan.datastore.read_url = postgresql://readonlyuser:pass@localhost/datastore
A few things have to be kept in mind:
- The DataStore cannot be on the CKAN database (except for testing)
- The write user (i.e. writeuser) and the read-only user (i.e. readonlyuser) cannot be the same
To create a new database with a write user and read-only user, use this paster command:
paster datastore create-all SQL_SUPER_USER
To test the set-up you can create a new DataStore. To do so you can run the following command:
curl -X POST http://127.0.0.1:5000/api/3/action/datastore_create -H "Authorization: {YOUR-API-KEY}" -d '{"resource_id": "{RESOURCE-ID}", "fields": [ {"id": "a"}, {"id": "b"} ], "records": [ { "a": 1, "b": "xyz"}, {"a": 2, "b": "zzz"} ]}'
A table named after the resource id should have been created on your DataStore database. Visiting the following URL should return a response from the DataStore with the records inserted above:
http://127.0.0.1:5000/api/3/action/datastore_search?resource_id={RESOURCE_ID}
The DataStore is distinct but complementary to the FileStore (see filestore
). In contrast to the the FileStore which provides 'blob' storage of whole files with no way to access or query parts of that file, the DataStore is like a database in which individual data elements are accessible and queryable. To illustrate this distinction, consider storing a spreadsheet file like a CSV or Excel document. In the FileStore this file would be stored directly. To access it you would download the file as a whole. By contrast, if the spreadsheet data is stored in the DataStore, one would be able to access individual spreadsheet rows via a simple web API, as well as being able to make queries over the spreadsheet contents.
Often, one wants data that is added to CKAN (whether it is linked to or uploaded to the FileStore) to be automatically added to the DataStore. This requires some processing to extract the data from your files and add it to the DataStore in a format the DataStore can handle.
This task of automatically parsing and then adding data to the DataStore is performed by a DataStorer, a queue process that runs asynchronously and can be triggered by uploads or other activities. The DataStorer is an extension and can be found, along with installation instructions, at: https://github.com/okfn/ckanext-datastorer
The DataStore's Data API, which derives from the underlying data table, is RESTful and JSON-based with extensive query capabilities.
Each resource in a CKAN instance can have an associated DataStore 'table'. The basic API for accessing the DataStore is outlined below. For a detailed tutorial on using this API see using-data-api.
Note
Lists can always be expressed in different ways. It is possible to use lists, comma separated strings or single items. These are all valid lists: ['foo', 'bar'], 'foo, bar', "foo", "bar" and 'foo'.
The datastore_create API endpoint allows a user to post JSON data to be stored against a resource. This endpoint also supports altering tables, aliases and indexes and bulk insertion. The JSON must be in the following form:
{
    resource_id: # resource id that the data is going to be stored against
    aliases: # list of names for read-only aliases to the resource
    fields: # a list of dictionaries of fields/columns and their extra metadata
    records: # a list of dictionaries of the data, e.g. [{"dob": "2005", "some_stuff": ['a', 'b']}, ...]
    primary_key: # list of fields that represent a unique key
    indexes: # indexes on the table
}
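As an illustrative sketch, a datastore_create request body could be assembled like this before being posted with curl as shown above. The resource id, alias, field names and record values are all placeholders, not values from a real CKAN instance:

```python
import json

# All ids and values below are placeholders for illustration.
payload = {
    "resource_id": "{RESOURCE-ID}",           # id of an existing CKAN resource
    "aliases": ["employees"],                 # optional read-only alias
    "fields": [{"id": "name", "type": "text"},
               {"id": "age", "type": "int"}],
    "primary_key": ["name"],                  # needed later for upsert/update
    "records": [{"name": "ada", "age": 36},
                {"name": "grace", "age": 85}],
}
body = json.dumps(payload)
```

Declaring the primary_key at creation time is what later allows the upsert and update methods to work.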
See fields and records for details on how to lay them out.
The datastore_delete API endpoint allows a user to delete records from a resource. The JSON must be in the following form:
{
    resource_id: # resource id of the data that is going to be deleted
    filter: # dictionary of matching conditions to delete
            # e.g. {'key1': 'a', 'key2': 'b'}
            # this is equivalent to "delete from table where key1 = 'a' and key2 = 'b'"
}
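A minimal sketch of a datastore_delete body, with a placeholder resource id; the filter keys must match existing columns:

```python
import json

# Placeholder resource id; deletes rows where key1 = 'a' AND key2 = 'b'.
payload = {
    "resource_id": "{RESOURCE-ID}",
    "filter": {"key1": "a", "key2": "b"},
}
body = json.dumps(payload)
```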
The datastore_upsert API endpoint allows a user to add or edit records in an existing DataStore resource. In order for the upsert and update methods to work, a unique key has to be defined via the datastore_create API endpoint. The JSON must be in the following form:
{
resource_id: # resource id that the data is going to be stored under.
records: # a list of dictionaries of the data, e.g. [{"dob": "2005", "some_stuff": ['a', 'b']}, ...]
method: # the method to use to put the data into the datastore
# possible options: upsert (default), insert, update
}
upsert
    Update the record if one with the same unique key already exists, otherwise insert it. Requires a unique key.
insert
    Insert only. This method is faster than upsert, but will fail if any inserted record matches an existing one. Does not require a unique key.
update
    Update only. An exception will occur if the key that should be updated does not exist. Requires a unique key.
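As a sketch, an upsert body might look like the following. It assumes (hypothetically) that a unique key on "name" was declared when the resource's DataStore table was created:

```python
import json

# Assumes a unique key on "name" was defined via datastore_create.
payload = {
    "resource_id": "{RESOURCE-ID}",
    "method": "upsert",                       # or "insert" / "update"
    "records": [{"name": "ada", "age": 37}],  # updated if "name" exists, else inserted
}
body = json.dumps(payload)
```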
The datastore_search API endpoint allows a user to search data in a resource. The JSON for searching must be in the following form:
{
resource_id: # the resource id to be searched against
filters: # dictionary of matching conditions to select, e.g. {'key1': 'a', 'key2': 'b'}
# this will be equivalent to "select * from table where key1 = 'a' and key2 = 'b' "
q: # full text query
plain: # treat as plain text query (default: true)
language: # language of the full text query (default: english)
limit: # limit the amount of rows to size (default: 100)
offset: # offset the amount of rows
fields: # list of fields to return, in that order; defaults (empty or not present) to all fields in fields order
sort: # ordered list of field names with optional direction, e.g.: "fieldname1, fieldname2 desc"
}
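To make the parameters concrete, a hypothetical datastore_search body combining a filter, a full text query and paging could be built like this (all values are placeholders):

```python
import json

# Placeholder resource id; filters are AND-combined equality conditions.
payload = {
    "resource_id": "{RESOURCE-ID}",
    "filters": {"key1": "a"},   # select rows where key1 = 'a'
    "q": "jones",               # optional full text query
    "limit": 10,                # return at most 10 rows
    "offset": 0,
    "sort": "key1 desc",
}
body = json.dumps(payload)
```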
The datastore_search_sql API endpoint allows a user to search data in a resource or connect multiple resources with join expressions. The underlying SQL engine is the PostgreSQL engine. The JSON for searching must be in the following form:
{
sql: # a single sql select statement
}
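A sketch of a datastore_search_sql body; the table name in the SELECT statement is the resource id or one of its aliases (a placeholder here):

```python
import json

# The quoted table name is a placeholder resource id or alias.
sql = "SELECT * FROM \"{RESOURCE-ID}\" WHERE key1 = 'a' LIMIT 5"
body = json.dumps({"sql": sql})
```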
Note
HTSQL is not in the core DataStore. To use it, it is necessary to install the ckanext-htsql extension available at https://github.com/okfn/ckanext-htsql.
The datastore_search_htsql API endpoint allows a user to search data in a resource using the HTSQL query expression language. The JSON for searching must be in the following form:
{
htsql: # an HTSQL query statement
}
Fields define the column names and the type of the data in a column. A field is defined as follows:
{
"id": # a string which defines the column name
"type": # the data type for the column
}
Field types are optional and will be guessed by the DataStore from the provided data. However, setting the types ensures that future inserts will not fail because of wrong types. See valid-types for details on which types are valid.
Example:
[
{
"id": "foo",
"type": "int4"
},
{
"id": "bar"
# type is optional
}
]
A record is the data to be inserted in a table and is defined as follows:
{
"<id>": # data to be set
# .. more data
}
Example:
[
{
"foo": 100,
"bar": "Here's some text"
},
{
"foo": 42
}
]
The DataStore supports all types supported by PostgreSQL, as well as a few additions. A list of the PostgreSQL types can be found in the type section of the documentation. Below you can find a list of the most common data types. The json type has been added as storage for nested data.
In addition to the types listed below, you can also use array types. They are defined by prepending a _ or appending [] or [n], where n denotes the length of the array. An arbitrarily long array of integers would be defined as int[].
text
    Arbitrary text data, e.g. Here's some text.
json
    Arbitrary nested json data, e.g. {"foo": 42, "bar": [1, 2, 3]}. Please note that this type is a custom type that is wrapped by the DataStore.
date
    Date without time, e.g. 2012-5-25.
time
    Time without date, e.g. 12:42.
timestamp
    Date and time, e.g. 2012-10-01T02:43Z.
int
    Integer numbers, e.g. 42, 7.
float
    Floats, e.g. 1.61803.
bool
    Boolean values, e.g. true, 0.
You can find more information about the formatting of dates in the date/time types section of the PostgreSQL documentation.
A resource in the DataStore can have multiple aliases that are easier to remember than the resource id. Aliases can be created and edited with the datastore_create API endpoint. All aliases can be found in a special view called _table_metadata.
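Since _table_metadata is exposed like any other resource, it can be queried through datastore_search. A sketch of building such a request URL, assuming the local development host and port used in the examples above:

```python
from urllib.parse import urlencode

# Host and port assume a local development server.
base = "http://127.0.0.1:5000/api/3/action/datastore_search"
url = base + "?" + urlencode({"resource_id": "_table_metadata"})
```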
The DataStore supports querying with multiple API endpoints. They are similar but support different features. The following list gives an overview of the different methods.
                           | datastore_search | datastore_search_sql | datastore_search_htsql
---------------------------|------------------|----------------------|-----------------------
Status                     | Stable           | Stable               | Available as extension
Ease of use                | Easy             | Complex              | Medium
Flexibility                | Low              | High                 | Medium
Query language             | Custom (JSON)    | SQL                  | HTSQL
Connect multiple resources | No               | Yes                  | Not yet
Use aliases                | Yes              | Yes                  | Yes