Datasette provides a JSON API for your SQLite databases. Anything you can do through the Datasette user interface can also be accessed as JSON via the API.
To access the API for a page, either click on the .json link on that page or edit the URL and add a .json extension to it.
The default JSON representation of data from a SQLite table or custom query looks like this:
{
"ok": true,
"rows": [
{
"id": 3,
"name": "Detroit"
},
{
"id": 2,
"name": "Los Angeles"
},
{
"id": 4,
"name": "Memnonia"
},
{
"id": 1,
"name": "San Francisco"
}
],
"truncated": false
}
"ok"
is always true
if an error did not occur.
The "rows"
key is a list of objects, each one representing a row.
The "truncated"
key lets you know if the query was truncated. This can happen if a SQL query returns more than 1,000 results (or the :ref:`setting_max_returned_rows` setting).
For table pages, an additional key "next"
may be present. This indicates that the next page in the pagination set can be retrieved using ?_next=VALUE
.
The _shape parameter can be used to access alternative formats for the rows key which may be more convenient for your application. There are several options:
- ?_shape=objects - "rows" is a list of JSON key/value objects - the default
- ?_shape=arrays - "rows" is a list of lists, where the order of values in each list matches the order of the columns
- ?_shape=array - a JSON array of objects - effectively just the "rows" key from the default representation
- ?_shape=array&_nl=on - a newline-separated list of JSON objects
- ?_shape=arrayfirst - a flat JSON array containing just the first value from each row
- ?_shape=object - a JSON object keyed using the primary keys of the rows
_shape=arrays looks like this:
{
"ok": true,
"next": null,
"rows": [
[3, "Detroit"],
[2, "Los Angeles"],
[4, "Memnonia"],
[1, "San Francisco"]
]
}
_shape=array looks like this:
[
{
"id": 3,
"name": "Detroit"
},
{
"id": 2,
"name": "Los Angeles"
},
{
"id": 4,
"name": "Memnonia"
},
{
"id": 1,
"name": "San Francisco"
}
]
_shape=array&_nl=on looks like this:
{"id": 1, "value": "Myoporum laetum :: Myoporum"}
{"id": 2, "value": "Metrosideros excelsa :: New Zealand Xmas Tree"}
{"id": 3, "value": "Pinus radiata :: Monterey Pine"}
_shape=arrayfirst looks like this:
[1, 2, 3]
_shape=object looks like this:
{
"1": {
"id": 1,
"value": "Myoporum laetum :: Myoporum"
},
"2": {
"id": 2,
"value": "Metrosideros excelsa :: New Zealand Xmas Tree"
},
"3": {
"id": 3,
"value": "Pinus radiata :: Monterey Pine"
}
}
The object shape is only available for queries against tables - custom SQL queries and views do not have an obvious primary key so cannot be returned using this format.
The object keys are always strings. If your table has a compound primary key, the object keys will be a comma-separated string.
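For example, here is a short Python sketch using the requests library that fetches the demo table shown above in two of these shapes. The URL is the latest.datasette.io demo instance used elsewhere in this document:

import requests

# Default shape: an object with "ok", "rows" and "truncated" keys
data = requests.get(
    "https://latest.datasette.io/fixtures/sortable.json"
).json()
print(data["ok"], len(data["rows"]))

# ?_shape=array returns just the list of row objects
rows = requests.get(
    "https://latest.datasette.io/fixtures/sortable.json",
    params={"_shape": "array"},
).json()
print(rows[0])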
The default JSON representation includes a "next_url" key which can be used to access the next page of results. If that key is null or missing then it means you have reached the final page of results.
Other representations include pagination information in the link HTTP header. That header will look something like this:
link: <https://latest.datasette.io/fixtures/sortable.json?_next=d%2Cv>; rel="next"
Here is an example Python function built using requests that returns a list of all of the paginated items from one of these API endpoints:
import requests


def paginate(url):
    # Collect rows from every page by following the rel="next" links
    items = []
    while url:
        response = requests.get(url)
        try:
            # requests parses the link header into response.links
            url = response.links.get("next").get("url")
        except AttributeError:
            # No rel="next" link means this was the final page
            url = None
        items.extend(response.json())
    return items
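For example, calling the function above against the demo table used earlier with ?_shape=array collects every row across all of the pages:

all_rows = paginate(
    "https://latest.datasette.io/fixtures/sortable.json?_shape=array"
)
print(len(all_rows))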
Every Datasette endpoint that can return JSON also accepts the following query string arguments:
- ?_shape=SHAPE - The shape of the JSON to return, documented above.
- ?_nl=on - When used with ?_shape=array produces newline-delimited JSON objects (a streaming sketch follows this list).
- ?_json=COLUMN1&_json=COLUMN2 - If any of your SQLite columns contain JSON values, you can use one or more _json= parameters to request that those columns be returned as regular JSON. Without this argument those columns will be returned as JSON objects that have been double-encoded into a JSON string value. Compare this query without the argument to this query using the argument.
- ?_json_infinity=on - If your data contains infinity or -infinity values, Datasette will replace them with None when returning them as JSON. If you pass _json_infinity=1 Datasette will instead return them as Infinity or -Infinity, which is invalid JSON but can be processed by some custom JSON parsers.
- ?_timelimit=MS - Sets a custom time limit for the query in ms. You can use this for optimistic queries where you would like Datasette to give up if the query takes too long, for example if you want to implement autocomplete search but only if it can be executed in less than 10ms.
- ?_ttl=SECONDS - For how many seconds should this response be cached by HTTP proxies? Use ?_ttl=0 to disable HTTP caching entirely for this request.
- ?_trace=1 - Turns on tracing for this page: SQL queries executed during the request will be gathered and included in the response, either in a new "_traces" key for JSON responses or at the bottom of the page if the response is in HTML. The structure of the data returned here should be considered highly unstable and very likely to change. Only available if the :ref:`setting_trace_debug` setting is enabled.
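As an illustration of the ?_shape=array&_nl=on combination, here is a hedged Python sketch that streams newline-delimited JSON one row at a time instead of buffering the whole response. The URL is the demo table used earlier; stream=True and iter_lines() are standard requests features:

import json
import requests

with requests.get(
    "https://latest.datasette.io/fixtures/sortable.json",
    params={"_shape": "array", "_nl": "on"},
    stream=True,  # stream the body rather than loading it all into memory
) as response:
    for line in response.iter_lines():
        if line:  # skip any blank keep-alive lines
            row = json.loads(line)
            print(row)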
The Datasette table view takes a number of special query string arguments.
You can filter the data returned by the table based on column values using a query string argument.
- ?column__exact=value or ?_column=value - Returns rows where the specified column exactly matches the value.
- ?column__not=value - Returns rows where the column does not match the value.
- ?column__contains=value - Rows where the string column contains the specified value (column like "%value%" in SQL).
- ?column__notcontains=value - Rows where the string column does not contain the specified value (column not like "%value%" in SQL).
- ?column__endswith=value - Rows where the string column ends with the specified value (column like "%value" in SQL).
- ?column__startswith=value - Rows where the string column starts with the specified value (column like "value%" in SQL).
- ?column__gt=value - Rows which are greater than the specified value.
- ?column__gte=value - Rows which are greater than or equal to the specified value.
- ?column__lt=value - Rows which are less than the specified value.
- ?column__lte=value - Rows which are less than or equal to the specified value.
- ?column__like=value - Match rows with a LIKE clause, case insensitive and with % as the wildcard character.
- ?column__notlike=value - Match rows that do not match the provided LIKE clause.
- ?column__glob=value - Similar to LIKE but uses Unix wildcard syntax and is case sensitive.
- ?column__in=value1,value2,value3 - Rows where column matches any of the provided values. You can use a comma separated string, or you can use a JSON array. The JSON array option is useful if one of your matching values itself contains a comma: ?column__in=["value","value,with,commas"]
- ?column__notin=value1,value2,value3 - Rows where column does not match any of the provided values. The inverse of __in=. Also supports JSON arrays.
- ?column__arraycontains=value - Works against columns that contain JSON arrays - matches if any of the values in that array match the provided value. This is only available if the json1 SQLite extension is enabled.
- ?column__arraynotcontains=value - Works against columns that contain JSON arrays - matches if none of the values in that array match the provided value. This is only available if the json1 SQLite extension is enabled.
- ?column__date=value - Column is a datestamp occurring on the specified YYYY-MM-DD date, e.g. 2018-01-02.
- ?column__isnull=1 - Matches rows where the column is null.
- ?column__notnull=1 - Matches rows where the column is not null.
- ?column__isblank=1 - Matches rows where the column is blank, meaning null or the empty string.
- ?column__notblank=1 - Matches rows where the column is not blank.
- ?_col=COLUMN1&_col=COLUMN2 - List specific columns to display. These will be shown along with any primary keys.
- ?_nocol=COLUMN1&_nocol=COLUMN2 - List specific columns to hide - any column not listed will be displayed. Primary keys cannot be hidden.
- ?_labels=on/off - Expand foreign key references for every possible column. See below.
- ?_label=COLUMN1&_label=COLUMN2 - Expand foreign key references for one or more specified columns.
- ?_size=1000 or ?_size=max - Sets a custom page size. This cannot exceed the max_returned_rows limit passed to datasette serve. Use max to get max_returned_rows.
- ?_sort=COLUMN - Sorts the results by the specified column.
- ?_sort_desc=COLUMN - Sorts the results by the specified column in descending order.
- ?_search=keywords - For SQLite tables that have been configured for full-text search executes a search with the provided keywords.
- ?_search_COLUMN=keywords - Like _search= but allows you to specify the column to be searched, as opposed to searching all columns that have been indexed by FTS.
- ?_searchmode=raw - With this option, queries passed to ?_search= or ?_search_COLUMN= will not have special characters escaped. This means you can make use of the full set of advanced SQLite FTS syntax, though this could potentially result in errors if the wrong syntax is used.
- ?_where=SQL-fragment - If the :ref:`permissions_execute_sql` permission is enabled, this parameter can be used to pass one or more additional SQL fragments to be used in the WHERE clause of the SQL used to query the table. This is particularly useful if you are building a JavaScript application that needs to do something creative but still wants the other conveniences provided by the table view (such as faceting) and hence would like not to have to construct a completely custom SQL query.
- ?_through={json} - This can be used to filter rows via a join against another table. The JSON parameter must include three keys: table, column and value. table must be a table that the current table is related to via a foreign key relationship. column must be a column in that other table. value is the value that you want to match against. A Python sketch that constructs this parameter appears after this list. For example, to filter roadside_attractions to just show the attractions that have a characteristic of "museum", you would construct this JSON:
  {
    "table": "roadside_attraction_characteristics",
    "column": "characteristic_id",
    "value": "1"
  }
  As a URL, that looks like this:
  ?_through={%22table%22:%22roadside_attraction_characteristics%22,%22column%22:%22characteristic_id%22,%22value%22:%221%22}
- ?_next=TOKEN - Pagination by continuation token - pass the token that was returned in the "next" property by the previous page.
- ?_facet=column - Facet by column. Can be applied multiple times, see :ref:`facets`. Only works on the default JSON output, not on any of the custom shapes.
- ?_facet_size=100 - Increase the number of facet results returned for each facet. Use ?_facet_size=max for the maximum available size, determined by :ref:`setting_max_returned_rows`.
- ?_nofacet=1 - Disable all facets and facet suggestions for this page, including any defined by :ref:`facets_metadata`.
- ?_nosuggest=1 - Disable facet suggestions for this page.
- ?_nocount=1 - Disable the select count(*) query used on this page - a count of None will be returned instead.
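Because the column filters and the special table arguments above are all ordinary query string parameters, an HTTP library can build and URL-encode them for you rather than constructing URLs by hand. Here is a sketch using requests and the roadside_attractions example from the ?_through description - the name__contains filter and sort column are hypothetical illustrations:

import json
import requests

# Filter roadside_attractions through the join table described above,
# combined with a column filter and a sort order.
through = {
    "table": "roadside_attraction_characteristics",
    "column": "characteristic_id",
    "value": "1",
}
rows = requests.get(
    "https://latest.datasette.io/fixtures/roadside_attractions.json",
    params={
        "_through": json.dumps(through),  # requests URL-encodes this for us
        "name__contains": "museum",       # hypothetical column filter
        "_sort": "name",
        "_shape": "array",
    },
).json()
print(rows)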
Datasette can detect foreign key relationships and resolve those references into labels. The HTML interface does this by default for every detected foreign key column - you can turn that off using ?_labels=off.
You can request foreign keys be expanded in JSON using the _labels=on or _label=COLUMN special query string parameters. Here's what an expanded row looks like:
[
{
"rowid": 1,
"TreeID": 141565,
"qLegalStatus": {
"value": 1,
"label": "Permitted Site"
},
"qSpecies": {
"value": 1,
"label": "Myoporum laetum :: Myoporum"
},
"qAddress": "501X Baker St",
"SiteOrder": 1
}
]
The column in the foreign key table that is used for the label can be specified in metadata.json - see :ref:`label_columns`.
Most of the HTML pages served by Datasette provide a mechanism for discovering their JSON equivalents using the HTML link mechanism. You can find this near the top of the source code of those pages, looking like this:
<link rel="alternate"
type="application/json+datasette"
href="https://latest.datasette.io/fixtures/sortable.json">
The JSON URL is also made available in a Link HTTP header for the page:
Link: https://latest.datasette.io/fixtures/sortable.json; rel="alternate"; type="application/json+datasette"
If you start Datasette with the --cors option, each JSON endpoint will be served with the following additional HTTP headers:
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: Authorization, Content-Type
Access-Control-Expose-Headers: Link
Access-Control-Allow-Methods: GET, POST, HEAD, OPTIONS
Access-Control-Max-Age: 3600
This allows JavaScript running on any domain to make cross-origin requests to interact with the Datasette API.
If you start Datasette without the --cors option only JavaScript running on the same domain as Datasette will be able to access the API.
Here's how to serve data.db with CORS enabled:
datasette data.db --cors
Datasette provides a write API for JSON data. This is a POST-only API that requires an authenticated API token, see :ref:`CreateTokenView`. The token will need to have the specified :ref:`authentication_permissions`.
This requires the :ref:`permissions_insert_row` permission.
A single row can be inserted using the "row" key:
POST /<database>/<table>/-/insert
Content-Type: application/json
Authorization: Bearer dstok_<rest-of-token>
{
"row": {
"column1": "value1",
"column2": "value2"
}
}
If successful, this will return a 201 status code and the newly inserted row, for example:
{
"rows": [
{
"id": 1,
"column1": "value1",
"column2": "value2"
}
]
}
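The same request can be made from any HTTP client. Here is a minimal Python sketch using requests - the local URL, the database and table names and the token are placeholders to substitute with your own:

import requests

response = requests.post(
    "http://127.0.0.1:8001/data/events/-/insert",  # placeholder database/table
    json={"row": {"column1": "value1", "column2": "value2"}},
    headers={"Authorization": "Bearer dstok_<rest-of-token>"},
)
print(response.status_code)  # 201 on success
print(response.json())       # {"rows": [...]} containing the inserted row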
To insert multiple rows at a time, use the same API method but send a list of dictionaries as the "rows" key:
POST /<database>/<table>/-/insert
Content-Type: application/json
Authorization: Bearer dstok_<rest-of-token>
{
"rows": [
{
"column1": "value1",
"column2": "value2"
},
{
"column1": "value3",
"column2": "value4"
}
]
}
If successful, this will return a 201 status code and a {"ok": true} response body.
The maximum number of rows that can be submitted at once defaults to 100, but this can be changed using the :ref:`setting_max_insert_rows` setting.
To return the newly inserted rows, add the "return": true key to the request body:
{
"rows": [
{
"column1": "value1",
"column2": "value2"
},
{
"column1": "value3",
"column2": "value4"
}
],
"return": true
}
This will return the same "rows" key as the single row example above. There is a small performance penalty for using this option.
If any of your rows have a primary key that is already in use, you will get an error and none of the rows will be inserted:
{
"ok": false,
"errors": [
"UNIQUE constraint failed: new_table.id"
]
}
Pass "ignore": true
to ignore these errors and insert the other rows:
{
"rows": [
{
"id": 1,
"column1": "value1",
"column2": "value2"
},
{
"id": 2,
"column1": "value3",
"column2": "value4"
}
],
"ignore": true
}
Or you can pass "replace": true
to replace any rows with conflicting primary keys with the new values. This requires the :ref:`permissions_update_row` permission.
Pass "alter: true
to automatically add any missing columns to the table. This requires the :ref:`permissions_alter_table` permission.
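Combining those options, here is a sketch of a bulk insert from Python that skips rows whose primary keys already exist and returns the rows that were written - again with placeholder names, URL and token:

import requests

response = requests.post(
    "http://127.0.0.1:8001/data/events/-/insert",  # placeholder database/table
    json={
        "rows": [
            {"id": 1, "column1": "value1", "column2": "value2"},
            {"id": 2, "column1": "value3", "column2": "value4"},
        ],
        "ignore": True,  # skip rows whose primary key already exists
        "return": True,  # include the inserted rows in the response
    },
    headers={"Authorization": "Bearer dstok_<rest-of-token>"},
)
print(response.json())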
An upsert is an insert or update operation. If a row with a matching primary key already exists it will be updated - otherwise a new row will be inserted.
The upsert API is mostly the same shape as the :ref:`insert API <TableInsertView>`. It requires both the :ref:`permissions_insert_row` and :ref:`permissions_update_row` permissions.
POST /<database>/<table>/-/upsert
Content-Type: application/json
Authorization: Bearer dstok_<rest-of-token>
{
"rows": [
{
"id": 1,
"title": "Updated title for 1",
"description": "Updated description for 1"
},
{
"id": 2,
"description": "Updated description for 2",
},
{
"id": 3,
"title": "Item 3",
"description": "Description for 3"
}
]
}
Imagine a table with a primary key of id which already has rows with id values of 1 and 2.
The above example will:
- Update the row with id of 1 to set both title and description to the new values
- Update the row with id of 2 to set description to the new value - title will be left unchanged
- Insert a new row with id of 3 and both title and description set to the new values
Similar to /-/insert, a row key with an object can be used instead of a rows array to upsert a single row.
If successful, this will return a 200 status code and a {"ok": true} response body.
Add "return": true
to the request body to return full copies of the affected rows after they have been inserted or updated:
{
"rows": [
{
"id": 1,
"title": "Updated title for 1",
"description": "Updated description for 1"
},
{
"id": 2,
"description": "Updated description for 2",
},
{
"id": 3,
"title": "Item 3",
"description": "Description for 3"
}
],
"return": true
}
This will return the following:
{
"ok": true,
"rows": [
{
"id": 1,
"title": "Updated title for 1",
"description": "Updated description for 1"
},
{
"id": 2,
"title": "Item 2",
"description": "Updated description for 2"
},
{
"id": 3,
"title": "Item 3",
"description": "Description for 3"
}
]
}
When using upsert you must provide the primary key column (or columns if the table has a compound primary key) for every row, or you will get a 400 error:
{
"ok": false,
"errors": [
"Row 0 is missing primary key column(s): \"id\""
]
}
If your table does not have an explicit primary key you should pass the SQLite rowid key instead.
Pass "alter: true
to automatically add any missing columns to the table. This requires the :ref:`permissions_alter_table` permission.
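The upsert endpoint can be called in the same way as insert. A minimal Python sketch, with placeholder database, table, URL and token:

import requests

response = requests.post(
    "http://127.0.0.1:8001/data/items/-/upsert",  # placeholder database/table
    json={
        "rows": [
            {"id": 1, "title": "Updated title for 1"},
            {"id": 3, "title": "Item 3", "description": "Description for 3"},
        ],
        "return": True,  # ask for the affected rows back
    },
    headers={"Authorization": "Bearer dstok_<rest-of-token>"},
)
print(response.status_code)      # 200 on success
print(response.json()["rows"])   # the inserted or updated rows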
To update a row, make a POST to /<database>/<table>/<row-pks>/-/update. This requires the :ref:`permissions_update_row` permission.
POST /<database>/<table>/<row-pks>/-/update
Content-Type: application/json
Authorization: Bearer dstok_<rest-of-token>
{
"update": {
"text_column": "New text string",
"integer_column": 3,
"float_column": 3.14
}
}
<row-pks> here is the :ref:`tilde-encoded <internals_tilde_encoding>` primary key value of the row to update - or a comma-separated list of primary key values if the table has a composite primary key.
You only need to pass the columns you want to update. Any other columns will be left unchanged.
If successful, this will return a 200 status code and a {"ok": true} response body.
Add "return": true
to the request body to return the updated row:
{
"update": {
"title": "New title"
},
"return": true
}
The returned JSON will look like this:
{
"ok": true,
"row": {
"id": 1,
"title": "New title",
"other_column": "Will be present here too"
}
}
Any errors will return {"errors": ["... descriptive message ..."], "ok": false}, and a 400 status code for a bad input or a 403 status code for an authentication or permission error.
Pass "alter: true
to automatically add any missing columns to the table. This requires the :ref:`permissions_alter_table` permission.
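A sketch of the update call from Python, assuming a row whose single primary key value is 1 - database, table, URL and token are placeholders:

import requests

response = requests.post(
    "http://127.0.0.1:8001/data/items/1/-/update",  # placeholder names and pk
    json={"update": {"title": "New title"}, "return": True},
    headers={"Authorization": "Bearer dstok_<rest-of-token>"},
)
print(response.json())  # {"ok": true, "row": {...}} when "return": true is set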
To delete a row, make a POST to /<database>/<table>/<row-pks>/-/delete. This requires the :ref:`permissions_delete_row` permission.
POST /<database>/<table>/<row-pks>/-/delete
Content-Type: application/json
Authorization: Bearer dstok_<rest-of-token>
<row-pks> here is the :ref:`tilde-encoded <internals_tilde_encoding>` primary key value of the row to delete - or a comma-separated list of primary key values if the table has a composite primary key.
If successful, this will return a 200 status code and a {"ok": true} response body.
Any errors will return {"errors": ["... descriptive message ..."], "ok": false}, and a 400 status code for a bad input or a 403 status code for an authentication or permission error.
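The delete endpoint follows the same pattern. A sketch, sending an empty JSON body and using placeholder names, URL and token:

import requests

response = requests.post(
    "http://127.0.0.1:8001/data/items/1/-/delete",  # placeholder names and pk
    json={},  # no parameters are needed; this just sets a JSON content type
    headers={"Authorization": "Bearer dstok_<rest-of-token>"},
)
print(response.status_code)  # 200 on success
print(response.json())       # {"ok": true}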
To create a table, make a POST to /<database>/-/create. This requires the :ref:`permissions_create_table` permission.
POST /<database>/-/create
Content-Type: application/json
Authorization: Bearer dstok_<rest-of-token>
{
"table": "name_of_new_table",
"columns": [
{
"name": "id",
"type": "integer"
},
{
"name": "title",
"type": "text"
}
],
"pk": "id"
}
The JSON here describes the table that will be created:
- table is the name of the table to create. This field is required.
- columns is a list of columns to create. Each column is a dictionary with name and type keys.
  - name is the name of the column. This is required.
  - type is the type of the column. This is optional - if not provided, text will be assumed. The valid types are text, integer, float and blob.
- pk is the primary key for the table. This is optional - if not provided, Datasette will create a SQLite table with a hidden rowid column. If the primary key is an integer column, it will be configured to automatically increment for each new record. If you set this to id without including an id column in the list of columns, Datasette will create an auto-incrementing integer ID column for you.
- pks can be used instead of pk to create a compound primary key. It should be a JSON list of column names to use in that primary key.
- ignore can be set to true to ignore existing rows by primary key if the table already exists.
- replace can be set to true to replace existing rows by primary key if the table already exists. This requires the :ref:`permissions_update_row` permission.
- alter can be set to true if you want to automatically add any missing columns to the table. This requires the :ref:`permissions_alter_table` permission.
If the table is successfully created this will return a 201 status code and the following response:
{
"ok": true,
"database": "data",
"table": "name_of_new_table",
"table_url": "http://127.0.0.1:8001/data/name_of_new_table",
"table_api_url": "http://127.0.0.1:8001/data/name_of_new_table.json",
"schema": "CREATE TABLE [name_of_new_table] (\n [id] INTEGER PRIMARY KEY,\n [title] TEXT\n)"
}
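Here is a sketch of the same table creation request made from Python with requests - the database name, local URL and token are placeholders:

import requests

response = requests.post(
    "http://127.0.0.1:8001/data/-/create",  # placeholder database name
    json={
        "table": "name_of_new_table",
        "columns": [
            {"name": "id", "type": "integer"},
            {"name": "title", "type": "text"},
        ],
        "pk": "id",
    },
    headers={"Authorization": "Bearer dstok_<rest-of-token>"},
)
print(response.status_code)        # 201 on success
print(response.json()["schema"])   # the CREATE TABLE statement that was used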
Instead of specifying columns directly you can pass a single example row or a list of rows. Datasette will create a table with a schema that matches those rows and will insert them for you:
POST /<database>/-/create
Content-Type: application/json
Authorization: Bearer dstok_<rest-of-token>
{
"table": "creatures",
"rows": [
{
"id": 1,
"name": "Tarantula"
},
{
"id": 2,
"name": "Kākāpō"
}
],
"pk": "id"
}
Doing this requires both the :ref:`permissions_create_table` and :ref:`permissions_insert_row` permissions.
The 201 response here will be similar to the columns form, but will also include the number of rows that were inserted as row_count:
{
"ok": true,
"database": "data",
"table": "creatures",
"table_url": "http://127.0.0.1:8001/data/creatures",
"table_api_url": "http://127.0.0.1:8001/data/creatures.json",
"schema": "CREATE TABLE [creatures] (\n [id] INTEGER PRIMARY KEY,\n [name] TEXT\n)",
"row_count": 2
}
You can call the create endpoint multiple times for the same table provided you are specifying the table using the rows or row option. New rows will be inserted into the table each time. This means you can use this API if you are unsure if the relevant table has been created yet.
If you pass a row to the create endpoint with a primary key that already exists you will get an error that looks like this:
{
"ok": false,
"errors": [
"UNIQUE constraint failed: creatures.id"
]
}
You can avoid this error by passing the same "ignore": true or "replace": true options to the create endpoint as you can to the :ref:`insert endpoint <TableInsertView>`.
To use the "replace": true
option you will also need the :ref:`permissions_update_row` permission.
Pass "alter": true
to automatically add any missing columns to the existing table that are present in the rows you are submitting. This requires the :ref:`permissions_alter_table` permission.
To drop a table, make a POST to /<database>/<table>/-/drop. This requires the :ref:`permissions_drop_table` permission.
POST /<database>/<table>/-/drop
Content-Type: application/json
Authorization: Bearer dstok_<rest-of-token>
Without a POST body this will return a status 200 with a note about how many rows will be deleted:
{
"ok": true,
"database": "<database>",
"table": "<table>",
"row_count": 5,
"message": "Pass \"confirm\": true to confirm"
}
If you pass the following POST body:
{
"confirm": true
}
Then the table will be dropped and a status 200 response of {"ok": true} will be returned.
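Because dropping a table is a two-step confirmation flow, a Python sketch makes the call twice - database, table, URL and token are placeholders:

import requests

url = "http://127.0.0.1:8001/data/name_of_table/-/drop"  # placeholder names
headers = {"Authorization": "Bearer dstok_<rest-of-token>"}

# First call: no "confirm" key, so nothing is dropped yet
preview = requests.post(url, json={}, headers=headers).json()
print(preview["row_count"], preview["message"])

# Second call: confirm the drop
result = requests.post(url, json={"confirm": True}, headers=headers).json()
print(result)  # {"ok": true}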
Any errors will return {"errors": ["... descriptive message ..."], "ok": false}, and a 400 status code for a bad input or a 403 status code for an authentication or permission error.