All views and tables in the exposed schema that are accessible by the active database role are available for querying. They are exposed in one-level-deep routes.
For instance, the full contents of a table named people are returned at:

GET /people HTTP/1.1

curl "http://localhost:3000/people"
There are no deeply/nested/routes. Each route provides OPTIONS, GET, HEAD, POST, PATCH, and DELETE verbs depending entirely on database permissions.
Note

Why not provide nested routes? Many APIs allow nesting to retrieve related information, such as /films/1/director. We offer a more flexible mechanism (inspired by GraphQL) to embed related information, which can handle one-to-many and many-to-many relationships. This is covered in the section about resource_embedding.
You can filter result rows by adding conditions on columns. For instance, to return people aged under 13 years old:
GET /people?age=lt.13 HTTP/1.1

curl "http://localhost:3000/people?age=lt.13"
You can evaluate multiple conditions on columns by adding more query string parameters. For instance, to return people who are 18 or older and are students:
GET /people?age=gte.18&student=is.true HTTP/1.1

curl "http://localhost:3000/people?age=gte.18&student=is.true"
These operators are available:
Abbreviation | In PostgreSQL | Meaning |
---|---|---|
eq | = | equals |
gt | > | greater than |
gte | >= | greater than or equal |
lt | < | less than |
lte | <= | less than or equal |
neq | <> or != | not equal |
like | LIKE | LIKE operator (to avoid URL encoding you can use * as an alias of the percent sign % for the pattern) |
ilike | ILIKE | ILIKE operator (to avoid URL encoding you can use * as an alias of the percent sign % for the pattern) |
match | ~ | ~ operator, see pattern_matching |
imatch | ~* | ~* operator, see pattern_matching |
in | IN | one of a list of values, e.g. ?a=in.(1,2,3) |
is | IS | checking for exact equality (null, true, false, unknown) |
isdistinct | IS DISTINCT FROM | not equal, treating NULL as a comparable value |
fts | @@ | full-text search using to_tsquery |
plfts | @@ | full-text search using plainto_tsquery |
phfts | @@ | full-text search using phraseto_tsquery |
wfts | @@ | full-text search using websearch_to_tsquery |
cs | @> | contains, e.g. ?tags=cs.{example,new} |
cd | <@ | contained in, e.g. ?values=cd.{1,2,3} |
ov | && | overlap (have points in common), e.g. ?period=ov.[2017-01-01,2017-06-30] |
sl | << | strictly left of, e.g. ?range=sl.(1,10) |
sr | >> | strictly right of |
nxr | &< | does not extend to the right of, e.g. ?range=nxr.(1,10) |
nxl | &> | does not extend to the left of |
adj | -|- | is adjacent to, e.g. ?range=adj.(1,10) |
not | NOT | negates another operator, see logical_operators |
or | OR | logical OR, see logical_operators |
and | AND | logical AND, see logical_operators |
all | ALL | comparison matches all the values in the list, see logical_operators |
any | ANY | comparison matches any value in the list, see logical_operators |
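As an illustration of how these filters map onto a URL, here is a small hypothetical Python helper (not part of PostgREST; the function name is invented for this sketch) that assembles a query string from column/operator/value triples:

```python
from urllib.parse import urlencode

def build_filter_url(base, table, filters):
    """Assemble a PostgREST-style URL from (column, operator, value) triples.

    PostgREST only parses `column=operator.value` pairs in the query
    string, so any client capable of producing them will work.
    """
    params = [(column, f"{op}.{value}") for column, op, value in filters]
    return f"{base}/{table}?{urlencode(params)}"

url = build_filter_url(
    "http://localhost:3000",
    "people",
    [("age", "gte", 18), ("student", "is", "true")],
)
print(url)  # http://localhost:3000/people?age=gte.18&student=is.true
```

Using urlencode also takes care of percent-encoding values that contain reserved characters.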
For more complicated filters you will have to create a new view in the database, or use a stored procedure. For instance, here's a view to show "today's stories" including possibly older pinned stories:
CREATE VIEW fresh_stories AS
SELECT *
FROM stories
WHERE pinned = true
OR published > now() - interval '1 day'
ORDER BY pinned DESC, published DESC;
The view will provide a new endpoint:
GET /fresh_stories HTTP/1.1

curl "http://localhost:3000/fresh_stories"
Multiple conditions on columns are evaluated using AND by default, but you can combine them using OR with the or operator. For example, to return people under 18 or over 21:

GET /people?or=(age.lt.18,age.gt.21) HTTP/1.1

curl "http://localhost:3000/people?or=(age.lt.18,age.gt.21)"

To negate any operator, you can prefix it with not, like ?a=not.eq.2 or ?not.and=(a.gte.0,a.lte.100).
You can also apply complex logic to the conditions:
GET /people?grade=gte.90&student=is.true&or=(age.eq.14,not.and(age.gte.11,age.lte.17)) HTTP/1.1
You may further simplify the logic using the any/all modifiers of eq, like, ilike, gt, gte, lt, lte, match, imatch.

For instance, to avoid repeating the same column for or, use any to get people with last names that start with O or P:

GET /people?last_name=like(any).{O*,P*} HTTP/1.1

curl "http://localhost:3000/people?last_name=like(any).{O*,P*}"
In a similar way, you can use all to avoid repeating the same column for and. To get the people with last names that start with O and end with n:

GET /people?last_name=like(all).{O*,*n} HTTP/1.1

curl "http://localhost:3000/people?last_name=like(all).{O*,*n}"
The pattern-matching operators (like, ilike, match, imatch) exist to support filtering data using patterns instead of concrete strings, as described in the PostgreSQL docs.

To ensure the best performance on larger data sets, an appropriate index should be used; even then, whether the query planner will actually use an existing index depends on the pattern value and the data statistics.
The fts filter mentioned above has a number of options to support flexible textual queries, namely the choice of plain vs. phrase search and the language used for stemming. Suppose that tsearch is a table with a column my_tsv of type tsvector. The following examples illustrate the possibilities.

GET /tsearch?my_tsv=fts(french).amusant HTTP/1.1

curl "http://localhost:3000/tsearch?my_tsv=fts(french).amusant"
GET /tsearch?my_tsv=plfts.The%20Fat%20Cats HTTP/1.1

curl "http://localhost:3000/tsearch?my_tsv=plfts.The%20Fat%20Cats"
GET /tsearch?my_tsv=not.phfts(english).The%20Fat%20Cats HTTP/1.1

GET /tsearch?my_tsv=not.wfts(french).amusant HTTP/1.1

curl "http://localhost:3000/tsearch?my_tsv=not.wfts(french).amusant"
Using websearch_to_tsquery requires PostgreSQL 11 or later; earlier versions will raise an error.
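Search phrases containing spaces must be percent-encoded before being placed in the query string, as in the plfts examples above. Python's standard library can do this (a client-side sketch, no server involved):

```python
from urllib.parse import quote

# Percent-encode the search phrase so it can be embedded in a URL.
phrase = "The Fat Cats"
param = "my_tsv=plfts." + quote(phrase)
print(param)  # my_tsv=plfts.The%20Fat%20Cats
```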
When certain columns are wide (such as those holding binary data), it is more efficient for the server to withhold them in a response. The client can specify which columns are required using the select parameter.
GET /people?select=first_name,age HTTP/1.1

curl "http://localhost:3000/people?select=first_name,age"
[
{"first_name": "John", "age": 30},
{"first_name": "Jane", "age": 20}
]
The default is *, meaning all columns. This value will become more important below in resource_embedding.

You can rename the columns by prefixing them with an alias followed by the colon : operator.

GET /people?select=fullName:full_name,birthDate:birth_date HTTP/1.1
[
{"fullName": "John Doe", "birthDate": "04/25/1988"},
{"fullName": "Jane Doe", "birthDate": "01/12/1998"}
]
Casting the columns is possible by suffixing them with the double colon :: plus the desired type.

GET /people?select=full_name,salary::text HTTP/1.1

curl "http://localhost:3000/people?select=full_name,salary::text"
[
{"full_name": "John Doe", "salary": "90000.00"},
{"full_name": "Jane Doe", "salary": "120000.00"}
]
You can specify a path for a json or jsonb column using the arrow operators (-> or ->>) as per the PostgreSQL docs.
CREATE TABLE people (
id int,
json_data json
);
GET /people?select=id,json_data->>blood_type,json_data->phones HTTP/1.1

curl "http://localhost:3000/people?select=id,json_data->>blood_type,json_data->phones"
[
{ "id": 1, "blood_type": "A-", "phones": [{"country_code": "61", "number": "917-929-5745"}] },
{ "id": 2, "blood_type": "O+", "phones": [{"country_code": "43", "number": "512-446-4988"}, {"country_code": "43", "number": "213-891-5979"}] }
]
GET /people?select=id,json_data->phones->0->>number HTTP/1.1

curl "http://localhost:3000/people?select=id,json_data->phones->0->>number"
[
{ "id": 1, "number": "917-929-5745"},
{ "id": 2, "number": "512-446-4988"}
]
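The arrow path in the request above mirrors plain key and index access on the JSON value. As a rough analogy, applying the same path to a local stand-in for the sample row in Python (no request involved):

```python
import json

# A local stand-in for the first row's json_data column from the example.
row = json.loads("""
{
  "id": 1,
  "json_data": {
    "blood_type": "A-",
    "phones": [{"country_code": "61", "number": "917-929-5745"}]
  }
}
""")

# json_data->phones->0->>number walks the same path: key, key, index, key.
number = row["json_data"]["phones"][0]["number"]
print(number)  # 917-929-5745
```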
This also works with filters:
GET /people?select=id,json_data->blood_type&json_data->>blood_type=eq.A- HTTP/1.1

curl "http://localhost:3000/people?select=id,json_data->blood_type&json_data->>blood_type=eq.A-"
[
{ "id": 1, "blood_type": "A-" },
{ "id": 3, "blood_type": "A-" },
{ "id": 7, "blood_type": "A-" }
]
Note that ->> is used to compare blood_type as text. To compare with an integer value use ->:

GET /people?select=id,json_data->age&json_data->age=gt.20 HTTP/1.1

curl "http://localhost:3000/people?select=id,json_data->age&json_data->age=gt.20"
[
{ "id": 11, "age": 25 },
{ "id": 12, "age": 30 },
{ "id": 15, "age": 35 }
]
The arrow operators (->, ->>) can also be used for accessing composite fields and array elements.

CREATE TYPE coordinates AS (
  lat decimal(8,6),
  long decimal(9,6)
);
CREATE TABLE countries (
id int,
location coordinates,
languages text[]
);
GET /countries?select=id,location->>lat,location->>long,primary_language:languages->0&location->lat=gte.19 HTTP/1.1

curl "http://localhost:3000/countries?select=id,location->>lat,location->>long,primary_language:languages->0&location->lat=gte.19"
[
{
"id": 5,
"lat": "19.741755",
"long": "-155.844437",
"primary_language": "en"
}
]
Important

When using the -> and ->> operators on composite and array columns, PostgREST uses a query like to_jsonb(<col>)->'field'. To make filtering and ordering on those nested fields use an index, the index needs to be created on the same expression, including the to_jsonb(...) call:

CREATE INDEX ON mytable ((to_jsonb(data) -> 'identification' ->> 'registration_number'));
The reserved word order reorders the response rows. It takes a comma-separated list of columns and directions:

GET /people?order=age.desc,height.asc HTTP/1.1

curl "http://localhost:3000/people?order=age.desc,height.asc"
If no direction is specified it defaults to ascending order:
GET /people?order=age HTTP/1.1

curl "http://localhost:3000/people?order=age"
If you care where nulls are sorted, add nullsfirst or nullslast:

GET /people?order=age.nullsfirst HTTP/1.1

curl "http://localhost:3000/people?order=age.nullsfirst"

GET /people?order=age.desc.nullslast HTTP/1.1

curl "http://localhost:3000/people?order=age.desc.nullslast"
You can also sort on fields of composite_array_columns or json_columns.

GET /countries?order=location->>lat HTTP/1.1

curl "http://localhost:3000/countries?order=location->>lat"
A HEAD method will behave identically to GET except that no body will be returned (RFC 2616). As an optimization, the generated query won't execute an aggregate (to avoid unnecessary data transfer).
All tables and auto-updatable views can be modified through the API, subject to permissions of the requester's database role.
To create a row in a database table post a JSON object whose keys are the names of the columns you would like to create. Missing properties will be set to default values when applicable.
POST /table_name HTTP/1.1
Content-Type: application/json

{ "col1": "value1", "col2": "value2" }

curl "http://localhost:3000/table_name" -X POST -H "Content-Type: application/json" -d '{ "col1": "value1", "col2": "value2" }'
HTTP/1.1 201 Created
No response body will be returned by default, but you can use prefer_return to get the affected resource.

URL encoded payloads can be posted with Content-Type: application/x-www-form-urlencoded.
POST /people HTTP/1.1
Content-Type: application/x-www-form-urlencoded

name=John+Doe&age=50&weight=80

curl "http://localhost:3000/people" -X POST -H "Content-Type: application/x-www-form-urlencoded" -d "name=John+Doe&age=50&weight=80"
Note
When inserting a row you must post a JSON object, not quoted JSON.
Yes
{ "a": 1, "b": 2 }
No
"{ \"a\": 1, \"b\": 2 }"
Some JavaScript libraries will post the data incorrectly if you're not careful. For best results try one of the clientside_libraries built for PostgREST.
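The pitfall is easy to reproduce with Python's json module: serializing an already-serialized string yields quoted JSON, which the server will not accept as an object:

```python
import json

record = {"a": 1, "b": 2}

# Correct: serialize the dict once, producing a JSON object.
good = json.dumps(record)

# Incorrect: serializing the resulting string again produces quoted JSON.
bad = json.dumps(json.dumps(record))

print(good)  # {"a": 1, "b": 2}
print(bad)   # "{\"a\": 1, \"b\": 2}"
```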
Important
It's recommended that you use triggers instead of rules. Insertion on views with complex rules might not work out of the box with PostgREST due to its usage of CTEs. If you want to keep using rules, a workaround is to wrap the view insertion in a stored procedure and call it through the s_procs interface. For more details, see this github issue.
Bulk insert works exactly like single row insert except that you provide either a JSON array of objects having uniform keys, or lines in CSV format. This not only minimizes the HTTP requests required but uses a single INSERT statement on the back-end for efficiency.
To bulk insert CSV simply post to a table route with Content-Type: text/csv and include the names of the columns as the first row. For instance:

POST /people HTTP/1.1
Content-Type: text/csv

name,age,height
J Doe,62,70
Jonas,10,55

curl "http://localhost:3000/people" -X POST -H "Content-Type: text/csv" --data-binary @- << EOF
name,age,height
J Doe,62,70
Jonas,10,55
EOF
An empty field (,,) is coerced to an empty string and the reserved word NULL is mapped to the SQL null value. Note that there should be no spaces between the column names and commas.
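A payload that follows these rules can be generated with Python's csv module: header row first, no spaces after commas, and None mapped to the reserved word NULL (a client-side sketch; the payload is built but not sent):

```python
import csv
import io

rows = [
    {"name": "J Doe", "age": 62, "height": 70},
    {"name": "Jonas", "age": None, "height": 55},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "age", "height"])
writer.writeheader()
for row in rows:
    # Map Python None to the reserved word NULL so the server stores SQL null.
    writer.writerow({k: ("NULL" if v is None else v) for k, v in row.items()})

payload = buf.getvalue()
print(payload)
```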
To bulk insert JSON post an array of objects having all-matching keys:

POST /people HTTP/1.1
Content-Type: application/json

[
  { "name": "J Doe", "age": 62, "height": 70 },
  { "name": "Janus", "age": 10, "height": 55 }
]

curl "http://localhost:3000/people" -X POST -H "Content-Type: application/json" -d @- << EOF
[
  { "name": "J Doe", "age": 62, "height": 70 },
  { "name": "Janus", "age": 10, "height": 55 }
]
EOF
Any missing columns in the payload will be inserted as null values. To use the DEFAULT column value instead, use the Prefer: missing=default header.
Having:
create table foo (
id bigint generated by default as identity primary key
, bar text
, baz int default 100
);
A request:

POST /foo?columns=id,bar,baz HTTP/1.1
Content-Type: application/json
Prefer: missing=default, return=representation

[
  { "bar": "val1" },
  { "bar": "val2", "baz": 15 }
]

curl "http://localhost:3000/foo?columns=id,bar,baz" -H "Content-Type: application/json" -H "Prefer: missing=default, return=representation" -d @- << EOF
[
  { "bar": "val1" },
  { "bar": "val2", "baz": 15 }
]
EOF
Will result in:
[
  { "id": 1, "bar": "val1", "baz": 100 },
  { "id": 2, "bar": "val2", "baz": 15 }
]
By using the columns query parameter it's possible to specify the payload keys that will be inserted and ignore the rest of the payload.

POST /datasets?columns=source,publication_date,figure HTTP/1.1
Content-Type: application/json

{
  "source": "Natural Disaster Prevention and Control",
  "publication_date": "2015-09-11",
  "figure": 1100,
  "location": "...",
  "comment": "...",
  "extra": "...",
  "stuff": "..."
}

curl "http://localhost:3000/datasets?columns=source,publication_date,figure" -X POST -H "Content-Type: application/json" -d @- << EOF
{
  "source": "Natural Disaster Prevention and Control",
  "publication_date": "2015-09-11",
  "figure": 1100,
  "location": "...",
  "comment": "...",
  "extra": "...",
  "stuff": "..."
}
EOF
In this case, only source, publication_date and figure will be inserted. The rest of the JSON keys will be ignored.
Using this also has the side effect of making bulk_insert more efficient, since PostgREST will not process the JSON and will send it directly to PostgreSQL.
To update a row or rows in a table, use the PATCH verb. Use h_filter to specify which record(s) to update. Here is an example query setting the category column to child for all people below a certain age.

PATCH /people?age=lt.13 HTTP/1.1
Content-Type: application/json

{ "category": "child" }

curl "http://localhost:3000/people?age=lt.13" -X PATCH -H "Content-Type: application/json" -d '{ "category": "child" }'
Updates also support prefer_return plus v_filter.
Warning
Beware of accidentally updating every row in a table. To learn how to prevent that, see block_fulltable.
You can make an upsert with POST and the Prefer: resolution=merge-duplicates header:

POST /employees HTTP/1.1
Prefer: resolution=merge-duplicates

[
  { "id": 1, "name": "Old employee 1", "salary": 30000 },
  { "id": 2, "name": "Old employee 2", "salary": 42000 },
  { "id": 3, "name": "New employee 3", "salary": 50000 }
]

curl "http://localhost:3000/employees" -X POST -H "Content-Type: application/json" -H "Prefer: resolution=merge-duplicates" -d @- << EOF
[
  { "id": 1, "name": "Old employee 1", "salary": 30000 },
  { "id": 2, "name": "Old employee 2", "salary": 42000 },
  { "id": 3, "name": "New employee 3", "salary": 50000 }
]
EOF
By default, upsert operates based on the primary key columns, and you must specify all of them. You can also choose to ignore the duplicates with Prefer: resolution=ignore-duplicates. This works best when the primary key is natural, but it's also possible to use it when the primary key is surrogate (example: "id serial primary key"). For more details read this issue.
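Putting the pieces together, such a request can be assembled with Python's standard library; this sketch builds (but does not send) the request and assumes a PostgREST instance at localhost:3000:

```python
import json
from urllib.request import Request

employees = [
    {"id": 1, "name": "Old employee 1", "salary": 30000},
    {"id": 3, "name": "New employee 3", "salary": 50000},
]

req = Request(
    "http://localhost:3000/employees",
    data=json.dumps(employees).encode(),
    method="POST",
    headers={
        "Content-Type": "application/json",
        # Merge rows whose primary key already exists instead of failing.
        "Prefer": "resolution=merge-duplicates",
    },
)
print(req.get_method())          # POST
print(req.get_header("Prefer"))  # resolution=merge-duplicates
```

To actually send it, pass req to urllib.request.urlopen against a running server.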
Important
After creating a table or changing its primary key, you must refresh PostgREST's schema cache for upsert to work properly. To learn how to refresh the cache see schema_reloading.
By specifying the on_conflict query parameter, you can make upsert work on columns that have a UNIQUE constraint.

POST /employees?on_conflict=name HTTP/1.1
Prefer: resolution=merge-duplicates

[
  { "name": "Old employee 1", "salary": 40000 },
  { "name": "Old employee 2", "salary": 52000 },
  { "name": "New employee 3", "salary": 60000 }
]

curl "http://localhost:3000/employees?on_conflict=name" -X POST -H "Content-Type: application/json" -H "Prefer: resolution=merge-duplicates" -d @- << EOF
[
  { "name": "Old employee 1", "salary": 40000 },
  { "name": "Old employee 2", "salary": 52000 },
  { "name": "New employee 3", "salary": 60000 }
]
EOF
A single row upsert can be done by using PUT and filtering the primary key columns with eq:

PUT /employees?id=eq.4 HTTP/1.1
Content-Type: application/json

{ "id": 4, "name": "Sara B.", "salary": 60000 }

curl "http://localhost:3000/employees?id=eq.4" -X PUT -H "Content-Type: application/json" -d '{ "id": 4, "name": "Sara B.", "salary": 60000 }'
All the columns must be specified in the request body, including the primary key columns.
To delete rows in a table, use the DELETE verb plus h_filter. For instance deleting inactive users:

DELETE /user?active=is.false HTTP/1.1

curl "http://localhost:3000/user?active=is.false" -X DELETE
Deletions also support prefer_return plus v_filter.

DELETE /user?id=eq.1 HTTP/1.1
Prefer: return=representation

curl "http://localhost:3000/user?id=eq.1" -X DELETE -H "Prefer: return=representation"

{"id": 1, "email": "johndoe@email.com"}
Warning
Beware of accidentally deleting all rows in a table. To learn how to prevent that, see block_fulltable.
You can limit the number of rows affected by update or delete with the limit query parameter. For this, you must add an explicit order on a unique column (or columns).

PATCH /users?limit=10&order=id&last_login=lt.2017-01-01 HTTP/1.1
Content-Type: application/json

{ "status": "inactive" }

curl "http://localhost:3000/users?limit=10&order=id&last_login=lt.2017-01-01" -X PATCH -H "Content-Type: application/json" -d '{ "status": "inactive" }'

DELETE /users?limit=10&order=id&status=eq.inactive HTTP/1.1

curl -X DELETE "http://localhost:3000/users?limit=10&order=id&status=eq.inactive"
If your table has no unique columns, you can use the ctid system column.
Using offset to target a different subset of rows is also possible.
Note
There is no native UPDATE...LIMIT or DELETE...LIMIT support in PostgreSQL; the generated query simulates that behavior and is based on this Crunchy Data blog post.