
PostgREST Benchmark #9

Open · 3 tasks
kiwicopple opened this issue Sep 28, 2020 · 18 comments

@kiwicopple

Chore

Describe the chore

As part of our move from Alpha to Beta, we will want to be clear about the limitations and performance of each of the components in Supabase. We need to do 3 things:

  • Design a benchmark (if it isn't already designed)
  • Perform the benchmark
  • Create a blog post or documentation detailing the results

Additional context

Steve mentioned that there were some old benchmarks; it might just be a case of running these with the latest version of pgrst.

@steve-chavez

I've done the load tests with the client (k6) and the server (pg/pgrst) on separate EC2 instances in the same VPC (over local IP addresses). This avoids external networking noise and makes the tests reproducible.

Setup

Database

  • Chinook. Also used for the Hasura benchmark.
  • For reproducible POST tests, I removed the indexes and constraints from the employee table. A large row count degrades pg INSERT performance (this becomes noticeable after about 10 runs of the tests).

VPC

  • Server: PostgreSQL 12 plus PostgREST v7.0.1 exposed directly on port 80 (no Nginx). Both run on the same instance.

    • Same setup on 2 instance types: t3a.nano and t2.nano.
    • pg + pgrst run as systemd services. No containers, to avoid any overhead.
    • OS: NixOS 20.09. Kernel: Linux 5.4.35.
  • Client: k6 on a t3a.medium.

    • Smaller t3a instances (nano, micro, small) don't have enough RAM for enough k6 Virtual Users.
    • Linux settings (ulimit, TCP) tuned according to the k6 tuning guide.
    • OS: NixOS 20.09. Kernel: Linux 5.4.35.

(The description of the VPC can be found here)

Load test scenarios

The k6 scripts use a constant request rate for 1 min. They can be found here. (A sketch of one such scenario follows the list below.)

  • GETSingle: GET a single row (filtered by id) /artist?select=*&artist_id=eq.3
  • GETSingleEmbed: GET a single row (filtered by id) with 2 embeds /album?select=*,track(*,genre(*))&artist_id=eq.127
  • GETAllEmbed: GET all rows (no filter) with 2 embeds /album?select=*,track(*,genre(*))
  • POSTSingle: POST a single JSON object /employee
  • POSTBulk: POST a bulk insert (JSON array of 20 objects) /employee
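For reference, here's a minimal sketch of what one of these constant-request-rate scenarios could look like in k6 (the real scripts are in the linked repo; the rate, VU counts and server address below are placeholders):

    import http from "k6/http";
    import { check } from "k6";

    // Constant request rate for 1 minute (numbers are placeholders).
    export const options = {
      scenarios: {
        getsingle: {
          executor: "constant-arrival-rate",
          rate: 1000,            // target requests per second
          timeUnit: "1s",
          duration: "1m",
          preAllocatedVUs: 200,
          maxVUs: 600,
        },
      },
    };

    export default function () {
      // GETSingle: fetch a single row filtered by id.
      const res = http.get("http://<server-local-ip>/artist?select=*&artist_id=eq.3");
      check(res, { "status is 200": (r) => r.status === 200 });
    }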

Results

Here are the summaries of the results. Full results for each test can be found here.

t3anano

benchmark_name http_reqs(rate) http_reqs(total) http_req_duration(avg) data_received data_sent vus failed_requests
GETSingle 1372.484403/s 82435 59.21ms 24 MB 8.5 MB 230 0.00%
GETSingleEmbed 455.587386/s 27922 1.04s 341 MB 3.4 MB 600 0.00%
GETAllEmbed 28.445898/s 2091 7.47s 1.7 GB 220 kB 388 0.00%
POSTSingle 1152.393369/s 69522 258.06ms 39 MB 33 MB 580 0.00%
POSTBulk 557.423585/s 34472 52.96ms 5.1 MB 240 MB 162 0.00%

t2nano

benchmark_name http_reqs(rate) http_reqs(total) http_req_duration(avg) data_received data_sent vus failed_requests
GETSingle 1018.047624/s 63055 292.38ms 18 MB 6.4 MB 529 0.00%
GETSingleEmbed 415.859432/s 25511 1.01s 312 MB 3.1 MB 600 0.00%
GETAllEmbed 24.2963/s 1939 10.62s 1.6 GB 220 kB 487 0.00%
POSTSingle 928.804556/s 55926 315.39ms 31 MB 26 MB 556 0.00%
POSTBulk 550.833996/s 34517 61.61ms 5.1 MB 240 MB 176 0.00%

Comments

  • There's a notable performance drop from the previously advertised 2000 req/s. The Heroku free-tier dyno is equivalent to the t2.nano, so it's about ~1018 req/s now.
    • I have an idea of what could be causing this drop (SET LOCALs before every query). I'm looking forward to improving this.
  • If the GET request has no filter, the req/s will be very low. That might be reasonable considering around 1.7 GB is downloaded.
    • Lots of these requests can cause Linux to OOM-kill PostgREST. This can be mitigated with systemd's Restart=always (already on KPS, IIRC); see the unit snippet after this list.
  • Other scenarios to test?
  • Should I test other ec2 types?
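For reference, the systemd mitigation mentioned above is just the restart policy in the pgrst unit. A minimal sketch of the relevant part (paths are placeholders):

    [Service]
    ExecStart=/usr/bin/postgrest /etc/postgrest.conf
    Restart=always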

@steve-chavez

steve-chavez commented Oct 5, 2020

I was not satisfied with the results I got from PostgREST, so I tried some things to increase the req/s:

  • Reduced logging: Got around ~200 req/s more. PR done.
  • Batching SET LOCALs: Failed approach, got fewer req/s. PR done.
  • More efficient concatenation of query fragments: ~220 req/s more. PR done.
  • Prepared statement for GET: ~240 req/s more. PR done.

Once I'm done with the above improvements, I should be able to get around 1900 req/s in the GETSingle case, which would be close to the previous 2000 req/s we had.

After that I'll proceed with writing the blog post. @kiwicopple What do you think.. Is that good?

Edit: Improvements done. New GETSingle results on t3a.nano: 1980.830129 req/s.

@kiwicopple

I think that's amazing @steve-chavez - great job increasing throughput by 10% and completing the PR. It's great that we have this running consistently too, so that we can measure changes to PostgREST over time.

Looking forward to the blog post. Make sure you cover the changes you made to get it faster (and your failed attempts) - I'm very curious!

@awalias

awalias commented Oct 20, 2020

@steve-chavez could you run these on a larger instance (maybe t3a.large) and see if you get any improvement? For KPS I can't get past ~1200/s (I've tried micro, medium, 2xlarge), even when I switch the instances to "Unlimited" mode.

@steve-chavez

@awalias Sure. For reads (GETSingle), I'm also getting similar results on t3a.nano, t3a.large and t3a.xlarge - all unlimited. However, with a c5.xlarge I get a noticeable improvement.

I'll post the results in a while.

@steve-chavez

steve-chavez commented Oct 22, 2020

I've done more load tests on the latest version, this time using more t3a instances and a c5.xlarge. The results are also on the supabase.io benchmarks project.

Latest version (7.0.1)

t3anano

benchmark_name http_reqs_rate(req/s) http_reqs_count http_req_duration_avg(ms) data_received data_sent vus failed_requests
GETAllEmbed 28.14130368125745 1052 4392.724907480037 803 MB 108 kB 212 0
GETSingle 1368.1003585286514 41228 54.38214947382864 11 MB 4147 kB 177 0
GETSingleEmbed 478.7065957767772 14548 246.26307021693663 170 MB 1733 kB 182 0
POSTBulk 558.278675135931 17344 41.93758273547061 2524 kB 115 MB 0 0
POSTSingle 1125.3026654185057 34124 193.6772965780991 18 MB 15 MB 411 0
RPCSimple 1461.277419102948 44068 184.8882632989707 8736 kB 4519 kB 538 0
RPCGETSingle 1357.8467358391467 41033 211.10588839750883 9497 kB 4488 kB 565 0
RPCGETSingleEmbed 563.5562193025908 16953 209.60113417967398 196 MB 2550 kB 230 0

t3amicro

benchmark_name http_reqs_rate(req/s) http_reqs_count http_req_duration_avg(ms) data_received data_sent vus failed_requests
GETAllEmbed 28.319923884594417 1052 4325.241133394486 803 MB 109 kB 209 0
GETSingle 1359.4629431116216 40860 66.68731558509552 11 MB 4150 kB 197 0
GETSingleEmbed 499.18213012816136 15001 5.586006356709547 175 MB 1802 kB 100 0
POSTBulk 562.2905272757457 17369 42.76031739541728 2527 kB 115 MB 128 0
POSTSingle 1130.4409066451226 34205 205.7264949658817 18 MB 15 MB 435 0

t3amedium

benchmark_name http_reqs_rate(req/s) http_reqs_count http_req_duration_avg(ms) data_received data_sent vus failed_requests
GETAllEmbed 28.64676017095671 1061 4368.507001370408 810 MB 111 kB 211 0
GETSingle 1337.9942439864074 40234 119.74172412412452 11 MB 4126 kB 296 0
GETSingleEmbed 478.6294803275805 14528 246.4662328864953 169 MB 1759 kB 178 0
POSTBulk 557.0108925292726 17128 52.160808355499874 2492 kB 113 MB 137 0
POSTSingle 1024.3395454045806 30903 268.5424716246939 16 MB 14 MB 519 0

t3alarge

benchmark_name http_reqs_rate(req/s) http_reqs_count http_req_duration_avg(ms) data_received data_sent vus failed_requests
GETAllEmbed 27.959444076327134 1046 4490.868743819312 799 MB 108 kB 213 0
GETSingle 1344.8077192588032 40748 88.26264817613146 11 MB 4138 kB 236 0
GETSingleEmbed 491.31689092671496 14857 104.77036235303191 173 MB 1785 kB 122 0
POSTBulk 554.3946223877862 17118 47.72673540501268 2491 kB 113 MB 142 0
POSTSingle 1129.0000963808495 34168 203.9136696720621 18 MB 15 MB 422 0

t3axlarge

benchmark_name http_reqs_rate(req/s) http_reqs_count http_req_duration_avg(ms) data_received data_sent vus failed_requests
GETAllEmbed 49.90669359634122 1613 1501.471376600127 1232 MB 169 kB 124 0
GETSingle 1540.274836106219 46298 31.654532426022385 13 MB 4747 kB 177 0
GETSingleEmbed 758.428261 22793 75.40341754064875 279 MB 2.8 MB 186 0
POSTBulk 560.4358393526506 17434 41.623110891706105 2537 kB 115 MB 0 0
POSTSingle 1073.0229247909144 32371 164.82871096886123 17 MB 15 MB 349 0
RPCSimple 1554.2067168921103 46726 17.767453889868605 9263 kB 4883 kB 148 0
RPCGETSingle 1575.321899074914 47336 12.293285437447201 11 MB 5270 kB 128 0
RPCGETSingleEmbed 813.5611151030155 24465 120.73675498291435 283 MB 3727 kB 211 0

c5xlarge

benchmark_name http_reqs_rate(req/s) http_reqs_count http_req_duration_avg(ms) data_received data_sent vus failed_requests
GETAllEmbed 64.33705786521816 2053 1092.1149330038968 1567 MB 213 kB 129 0
GETSingle 2207.0122163428027 66745 18.123930021919435 18 MB 6779 kB 184 0
GETSingleEmbed 1025.0301701378478 30810 26.454880600064858 359 MB 3701 kB 141 0
POSTBulk 615.2065824676657 18850 46.25723316201589 2743 kB 125 MB 157 0
POSTSingle 1394.0286261871204 41963 26.314313699044487 22 MB 19 MB 170 0
RPCSimple 2229.6218875590957 66996 14.15916249617884 13 MB 6935 kB 186 0
RPCGETSingle 2265.1258574916014 68219 13.350303690452941 15 MB 7528 kB 161 0
RPCGETSingleEmbed 1065.2575525253092 32015 42.555433064344946 371 MB 4846 kB 164 0

Comments

  • I was expecting to see an improvement with more vCPUs, since PostgREST should be able to use all cores automatically - thanks to a GHC option.
    • c5.xlarge (4 vCPUs, 8GB RAM) holds in this regard: it gives more req/s.
    • t3a.nano to t3a.large also hold: they all give similar req/s with 2 vCPUs (though they vary in RAM).
    • t3a.xlarge (4 vCPUs, 16GB RAM) req/s only increase slightly. I was expecting it to have numbers similar to c5.xlarge because of the vCPU count.

Edit 1: corrected results for GETSingleEmbed on t3axlarge.
Edit 2: corrected results for GETSingle on t3axlarge.
Edit 3: added RPC results for t3anano/t3axlarge/c5xlarge.

@steve-chavez

steve-chavez commented Oct 23, 2020

These are load tests on the new master version (unreleased) with the above improvements:

Master version

t3anano

benchmark_name http_reqs_rate(req/s) http_reqs_count http_req_duration_avg(ms) data_received data_sent vus failed_requests
GETAllEmbed 28.432965198469514 1069 4455.005381366703 816 MB 110 kB 223 0
GETSingle 2067.949048556139 62161 9.746079284696217 17 MB 6253 kB 135 0
GETSingleEmbed 804.9944509323249 24281 9.240602824924855 283 MB 2893 kB 106 0
POSTBulk 607.923018497801 18859 60.058675140834275 2560 kB 125 MB 0 0
POSTSingle 1546.5607244755133 46616 20.313292865175143 6328 kB 21 MB 168 0
RPCSimple 2011.3246029355748 60638 38.478822698918314 11 MB 6218 kB 212 0
RPCGETSingle 2165.9723477785415 65083 7.496874328472832 14 MB 7118 kB 126 0
RPCGETSingleEmbed 724.2786621387023 22015 335.15800572518646 255 MB 3311 kB 426 0

t3axlarge

benchmark_name http_reqs_rate(req/s) http_reqs_count http_req_duration_avg(ms) data_received data_sent vus failed_requests
GETAllEmbed 52.79012209817422 1650 781.8713445618171 1260 MB 172 kB 100 0
GETSingle 2163.741375811913 65088 11.116856056615712 17 MB 6674 kB 163 0
GETSingleEmbed 1151.328495863017 34618 22.066126356230715 403 MB 4192 kB 166 0
POSTBulk 607.9391977183707 18903 59.37284083510547 2566 kB 125 MB 0 0
POSTSingle 1335.0433780553615 40225 58.09855885978874 5460 kB 18 MB 284 0
RPCSimple 2013.6575186156895 60505 15.883888903842529 11 MB 6322 kB 195 0
RPCGETSingle 2120.0409269479424 63843 16.249871573610218 14 MB 7108 kB 185 0
RPCGETSingleEmbed 1117.231833206941 33715 43.549919254901205 390 MB 5136 kB 200 0

c5xlarge

benchmark_name http_reqs_rate(req/s) http_reqs_count http_req_duration_avg(ms) data_received data_sent vus failed_requests
GETAllEmbed 69.83586322263953 2100 43.52809438095244 1603 MB 217 kB 100 0
GETSingle 2845.2504094666474 85673 5.991303163178547 23 MB 8701 kB 140 0
GETSingleEmbed 1522.8594526998459 45759 7.049463338512587 533 MB 5496 kB 121 0
POSTBulk 629.4452167238903 19336 38.03010694414565 2625 kB 128 MB 148 0
POSTSingle 1514.6482852887295 45618 13.3789027014557 6192 kB 21 MB 166 0
RPCSimple 2755.724864338208 82801 10.698241249151723 15 MB 8571 kB 176 0
RPCGETSingle 2793.208517310614 83929 8.848548436511885 18 MB 9262 kB 162 0
RPCGETSingleEmbed 1449.2239286992774 43549 16.945833428367923 504 MB 6592 kB 153 0

Comments

  • More reads/writes than the previous version, but obtaining all the data without filtering (GETAllEmbed) stays the same.
  • t3a.nano to t3a.large results stay almost the same. I've omitted the results here for brevity.
  • t3a.xlarge still only improves slightly on GETSingle despite having 4 vCPUs. c5.xlarge (also 4 vCPUs) improves much more.

Edit 1: Corrected GETSingle t3a.xlarge results.
Edit 2: added RPC results for t3anano/t3axlarge/c5xlarge.
Edit 3: corrected RPC results(functions were not marked as STABLE and this reduced the req/s)

@kiwicopple

First of all, amazing job with the performance improvements @steve-chavez - 51% increase (!!) on nano for GET single 😲. Wow

t3a.xlarge (4 vCPUs, 16GB RAM) doesn't hold on GETSingle

I find this strange. Perhaps the t3 architecture has something unusual - are these on standard CPU or unlimited CPU? Perhaps they are on standard, and the CPU is being capped?

Either way - this does make one thing clear:

t3a.nano (2 vCPU): $0.0047 per Hour
t3a.xlarge (4 vCPU): $0.1504 per Hour - 32x more expensive than nano, minimal increase in throughput
c5.xlarge (4 vCPU): $0.17 per Hour - 36x more expensive than nano, 38% increase in throughput

From these numbers, it seems much better to scale horizontally than vertically. The cost of getting a couple of extra vCPUs inside the same box is very high.

@steve-chavez

steve-chavez commented Oct 24, 2020

t3a.xlarge (4 vCPUs, 16GB RAM) doesn't hold on GETSingle
I find this strange. Perhaps the t3 architecture has something unusual - are these on standard CPU or unlimited CPU? Perhaps they are on standard, and the CPU is being capped?

@kiwicopple I double-checked that they were on unlimited. But I made a mistake before - I didn't increase the constant request rate for GETSingle on the t3a.xlarge (fixed here).

It turns out that the req/s do improve slightly on GETSingle. I've corrected the results above.

@steve-chavez

I've added load tests for RPC. I think with this we cover all the relevant PostgREST features. The scenarios are:

  • RPCSimple: Simple addition of 5 numbers /rpc/add_them?a=1&b=2&c=3&d=4&e=5.
  • RPCGETSingle: Wrapper function that gets a single row (filtered by id; see the sketch after this list):
    /rpc/ret_artists?select=*&artist_id=eq.3
  • RPCGETSingleEmbed: Wrapper function that gets a single row (filtered by id) with 2 embeds:
    /rpc/ret_albums?select=album_id,title,artist_id,track(*,genre(*))&artist_id=eq.127
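As a rough idea, a wrapper like ret_artists could look something like this (a sketch assuming the Chinook artist table; the real definitions live with the k6 scripts):

    CREATE FUNCTION ret_artists() RETURNS SETOF artist AS $$
      SELECT * FROM artist;
    $$ LANGUAGE sql STABLE;

PostgREST then applies the select and artist_id=eq.3 query params on top of the function's result set.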

Results (only for t3a.nano, t3a.xlarge and c5.xlarge) are in the comments above: v7.0.1 and new version.

(k6 scripts added on #7)

Comments

  • Using wrapper functions doesn't add much overhead if they're declared as IMMUTABLE. The req/s are almost the same as GETSingle and GETSingleEmbed.

@steve-chavez

steve-chavez commented Nov 12, 2020

A couple more findings here.

Unix Socket connection from PostgREST to PostgreSQL

Using a Unix socket instead of a TCP socket. This is only possible if pgrst/pg are on the same instance. Basically, have this in the pgrst config:

db-uri = "postgres://postgres@/postgres"

Instead of:

db-uri = "postgres://postgres@localhost/postgres"

t3anano - Master version

benchmark_name http_reqs_rate(req/s)
GETSingle 2450.151619
POSTSingle 1716.604133
RPCGETSingle 2436.285207

Comments

  • 11.6% more throughput compared to the TCP socket.

@steve-chavez

Pool connections

Number of connections kept in the pgrst pool (db-pool, 10 by default).
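For reference, it's a single line in the pgrst config (shown with the default value):

    db-pool = 10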

t3anano - Master version

I've done GETSingle load tests for different pool sizes.

db-pool http_reqs_rate(req/s)
1 1026.592873
2 1419.438711
3 1632.219579
4 1822.274188
5 1986.049043
6 2041.79098
7 2136.138374
8 2173.482337
9 2189.527511
10 2214.895341
11 2203.853362
12 2197.928789
15 2190.344016

Comments

  • Req/s grow quickly up to ~5 pool connections and plateau around the default of 10; larger pools don't add throughput on this instance.
@steve-chavez

Added a test for updates (on #16):

  • PATCHSingle: Patches a single row /actor?actor_id=eq.<random_id>

t3a.nano - Nightly version

Ran the test for 15 minutes and got 1474.140078/s:

    data_received..............: 147 MB  164 kB/s
    data_sent..................: 224 MB  248 kB/s
    dropped_iterations.........: 23182   25.755935/s
  ✓ failed requests............: 0.00%   ✓ 0     ✗ 1326821
    http_req_blocked...........: avg=12.01µs  min=1.22µs  med=3.49µs  max=406.86ms p(90)=5.02µs   p(95)=6.09µs
    http_req_connecting........: avg=4.26µs   min=0s      med=0s      max=286.85ms p(90)=0s       p(95)=0s
  ✓ http_req_duration..........: avg=27.03ms  min=1.35ms  med=3.2ms   max=1.28s    p(90)=82.07ms  p(95)=180.63ms
    http_req_receiving.........: avg=230.28µs min=6.38µs  med=19.64µs max=248.27ms p(90)=51.66µs  p(95)=294.48µs
    http_req_sending...........: avg=3.06ms   min=10.11µs med=33.01µs max=407.83ms p(90)=203.92µs p(95)=3.35ms
    http_req_tls_handshaking...: avg=0s       min=0s      med=0s      max=0s       p(90)=0s       p(95)=0s
    http_req_waiting...........: avg=23.74ms  min=1.29ms  med=3.07ms  max=1.28s    p(90)=61.86ms  p(95)=157.09ms
    http_reqs..................: 1326821 1474.140078/s
    iteration_duration.........: avg=30.93ms  min=1.53ms  med=3.47ms  max=1.42s    p(90)=94.31ms  p(95)=199.88ms
    iterations.................: 1326821 1474.140078/s
    vus........................: 600     min=119 max=600
    vus_max....................: 600     min=119 max=600

@steve-chavez

Ant and I found that while doing PATCH load tests on the read schema (1 million rows, indexed), PostgREST gave a lot of 503 errors. The load tests were done on a t3a.nano and a t3a.micro, and the 503s happened because of low memory: the PostgreSQL connections kept getting OOM-killed:

$ dmesg | grep oom

Out of memory: Killed process 14794 (postgres) total-vm:522372kB, anon-rss:103944kB, file-rss:0kB, shmem-rss:65160kB, UID:71 pgtables:468kB oom_score_adj:0
oom_reaper: reaped process 14794 (postgres), now anon-rss:0kB, file-rss:0kB, shmem-rss:65160kB

## note the anon-rss:103944kB = ~104MB. That's high considering a t3a.nano only has 500MB total.

What happens is that in pg an UPDATE is actually an INSERT plus a DELETE, which has to happen for each row and takes a considerable amount of resources (the indexes have to be updated as well). This problem didn't appear in my previous PATCH test on chinook because it only has a few hundred rows (less work for updating the index).

So this is more of a db issue, and the simplest solution is to increase RAM. Still, I've run load tests on the different t3a instances.

Nightly version

Patching a single row on the read schema: /read?id=eq.${Math.floor(Math.random() * 1000000 + 1)}

instance http_reqs_rate
t3a.nano 149.765072/s
t3a.micro 349.431943/s
t3a.small 748.744593/s
t3a.large 939.188997/s
t3a.xlarge 1236.351389/s

Comments

  • Memory does matter in this case, and having more leads to more req/s. Increasing the load above these thresholds can lead to OOM. This is easily reproducible on nano and micro; on small and higher I didn't manage to get OOMs.
  • I enabled HOT updates/fillfactor for the read table, which is recommended for heavy UPDATE loads. It's done like:
CREATE TABLE public.read (
  id bigserial,
  slug int,
  unique(id) with (fillfactor=70)
)
WITH (fillfactor=70);
  • For some cases, Cybertec recommends (search for "avoid UPDATE altogether") not doing UPDATEs at all and instead redesigning the table to only do INSERTs; see the sketch below.
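A minimal sketch of that append-only idea (table and column names are hypothetical; the current state becomes the latest row per id):

    -- append a new version instead of updating in place
    CREATE TABLE public.read_log (
      id   bigint      NOT NULL,
      slug int,
      at   timestamptz NOT NULL DEFAULT now()
    );

    -- read the latest version of each row
    SELECT DISTINCT ON (id) id, slug
    FROM public.read_log
    ORDER BY id, at DESC;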

@steve-chavez

steve-chavez commented Mar 5, 2021

I've added Nginx to the benchmark setup. A default Nginx config can lower the throughput by almost 30%, but a good config (unix socket + keepalive) can reduce the loss to about 10%.

t3a.nano - PostgREST nightly - Nginx with default config

Nginx's default config means that a TCP connection is used between Nginx and PostgREST and no keepalive is configured for the upstream server.

GETSingle - 1437.202365/s

    data_received..............: 14 MB  451 kB/s
    data_sent..................: 4.7 MB 156 kB/s
    dropped_iterations.........: 287    9.544987/s
  ✓ failed requests............: 0.00%  ✓ 0     ✗ 43214
    http_req_blocked...........: avg=8.86µs  min=1.47µs   med=3.2µs   max=7.01ms   p(90)=4.75µs  p(95)=16.59µs
    http_req_connecting........: avg=3.3µs   min=0s       med=0s      max=6.91ms   p(90)=0s      p(95)=0s
  ✓ http_req_duration..........: avg=21.67ms min=945.86µs med=4.62ms  max=123.44ms p(90)=67.34ms p(95)=76.07ms
    http_req_receiving.........: avg=70µs    min=12.89µs  med=36.22µs max=29.37ms  p(90)=79.64µs p(95)=177.04µs
    http_req_sending...........: avg=90.71µs min=8.03µs   med=27.92µs max=43.28ms  p(90)=52.37µs p(95)=69.82µs
    http_req_tls_handshaking...: avg=0s      min=0s       med=0s      max=0s       p(90)=0s      p(95)=0s
    http_req_waiting...........: avg=21.51ms min=875.72µs med=4.53ms  max=123.1ms  p(90)=67.19ms p(95)=75.9ms
    http_reqs..................: 43214  1437.202365/s
    iteration_duration.........: avg=21.82ms min=1.04ms   med=4.76ms  max=123.57ms p(90)=67.51ms p(95)=76.24ms
    iterations.................: 43214  1437.202365/s
    vus........................: 133    min=104 max=133
    vus_max....................: 133    min=104 max=133

30.5% loss compared to the 2067 req/s obtained on standalone PostgREST above: #2 (comment)

POSTSingle - 1160.350482/s

    data_received..............: 5.6 MB 186 kB/s
    data_sent..................: 17 MB  555 kB/s
    dropped_iterations.........: 927    30.667871/s
  ✓ failed requests............: 0.00%  ✓ 0     ✗ 35074
    http_req_blocked...........: avg=30.28µs  min=1.5µs   med=3.49µs  max=84.09ms  p(90)=5.42µs   p(95)=12.89µs
    http_req_connecting........: avg=16.99µs  min=0s      med=0s      max=53.56ms  p(90)=0s       p(95)=0s
  ✓ http_req_duration..........: avg=85.31ms  min=1.81ms  med=93.53ms max=304.73ms p(90)=150.17ms p(95)=161.46ms
    http_req_receiving.........: avg=169.97µs min=12.12µs med=33.95µs max=120.95ms p(90)=84.75µs  p(95)=321.98µs
    http_req_sending...........: avg=1.19ms   min=10.88µs med=32.8µs  max=146.83ms p(90)=76.52µs  p(95)=491.42µs
    http_req_tls_handshaking...: avg=0s       min=0s      med=0s      max=0s       p(90)=0s       p(95)=0s
    http_req_waiting...........: avg=83.94ms  min=1.74ms  med=91.84ms max=304.25ms p(90)=149.55ms p(95)=160.73ms
    http_reqs..................: 35075  1160.383565/s
    iteration_duration.........: avg=85.61ms  min=2.01ms  med=93.84ms max=307.15ms p(90)=150.45ms p(95)=161.74ms
    iterations.................: 35074  1160.350482/s
    vus........................: 206    min=109 max=206
    vus_max....................: 206    min=109 max=206

24.9% loss compared to the 1546 req/s obtained on standalone PostgREST above: #2 (comment)

t3a.nano - PostgREST nightly - Nginx with best config

Here Nginx connects to PostgREST through a unix socket and has keepalive 64. A sketch of the config follows.
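A minimal sketch of that upstream config (the socket path is an assumption; HTTP/1.1 and an empty Connection header are needed for upstream keepalive to take effect):

    upstream postgrest {
      server unix:/run/postgrest.sock;  # unix socket to PostgREST
      keepalive 64;                     # idle connections kept open to the upstream
    }

    server {
      listen 80;
      location / {
        proxy_pass http://postgrest;
        proxy_http_version 1.1;         # upstream keepalive needs HTTP/1.1
        proxy_set_header Connection ""; # clear the default "Connection: close"
      }
    }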

GETSingle - 1786.875745/s

    data_received..............: 17 MB  561 kB/s
    data_sent..................: 5.8 MB 194 kB/s
    dropped_iterations.........: 307    10.216423/s
  ✓ failed requests............: 0.00%  ✓ 0     ✗ 53695
    http_req_blocked...........: avg=9.41µs   min=1.5µs    med=3.26µs  max=6.99ms   p(90)=4.75µs  p(95)=13.86µs
    http_req_connecting........: avg=3.48µs   min=0s       med=0s      max=6.71ms   p(90)=0s      p(95)=0s
  ✓ http_req_duration..........: avg=8.61ms   min=787.65µs med=2.4ms   max=166.16ms p(90)=27.73ms p(95)=42.97ms
    http_req_receiving.........: avg=79.52µs  min=13.56µs  med=34.77µs max=24.48ms  p(90)=80.19µs p(95)=178.56µs
    http_req_sending...........: avg=148.44µs min=7.55µs   med=27.75µs max=41.5ms   p(90)=53.4µs  p(95)=96.41µs
    http_req_tls_handshaking...: avg=0s       min=0s       med=0s      max=0s       p(90)=0s      p(95)=0s
    http_req_waiting...........: avg=8.38ms   min=730.98µs med=2.32ms  max=165.61ms p(90)=26.97ms p(95)=42.24ms
    http_reqs..................: 53695  1786.875745/s
    iteration_duration.........: avg=8.75ms   min=888.37µs med=2.53ms  max=167.5ms  p(90)=27.91ms p(95)=43.16ms
    iterations.................: 53695  1786.875745/s
    vus........................: 121    min=105 max=121
    vus_max....................: 121    min=105 max=121

13.5% loss compared to the 2067 req/s obtained on standalone PostgREST above: #2 (comment)

POSTSingle - 1420.388499/s

    data_received..............: 6.9 MB 227 kB/s
    data_sent..................: 21 MB  680 kB/s
    dropped_iterations.........: 581    19.23291/s
  ✓ failed requests............: 0.00%  ✓ 0     ✗ 42908
    http_req_blocked...........: avg=33.29µs  min=1.46µs  med=3.55µs  max=114.74ms p(90)=5.26µs  p(95)=11.84µs
    http_req_connecting........: avg=17.32µs  min=0s      med=0s      max=66.6ms   p(90)=0s      p(95)=0s
  ✓ http_req_duration..........: avg=17.29ms  min=1.55ms  med=4.83ms  max=257.09ms p(90)=55ms    p(95)=76.62ms
    http_req_receiving.........: avg=134.88µs min=11.72µs med=35.04µs max=77.83ms  p(90)=78.06µs p(95)=261.57µs
    http_req_sending...........: avg=621.01µs min=11.12µs med=33.21µs max=64.28ms  p(90)=77.86µs p(95)=579.13µs
    http_req_tls_handshaking...: avg=0s       min=0s      med=0s      max=0s       p(90)=0s      p(95)=0s
    http_req_waiting...........: avg=16.53ms  min=1.47ms  med=4.7ms   max=257.03ms p(90)=52.55ms p(95)=74.66ms
    http_reqs..................: 42909  1420.421602/s
    iteration_duration.........: avg=17.58ms  min=1.72ms  med=5.09ms  max=257.7ms  p(90)=55.39ms p(95)=76.92ms
    iterations.................: 42908  1420.388499/s
    vus........................: 136    min=109 max=136
    vus_max....................: 136    min=109 max=136

8.15% loss compared to the 1546 req/s obtained on standalone PostgREST above: #2 (comment)

Comments

  • Tried tuning kernel settings, but they didn't seem to have any effect:
"net.core.somaxconn" = 65535;
"net.core.netdev_max_backlog" = 65535;
"net.ipv4.tcp_tw_reuse" = 1;
"net.ipv4.tcp_max_syn_backlog" = 20480;
"net.ipv4.tcp_max_tw_buckets" = 400000;
  • Nginx consumes about 20% CPU when doing the load tests, so the ~10% perf loss makes sense.
  • Using the unix socket connection from Nginx to PostgREST doesn't noticeably affect perf; the gains are mostly due to the keepalive. This is unlike the unix socket connection from PostgREST to PostgreSQL, which, as shown above, does noticeably affect perf.

@steve-chavez

I've also tested a t3a.nano in standard mode, without CPU credits. That kills performance.

t3anano standard, zero CPU Credits - standalone PostgREST nightly

  • GETSingle: 249.616041/s

Comments

  • Only about 10.6% CPU is consumed by postgrest in this case.
  • This seems in line with the baseline CPU utilization in the AWS docs - only 5% per vCPU on a t3a.nano (2 vCPUs × 5% = 10%, which matches).

@steve-chavez

steve-chavez commented Apr 7, 2022

Edit1: Corrected the tests according to #30
Edit2: Updated the numbers with the changes discussed in #34

m5a instances

Benches on m5a instances for both pg and pgrst - with Nginx included.

m5a.large (50 VUs)

GETSingle - 2577.743879/s

    data_received..............: 24 MB  809 kB/s
    data_sent..................: 8.0 MB 264 kB/s
  ✓ failed requests............: 0.00%  ✓ 0    ✗ 77501
    http_req_blocked...........: avg=5.15µs  min=960ns   med=1.73µs  max=1.13ms   p(90)=2.32µs  p(95)=3.34µs
    http_req_connecting........: avg=2.61µs  min=0s      med=0s      max=1ms      p(90)=0s      p(95)=0s
  ✓ http_req_duration..........: avg=19.28ms min=1.5ms   med=18.43ms max=263.2ms  p(90)=28.68ms p(95)=32.52ms
    http_req_receiving.........: avg=42.24µs min=14.46µs med=40µs    max=5.88ms   p(90)=50.89µs p(95)=58.22µs
    http_req_sending...........: avg=11.57µs min=6.09µs  med=10.13µs max=2.66ms   p(90)=12.76µs p(95)=16.1µs
    http_req_tls_handshaking...: avg=0s      min=0s      med=0s      max=0s       p(90)=0s      p(95)=0s
    http_req_waiting...........: avg=19.22ms min=1.45ms  med=18.38ms max=263.15ms p(90)=28.63ms p(95)=32.46ms
    http_reqs..................: 77501  2577.743879/s
    iteration_duration.........: avg=19.35ms min=1.56ms  med=18.5ms  max=263.27ms p(90)=28.76ms p(95)=32.59ms
    iterations.................: 77501  2577.743879/s
    vus........................: 50     min=50 max=50
    vus_max....................: 50     min=50 max=50

POSTSingle - 2516.502337/s

    data_received..............: 12 MB 402 kB/s
    data_sent..................: 36 MB 1.2 MB/s
  ✓ failed requests............: 0.00% ✓ 0    ✗ 76023
    http_req_blocked...........: avg=5.66µs  min=1.03µs  med=1.9µs   max=1.15ms   p(90)=2.71µs  p(95)=3.51µs
    http_req_connecting........: avg=2.77µs  min=0s      med=0s      max=721.09µs p(90)=0s      p(95)=0s
  ✓ http_req_duration..........: avg=19.56ms min=2.29ms  med=18.89ms max=192.29ms p(90)=28.51ms p(95)=31.45ms
    http_req_receiving.........: avg=41.64µs min=14.97µs med=38.58µs max=4.49ms   p(90)=50.31µs p(95)=56.99µs
    http_req_sending...........: avg=17.9µs  min=8.32µs  med=13.64µs max=1.78ms   p(90)=29.71µs p(95)=32.4µs
    http_req_tls_handshaking...: avg=0s      min=0s      med=0s      max=0s       p(90)=0s      p(95)=0s
    http_req_waiting...........: avg=19.51ms min=2.23ms  med=18.83ms max=192.16ms p(90)=28.45ms p(95)=31.4ms
    http_reqs..................: 76024 2516.535439/s
    iteration_duration.........: avg=19.73ms min=2.45ms  med=19.05ms max=192.92ms p(90)=28.68ms p(95)=31.63ms
    iterations.................: 76023 2516.502337/s
    vus........................: 50    min=50 max=50
    vus_max....................: 50    min=50 max=50

POSTBulk - 1661.129334/s

    data_received..............: 8.5 MB 266 kB/s
    data_sent..................: 378 MB 12 MB/s
  ✓ failed requests............: 0.00%  ✓ 0    ✗ 53131
    http_req_blocked...........: avg=6.59µs  min=1.12µs  med=2.18µs  max=2.55ms  p(90)=3.64µs  p(95)=5.56µs
    http_req_connecting........: avg=2.83µs  min=0s      med=0s      max=1.55ms  p(90)=0s      p(95)=0s
  ✓ http_req_duration..........: avg=27.34ms min=3.37ms  med=25.67ms max=1.9s    p(90)=37.4ms  p(95)=45.06ms
    http_req_receiving.........: avg=55.16µs min=15.78µs med=43.76µs max=12.63ms p(90)=60.66µs p(95)=82.03µs
    http_req_sending...........: avg=53.07µs min=26.29µs med=50.28µs max=3.28ms  p(90)=66.29µs p(95)=82.85µs
    http_req_tls_handshaking...: avg=0s      min=0s      med=0s      max=0s      p(90)=0s      p(95)=0s
    http_req_waiting...........: avg=27.24ms min=3.27ms  med=25.56ms max=1.9s    p(90)=37.3ms  p(95)=44.98ms
    http_reqs..................: 53132  1661.160599/s
    iteration_duration.........: avg=28.27ms min=4.2ms   med=26.57ms max=1.91s   p(90)=38.34ms p(95)=45.99ms
    iterations.................: 53131  1661.129334/s
    vus........................: 0      min=0  max=50
    vus_max....................: 50     min=50 max=50

m5a.xlarge (50 VUs)

GETSingle - 4430.749173/s

    data_received..............: 42 MB  1.4 MB/s
    data_sent..................: 14 MB  455 kB/s
  ✓ failed requests............: 0.00%  ✓ 0    ✗ 133175
    http_req_blocked...........: avg=5.24µs  min=950ns   med=1.76µs  max=2ms      p(90)=2.43µs  p(95)=3.55µs
    http_req_connecting........: avg=2.62µs  min=0s      med=0s      max=1.76ms   p(90)=0s      p(95)=0s
  ✓ http_req_duration..........: avg=11.18ms min=1.48ms  med=9.87ms  max=101.93ms p(90)=18.65ms p(95)=22.05ms
    http_req_receiving.........: avg=39.8µs  min=13.33µs med=36.18µs max=6.29ms   p(90)=49.96µs p(95)=57.77µs
    http_req_sending...........: avg=11.99µs min=6.15µs  med=10.22µs max=2.76ms   p(90)=13.2µs  p(95)=21.04µs
    http_req_tls_handshaking...: avg=0s      min=0s      med=0s      max=0s       p(90)=0s      p(95)=0s
    http_req_waiting...........: avg=11.13ms min=1.45ms  med=9.82ms  max=101.85ms p(90)=18.59ms p(95)=22ms
    http_reqs..................: 133175 4430.749173/s
    iteration_duration.........: avg=11.25ms min=1.55ms  med=9.95ms  max=102ms    p(90)=18.72ms p(95)=22.13ms
    iterations.................: 133175 4430.749173/s
    vus........................: 50     min=50 max=50
    vus_max....................: 50     min=50 max=50

POSTSingle - 4173.195187/s

    data_received..............: 20 MB  668 kB/s
    data_sent..................: 60 MB  2.0 MB/s
  ✓ failed requests............: 0.00%  ✓ 0    ✗ 126409
    http_req_blocked...........: avg=5.94µs  min=1.03µs  med=1.92µs  max=3.85ms   p(90)=2.86µs  p(95)=3.87µs
    http_req_connecting........: avg=2.69µs  min=0s      med=0s      max=1.39ms   p(90)=0s      p(95)=0s
  ✓ http_req_duration..........: avg=11.69ms min=2.48ms  med=10.88ms max=230.07ms p(90)=17.34ms p(95)=19.72ms
    http_req_receiving.........: avg=41.11µs min=12.88µs med=34.22µs max=10.11ms  p(90)=50.01µs p(95)=58.23µs
    http_req_sending...........: avg=17.87µs min=8.77µs  med=13.71µs max=2.57ms   p(90)=29.33µs p(95)=32.86µs
    http_req_tls_handshaking...: avg=0s      min=0s      med=0s      max=0s       p(90)=0s      p(95)=0s
    http_req_waiting...........: avg=11.63ms min=2.41ms  med=10.82ms max=229.97ms p(90)=17.28ms p(95)=19.66ms
    http_reqs..................: 126410 4173.2282/s
    iteration_duration.........: avg=11.86ms min=2.61ms  med=11.05ms max=230.61ms p(90)=17.51ms p(95)=19.9ms
    iterations.................: 126409 4173.195187/s
    vus........................: 50     min=50 max=50
    vus_max....................: 50     min=50 max=50

POSTBulk - 2730.889567/s

    data_received..............: 14 MB  437 kB/s
    data_sent..................: 646 MB 20 MB/s
  ✓ failed requests............: 0.00%  ✓ 0    ✗ 90485
    http_req_blocked...........: avg=8.16µs  min=1.13µs  med=2.22µs  max=7.5ms  p(90)=4.39µs  p(95)=7.3µs
    http_req_connecting........: avg=3.32µs  min=0s      med=0s      max=5.75ms p(90)=0s      p(95)=0s
  ✓ http_req_duration..........: avg=15.56ms min=4.05ms  med=14.76ms max=3.07s  p(90)=21.98ms p(95)=24.95ms
    http_req_receiving.........: avg=54.25µs min=15.94µs med=39.53µs max=7.36ms p(90)=66.51µs p(95)=103.81µs
    http_req_sending...........: avg=54.73µs min=26.03µs med=47.12µs max=8.71ms p(90)=70.41µs p(95)=92.87µs
    http_req_tls_handshaking...: avg=0s      min=0s      med=0s      max=0s     p(90)=0s      p(95)=0s
    http_req_waiting...........: avg=15.45ms min=3.45ms  med=14.66ms max=3.07s  p(90)=21.87ms p(95)=24.85ms
    http_reqs..................: 90486  2730.919747/s
    iteration_duration.........: avg=16.6ms  min=4.9ms   med=15.79ms max=3.07s  p(90)=23.02ms p(95)=25.99ms
    iterations.................: 90485  2730.889567/s
    vus........................: 0      min=0  max=50
    vus_max....................: 50     min=50 max=50

m5a.2xlarge (50 VUs)

GETSingle - 7363.037795/s

    data_received..............: 69 MB  2.3 MB/s
    data_sent..................: 23 MB  756 kB/s
  ✓ failed requests............: 0.00%  ✓ 0    ✗ 221285
    http_req_blocked...........: avg=4.88µs  min=940ns   med=1.77µs  max=2.41ms  p(90)=2.61µs  p(95)=3.97µs
    http_req_connecting........: avg=2.18µs  min=0s      med=0s      max=1.83ms  p(90)=0s      p(95)=0s
  ✓ http_req_duration..........: avg=6.69ms  min=1.54ms  med=6.27ms  max=50.82ms p(90)=10.02ms p(95)=11.45ms
    http_req_receiving.........: avg=39.2µs  min=13.36µs med=33.12µs max=10.08ms p(90)=49.24µs p(95)=57.63µs
    http_req_sending...........: avg=12.51µs min=6.01µs  med=10.36µs max=1.91ms  p(90)=13.78µs p(95)=25.34µs
    http_req_tls_handshaking...: avg=0s      min=0s      med=0s      max=0s      p(90)=0s      p(95)=0s
    http_req_waiting...........: avg=6.64ms  min=1.51ms  med=6.21ms  max=50.7ms  p(90)=9.97ms  p(95)=11.4ms
    http_reqs..................: 221285 7363.037795/s
    iteration_duration.........: avg=6.77ms  min=1.6ms   med=6.35ms  max=51.21ms p(90)=10.1ms  p(95)=11.54ms
    iterations.................: 221285 7363.037795/s
    vus........................: 50     min=50 max=50
    vus_max....................: 50     min=50 max=50

POSTSingle - 6725.846335/s

    data_received..............: 33 MB  1.1 MB/s
    data_sent..................: 98 MB  3.2 MB/s
  ✓ failed requests............: 0.00%  ✓ 0    ✗ 204532
    http_req_blocked...........: avg=5.79µs  min=970ns   med=1.95µs  max=3.07ms   p(90)=3.08µs  p(95)=4.36µs
    http_req_connecting........: avg=2.68µs  min=0s      med=0s      max=2.56ms   p(90)=0s      p(95)=0s
  ✓ http_req_duration..........: avg=7.14ms  min=2.35ms  med=6.87ms  max=355.73ms p(90)=9.8ms   p(95)=10.82ms
    http_req_receiving.........: avg=42.55µs min=12.85µs med=33.23µs max=9.09ms   p(90)=49.81µs p(95)=61.46µs
    http_req_sending...........: avg=18.33µs min=8.2µs   med=13.76µs max=5.02ms   p(90)=27.35µs p(95)=33.42µs
    http_req_tls_handshaking...: avg=0s      min=0s      med=0s      max=0s       p(90)=0s      p(95)=0s
    http_req_waiting...........: avg=7.08ms  min=2.18ms  med=6.81ms  max=355.63ms p(90)=9.74ms  p(95)=10.76ms
    http_reqs..................: 204533 6725.879219/s
    iteration_duration.........: avg=7.32ms  min=2.6ms   med=7.05ms  max=356.22ms p(90)=9.98ms  p(95)=11.01ms
    iterations.................: 204532 6725.846335/s
    vus........................: 50     min=50 max=50
    vus_max....................: 50     min=50 max=50

POSTBulk - 3771.989959/s

    data_received..............: 21 MB  603 kB/s
    data_sent..................: 936 MB 27 MB/s
  ✓ failed requests............: 0.00%  ✓ 0    ✗ 130735
    http_req_blocked...........: avg=8.31µs  min=1.05µs  med=2.36µs  max=3.96ms p(90)=5.97µs  p(95)=7.9µs
    http_req_connecting........: avg=3.25µs  min=0s      med=0s      max=3.64ms p(90)=0s      p(95)=0s
  ✓ http_req_duration..........: avg=10.23ms min=2.75ms  med=9.71ms  max=4.6s   p(90)=14.2ms  p(95)=15.64ms
    http_req_receiving.........: avg=60.45µs min=16.2µs  med=38.74µs max=8.54ms p(90)=78.94µs p(95)=118.52µs
    http_req_sending...........: avg=60.23µs min=24.75µs med=44.12µs max=4.04ms p(90)=79.64µs p(95)=104.53µs
    http_req_tls_handshaking...: avg=0s      min=0s      med=0s      max=0s     p(90)=0s      p(95)=0s
    http_req_waiting...........: avg=10.11ms min=2.27ms  med=9.59ms  max=4.6s   p(90)=14.1ms  p(95)=15.53ms
    http_reqs..................: 130736 3772.018811/s
    iteration_duration.........: avg=11.49ms min=4.23ms  med=10.95ms max=4.6s   p(90)=15.42ms p(95)=16.91ms
    iterations.................: 130735 3771.989959/s
    vus........................: 0      min=0  max=50
    vus_max....................: 50     min=50 max=50

@steve-chavez

steve-chavez commented Apr 7, 2022

Edit: Updated the numbers with the changes discussed in #34

Unlogged table

Using the same setup as above, but with an unlogged table: ALTER TABLE employees SET UNLOGGED.

m5a.large - POSTSingle - 2571.211303/s

    data_received..............: 12 MB 411 kB/s
    data_sent..................: 37 MB 1.2 MB/s
  ✓ failed requests............: 0.00% ✓ 0    ✗ 77568
    http_req_blocked...........: avg=5.34µs  min=1.03µs  med=1.94µs  max=1.64ms   p(90)=2.64µs  p(95)=3.61µs 
    http_req_connecting........: avg=2.39µs  min=0s      med=0s      max=1.54ms   p(90)=0s      p(95)=0s     
  ✓ http_req_duration..........: avg=3.7ms   min=1.18ms  med=3.41ms  max=113.86ms p(90)=5.45ms  p(95)=6.24ms 
    http_req_receiving.........: avg=43.52µs min=15.63µs med=38.65µs max=4.52ms   p(90)=50.01µs p(95)=57.26µs
    http_req_sending...........: avg=18.29µs min=8.39µs  med=13.91µs max=1.69ms   p(90)=29.72µs p(95)=32.23µs
    http_req_tls_handshaking...: avg=0s      min=0s      med=0s      max=0s       p(90)=0s      p(95)=0s     
    http_req_waiting...........: avg=3.64ms  min=1.12ms  med=3.35ms  max=113.73ms p(90)=5.38ms  p(95)=6.17ms 
    http_reqs..................: 77569 2571.244451/s
    iteration_duration.........: avg=3.86ms  min=1.31ms  med=3.57ms  max=114.37ms p(90)=5.61ms  p(95)=6.41ms 
    iterations.................: 77568 2571.211303/s
    vus........................: 10    min=10 max=10 
    vus_max....................: 10    min=10 max=10  

m5a.xlarge - POSTSingle - 4482.84471/s

    data_received..............: 22 MB  717 kB/s
    data_sent..................: 65 MB  2.1 MB/s
  ✓ failed requests............: 0.00%  ✓ 0    ✗ 135607
    http_req_blocked...........: avg=5.71µs  min=1.03µs  med=1.95µs  max=2.39ms   p(90)=2.88µs  p(95)=3.84µs 
    http_req_connecting........: avg=2.7µs   min=0s      med=0s      max=2.31ms   p(90)=0s      p(95)=0s     
  ✓ http_req_duration..........: avg=10.88ms min=1.51ms  med=9.82ms  max=193.34ms p(90)=17.43ms p(95)=20.37ms
    http_req_receiving.........: avg=41.26µs min=13.25µs med=34.24µs max=9.38ms   p(90)=50.35µs p(95)=59.03µs
    http_req_sending...........: avg=18.14µs min=8.31µs  med=13.86µs max=2.37ms   p(90)=29.85µs p(95)=33.24µs
    http_req_tls_handshaking...: avg=0s      min=0s      med=0s      max=0s       p(90)=0s      p(95)=0s     
    http_req_waiting...........: avg=10.82ms min=1.47ms  med=9.76ms  max=193.21ms p(90)=17.37ms p(95)=20.3ms 
    http_reqs..................: 135608 4482.877768/s
    iteration_duration.........: avg=11.05ms min=1.65ms  med=10ms    max=193.86ms p(90)=17.61ms p(95)=20.54ms
    iterations.................: 135607 4482.84471/s
    vus........................: 50     min=50 max=50  
    vus_max....................: 50     min=50 max=50  

m5a.2xlarge - POSTSingle - 7532.095276/s

    data_received..............: 37 MB  1.2 MB/s
    data_sent..................: 109 MB 3.6 MB/s
  ✓ failed requests............: 0.00%  ✓ 0    ✗ 228804
    http_req_blocked...........: avg=5.99µs  min=990ns   med=1.95µs  max=4.73ms   p(90)=3.1µs   p(95)=4.34µs 
    http_req_connecting........: avg=2.74µs  min=0s      med=0s      max=3.28ms   p(90)=0s      p(95)=0s     
  ✓ http_req_duration..........: avg=6.36ms  min=1.62ms  med=5.94ms  max=323.98ms p(90)=9.33ms  p(95)=10.71ms
    http_req_receiving.........: avg=42.41µs min=13.84µs med=32.81µs max=8.28ms   p(90)=49.58µs p(95)=61.86µs
    http_req_sending...........: avg=18.52µs min=9.17µs  med=13.82µs max=6.07ms   p(90)=28.52µs p(95)=33.92µs
    http_req_tls_handshaking...: avg=0s      min=0s      med=0s      max=0s       p(90)=0s      p(95)=0s     
    http_req_waiting...........: avg=6.3ms   min=1.58ms  med=5.88ms  max=323.83ms p(90)=9.27ms  p(95)=10.65ms
    http_reqs..................: 228805 7532.128195/s
    iteration_duration.........: avg=6.55ms  min=1.82ms  med=6.13ms  max=324.47ms p(90)=9.52ms  p(95)=10.89ms
    iterations.................: 228804 7532.095276/s
    vus........................: 50     min=50 max=50  
    vus_max....................: 50     min=50 max=50  

Comments

  • An unlogged table will be truncated in case of crash + recovery, but on a normal restart the data is retained (ref).
  • Logical backups do include unlogged tables, but of course there's no replication or PITR for them (ref).
  • Tried to increase writes by setting synchronous_commit = off (ref) instead of using an unlogged table, but that didn't seem to have a noticeable effect; see the snippet below.
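For reference, that setting can be applied cluster-wide without a restart (a sketch; it can also be set per session or in postgresql.conf):

    ALTER SYSTEM SET synchronous_commit = off;
    SELECT pg_reload_conf();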
