Interpreting Supabase Grafana IO charts #27003
TheOtherBrian1 announced in Troubleshooting
There are two primary values that matter for IO: Disk IOPS (read/write operations per second) and disk throughput (how much data moves per second, measured in Mbps).
Each compute instance has unique IO settings. The settings at the time this was written are listed below.
Compute sizes below XL have burst budgets to compensate for their lower baselines: these instances are allowed to use up to 1,048 Mbps of disk throughput and 3,000 IOPS for 30 minutes before returning to their baseline behavior.
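As a rough worked example of what that burst window buys you (assuming the full 1,048 Mbps is sustained for all 30 minutes, which real workloads rarely do):

```sql
-- 1,048 Mbps / 8 = 131 MB/s; 131 MB/s * 1,800 s = 235,800 MB, i.e. roughly 230 GB
-- of data moved before the burst budget is exhausted.
select round(1048 / 8.0 * 1800 / 1024) as approx_gb_per_burst;
```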
There are other metrics that indicate IO strain.
This example shows a 16XL database exhibiting severe IO strain:
Its Disk IOPS is constantly near peak capacity:
Its throughput is also high:
As a side effect, its CPU is bogged down by heavy Busy IOWait activity:
A drop in the blue line, or sustained IO usage near the limit, is highly problematic: it signals that your database is consuming more IO than it is provisioned to handle. This can be caused by:
If a database exhibits some of these metrics for prolonged periods, there are a few primary approaches:
Other useful Supabase Grafana guides:
Esoteric factors
Webhooks:
Supabase webhooks use the pg_net extension to handle requests. The
`net.http_request_queue`
table is deliberately left unindexed to keep write costs low. However, if you insert millions of rows into a webhook-enabled table too quickly, reads against the queue can become significantly more expensive. To check whether reads are getting expensive, run:
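The original query was not preserved in this extract; one way to check, sketched against Postgres's standard `pg_stat_user_tables` statistics view:

```sql
-- Sketch: inspect cumulative sequential-scan activity on pg_net's queue table.
-- A large, fast-growing seq_tup_read alongside a big backlog (n_live_tup)
-- suggests reads on the unindexed queue are getting expensive.
select
  seq_scan,      -- number of sequential scans run against the table
  seq_tup_read,  -- total rows read by those scans
  n_live_tup     -- approximate current row count (the queue backlog)
from pg_stat_user_tables
where schemaname = 'net'
  and relname = 'http_request_queue';
```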
If you encounter this issue, you can either:
Increase your compute size to help handle the large volume of requests.
Truncate the table to clear the queue:
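The truncate statement itself did not survive extraction; clearing the queue table named above would look like this (note this discards any webhook requests that have not yet been sent):

```sql
-- Drops all pending webhook requests in pg_net's queue.
truncate table net.http_request_queue;
```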