diff --git a/images/resources/instance-logs-sync-stream-complete.png b/images/resources/instance-logs-sync-stream-complete.png index 9ac22074..c8921992 100644 Binary files a/images/resources/instance-logs-sync-stream-complete.png and b/images/resources/instance-logs-sync-stream-complete.png differ diff --git a/resources/usage-and-billing/usage-and-billing-faq.mdx b/resources/usage-and-billing/usage-and-billing-faq.mdx index 02352bb1..67c4c20c 100644 --- a/resources/usage-and-billing/usage-and-billing-faq.mdx +++ b/resources/usage-and-billing/usage-and-billing-faq.mdx @@ -9,74 +9,69 @@ description: "Usage and billing FAQs and troubleshooting strategies." We are continuously improving the reporting and tools to help you troubleshoot usage. Please [reach out](/resources/contact-us) if you have any feedback or need help understanding or managing your usage. -# Usage / Billing Metrics FAQs +# Usage and Billing Metrics FAQs You can track usage in two ways: - - Individual instances: Visit the [Usage metrics](/usage/tools/monitoring-and-alerting#usage-metrics) workspace in the PowerSync Dashboard. - - Organization-wide usage: Navigate to your organization in the [PowerSync Dashboard](https://dashboard.powersync.com/) and check the **Plans & Billing** section for aggregated metrics across all instances in your current billing cycle. + - **Individual instances**: Visit the [Usage metrics](/usage/tools/monitoring-and-alerting#usage-metrics) workspace in the PowerSync Dashboard to see metrics for a specific instance. + - **Organization-wide**: Go to your organization in the [PowerSync Dashboard](https://dashboard.powersync.com/) and check the **Plan Usage** section for aggregated metrics across all instances in your current billing cycle. A sync operation occurs when a single row is synced from the PowerSync Service to a user device. The PowerSync Service maintains a history of operations for each row to ensure efficient streaming and data integrity. 
This means: - - Every change to a row (insert, update, delete) creates a new operation - - The history of operations builds up over time - - New clients need to download this entire history when they first sync - - Existing clients only download new operations since their last sync + - Every row change (insert, update, delete) creates a new operation, and this operations history accumulates over time. + - When a new client connects, it downloads the entire history on first sync. + - Existing clients only download new operations since their last sync. - As a result, sync operation counts may significantly exceed the number of actual data mutations, especially for frequently updated rows. This is normal behavior. + As a result, sync operation counts often exceed the number of actual data mutations, especially for frequently updated rows. This is normal. You can manage operations history through: - Daily automatic compacting (built into PowerSync Cloud) - - Regular defragmentation (recommended for frequently updated data) + - Regular [defragmentation](/usage/lifecycle-maintenance/compacting-buckets#defragmenting) (recommended for frequently updated data) - See the [Usage Troubleshooting](#usage-troubleshooting) section for more details on managing operations history. + See the [Usage Troubleshooting](#usage-troubleshooting) section for more details. - **Billing note:** Sync operations are not billed under the [updated Cloud pricing model](https://www.powersync.com/blog/simplified-cloud-pricing-based-on-data-synced). Data throughput billing is based on "data synced" instead. You can still use sync operation counts for diagnostics. + **Billing note:** Sync operations are not billed under the [updated Cloud pricing model](https://www.powersync.com/blog/simplified-cloud-pricing-based-on-data-synced). Billing for data throughput is based on "data synced" instead. You can still use sync operation counts for diagnostics. 
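The accumulation described above can be sketched as a toy model. This is illustrative Python only — `OpLog` and its methods are hypothetical names, not part of any PowerSync API — but it shows why a new install can download far more operations than there are rows, and why compacting helps:

```python
class OpLog:
    """Toy model of the operations history for rows in one bucket."""

    def __init__(self):
        self.ops = []  # each entry: (op_type, row_id)

    def apply(self, op_type, row_id):
        # Every insert/update/delete appends a new operation to the history.
        self.ops.append((op_type, row_id))

    def first_sync(self):
        # A brand-new client must download the entire history.
        return len(self.ops)

    def incremental_sync(self, last_seen):
        # An existing client only downloads operations after its checkpoint.
        return len(self.ops) - last_seen

    def compact(self):
        # Compacting/defragmenting keeps only the latest state per row,
        # shrinking what *new* clients download on first sync.
        latest = {row: op for op, row in self.ops}
        self.ops = [(op, row) for row, op in latest.items()]
```

For example, one row that receives 99 updates leaves new installs downloading 100 operations, while a client already at checkpoint 90 downloads only 10; after compaction, a new install downloads a single operation for that row.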
- A concurrent connection represents one client actively connected to the PowerSync Service. When a user device runs an app using PowerSync and calls `.connect()`, it establishes one long-lived connection for streaming real-time updates.
+ A concurrent connection is one client actively connected to the PowerSync Service. When a device calls `.connect()`, it establishes one long-lived connection for streaming real-time updates.

- Some key points about concurrent connections:
+ Key points about concurrent connections:

- - Billing is based on peak concurrent connections (highest number of simultaneous connections) during the billing cycle.
- - **Billing (Pro/Team)**: 1,000 included, then $30 per 1,000 over the included amount
- - The PowerSync Cloud Pro plan is limited to 3,000 concurrent connections, and the PowerSync Cloud Team plan is limited to 10,000 concurrent connections by default
- - PowerSync Cloud Free plans are limited to 50 peak concurrent connections
- - When connection limits are reached, new connection attempts receive a 429 HTTP response while existing connections continue syncing. The client will continuously retry failed connection attempts, after a delay. Clients should eventually be connected once connection capacity is available.
+ - Billing is based on peak concurrent connections, which is the highest number of simultaneous connections during the billing cycle.
+ - **Billing (Pro/Team)**: 1,000 connections are included, then $30 per 1,000 over the included amount.
+ - The PowerSync Cloud Pro plan is limited to 3,000 concurrent connections.
+ - The PowerSync Cloud Team plan is limited to 10,000 concurrent connections by default.
+ - PowerSync Cloud Free plans are limited to 50 peak concurrent connections.
+ - When limits are reached, new connection attempts receive a 429 HTTP response while existing connections continue syncing. Clients retry after a delay and should connect once capacity is available.
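As a rough illustration of how a peak is derived and how the Pro/Team overage above would add up, here is a hypothetical Python helper — not an official billing formula, just the numbers from this page (1,000 included, $30 per 1,000 over):

```python
import math

def peak_connections(events):
    """events: +1 for each connect, -1 for each disconnect, in time order.
    Returns the highest number of simultaneous connections seen."""
    current = peak = 0
    for delta in events:
        current += delta
        peak = max(peak, current)
    return peak

def connection_overage_usd(peak, included=1000, rate_usd=30, block=1000):
    """Overage cost: $30 per started block of 1,000 connections above 1,000."""
    over = max(0, peak - included)
    return math.ceil(over / block) * rate_usd
```

Note that billing follows the peak, not the average: a brief spike to 2,500 simultaneous connections would incur the same overage as sustaining 2,500 all month.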
- Data synced is now the only metric we use to measure data throughput for billing in our [updated Cloud pricing model](https://www.powersync.com/blog/simplified-cloud-pricing-based-on-data-synced). + Data synced is the only metric used for data throughput billing in our [updated Cloud pricing model](https://www.powersync.com/blog/simplified-cloud-pricing-based-on-data-synced). - It measures the total uncompressed size of data synced from PowerSync Service instances to client devices. If the same data is synced by multiple users, each transfer counts toward the total volume. + It measures the total uncompressed size of data synced from PowerSync Service instances to client devices. If the same data is synced by multiple users, each transfer counts toward the total. **Billing (Pro/Team)**: 30 GB included, then $1.00 per GB over the included amount. - The PowerSync Service hosts: + The PowerSync Service hosts three types of data: - 1. A current copy of the data, which should be roughly equal to the subset of your source data that is covered by your Sync Rules configuration; - 2. A history of all operations on data in buckets. This can be bigger than the source, since it includes the history, and one row can be in multiple buckets; and - 3. Data for parameter lookups. This should be fairly small in most cases. + 1. A current copy of the data, which should be roughly equal to the subset of your source data covered by your Sync Rules. + 2. A history of all operations on data in buckets, which can be larger than the source since it includes history and one row can be in multiple buckets. + 3. Data for parameter lookups, which is typically small. Because of this structure, your hosted data size may be larger than your source database size. **Billing (Pro/Team)**: 10 GB included, then $1.00 per GB over the included amount. 
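A minimal sketch of how the data synced metric adds up, using the included amounts above (hypothetical helper names — the real metering happens service-side). The key point is that every transfer to every client counts, even when it is the same rows:

```python
def data_synced_gb(transfers):
    """transfers: list of (client_id, uncompressed_bytes) sync streams.
    Each stream counts in full, even if clients receive identical rows."""
    total_bytes = sum(size for _, size in transfers)
    return total_bytes / 1024**3

def data_synced_overage_usd(total_gb, included_gb=30, usd_per_gb=1.0):
    """Pro/Team: 30 GB included, then $1.00 per GB over."""
    return max(0.0, total_gb - included_gb) * usd_per_gb
```

For example, syncing the same 1 GB bucket to 40 clients counts as 40 GB of data synced, of which 10 GB falls above the included 30 GB.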
- **Note** that the data processing billing metric has been removed in our [updated Cloud pricing model](https://www.powersync.com/blog/simplified-cloud-pricing-based-on-data-synced). + **Note:** The data processing billing metric has been removed in our [updated Cloud pricing model](https://www.powersync.com/blog/simplified-cloud-pricing-based-on-data-synced). - Data processing was calculated as the total uncompressed size of: - - - Data replicated from your source database(s) to PowerSync Service instances - - Data synced from PowerSync Service instances to user devices - - These values are available in your [Usage metrics](/usage/tools/monitoring-and-alerting#usage-metrics) as "Data replicated per day/hour" and "Data synced per day/hour". + Data processing was calculated as the total uncompressed size of data replicated from your source database(s) to PowerSync Service instances, plus data synced from PowerSync Service instances to user devices. These values are still available in your [Usage metrics](/usage/tools/monitoring-and-alerting#usage-metrics) as "Data replicated per day/hour" and "Data synced per day/hour". Data replicated refers to activity from your backend database (Postgres/MongoDB or MySQL database) to the PowerSync Service — this is not billed. @@ -89,22 +84,22 @@ description: "Usage and billing FAQs and troubleshooting strategies." - Navigate to your organization in the [PowerSync Dashboard](https://dashboard.powersync.com/) and go to the **Plans & Billing** section. Here you can view your total usage (aggregated across all projects in your organization) and upcoming invoice total for your current billing cycle. Data in this view updates once a day. + Go to your organization in the [PowerSync Dashboard](https://dashboard.powersync.com/) and open the **Plan Usage** section. This shows your total usage (aggregated across all projects) for your current billing cycle. Data updates once a day. 
- You can update your billing details in the **Plans & Billing** section of the [PowerSync Dashboard](https://dashboard.powersync.com/) at the organization level. + Update your billing details in the **Plans & Billing** section of the [PowerSync Dashboard](https://dashboard.powersync.com/) at the organization level. - You can review your historic invoices directly in the Stripe Customer Portal, by signing in with your billing email [here](https://billing.stripe.com/p/login/7sI6pU48L42cguc7ss). We may surface these in the Dashboard in the future. + Review your historic invoices in the Stripe Customer Portal by signing in with your billing email [here](https://billing.stripe.com/p/login/7sI6pU48L42cguc7ss). We may surface these in the Dashboard in the future. - Under the updated pricing for Pro and Team plans: + Under the updated pricing for Pro and Team plans, the following metrics are billed: - - Data synced: 30 GB included, then $1.00 per GB - - Peak concurrent connections: 1,000 included, then $30 per 1,000 - - Data hosted: 10 GB included, then $1.00 per GB (unchanged) + - **Data synced**: 30 GB included, then $1.00 per GB over the included amount. + - **Peak concurrent connections**: 1,000 included, then $30 per 1,000 over the included amount. + - **Data hosted**: 10 GB included, then $1.00 per GB over the included amount (unchanged from before). - Not billed: + The following metrics are not billed: - Replication operations (count) - Data replicated (per GB) @@ -118,90 +113,108 @@ description: "Usage and billing FAQs and troubleshooting strategies." If you're seeing unexpected spikes in your usage metrics, here's how to diagnose and fix common issues: -## Concurrent connections +## Common Usage Patterns + +### More Operations Than Rows + +If you're syncing significantly more operations than you have rows in your database, this usually indicates a large operations history has built up. This is common with frequently updated data. 
+ +**Solution:** [Defragmentation](/usage/lifecycle-maintenance/compacting-buckets#defragmenting) reduces the operations history by compacting buckets. While defragmentation triggers additional sync operations for existing users, it significantly reduces operations for new installations. + +Use the [Sync Diagnostics Client](https://github.com/powersync-ja/powersync-js/tree/main/tools/diagnostics-app) to compare total rows vs. operations synced to identify if this is affecting you. + +### Repetitive Syncing by the Same User + +If you see the same user syncing repeatedly in quick succession, this could indicate a client code issue. + +**First steps to troubleshoot:** + +1. **Check SDK version**: Ensure you're using the latest SDK version. +2. **Review client logs**: Check your client-side logs for connection issues or sync loops. +3. **Check instance logs**: Review [Instance logs](/usage/tools/monitoring-and-alerting#instance-logs) to see sync patterns and identify which users are affected. + +If you need help, [contact us](/resources/contact-us) with your logs for further diagnosis. + +## Concurrent Connections -The most common cause of seeing excessive concurrent connections is opening multiple copies of `PowerSyncDatabase`, and calling `.connect()` on each. Debug your connection handling by reviewing your code and [Instance logs](/usage/tools/monitoring-and-alerting#instance-logs). Make sure you're only opening one connection per user/session. +The most common cause of excessive concurrent connections is opening multiple copies of `PowerSyncDatabase` and calling `.connect()` on each. Debug your connection handling by reviewing your code and [Instance logs](/usage/tools/monitoring-and-alerting#instance-logs). Ensure you're only opening one connection per user/session. -## Sync operations +## Sync Operations -Sync operations are not billed in our updated pricing model. 
They can still be useful to diagnose spikes in data synced and to understand how your data mutations affect usage. +Sync operations are not billed in our updated pricing model, but they're useful for diagnosing spikes in data synced and understanding how data mutations affect usage. -While sync operations typically correspond to data mutations on synced rows (those in your Sync Rules), there are several scenarios that can affect your operation count: +While sync operations typically correspond to data mutations on synced rows (those in your Sync Rules), several scenarios can affect your operation count: -### Key Scenarios to Watch For +### Key Scenarios 1. **New App Installations:** - When a new user installs your app, PowerSync needs to sync the complete operations history. We help manage this by: - - Running automatic daily compacting on Cloud instances - - Providing manual defragmentation options (in the PowerSync Dashboard) + New users need to sync the complete operations history. We help manage this by running automatic daily compacting on Cloud instances and providing manual [defragmentation options](/usage/lifecycle-maintenance/compacting-buckets#defragmenting) in the PowerSync Dashboard. + 2. **Existing Users:** - While compacting and defragmenting reduces the operations history, they trigger additional sync operations for existing users. - - Want to optimize this? Check out our [defragmenting guide](/usage/lifecycle-maintenance/compacting-buckets#defragmenting) + Compacting and defragmenting reduce operations history but trigger additional sync operations for existing users. See our [defragmenting guide](/usage/lifecycle-maintenance/compacting-buckets#defragmenting) to optimize this. + 3. **Sync Rule Deployments:** - When you deploy changes to Sync Rules, PowerSync recreates the sync buckets from scratch. This has two effects: - - New app installations will sync fewer operations since the operations history is reset. 
- - Existing users will temporarily experience increased sync operations as they need to re-sync the updated buckets. + When you deploy changes to Sync Rules, PowerSync recreates sync buckets from scratch. New app installations sync fewer operations since the operations history is reset, but existing users temporarily experience increased sync operations as they re-sync the updated buckets. + + We're working on [incremental sync rule reprocessing](https://roadmap.powersync.com/c/85-more-efficient-sync-reprocessing), which will only reprocess buckets whose definitions have changed. - We are planning [incremental sync rule reprocessing](https://roadmap.powersync.com/c/85-more-efficient-sync-reprocessing), which will allow PowerSync to only reprocess buckets whose definitions have changed, rather than all buckets. 4. **Unsynced Columns:** - Any row update triggers a new operation in the logical replication stream, regardless of which columns changed. In other words, PowerSync tracks changes at the row level, not the column level. This means: - - Updates to columns not included in your Sync Rules still create sync operations. - - Even a no-op update like `UPDATE mytable SET id = id` generates a new operation for each affected row. + Any row update triggers a new operation in the logical replication stream, regardless of which columns changed. PowerSync tracks changes at the row level, not the column level. This means updates to columns not included in your Sync Rules still create sync operations, and even a no-op update like `UPDATE mytable SET id = id` generates a new operation for each affected row. - While selectively syncing columns helps with data access control and reducing data transfer size, it doesn't reduce the number of sync operations. + Selectively syncing columns helps with data access control and reducing data transfer size, but it doesn't reduce the number of sync operations. 
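The row-level behavior can be sketched as follows. This is illustrative Python, not PowerSync code — `SYNCED_COLUMNS` stands in for the columns your Sync Rules select — but it mirrors the rule described above: an operation is emitted per updated row, with no comparison of which columns changed:

```python
SYNCED_COLUMNS = {"id", "name"}  # example stand-in for columns in Sync Rules

def replicate_update(old_row, new_row):
    """Returns the operation emitted for an UPDATE on one row.

    old_row is intentionally ignored: changed columns are never compared,
    so updates to unsynced columns — or no-op updates like
    UPDATE mytable SET id = id — still produce an operation."""
    synced_view = {k: new_row[k] for k in SYNCED_COLUMNS if k in new_row}
    return {"op": "PUT", "data": synced_view}
```

In this sketch, an update that only touches an unsynced `internal` column still yields a `PUT` operation carrying the unchanged synced columns, which is why selective column syncing reduces payload size but not operation count.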
-## Data synced +## Data Synced -Data synced measures the total uncompressed bytes streamed from the PowerSync Service to clients. Spikes typically come from either lots of sync operations (high churn) or large rows (wide payloads), and can also occur during first-time syncs, defragmentation, or Sync Rule updates. +Data synced measures the total uncompressed bytes streamed from the PowerSync Service to clients. Spikes typically come from either many sync operations (high churn) or large rows (large payloads), and can also occur during first-time syncs, defragmentation, or Sync Rule updates. -If your spikes in data synced correspond with spikes in sync operations, also see the [Sync operations](#sync-operations) troubleshooting guidelines above. +If your spikes in data synced correspond with spikes in sync operations, also see the [Sync Operations](#sync-operations) troubleshooting guidelines above. -### Diagnose High Data Synced +### Diagnose Data Synced Spikes -1. Pinpoint when it spiked - - Use [Usage Metrics](/usage/tools/monitoring-and-alerting#usage-metrics) to find the exact hour/day of the spike. -2. Inspect instance logs for size - - In [Instance Logs](/usage/tools/monitoring-and-alerting#instance-logs), enable Metadata and search for "Sync stream complete" to see the size of data transferred and operations synced per stream. - +1. **Pinpoint when it spiked:** + Use [Usage Metrics](/usage/tools/monitoring-and-alerting#usage-metrics) to find the exact hour/day of the spike. + +2. **Inspect instance logs for size:** + In [Instance Logs](/usage/tools/monitoring-and-alerting#instance-logs), enable Metadata and search for "Sync stream complete" to see the size of data transferred and operations synced per stream. + ![](/images/resources/instance-logs-sync-stream-complete.png) - - [Contact us](/resources/contact-us) if you require a CSV export of your logs for a limited time-range. 
For certain scenarios these could be easier to search than the instance logs in the dashboard.
-1. Compare operations vs row sizes
-  - If operations are high and size scales with it, you likely have tables that are being updated frequently.
-  - Alternatively, a large operations history built up in your database. See our [defragmenting guide](/usage/lifecycle-maintenance/compacting-buckets#defragmenting).
-  - If operations are moderate but size is large, your rows are likely wide (e.g. large big JSON columns).
-2. Identify large payloads in your DB
-  - Check typical row sizes for frequently updated tables and look for large columns (e.g. long TEXT/JSON fields, embedded files).
-3. Consider recent maintenance and app changes
-  - Defragmentation and Sync Rule deploys cause existing clients to re-sync content, temporarily increasing data synced.
-  - New app installs trigger initial full sync; expect higher usage when onboarding new of users.
+   You may need to scroll to load more logs. If you need a CSV export of your logs for a limited time range, [contact us](/resources/contact-us). For certain scenarios, these are easier to search than the instance logs in the dashboard.
+
+3. **Compare operations vs row sizes:**
+   If operations are high and size scales with it, you likely have tables being updated frequently, or a large operations history has built up. See our [defragmenting guide](/usage/lifecycle-maintenance/compacting-buckets#defragmenting). If operations are moderate but size is large, your rows likely contain large data (e.g., large JSON columns or blobs).
+
+4. **Identify large payloads in your database:**
+   Check typical row sizes for frequently updated tables and look for large columns (e.g., long TEXT/JSON fields, embedded files).

-## Data hosted
+
+5. **Consider recent maintenance and app changes:**
+   Defragmentation and Sync Rule deploys cause existing clients to re-sync content, temporarily increasing data synced.
New app installs trigger initial full sync, so expect higher usage when onboarding new sets of users. -Your hosted data size may be larger than your source database size, because it also includes the history of all operations on data in buckets. This can be bigger than the source, since it includes the history, and one row can be in multiple buckets. +## Data Hosted -Data hosted can temporarily spike during Sync Rule deployments and defragmentation, because buckets are reprocessed. During this window, both the previous and new bucket data may exist concurrently. +Your hosted data size may be larger than your source database size because it includes the history of all operations on data in buckets. This can be bigger than the source since it includes history, and one row can be in multiple buckets. + +Data hosted can temporarily spike during Sync Rule deployments and defragmentation because buckets are reprocessed. During this window, both the previous and new bucket data may exist concurrently. # Troubleshooting Strategies -## 1. **Identify Timing** - - Use [Usage Metrics](/usage/tools/monitoring-and-alerting#usage-metrics) to pinpoint usage spikes. -## 2. **Review Logs** - - Use [Instance Logs](/usage/tools/monitoring-and-alerting#instance-logs) to review sync service logs during the spike(s). - - Enable the **Metadata** option. - - Search for "Sync stream complete" entries (use your browser's search function) to review: - - How many operations synced - - The size of data transferred - - Which clients/users were involved - - +## 1. Identify Timing + Use [Usage Metrics](/usage/tools/monitoring-and-alerting#usage-metrics) to pinpoint usage spikes. + +## 2. Review Logs + Use [Instance Logs](/usage/tools/monitoring-and-alerting#instance-logs) to review sync service logs during the spike(s). 
Enable the **Metadata** option, then search for "Sync stream complete" entries (use your browser's search function) to review how many operations synced, the size of data transferred, and which clients/users were involved.
+
+  ![](/images/resources/instance-logs-sync-stream-complete.png)

-## 3. **Compare Metrics**
-  Use the [Sync Diagnostics Client](https://github.com/powersync-ja/powersync-js/tree/main/tools/diagnostics-app) to compare total rows vs. operations synced to the user device. If you are seeing a much higher number of operations, you might benefit from [defragmentation](/usage/lifecycle-maintenance/compacting-buckets#defragmenting).
-## 4. **Detailed Sync Operations**
-  - Use the [test-client](https://github.com/powersync-ja/powersync-service/blob/main/test-client/src/bin.ts)'s `fetch-operations` command with the `--raw` flag:
+  You may need to scroll to load more logs. If you need a CSV export of your logs for a limited time range, [contact us](/resources/contact-us). For certain scenarios, these are easier to search than the instance logs in the dashboard.
+
+## 3. Compare Metrics
+  Use the [Sync Diagnostics Client](https://github.com/powersync-ja/powersync-js/tree/main/tools/diagnostics-app) to compare total rows vs. operations synced to the user device. If you're seeing significantly more operations than rows, you might benefit from [defragmentation](/usage/lifecycle-maintenance/compacting-buckets#defragmenting).
+
+## 4. Detailed Sync Operations
+  Use the [test-client](https://github.com/powersync-ja/powersync-service/blob/main/test-client/src/bin.ts)'s `fetch-operations` command with the `--raw` flag:

```bash
node dist/bin.js fetch-operations --raw --token your-jwt --endpoint https://12345.powersync.journeyapps.com