Replica size / disk usage #1083

Closed
sansavision opened this issue Mar 22, 2024 · 6 comments · Fixed by #1099

sansavision commented Mar 22, 2024

Hi
First of all, what an amazing project!
We are currently running the sync server on a DigitalOcean droplet with a managed Postgres DB, also on DigitalOcean. I have two questions.

  1. The Postgres disk usage is gradually increasing in a linear fashion and filled up a 10 GB instance within a day. I'm surely doing something wrong; the data itself is only a few MB.
     Looking at the Insights dashboard, the replica size seems to be growing to several GB.
     Any ideas or clues that could help me understand why this is happening?

  2. With regard to logical replication and a DigitalOcean managed DB, it seems that a prerequisite is the ability to create a superuser, which, as far as I know, most managed DBs do not allow. The alternative is direct writes. Comparing the two, is there a significant downside to the latter? (I've included a quick privilege check below.)
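
For reference, here is a quick way to see what the managed role is actually allowed to do (just a sketch; it assumes you can connect to the managed DB with psql as the application user):

```sql
-- Check whether the connected role has superuser or replication privileges.
-- On most managed Postgres offerings both typically come back false.
SELECT rolname, rolsuper, rolreplication
FROM pg_roles
WHERE rolname = current_user;
```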

thruflo (Contributor) commented Mar 22, 2024

Hey @sansavision, @alco is working on some PRs and docs that address (1). Re: (2) no, we recommend using direct writes mode.

alco (Member) commented Mar 22, 2024

Hey @sansavision. Thanks for raising this issue!

You're not doing anything wrong. It's actually a bug in Electric. The replication slot it creates prevents Postgres from discarding old WAL records until Electric sees a write to an electrified table. So if you have non-electrified tables that are regularly written to, the disk usage reported by Postgres will keep growing with every write. Only a write to an electrified table will give Electric a chance to let Postgres know it can discard old WAL records. Over time, this leads to a saw-tooth disk usage chart:

[Screenshot: saw-tooth disk usage chart]

In practice, managed DBs exhibit some level of "background write noise" which leads to a steady disk usage growth over time due to the pileup of WAL records retained by Electric's replication slot.
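
For anyone who wants to confirm this on their own database, here's a rough way to see how much WAL each replication slot is currently holding back (a sketch; it assumes a role that is allowed to read pg_replication_slots, e.g. a member of pg_monitor):

```sql
-- How much WAL Postgres must retain for each replication slot, i.e. the
-- distance between the current WAL position and the slot's restart_lsn.
SELECT slot_name,
       active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;
```

If that number keeps climbing while your table data stays small, it's the retained WAL rather than the data itself that is eating the disk.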

We'll work on fixing this before the next release.

alco (Member) commented Mar 22, 2024

> 2. With regard to logical replication and a DigitalOcean managed DB, it seems that a prerequisite is the ability to create a superuser, which, as far as I know, most managed DBs do not allow. The alternative is direct writes. Comparing the two, is there a significant downside to the latter?

We haven't yet done extensive testing of the direct_writes mode, but our plan is to make it the default eventually and deprecate the logical_replication write mode.

alco added the bug label Mar 22, 2024
sansavision (Author) commented
Thank you for the clarification @alco @thruflo, looking forward to the update.
On another matter, we still have a few other issues with the generated types, where some are missing; I will open another issue.

djbutler commented
> Only a write to an electrified table will give Electric a chance to let Postgres know it can discard old WAL records

It appears that in my case, I hit some kind of WAL size limit and the Electric service crashed (#1089). This was during a series of operations that involved repeatedly writing to non-electrified tables AND updating an electrified table, so I'm confused about why the limit was reached. However, I'll see if removing the limit helps.
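
For what it's worth, on Postgres 13+ a provider can cap how much WAL a slot may retain via max_slot_wal_keep_size; once a slot exceeds the cap it gets invalidated, which could be the "WAL size limit" I ran into. A sketch of how to check (assumes permission to read the setting and pg_replication_slots):

```sql
-- -1 means slots may retain WAL without limit; anything else is the cap.
SHOW max_slot_wal_keep_size;

-- wal_status degrades from 'reserved' to 'unreserved' and finally 'lost' as a
-- slot approaches the cap; safe_wal_size is how much more WAL can be written
-- before the slot is in danger of being lost (NULL when there is no cap).
SELECT slot_name, wal_status, pg_size_pretty(safe_wal_size) AS safe_wal_size
FROM pg_replication_slots;
```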
