Description
The openebs-hostpath storage class would be even more useful if it enforced the requested PV size (and supported resizing when that becomes necessary).
Context
I am using a containerized PostgreSQL database with openebs-hostpath for fast local disk access.
WAL archiving is active on the PostgreSQL instance, and I use pgBackRest to push the WAL archive files to S3. Normally, these files are created continuously (since my database has regular write traffic) but are also moved to remote S3 relatively quickly, so local disk usage stays low.
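For reference, this kind of setup is typically wired up through `archive_command` in `postgresql.conf`; the stanza name below is a hypothetical placeholder, not taken from my actual configuration:

```ini
# postgresql.conf — WAL archiving via pgBackRest
# ("main" is a hypothetical stanza name)
archive_mode = on
archive_command = 'pgbackrest --stanza=main archive-push %p'
```

PostgreSQL keeps completed WAL segments in `pg_wal` until `archive_command` succeeds for them, which is exactly why a broken S3 connection causes local disk usage to grow.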
However, if the connection to the S3 bucket breaks down, the WAL files can fill up my data partition and consume all storage space, including space that was intended for other applications/pods.
If the PV size were enforced, only the database would be forced to stop when it runs out of WAL space; all other pods that don't use the database could keep operating without interruption.
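As a possible direction: newer releases of the OpenEBS Dynamic LocalPV provisioner appear to support capacity enforcement via XFS project quotas when the hostpath directory sits on an XFS filesystem mounted with project quotas enabled. The sketch below is based on my reading of the provisioner docs; the exact option names may differ by version, and the StorageClass name is hypothetical:

```yaml
# Hypothetical sketch: a hostpath StorageClass with XFS quota
# enforcement. Requires the base path to be on XFS mounted with
# the pquota/prjquota option; option keys may vary by release.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath-quota
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: XFSQuota
        enabled: "true"
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```

With something like this in place, a PV hitting its quota would only fail writes for the database pod, while other pods sharing the underlying disk would be unaffected.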