High Memory Allocation Leading to OOM Kills When Writing to PostgreSQL at High RPS
Description
I'm running into an issue where my Golang service, which processes around 10,000 requests per second (RPS), allocates memory that isn't released, eventually causing the container to be terminated by the OOM killer.
The service is designed to write all incoming data to a PostgreSQL database. After running for a while, memory usage increases steadily, and it seems that garbage collection isn't able to keep up, eventually leading to an out-of-memory (OOM) condition.
Current Implementation
Data Handling:
Data is placed into a channel.
A worker pool reads data from this channel and inserts it into PostgreSQL (a minimal sketch of this pattern follows below).
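For reference, this is roughly the shape of that pattern, assuming a bounded channel and a fixed number of workers; the Request type, the insertRequest helper, the channel size, and the worker count are placeholders for illustration, not the service's real names or values:

package main

import (
    "context"
    "log"
    "sync"
)

// Request stands in for the service's payload type (placeholder).
type Request struct {
    ID  string
    At  string
    Cur []string
}

// insertRequest stands in for the real PostgreSQL write (placeholder).
func insertRequest(ctx context.Context, req Request) error { return nil }

func main() {
    jobs := make(chan Request, 1024) // bounded buffer applies back-pressure to producers
    const numWorkers = 32            // roughly matched to the DB connection pool size

    var wg sync.WaitGroup
    for i := 0; i < numWorkers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for req := range jobs {
                if err := insertRequest(context.Background(), req); err != nil {
                    log.Printf("insert failed: %v", err)
                }
            }
        }()
    }

    // ... producers (e.g. HTTP handlers) send into jobs here ...

    close(jobs) // signal workers to drain remaining items and exit
    wg.Wait()
}

A note on the design: if the channel is unbounded (or very large), or if each request instead spawns its own goroutine, the back-pressure disappears and queued requests can pile up in memory at 10k RPS.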
Database Insertion:
I'm using the pgxpool package to manage database connections. Data insertion looks like this:
_, err = c.Db.Exec(context.Background(),
    `INSERT INTO requestsB (id, at, data, currency, timestamp)
     VALUES ($1, $2, $3, $4, $5)
     ON CONFLICT (id) DO UPDATE SET at = EXCLUDED.at, data = EXCLUDED.data,
         currency = EXCLUDED.currency, timestamp = EXCLUDED.timestamp`,
    request.ID, request.At, request.data, pq.Array(request.Cur), time.Now().UTC())
if err != nil {
    return fmt.Errorf("error inserting into requests table: %v", err)
}
Previous Attempts:
I attempted to mitigate the issue by increasing the maximum number of connections in postgresql.conf and by running each insert in a separate goroutine after handling the request.
I also tried using transactions, bulk inserts, and plain queries, closing rows after each insert (a bulk-insert sketch follows below).
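As a point of comparison, a bulk insert along those lines can be expressed with pgx.Batch, which sends many upserts in one round trip; the insertBatch helper, the Request fields, and the batching itself are assumptions for illustration, not the code the service actually runs:

package storage // hypothetical package name, for illustration only

import (
    "context"
    "fmt"
    "time"

    "github.com/jackc/pgx/v5"
    "github.com/jackc/pgx/v5/pgxpool"
)

// Request mirrors the fields used in the single-row insert above (assumed shape).
type Request struct {
    ID   string
    At   time.Time
    Data []byte
    Cur  []string
}

// insertBatch upserts a slice of requests in a single round trip.
func insertBatch(ctx context.Context, pool *pgxpool.Pool, requests []Request) error {
    batch := &pgx.Batch{}
    for _, r := range requests {
        batch.Queue(
            `INSERT INTO requestsB (id, at, data, currency, timestamp)
             VALUES ($1, $2, $3, $4, $5)
             ON CONFLICT (id) DO UPDATE SET at = EXCLUDED.at, data = EXCLUDED.data,
                 currency = EXCLUDED.currency, timestamp = EXCLUDED.timestamp`,
            r.ID, r.At, r.Data, r.Cur, time.Now().UTC()) // pgx v5 encodes Go slices as arrays natively
    }
    br := pool.SendBatch(ctx, batch)
    defer br.Close() // returns the connection to the pool
    for i := 0; i < batch.Len(); i++ {
        if _, err := br.Exec(); err != nil {
            return fmt.Errorf("batch upsert failed: %w", err)
        }
    }
    return nil
}

With batching like this, the worker only flushes when the accumulated slice reaches a size or age threshold, which cuts per-row round trips and allocations compared to one Exec per request.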
Observed Problem
Despite using a worker pool and attempting different approaches to manage PostgreSQL connections and inserts, memory allocation continues to grow over time, eventually leading to the container being killed by the OOM killer.
It seems that the database operations or the handling of goroutines and channels are contributing to this memory buildup, but I haven't been able to pinpoint the exact cause.
Steps to Reproduce
Deploy the Golang service with the above implementation.
Generate a steady stream of requests (~10k RPS) to be processed by the service.
Monitor memory usage over time until the container is terminated by the OOM killer.
Expected Behavior
The service should be able to sustain high RPS while managing memory effectively, avoiding excessive memory allocation and OOM conditions.
Actual Behavior
Memory usage increases steadily without being cleared, leading to an OOM condition and the container being killed.
Environment
Golang Version: 1.22.5
PostgreSQL Version: 16.4
pgxpool Version: 5.6.0
Container Environment: Docker
OS: Ubuntu 22.04
Potential Solutions Considered
Adjusting PostgreSQL connection pool settings.
Investigating more efficient ways to batch inserts or optimize database interaction.
Profiling the service to identify potential memory leaks or inefficiencies in the handling of channels/goroutines.
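For the profiling step, the standard net/http/pprof handler is usually enough to see where the memory is going; a minimal sketch (the listen address is arbitrary):

package main

import (
    "log"
    "net/http"
    _ "net/http/pprof" // registers /debug/pprof/* on the default mux
)

func main() {
    // In the real service this would run alongside the existing server.
    // Inspect with, e.g.:
    //   go tool pprof http://localhost:6060/debug/pprof/heap
    //   go tool pprof http://localhost:6060/debug/pprof/goroutine
    log.Fatal(http.ListenAndServe("localhost:6060", nil))
}

Comparing two heap profiles taken a few minutes apart, plus a goroutine profile, usually distinguishes retained objects from a goroutine leak; if the Go heap looks flat while container RSS keeps growing, comparing GOMEMLIMIT against the container memory limit is also worth doing.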