Expected Behavior

When running target-redshift for big, long stream extracts, the target is able to load all the data with stable resource consumption.

Current Behavior

When running target-redshift==0.2.1 for one big, long stream extract on the Python Docker image python:3.7.5-buster, the target is ultimately killed by the kernel's OOM killer. The target's death then delivers a SIGPIPE to the tap, causing a BrokenPipeError when it calls singer.write_message().
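The SIGPIPE/BrokenPipeError interaction can be reproduced in isolation. The sketch below is illustrative only (it is not the actual tap or target code): CPython ignores SIGPIPE at startup, so a write to a pipe whose reader has died surfaces as a BrokenPipeError exception instead of terminating the writer.

```python
import os

# Minimal illustration: once the reading end of a pipe is gone -- e.g. the
# target process was OOM-killed -- any write to the writing end fails.
# CPython ignores SIGPIPE at interpreter startup, so the failed write
# raises BrokenPipeError rather than killing the writing process.
read_fd, write_fd = os.pipe()
os.close(read_fd)  # simulate the target dying

try:
    os.write(write_fd, b'{"type": "RECORD", "stream": "engagements"}\n')
except BrokenPipeError:
    print("tap side sees BrokenPipeError: the downstream reader is gone")
finally:
    os.close(write_fd)
```

This is why the tap's crash is a symptom, not the root cause: the tap only fails because the target was already dead.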
Possible Solution
Either we have something like a weird data shape that is causing us to hang onto old pointers/refs, or there's an honest-to-god memory leak.
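One way to narrow down which of the two it is would be to diff tracemalloc snapshots between batches and see which call sites retain memory. A sketch under stated assumptions (the buffering loop below is a stand-in for record processing, not target-redshift internals):

```python
import tracemalloc

# Sketch: snapshot-diff approach to locate retained allocations.
# The `buffers` loop simulates batches of records that are never released,
# i.e. the "hanging onto old refs" hypothesis; it is not real target code.
tracemalloc.start()
before = tracemalloc.take_snapshot()

buffers = []
for _ in range(3):
    # each "batch" stays referenced after processing
    buffers.append([{"id": n, "payload": "x" * 100} for n in range(1000)])

after = tracemalloc.take_snapshot()
stats = after.compare_to(before, "lineno")
for stat in stats[:5]:
    print(stat)
```

If the top entries keep growing batch after batch and point at lines that buffer records, it is retained references; if growth shows up in a C extension or with no Python traceback, a genuine leak becomes more likely.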
Steps to Reproduce
1. docker build -t memory_test .
2. docker run -it memory_test /bin/bash
3. Edit the /tmp/target_config.json file with your own credentials
4. nohup bash -c "/tmp/record_generator.sh | target-redshift --config /tmp/target_config.json" &
5. Watch the memory usage with htop
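The contents of /tmp/record_generator.sh are not shown above. A hypothetical stand-in that feeds target-redshift a Singer-style message stream could look like the following (the stream name, schema, and emit helper are all invented for illustration):

```python
import json
import sys

def emit(message):
    # Singer messages are newline-delimited JSON written to stdout.
    sys.stdout.write(json.dumps(message) + "\n")

def generate(stream="engagements", count=5):
    # One SCHEMA message, then `count` RECORD messages -- the general
    # shape of the output a tap pipes into target-redshift.
    emit({"type": "SCHEMA", "stream": stream,
          "schema": {"properties": {"id": {"type": "integer"}}},
          "key_properties": ["id"]})
    for n in range(count):
        emit({"type": "RECORD", "stream": stream, "record": {"id": n}})

if __name__ == "__main__":
    # Crank `count` up (or loop indefinitely) to simulate a big, long stream.
    generate(count=5)
```

Piping a generator like this into the target while watching htop is enough to observe whether resident memory grows without bound.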
Context (Environment)
I was trying to backfill a big stream (engagements from tap-hubspot), but the job always fails.