Currently, to perform efficient writes, the data is buffered before being passed to ES. As Hadoop (and various libraries) perform object pooling, each entry needs to be copied, otherwise its data is lost.
This causes significant memory overhead, which can be alleviated by serializing early ( #3 )
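A minimal sketch of the problem being described, assuming Hadoop's usual object-reuse behaviour (the reader hands back the same `Writable` instance on every iteration). The method names below (`bufferForBulk`, `bufferWithCopies`, `bufferSerialized`) are illustrative only, not the actual elasticsearch-hadoop API:

```java
import org.apache.hadoop.io.Text;
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class PoolingExample {

    // BROKEN: buffers references to a pooled/reused object, so every
    // buffered "entry" ends up aliasing the last record read.
    static List<Text> bufferForBulk(Iterable<Text> reader) {
        List<Text> buffer = new ArrayList<>();
        for (Text reused : reader) {
            buffer.add(reused);              // overwritten on the next read
        }
        return buffer;
    }

    // Works, but costly: deep-copy each entry, keeping a full object copy
    // alive in memory until the bulk request is flushed.
    static List<Text> bufferWithCopies(Iterable<Text> reader) {
        List<Text> buffer = new ArrayList<>();
        for (Text reused : reader) {
            buffer.add(new Text(reused));    // explicit per-entry copy
        }
        return buffer;
    }

    // "Serialize early": convert each entry to its byte representation as soon
    // as it is read, so only compact byte[] chunks are buffered instead of
    // full object copies.
    static List<byte[]> bufferSerialized(Iterable<Text> reader) throws IOException {
        List<byte[]> buffer = new ArrayList<>();
        for (Text reused : reader) {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            reused.write(new DataOutputStream(bytes));
            buffer.add(bytes.toByteArray());
        }
        return buffer;
    }
}
```

The third variant is the direction suggested here: since the entries have to be serialized for the bulk request eventually, doing it at read time removes the need to retain deep copies of the pooled objects.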
@CodeMomentum Could you expand a bit on the workflow and on what you mean by "other alternatives"? Is this somehow related to bulk updates? If not, why not raise a separate issue?