Fundamental memory leak in Modin #98
Comments
Hi @VladVin, thanks for posting this! The problem in Modin 0.1.2 is not a memory leak, but rather a consequence of how Ray and Arrow's Plasma store handle objects that are no longer referenced. We depend on both of these projects for the backend of Modin. I will explain how Ray and Arrow handle Python objects put into memory, and then explain how this will be fixed in 0.2 (it is partly fixed in current master).

Ray interacts with the Plasma store by continuing to add objects until a new object cannot fit. This includes objects that are no longer referenced from the Python application. Once the store is full, Ray evicts objects in FIFO order. This can be particularly problematic for most workloads in Modin. Recently @robertnishihara added a feature to Ray that allows manual freeing of objects from the Plasma object store. We will be able to use it once Ray 0.6 is released.

In Modin 0.1.2 the copying/caching we do is excessive. It adds to the memory usage of each DataFrame, and in 0.1.2 there is no way to free it. We rewrote the backend in #70, and it no longer has the memory overhead you see in 0.1.2. We also added handles to free the objects once they are no longer referenced, so when Ray 0.6 is released we can just add the 2 or 3 lines of code needed. We expect this to effectively solve the high memory usage problem.

To some degree there will always be some copying; this is unavoidable with an immutable store (Plasma) as the backend. However, there is a lot we can do to reduce it so you aren't using 110GB of memory for your 3GB dataset.

cc @robertnishihara @pcmoritz (feel free to correct me if I'm wrong)
One minor comment is that the object store evicts objects in least-recently-used order instead of FIFO.
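The eviction policy matters for workloads that re-read the same data: under LRU, recently touched objects survive while stale ones are evicted first. A minimal pure-Python sketch of an LRU-evicting store (an illustration of the policy only, not Plasma's actual implementation; the class and sizes are hypothetical):

```python
from collections import OrderedDict

class LRUStore:
    """Toy object store that evicts least-recently-used entries
    when adding a new object would exceed capacity."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.entries = OrderedDict()  # object_id -> size

    def get(self, object_id):
        # Touching an object marks it as most recently used.
        self.entries.move_to_end(object_id)

    def put(self, object_id, size):
        # Evict LRU entries (front of the OrderedDict) until the new object fits.
        while self.used + size > self.capacity and self.entries:
            _, evicted_size = self.entries.popitem(last=False)
            self.used -= evicted_size
        self.entries[object_id] = size
        self.used += size

store = LRUStore(capacity_bytes=100)
store.put("a", 40)
store.put("b", 40)
store.get("a")          # "a" is now most recently used
store.put("c", 40)      # store is full: "b" (least recently used) is evicted
print(sorted(store.entries))  # ['a', 'c']
```

Under FIFO the same sequence would have evicted "a" regardless of the recent access, which is why the policy distinction in the comment above matters.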
@devin-petersohn, @robertnishihara, thank you very much for the comments. It's good to hear that this will be fixed in upcoming releases.
Closing this. Feel free to reopen if the discussion should continue or if the issue was not resolved.
System information
pip install modin
Run twice:
Describe the problem
Modin doesn't free memory when a variable is reassigned. Concretely, the expected behavior is that while reading a table from disk, memory usage grows until the whole dataframe fits into RAM, and then drops back to the previous level when the variable that held the old dataframe is rebound. This is how Pandas (and any regular logic) works.
But with Modin, when reading the dataframe, the memory isn't freed when the variable is rebound. Instead, memory usage doubles, so every time I rerun this code the memory usage grows, which suggests there is a memory leak somewhere.
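The expected behavior described above can be observed directly in plain Python: once a variable is rebound, the old object has no remaining references and is garbage-collected, releasing its memory. A small sketch using `weakref` to watch this happen (`BigObject` is a hypothetical stand-in for a large DataFrame):

```python
import gc
import weakref

class BigObject:
    """Stand-in for a large DataFrame."""
    def __init__(self):
        self.data = list(range(1_000_000))

df = BigObject()
ref = weakref.ref(df)   # a weak reference does not keep the object alive

df = BigObject()        # rebind: the first object loses its last reference
gc.collect()            # CPython frees it via refcounting; collect() for good measure

print(ref() is None)    # True: the old object has been released
```

The complaint in this issue is that Modin 0.1.2 did not exhibit this behavior, because the backing objects lived on in the Plasma store even after the Python-side references were gone.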
I also tried some slicing with the loaded dataframe. I expected memory not to increase, since I wasn't copying the data, but it actually did. Here is the example:
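Whether slicing grows memory depends on whether the slice is a view or a copy of the underlying buffer. In NumPy (the layer beneath pandas), basic slices are views, which can be checked with `np.shares_memory` (an illustration of the general view-vs-copy distinction, not of Modin's internals):

```python
import numpy as np

arr = np.arange(1_000_000)

view = arr[100:200]          # basic slicing returns a view: no data copied
copy = arr[100:200].copy()   # an explicit copy allocates new memory

print(np.shares_memory(arr, view))  # True
print(np.shares_memory(arr, copy))  # False
```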
My table has 4,000,000 rows and 14 columns, which takes about 3 GB of RAM when loaded. Running the code above 50 times (as a performance test) consumed all 110 GB of RAM on my remote server.