
[R] Excessive memory usage on Windows #30265

Open
asfimport opened this issue Nov 16, 2021 · 1 comment

Comments

@asfimport
I have the following workflow, which worked with Arrow 5.0 on Windows 10 and R 4.1.2:

open_dataset(path) %>%
  select(i, j) %>%
  collect()

The dataset in path is partitioned by i and j, with 16 partitions in total and 5 million rows in each partition; each partition also has several other regular columns (i.e. present in every partition). A sketch of the on-disk layout follows the next example. The entire dataset can be read into memory on my 16GB machine, which results in an R data.frame of around 3GB. However, on Arrow 6.0 the same operation fails and R runs out of memory. Interestingly, this still works:

open_dataset(path) %>%
  select(i, j, x) %>%
  collect()

where x is a regular column.
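
For reference, the on-disk layout looks roughly like this, assuming the default Hive-style partitioning that write_dataset() produces (the particular partition values and file names below are invented):

list.files(path, recursive = TRUE)
# [1] "i=1/j=1/part-0.parquet"
# [2] "i=1/j=2/part-0.parquet"
# ...
# [16] "i=4/j=4/part-0.parquet"

The regular columns live inside the Parquet files, while i and j exist only as directory names and are reconstructed from the file paths when the dataset is scanned.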

I cannot reproduce the same issue on Linux. Measuring the actual memory consumption with GNU time (--format=%M, the maximum resident set size) I get very similar figures for the first pipeline on both 5.0 and 6.0. The same is true for the second pipeline, which of course consumes slightly more memory, as expected. On Windows I don't know of a simple method to measure maximum memory consumption (a rough option is sketched below), but eyeballing it in Process Explorer, Arrow 5.0 needs around 0.5GB for the first example, while with Arrow 6.0 my 16GB machine becomes unresponsive, starts swapping, and depending on the circumstances, other apps might crash before R crashes with this error:


terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc 

With the second example, both versions consume roughly the same amount of memory.
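
One rough option on Windows might be base R's memory.size(max = TRUE), which reports the maximum memory R has obtained from the OS (it is Windows-only and was deprecated in R 4.2; R 4.1.2 still has it). Arrow allocates much of its memory outside R's allocator, so this is likely an undercount, but it is at least scriptable:

memory.size(max = TRUE)   # max memory (in MB) obtained from the OS so far
open_dataset(path) %>%
  select(i, j) %>%
  collect()
memory.size(max = TRUE)   # compare before and after as a crude peak estimate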

Using the new features in Arrow 6.0, the following doesn't work on Windows either; memory consumption shoots up into the tens of GBs:

open_dataset(path) %>%
  distinct(i, j) %>%
  collect()

Meanwhile this works, needing under 1GB of memory:

open_dataset(path) %>%
  distinct(i, j, x) %>%
  collect()

These last two examples work without any issue on Linux and, as expected, they consume significantly less memory than the select-then-collect examples.

Reporter: András Svraka

Note: This issue was originally created as ARROW-14727. Please see the migration documentation for further details.

@asfimport
Author

Will Jones / @wjones127:
Hi András! I've started working on reproducing this, though haven't had success yet. You might try using the profmem package as in the script below, or adapt the script to be closer to your data so that it starts reproducing the behavior.

I tested this on Windows 10 with Arrow 6.0.0 and R 4.1.2. If you run both open_dataset() pipelines in the same R session, you'll notice that whichever one you run first shows a larger number of allocations. But I consistently saw the version selecting just i and j allocate less memory in total.

Let me know if there is some tweak to the script that would make it more like your situation, or what results you are seeing with profmem.

library(dplyr)
library(tidyr)
library(arrow)
library(purrr)
library(profmem)

path <- "test_data"

# Create big dataset
rows_per_partition <- 5e6
i_values <- letters[1:4]
j_values <- letters[1:4]

rpartition <- function(n) {
  tibble(x=rnorm(n), y=rnorm(n), z=sample(letters, size=n, replace=TRUE))
}

ds <- expand_grid(i=i_values, j=j_values) %>%
  mutate(data = rerun(n(), rpartition(n=rows_per_partition))) %>%
  unnest(c("data"))

ds %>%
  group_by(i, j) %>%
  arrow::write_dataset(path, format="parquet")


# Try 1 : partition cols only
remove(ds)
gc()

p1 <- profmem({
  ds <- open_dataset(path) %>%
    select(i, j) %>%
    collect()
})
print(p1, expr = FALSE)


# Try 2 : add another column
remove(ds)
gc()


p2 <- profmem({
  ds <- open_dataset(path) %>%
    select(i, j, x) %>%
    collect()
})
print(p2, expr = FALSE)


sum(p1$bytes, na.rm=TRUE)
# 1280025656
sum(p2$bytes, na.rm=TRUE)
# 1934404384
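
Converting those totals for readability:

sum(p1$bytes, na.rm=TRUE) / 1024^3  # ~1.19 GiB
sum(p2$bytes, na.rm=TRUE) / 1024^3  # ~1.80 GiB

So in this run the i/j-only pipeline allocated less R memory overall, the opposite of the behavior reported above on Windows.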

 
