implement one-stage group-by for data.table #239
In my case, the data is already aggregated in each chunk. Thus, with a first-stage group-by in data.table, would all the data be loaded into RAM?
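For context, the two-stage group-by pattern under discussion can be sketched in plain data.table. This is a minimal illustration only, not disk.frame's actual implementation; the `chunks` split below is a made-up stand-in for on-disk chunks:

```r
library(data.table)

# Hypothetical stand-in for disk.frame's on-disk chunks: split iris into 3 pieces.
chunks <- split(iris, rep(1:3, each = 50))

# Stage 1: aggregate each chunk independently. Only these small per-chunk
# summaries need to sit in RAM at once, not the full data.
stage1 <- lapply(chunks, function(chunk)
  as.data.table(chunk)[, .N, by = Species])

# Stage 2: stack the chunk summaries and aggregate them again.
result <- rbindlist(stage1)[, .(N = sum(N)), by = Species]
result
```

If the chunks are already aggregated, as in the comment above, stage 1 reduces to reading the existing chunk summaries, and only the small stage-2 table lives in RAM.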
There seems to be some weird bug when overloading… Unpinning this issue for now as there is no clear way forward.
Could you expand on this? My understanding is that you need a…

After looking into your approach:

my_NSE = function(df, ...) {
  res = df[...]
  dots = match.call(expand.dots = FALSE)$...
  dot_names = names(dots)
  do_one_stage = TRUE
  if (any(dot_names == 'by'))
    by_sub = dots$by
  else if (length(dots) >= 3L)
    by_sub = dots[[3L]]
  else
    do_one_stage = FALSE
  if (do_one_stage) {
    sub_j = if (any(dot_names == 'j')) dots$j else dots[[2L]]
    if (is.name(sub_j) && sub_j == quote(.N))
      second_j = quote(.(N = sum(N)))
    else
      return(res) ## one-stage pattern not recognised; just return the chunk-wise aggregation
    eval(call('[', res, j = second_j, by = by_sub))
  }
  else
    res
}
iris.df = as.disk.frame(iris)
my_NSE(iris.df, , j = .N, by = Species)
##       Species     N
##        <fctr> <int>
## 1:     setosa    50
## 2: versicolor    50
## 3:  virginica    50

Note, this takes about 40 ms on my computer.
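To make the NSE capture above more concrete, here is a stripped-down variant run against a plain data.table rather than a disk.frame. This is my own sketch of the same mechanism, and the `chunked` table is made-up data playing the role of the per-chunk counts:

```r
library(data.table)

capture_and_restage <- function(df, ...) {
  # Capture j/by unevaluated, the same way my_NSE does above.
  dots <- match.call(expand.dots = FALSE)$...
  by_sub <- if (any(names(dots) == "by")) dots$by else dots[[3L]]
  # Rebuild df[j = .(N = sum(N)), by = <by>] as an unevaluated call, then run it.
  eval(call("[", df, j = quote(.(N = sum(N))), by = by_sub))
}

# Pretend per-chunk counts: two chunks, each contributing 25 rows per species.
chunked <- data.table(Species = rep(c("setosa", "versicolor"), 2),
                      N = rep(25L, 4))
capture_and_restage(chunked, , j = .N, by = Species)
## each species sums to N = 50
```

The key point is that `call("[", ...)` builds the second-stage `[` call as a language object, so data.table still sees `by = Species` as an unevaluated symbol when the call is evaluated.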
The issue is that… Actually, instead of a PR, are you able to create a new package, like…?
I might be able to create a package. Would you still have a…?
I think it would not, if an independent package exists. It will be migrated out.
Ok. I will create a repo this weekend and start. For now the goal is an NSE equivalent of what you’ve implemented for the dplyr verbs.