version 1.0.2
hadley authored and cran-robot committed Aug 18, 2020
1 parent f30e462 commit 36e5287
Showing 76 changed files with 666 additions and 497 deletions.
9 changes: 5 additions & 4 deletions DESCRIPTION
@@ -1,7 +1,7 @@
Type: Package
Package: dplyr
Title: A Grammar of Data Manipulation
Version: 1.0.1
Version: 1.0.2
Authors@R:
c(person(given = "Hadley",
family = "Wickham",
@@ -13,7 +13,8 @@ Authors@R:
role = "aut",
comment = c(ORCID = "0000-0002-2444-4226")),
person(given = "Lionel",
family = "Henry",
family = "
Henry",
role = "aut"),
person(given = "Kirill",
family = "Müller",
@@ -38,12 +39,12 @@ Encoding: UTF-8
LazyData: yes
RoxygenNote: 7.1.1
NeedsCompilation: yes
Packaged: 2020-07-22 12:06:49 UTC; romainfrancois
Packaged: 2020-08-12 15:16:36 UTC; romainfrancois
Author: Hadley Wickham [aut, cre] (<https://orcid.org/0000-0003-4757-117X>),
Romain François [aut] (<https://orcid.org/0000-0002-2444-4226>),
Lionel Henry [aut],
Kirill Müller [aut] (<https://orcid.org/0000-0002-1416-3412>),
RStudio [cph, fnd]
Maintainer: Hadley Wickham <hadley@rstudio.com>
Repository: CRAN
Date/Publication: 2020-07-31 07:00:05 UTC
Date/Publication: 2020-08-18 12:30:02 UTC
149 changes: 75 additions & 74 deletions MD5

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion NAMESPACE
@@ -459,7 +459,7 @@ export(with_groups)
export(with_order)
export(wrap_dbplyr_obj)
import(rlang)
import(vctrs)
import(vctrs, except = data_frame)
importFrom(R6,R6Class)
importFrom(generics,intersect)
importFrom(generics,setdiff)
20 changes: 15 additions & 5 deletions NEWS.md
@@ -1,3 +1,13 @@
# dplyr 1.0.2

* Fixed `across()` issue where data frame columns would mask objects referred to
from `all_of()` (#5460).
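
  A minimal sketch of the pattern this touches (not from the changelog; the column names are illustrative):

  ```r
  library(dplyr)

  # Column names stored in an ordinary variable in the calling environment
  vars <- c("Sepal.Length", "Sepal.Width")

  # all_of(vars) should find `vars` in the caller, even when columns inside
  # the data mask could otherwise shadow it
  iris %>%
    group_by(Species) %>%
    summarise(across(all_of(vars), mean), .groups = "drop")
  ```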

* `bind_cols()` gains a `.name_repair` argument, passed to `vctrs::vec_cbind()` (#5451)
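
  A hedged sketch of how the new argument behaves (example data invented for illustration):

  ```r
  library(dplyr)

  df1 <- tibble(x = 1:3)
  df2 <- tibble(x = 4:6)

  # The default, "unique", repairs the clashing names to x...1 and x...2
  bind_cols(df1, df2)

  # "check_unique" turns the clash into an error instead of repairing it
  # bind_cols(df1, df2, .name_repair = "check_unique")
  ```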

* `summarise(.groups = "rowwise")` makes a rowwise data frame even if the input data
is not grouped (#5422).
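
  For example (a sketch, assuming a plain ungrouped tibble):

  ```r
  library(dplyr)

  df <- tibble(x = 1:3, y = 4:6)

  # The input is ungrouped, yet the one-row summary comes back rowwise
  out <- df %>% summarise(x = mean(x), y = mean(y), .groups = "rowwise")
  inherits(out, "rowwise_df")  # TRUE
  ```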

# dplyr 1.0.1

* New function `cur_data_all()` similar to `cur_data()` but includes the grouping variables (#5342).
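
  A small illustration (not part of the original entry):

  ```r
  library(dplyr)

  # cur_data_all() returns the current group's data including the grouping
  # column(s), so here it has one more column than cur_data() would
  mtcars %>%
    group_by(cyl) %>%
    summarise(n_cols = ncol(cur_data_all()), .groups = "drop")
  ```
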
@@ -61,7 +71,7 @@
* `lead()` and `lag()` are stricter about their inputs.

* Extending data frames requires that the extra class or classes are added first, not last.
Having the extact class at the end causes some vctrs operations to fail with a message like:
Having the extra class at the end causes some vctrs operations to fail with a message like:

```
Input must be a vector, not a `<data.frame/...>` object
@@ -267,7 +277,7 @@

* `src_mysql()`, `src_postgres()`, and `src_sqlite()` have been deprecated.
We've recommended against them for some time. Instead please use the approach
described at <http://dbplyr.tidyverse.org/>.
described at <https://dbplyr.tidyverse.org/>.

* `select_vars()`, `rename_vars()`, `select_var()`, `current_vars()` are now
deprecated (@perezp44, #4432)
@@ -1022,7 +1032,7 @@
This version of dplyr includes some major changes to how database connections work. By and large, you should be able to continue using your existing dplyr database code without modification, but there are two big changes that you should be aware of:

* Almost all database related code has been moved out of dplyr and into a
new package, [dbplyr](http://github.com/hadley/dbplyr/). This makes dplyr
new package, [dbplyr](https://github.com/tidyverse/dbplyr/). This makes dplyr
simpler, and will make it easier to release fixes for bugs that only affect
databases. `src_mysql()`, `src_postgres()`, and `src_sqlite()` will still
live in dplyr so your existing code continues to work.
@@ -1048,7 +1058,7 @@ mtcars2
This is particularly useful if you want to perform non-SELECT queries as you can do whatever you want with `DBI::dbGetQuery()` and `DBI::dbExecute()`.
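
For instance, a rough sketch of that DBI workflow (assumes the RSQLite package; not part of the original text):

```r
library(DBI)

con <- dbConnect(RSQLite::SQLite(), ":memory:")

# Non-SELECT statements go through dbExecute()
dbExecute(con, "CREATE TABLE flights (carrier TEXT, n INTEGER)")
dbExecute(con, "INSERT INTO flights VALUES ('AA', 10), ('UA', 20)")

# Ordinary queries go through dbGetQuery()
dbGetQuery(con, "SELECT * FROM flights")

dbDisconnect(con)
```
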
If you've implemented a database backend for dplyr, please read the [backend news](https://github.com/hadley/dbplyr/blob/master/NEWS.md#backends) to see what's changed from your perspective (not much). If you want to ensure your package works with both the current and previous version of dplyr, see `wrap_dbplyr_obj()` for helpers.
If you've implemented a database backend for dplyr, please read the [backend news](https://github.com/tidyverse/dbplyr/blob/master/NEWS.md#backends) to see what's changed from your perspective (not much). If you want to ensure your package works with both the current and previous version of dplyr, see `wrap_dbplyr_obj()` for helpers.

## UTF-8

@@ -1387,7 +1397,7 @@ and so these functions have been deprecated (but remain around for backward comp
* Outdated benchmarking demos have been removed (#1487).

* Code related to starting and signalling clusters has been moved out to
[multidplyr](http://github.com/hadley/multidplyr).
[multidplyr](https://github.com/tidyverse/multidplyr).

## New functions

62 changes: 43 additions & 19 deletions R/across.R
@@ -31,10 +31,10 @@
#' to access the current column and grouping keys respectively.
#' @param ... Additional arguments for the function calls in `.fns`.
#' @param .names A glue specification that describes how to name the output
#' columns. This can use `{col}` to stand for the selected column name, and
#' `{fn}` to stand for the name of the function being applied. The default
#' (`NULL`) is equivalent to `"{col}"` for the single function case and
#' `"{col}_{fn}"` for the case where a list is used for `.fns`.
#' columns. This can use `{.col}` to stand for the selected column name, and
#' `{.fn}` to stand for the name of the function being applied. The default
#' (`NULL`) is equivalent to `"{.col}"` for the single function case and
#' `"{.col}_{.fn}"` for the case where a list is used for `.fns`.
#'
#' @returns
#' A tibble with one column for each column in `.cols` and each function in `.fns`.
@@ -60,13 +60,13 @@
#' # Use the .names argument to control the output names
#' iris %>%
#' group_by(Species) %>%
#' summarise(across(starts_with("Sepal"), mean, .names = "mean_{col}"))
#' summarise(across(starts_with("Sepal"), mean, .names = "mean_{.col}"))
#' iris %>%
#' group_by(Species) %>%
#' summarise(across(starts_with("Sepal"), list(mean = mean, sd = sd), .names = "{col}.{fn}"))
#' summarise(across(starts_with("Sepal"), list(mean = mean, sd = sd), .names = "{.col}.{.fn}"))
#' iris %>%
#' group_by(Species) %>%
#' summarise(across(starts_with("Sepal"), list(mean, sd), .names = "{col}.fn{fn}"))
#' summarise(across(starts_with("Sepal"), list(mean, sd), .names = "{.col}.fn{.fn}"))
#'
#' # c_across() ---------------------------------------------------------------
#' df <- tibble(id = 1:4, w = runif(4), x = runif(4), y = runif(4), z = runif(4))
@@ -79,7 +79,7 @@
#' @export
across <- function(.cols = everything(), .fns = NULL, ..., .names = NULL) {
key <- key_deparse(sys.call())
setup <- across_setup({{ .cols }}, fns = .fns, names = .names, key = key)
setup <- across_setup({{ .cols }}, fns = .fns, names = .names, key = key, .caller_env = caller_env())

vars <- setup$vars
if (length(vars) == 0L) {
@@ -148,28 +148,39 @@ c_across <- function(cols = everything()) {
vec_c(!!!cols, .name_spec = zap())
}

across_glue_mask <- function(.col, .fn, .caller_env) {
glue_mask <- env(.caller_env, .col = .col, .fn = .fn)
# TODO: we can make these bindings louder later
env_bind_active(
glue_mask, col = function() glue_mask$.col, fn = function() glue_mask$.fn
)
glue_mask
}

# TODO: The usage of a cache in `across_setup()` and `c_across_setup()` is a stopgap solution, and
# this idea should not be used anywhere else. This should be replaced by the
# next version of hybrid evaluation, which should offer a way for any function
# to do any required "set up" work (like the `eval_select()` call) a single
# time per top-level call, rather than once per group.
across_setup <- function(cols, fns, names, key) {
across_setup <- function(cols, fns, names, key, .caller_env) {
mask <- peek_mask("across()")

value <- mask$across_cache_get(key)
if (!is.null(value)) {
return(value)
}

# `across()` is evaluated in a data mask so we need to remove the
# mask layer from the quosure environment (#5460)
cols <- enquo(cols)
across_cols <- mask$across_cols()
cols <- quo_set_env(cols, data_mask_top(quo_get_env(cols), recursive = TRUE, inherit = TRUE))

vars <- tidyselect::eval_select(expr(!!cols), across_cols)
vars <- tidyselect::eval_select(cols, data = mask$across_cols())
vars <- names(vars)

if (is.null(fns)) {
if (!is.null(names)) {
names <- vec_as_names(glue(names, col = vars, fn = "1"), repair = "check_unique")
glue_mask <- across_glue_mask(.caller_env, .col = vars, .fn = "1")
names <- vec_as_names(glue(names, .envir = glue_mask), repair = "check_unique")
}

value <- list(vars = vars, fns = fns, names = names)
@@ -180,10 +191,10 @@

# apply `.names` smart default
if (is.function(fns) || is_formula(fns)) {
names <- names %||% "{col}"
names <- names %||% "{.col}"
fns <- list("1" = fns)
} else {
names <- names %||% "{col}_{fn}"
names <- names %||% "{.col}_{.fn}"
}

if (!is.list(fns)) {
@@ -206,17 +217,30 @@
}
}

names <- vec_as_names(glue(names,
col = rep(vars, each = length(fns)),
fn = rep(names_fns, length(vars))
), repair = "check_unique")
glue_mask <- glue_mask <- across_glue_mask(.caller_env,
.col = rep(vars, each = length(fns)),
.fn = rep(names_fns, length(vars))
)
names <- vec_as_names(glue(names, .envir = glue_mask), repair = "check_unique")

value <- list(vars = vars, fns = fns, names = names)
mask$across_cache_add(key, value)

value
}

# FIXME: This pattern should be encapsulated by rlang
data_mask_top <- function(env, recursive = FALSE, inherit = FALSE) {
while (env_has(env, ".__tidyeval_data_mask__.", inherit = inherit)) {
env <- env_parent(env_get(env, ".top_env", inherit = inherit))
if (!recursive) {
return(env)
}
}

env
}

c_across_setup <- function(cols, key) {
mask <- peek_mask("c_across()")

37 changes: 19 additions & 18 deletions R/arrange.R
@@ -5,7 +5,7 @@
#' columns.
#'
#' Unlike other dplyr verbs, `arrange()` largely ignores grouping; you
#' need to explicitly mention grouping variables (or use `by_group = TRUE`)
#' need to explicitly mention grouping variables (or use `.by_group = TRUE`)
#' in order to group by them, and functions of variables are evaluated
#' once per data frame, not once per group.
#'
@@ -116,7 +124,7 @@ arrange_rows <- function(.data, dots) {
data <- withCallingHandlers({
transmute(new_data_frame(.data), !!!quosures)
}, error = function(cnd) {
stop_arrange_transmute(cnd)
if (inherits(cnd, "dplyr:::mutate_error")) {
error_name <- cnd$error_name
index <- sub("^.*_", "", error_name)
error_expression <- cnd$error_expression

bullets <- c(
x = glue("Could not create a temporary column for `..{index}`."),
i = glue("`..{index}` is `{error_expression}`.")
)
} else {
bullets <- c(x = conditionMessage(cnd))
}

abort(c(
"arrange() failed at implicit mutate() step. ",
bullets
), class = "dplyr_error")

})

# we can't just use vec_compare_proxy(data) because we need to apply
@@ -140,19 +157,3 @@

exec("order", !!!unname(proxies), decreasing = FALSE, na.last = TRUE)
}

# FIXME: Temporary util until the API change from
# https://github.com/r-lib/vctrs/pull/1155 is on CRAN and we can
# depend on it
delayedAssign(
"dplyr_proxy_order",
if (env_has(ns_env("vctrs"), "vec_proxy_order")) {
vec_proxy_order
} else {
function(x, ...) vec_proxy_compare(x, ..., relax = TRUE)
}
)

# Hack to pass CRAN check with older vctrs versions where
# `vec_proxy_order()` doesn't exist
utils::globalVariables("vec_proxy_order")
7 changes: 5 additions & 2 deletions R/bind.r
@@ -26,6 +26,9 @@
#' list of data frames is supplied, the labels are taken from the
#' names of the list. If no names are found a numeric sequence is
#' used instead.
#' @param .name_repair One of `"unique"`, `"universal"`, or
#' `"check_unique"`. See [vctrs::vec_as_names()] for the meaning of these
#' options.
#' @return `bind_rows()` and `bind_cols()` return the same type as
#' the first input, either a data frame, `tbl_df`, or `grouped_df`.
#' @examples
@@ -146,7 +149,7 @@ bind_rows <- function(..., .id = NULL) {

#' @export
#' @rdname bind
bind_cols <- function(...) {
bind_cols <- function(..., .name_repair = c("unique", "universal", "check_unique", "minimal")) {
dots <- list2(...)

dots <- squash_if(dots, vec_is_list)
@@ -156,7 +159,7 @@ bind_cols <- function(...) {
is_data_frame <- map_lgl(dots, is.data.frame)
names(dots)[is_data_frame] <- ""

out <- vec_cbind(!!!dots)
out <- vec_cbind(!!!dots, .name_repair = .name_repair)
if (!any(map_lgl(dots, is.data.frame))) {
out <- as_tibble(out)
}
18 changes: 10 additions & 8 deletions R/compat-purrr.R
@@ -1,4 +1,4 @@
# nocov start - compat-purrr (last updated: rlang 0.3.0.9000)
# nocov start - compat-purrr (last updated: rlang 0.3.2.9000)

# This file serves as a reference for compatibility functions for
# purrr. They are not drop-in replacements but allow a similar style
@@ -30,6 +30,11 @@ map_cpl <- function(.x, .f, ...) {
map_mold(.x, .f, complex(1), ...)
}

walk <- function(.x, .f, ...) {
map(.x, .f, ...)
invisible(.x)
}

pluck <- function(.x, .f) {
map(.x, `[[`, .f)
}
@@ -72,15 +77,12 @@ map2_chr <- function(.x, .y, .f, ...) {
map2_cpl <- function(.x, .y, .f, ...) {
as.vector(map2(.x, .y, .f, ...), "complex")
}
walk2 <- function(.x, .y, .f, ...) {
map2(.x, .y, .f, ...)
invisible(.x)
}

args_recycle <- function(args) {
lengths <- lengths(args)
lengths <- map_int(args, length)
n <- max(lengths)

abort_if_not(all(lengths == 1L | lengths == n))
stopifnot(all(lengths == 1L | lengths == n))
to_recycle <- lengths == 1L
args[to_recycle] <- map(args[to_recycle], function(x) rep.int(x, n))

@@ -97,7 +99,7 @@ pmap <- function(.l, .f, ...) {

probe <- function(.x, .p, ...) {
if (is_logical(.p)) {
abort_if_not(length(.p) == length(.x))
stopifnot(length(.p) == length(.x))
.p
} else {
map_lgl(.x, .p, ...)