Released by @hadley on Jun 23, 2017

New features

  • dplyr support has been updated to require dplyr 0.7.0 and use dbplyr. This
    means that you can now more naturally work directly with DBI connections.
    dplyr now also uses modern BigQuery SQL, which supports a broader set of
    translations. Along the way I've also fixed some SQL generation bugs (#48).
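
    For example, a minimal sketch (the project, dataset, and billing values
    are placeholders):

    ```r
    library(dplyr)

    con <- DBI::dbConnect(
      bigrquery::bigquery(),
      project = "publicdata",          # public sample data
      dataset = "samples",
      billing = "my-billing-project"   # placeholder: your own billing project
    )

    shakespeare <- tbl(con, "shakespeare")  # lazy reference; no data pulled yet
    shakespeare %>%
      group_by(corpus) %>%
      summarise(words = sum(word_count, na.rm = TRUE)) %>%
      collect()                             # runs the query in BigQuery
    ```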

  • The DBI driver gets a new name: bigquery().

  • New insert_extract_job() makes it possible to extract data from a table
    and save it to Google Cloud Storage (@realAkhmed, #119).
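
    A sketch of the new job (the table and bucket names are placeholders,
    and the argument names are assumed to mirror the other insert_*_job()
    functions):

    ```r
    job <- insert_extract_job(
      project = "my-project",
      dataset = "my_dataset",
      table = "my_table",
      destination_uris = "gs://my-bucket/my_table_*.csv"  # placeholder bucket
    )
    wait_for(job)  # block until the extract job completes
    ```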

  • New insert_table() allows you to create empty tables in a dataset.
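
    For example (the project and dataset names are placeholders):

    ```r
    # Create an empty table named "new_table" in an existing dataset
    insert_table("my-project", "my_dataset", "new_table")
    ```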

  • All POST requests (inserts, updates, copies and query_exec()) now
    take .... This allows you to add arbitrary additional data to the
    request body, making it possible to use parts of the BigQuery API
    that are otherwise not exposed (#149). snake_case argument names are
    automatically converted to camelCase, so you can stick consistently
    to snake case in your R code.
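
    A sketch of the pass-through (useQueryCache is a field in the BigQuery
    jobs API; whether it is honoured at this point in the body is an
    assumption):

    ```r
    df <- query_exec(
      "SELECT 1 AS x",
      project = "my-project",   # placeholder
      use_query_cache = FALSE   # sent as useQueryCache in the request body
    )
    ```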

  • Full support for DATE, TIME, and DATETIME types (#128).
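
    For example, with standard SQL (the project is a placeholder):

    ```r
    query_exec(
      "SELECT DATE '2017-06-23' AS d, TIME '12:34:56' AS t",
      project = "my-project",
      use_legacy_sql = FALSE
    )
    ```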

Bug fixes and minor improvements

  • All bigrquery requests now have a custom user agent that specifies the
    versions of bigrquery and httr that are used (#151).

  • dbConnect() gains new use_legacy_sql, page_size, and quiet arguments
    that are passed on to query_exec(). These allow you to control query
    options at the connection level.
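
    For example (the project and dataset are placeholders):

    ```r
    con <- DBI::dbConnect(
      bigrquery::bigquery(),
      project = "my-project",
      dataset = "my_dataset",
      use_legacy_sql = FALSE,  # standard SQL for every query on this connection
      page_size = 5000,        # rows fetched per page
      quiet = TRUE             # no progress bars
    )
    ```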

  • insert_upload_job() now sends data in newline-delimited JSON instead
    of CSV (#97). This should be considerably faster and avoids character
    encoding issues (#45). POSIXlt columns are now also correctly
    coerced to TIMESTAMP (#98).
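
    A sketch of a typical upload (the names are placeholders):

    ```r
    job <- insert_upload_job(
      project = "my-project",
      dataset = "my_dataset",
      table = "mtcars",
      values = mtcars   # a local data frame, sent as newline-delimited JSON
    )
    wait_for(job)
    ```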

  • insert_query_job() and query_exec() gain new arguments:

    • quiet = TRUE suppresses the progress bars if needed.
    • use_legacy_sql = FALSE lets you opt out of the legacy SQL
      system (#124, @backlin).
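
    Together, a quiet standard-SQL query looks like this (the project is a
    placeholder):

    ```r
    df <- query_exec(
      "SELECT 1 AS x",
      project = "my-project",
      quiet = TRUE,
      use_legacy_sql = FALSE
    )
    ```
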
  • list_tables() (#108) and list_datasets() (#141) are now paginated.
    By default they retrieve 50 items per page, and will iterate until they
    have retrieved everything.
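
    A sketch, assuming the paging is exposed through page_size and
    max_pages arguments:

    ```r
    tables <- list_tables("my-project", "my_dataset",
                          page_size = 50, max_pages = Inf)
    ```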

  • list_tabledata() and query_exec() now give a nicer progress bar,
    including estimated time remaining (#100).

  • query_exec() should be considerably faster: profiling revealed that
    ~40% of the time was spent in a single line inside a function that
    parses BigQuery's JSON into an R data frame. I replaced that slow R
    code with a faster C function.

  • set_oauth2.0_cred() allows users to supply their own Google OAuth
    application when setting credentials (#130, @jarodmeng).
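
    A sketch (the client id and secret are placeholders from your own
    Google Cloud project):

    ```r
    app <- httr::oauth_app(
      "google",
      key = "my-client-id.apps.googleusercontent.com",
      secret = "my-client-secret"
    )
    set_oauth2.0_cred(app)
    ```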

  • wait_for() now reports the query's total bytes billed, which is
    more accurate because it takes caching and other factors into account.