prepare release 0.15.0-beta
davidrabinowitz committed Apr 21, 2020
1 parent fb4bb75 commit c7cf321
Showing 3 changed files with 27 additions and 8 deletions.
6 changes: 6 additions & 0 deletions CHANGES.md
Original file line number Diff line number Diff line change
@@ -1,5 +1,11 @@
# Release Notes

## 0.15.0-beta - 2020-04-20
* PR #150: Reading `DataFrame`s should be quicker, especially in interactive usage such as in notebooks
* PR #154: Upgraded to the BigQuery Storage v1 API
* PR #146: Authentication can be done using [AccessToken](https://cloud.google.com/sdk/gcloud/reference/auth/application-default/print-access-token)
on top of Credentials file, Credentials, and the `GOOGLE_APPLICATION_CREDENTIALS` environment variable.

## 0.14.0-beta - 2020-03-31
* Issue #96: Added Arrow as a supported format for reading from BigQuery
* Issue #130: Added the field description to the schema metadata
27 changes: 20 additions & 7 deletions README.md
Original file line number Diff line number Diff line change
@@ -76,8 +76,8 @@ repository. It can be used with the `--packages` option or the

| Scala version | Connector Artifact |
| --- | --- |
| Scala 2.11 | `com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.14.0-beta` |
| Scala 2.12 | `com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.14.0-beta` |
| Scala 2.11 | `com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.15.0-beta` |
| Scala 2.12 | `com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.15.0-beta` |
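The coordinates in the table above all follow the Maven pattern `group:artifact_scalaBinaryVersion:version`. As a hedged illustration (the helper name is hypothetical, not part of the connector), the coordinate can be assembled like this:

```python
def connector_artifact(scala_binary_version: str,
                       connector_version: str = "0.15.0-beta") -> str:
    """Build the connector coordinate for --packages / spark.jars.packages."""
    return ("com.google.cloud.spark:"
            f"spark-bigquery-with-dependencies_{scala_binary_version}"
            f":{connector_version}")

print(connector_artifact("2.12"))
# → com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.15.0-beta
```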

## Hello World Example

@@ -533,7 +533,7 @@ using the following code:
```python
from pyspark.sql import SparkSession
spark = SparkSession.builder\
.config("spark.jars.packages", "com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.14.0-beta")\
.config("spark.jars.packages", "com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.15.0-beta")\
.getOrCreate()
df = spark.read.format("bigquery")\
.option("table","dataset.table")\
@@ -543,7 +543,7 @@ df = spark.read.format("bigquery")\
**Scala:**
```scala
val spark = SparkSession.builder
.config("spark.jars.packages", "com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.14.0-beta")
.config("spark.jars.packages", "com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.15.0-beta")
.getOrCreate()
val df = spark.read.format("bigquery")
.option("table","dataset.table")
@@ -552,7 +552,7 @@ val df = spark.read.format("bigquery")

If the Spark cluster uses Scala 2.12 (optional for Spark 2.4.x,
mandatory in 3.0.x), then the relevant package is
com.google.cloud.spark:spark-bigquery-with-dependencies_**2.12**:0.14.0-beta. In
com.google.cloud.spark:spark-bigquery-with-dependencies_**2.12**:0.15.0-beta. In
order to know which Scala version is used, please run the following code:

**Python:**
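The actual snippets are collapsed in this diff view. As a sketch (not necessarily the README's exact code), the binary Scala version can be extracted from the runtime's version string; in PySpark the input could come from the JVM gateway, which is an assumption about PySpark internals, not part of this README:

```python
import re

def scala_binary_version(version_string: str) -> str:
    """Extract the binary Scala version (e.g. "2.11") from a string
    like "version 2.11.12"."""
    match = re.search(r"(\d+\.\d+)\.\d+", version_string)
    if match is None:
        raise ValueError(f"unrecognized Scala version string: {version_string!r}")
    return match.group(1)

# In a PySpark session the input could be obtained via (assumed internals):
#   version_string = spark.sparkContext._jvm.scala.util.Properties.versionString()
print(scala_binary_version("version 2.11.12"))  # → 2.11
```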
@@ -576,14 +576,14 @@ To include the connector in your project:
<dependency>
<groupId>com.google.cloud.spark</groupId>
<artifactId>spark-bigquery-with-dependencies_${scala.version}</artifactId>
<version>0.14.0-beta</version>
<version>0.15.0-beta</version>
</dependency>
```

### SBT

```sbt
libraryDependencies += "com.google.cloud.spark" %% "spark-bigquery-with-dependencies" % "0.14.0-beta"
libraryDependencies += "com.google.cloud.spark" %% "spark-bigquery-with-dependencies" % "0.15.0-beta"
```

## Building the Connector
@@ -631,3 +631,16 @@ or
```
spark.conf.set("credentialsFile", "</path/to/key/file>")
```

Another alternative to passing the credentials is to pass the access token used for authenticating
the API calls to the Google Cloud Platform APIs. You can get the access token by running
`gcloud auth application-default print-access-token`.

```
spark.read.format("bigquery").option("gcpAccessToken", "<access-token>")
```
or
```
spark.conf.set("gcpAccessToken", "<access-token>")
```
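Putting the two steps above together, a hedged sketch of fetching the token with the gcloud CLI and building the reader options (the helper name is hypothetical; it assumes the gcloud CLI is installed and application-default credentials are configured):

```python
import subprocess
from typing import Optional

def bigquery_read_options(table: str,
                          access_token: Optional[str] = None) -> dict:
    """Build options for spark.read.format("bigquery"), fetching an
    access token via the gcloud CLI when one is not supplied."""
    if access_token is None:
        # Assumes gcloud is installed and application-default
        # credentials are configured.
        access_token = subprocess.run(
            ["gcloud", "auth", "application-default", "print-access-token"],
            check=True, capture_output=True, text=True,
        ).stdout.strip()
    return {"table": table, "gcpAccessToken": access_token}
```

These options could then be applied with `spark.read.format("bigquery").options(**bigquery_read_options("dataset.table")).load()`.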

2 changes: 1 addition & 1 deletion build.sbt
Original file line number Diff line number Diff line change
Expand Up @@ -4,7 +4,7 @@ lazy val sparkVersion = "2.4.0"

lazy val commonSettings = Seq(
organization := "com.google.cloud.spark",
version := "0.14.1-beta-SNAPSHOT",
version := "0.15.0-beta",
scalaVersion := scala211Version,
crossScalaVersions := Seq(scala211Version, scala212Version)
)
