prepare release 0.15.1-beta
davidrabinowitz committed Apr 27, 2020
1 parent 1095dfd commit b1177aa
Showing 3 changed files with 12 additions and 8 deletions.
CHANGES.md (4 additions, 0 deletions)
@@ -1,5 +1,9 @@
# Release Notes

+## 0.15.1-beta - 2020-04-27
+* PR #158: Users can now add the `spark.datasource.bigquery` prefix to the configuration options in order to support Spark's `--conf` command line flag (see the sketch below)
+* PR #160: View materialization is performed only on action, fixing a bug where view materialization was done too early
+
## 0.15.0-beta - 2020-04-20
* PR #150: Reading `DataFrame`s should be quicker, especially in interactive usage such as in notebooks
* PR #154: Upgraded to the BigQuery Storage v1 API
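As a rough illustration of the PR #158 change noted above, connector options can also be supplied as Spark configuration entries once they carry the `spark.datasource.bigquery` prefix, for example through `spark-submit --conf` or programmatically as sketched below. `viewsEnabled` is used only as an example option and the table name is a placeholder.

```python
from pyspark.sql import SparkSession

# Sketch only: with the spark.datasource.bigquery prefix (PR #158), any
# connector option can be passed as Spark configuration, equivalent to
# `--conf spark.datasource.bigquery.viewsEnabled=true` on spark-submit.
# `viewsEnabled` is just an example option; `my_dataset.my_table` is a
# placeholder table name.
spark = (
    SparkSession.builder
    .config("spark.jars.packages",
            "com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.15.1-beta")
    .config("spark.datasource.bigquery.viewsEnabled", "true")
    .getOrCreate()
)

df = spark.read.format("bigquery").option("table", "my_dataset.my_table").load()
df.printSchema()
```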
README.md (7 additions, 7 deletions)
@@ -76,8 +76,8 @@ repository. It can be used with the `--packages` option or the

| Scala version | Connector Artifact |
| --- | --- |
-| Scala 2.11 | `com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.15.0-beta` |
-| Scala 2.12 | `com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.15.0-beta` |
+| Scala 2.11 | `com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.15.1-beta` |
+| Scala 2.12 | `com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.15.1-beta` |

## Hello World Example

@@ -533,7 +533,7 @@ using the following code:
```python
from pyspark.sql import SparkSession
spark = SparkSession.builder\
.config("spark.jars.packages", "com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.15.0-beta")\
.config("spark.jars.packages", "com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.15.1-beta")\
.getOrCreate()
df = spark.read.format("bigquery")\
.option("table","dataset.table")\
@@ -543,7 +543,7 @@ df = spark.read.format("bigquery")\
**Scala:**
```scala
val spark = SparkSession.builder
.config("spark.jars.packages", "com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.15.0-beta")
.config("spark.jars.packages", "com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.15.1-beta")
.getOrCreate()
val df = spark.read.format("bigquery")
.option("table","dataset.table")
@@ -552,7 +552,7 @@

In case the Spark cluster is using Scala 2.12 (it's optional for Spark 2.4.x,
mandatory in 3.0.x), then the relevant package is
-com.google.cloud.spark:spark-bigquery-with-dependencies_**2.12**:0.15.0-beta. In
+com.google.cloud.spark:spark-bigquery-with-dependencies_**2.12**:0.15.1-beta. In
order to know which Scala version is used, please run the following code:

**Python:**
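As a rough sketch of that check (the README's exact snippet is collapsed in this diff), the driver's Scala version can be printed from PySpark through the py4j JVM gateway:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Ask the driver JVM which Scala build is on its classpath,
# e.g. "version 2.11.12" or "version 2.12.10".
print(spark.sparkContext._jvm.scala.util.Properties.versionString())
```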
@@ -576,14 +576,14 @@ To include the connector in your project:
<dependency>
<groupId>com.google.cloud.spark</groupId>
<artifactId>spark-bigquery-with-dependencies_${scala.version}</artifactId>
-<version>0.15.0-beta</version>
+<version>0.15.1-beta</version>
</dependency>
```

### SBT

```sbt
-libraryDependencies += "com.google.cloud.spark" %% "spark-bigquery-with-dependencies" % "0.15.0-beta"
+libraryDependencies += "com.google.cloud.spark" %% "spark-bigquery-with-dependencies" % "0.15.1-beta"
```

## Building the Connector
build.sbt (1 addition, 1 deletion)
@@ -4,7 +4,7 @@ lazy val sparkVersion = "2.4.0"

lazy val commonSettings = Seq(
organization := "com.google.cloud.spark",
-version := "0.15.1-beta-SNAPSHOT",
+version := "0.15.1-beta",
scalaVersion := scala211Version,
crossScalaVersions := Seq(scala211Version, scala212Version)
)
