
Commit

prepare release 0.20.0
davidrabinowitz committed Apr 29, 2021
1 parent c3a752d commit ba3d741
Showing 3 changed files with 17 additions and 10 deletions.
7 changes: 7 additions & 0 deletions CHANGES.md
@@ -1,5 +1,12 @@
# Release Notes

## 0.20.0 - 2021-03-29
* PR #375: Added support for pseudo columns - time partitioned tables now support the _PARTITIONTIME and _PARTITIONDATE fields (see the sketch after this list)
* Issue #190: Writing data to BigQuery now properly populates the field description
* Issue #265: Fixed nested conjunctions/disjunctions when using the AVRO read format
* Issue #326: Fixed netty_tcnative_windows.dll shading
* Arrow has been upgraded to version 4.0.0
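
To illustrate the pseudo-column item above, here is a minimal sketch (not part of this commit) of reading a date-partitioned table and filtering on `_PARTITIONDATE`; the dataset and table names are hypothetical, and connector 0.20.0 is assumed to be on the classpath:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# "my_dataset.events" is a hypothetical date-partitioned BigQuery table.
df = spark.read.format("bigquery") \
    .load("my_dataset.events")

# _PARTITIONDATE is exposed as a regular column; the filter should be pushed
# down so that only the matching partition is scanned.
recent = df.filter("_PARTITIONDATE = DATE '2021-04-01'")
recent.show()
```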

## 0.19.1 - 2021-03-01
* PR #324 - Restoring version 0.18.1 dependencies due to networking issues
* BigQuery API has been upgraded to version 1.123.2
18 changes: 9 additions & 9 deletions README.md
@@ -68,8 +68,8 @@ The latest version of the connector is publicly available in the following links

| version | Link |
| --- | --- |
| Scala 2.11 | `gs://spark-lib/bigquery/spark-bigquery-with-dependencies_2.11-0.19.1.jar` ([HTTP link](https://storage.googleapis.com/spark-lib/bigquery/spark-bigquery-with-dependencies_2.11-0.19.1.jar)) |
| Scala 2.12 | `gs://spark-lib/bigquery/spark-bigquery-with-dependencies_2.12-0.19.1.jar` ([HTTP link](https://storage.googleapis.com/spark-lib/bigquery/spark-bigquery-with-dependencies_2.12-0.19.1.jar)) |
| Scala 2.11 | `gs://spark-lib/bigquery/spark-bigquery-with-dependencies_2.11-0.20.0.jar` ([HTTP link](https://storage.googleapis.com/spark-lib/bigquery/spark-bigquery-with-dependencies_2.11-0.20.0.jar)) |
| Scala 2.12 | `gs://spark-lib/bigquery/spark-bigquery-with-dependencies_2.12-0.20.0.jar` ([HTTP link](https://storage.googleapis.com/spark-lib/bigquery/spark-bigquery-with-dependencies_2.12-0.20.0.jar)) |
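
As a hedged aside (not part of this commit), one way to attach the published jar directly is through the `spark.jars` configuration, assuming the cluster can resolve `gs://` URIs, as Dataproc clusters can:

```python
from pyspark.sql import SparkSession

# Sketch: attach the published Scala 2.12 jar straight from GCS.
spark = SparkSession.builder \
    .config("spark.jars",
            "gs://spark-lib/bigquery/spark-bigquery-with-dependencies_2.12-0.20.0.jar") \
    .getOrCreate()

# "dataset.table" is a placeholder for an existing BigQuery table.
df = spark.read.format("bigquery").load("dataset.table")
```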

The connector is also available from the
[Maven Central](https://repo1.maven.org/maven2/com/google/cloud/spark/)
@@ -78,8 +78,8 @@ repository. It can be used using the `--packages` option or the

| version | Connector Artifact |
| --- | --- |
| Scala 2.11 | `com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.19.1` |
| Scala 2.12 | `com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.19.1` |
| Scala 2.11 | `com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.20.0` |
| Scala 2.12 | `com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.20.0` |

If you want to keep up with the latest version of the connector, the following links can be used. Note that for production
environments, where the connector version should be pinned, one of the above links should be used.
@@ -694,7 +694,7 @@ using the following code:
```python
from pyspark.sql import SparkSession
spark = SparkSession.builder\
.config("spark.jars.packages", "com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.19.1")\
.config("spark.jars.packages", "com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.20.0")\
.getOrCreate()
df = spark.read.format("bigquery")\
.load("dataset.table")
```
@@ -703,15 +703,15 @@ df = spark.read.format("bigquery")\
**Scala:**
```scala
val spark = SparkSession.builder
.config("spark.jars.packages", "com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.19.1")
.config("spark.jars.packages", "com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.20.0")
.getOrCreate()
val df = spark.read.format("bigquery")
.load("dataset.table")
```

If the Spark cluster is using Scala 2.12 (optional for Spark 2.4.x,
mandatory in 3.0.x), then the relevant package is
com.google.cloud.spark:spark-bigquery-with-dependencies_**2.12**:0.19.1. In
com.google.cloud.spark:spark-bigquery-with-dependencies_**2.12**:0.20.0. In
To find out which Scala version is used, run the following code:

**Python:**
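The collapsed hunk hides the snippet itself; as a rough sketch (not necessarily the exact code in the README), the Scala version can be read from a running PySpark session like this:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Prints the Scala version of the JVM backing the session, e.g. "version 2.12.10",
# which tells you whether the _2.11 or _2.12 artifact is needed.
print(spark.sparkContext._jvm.scala.util.Properties.versionString())
```
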
@@ -735,14 +735,14 @@ To include the connector in your project:
### Maven

```xml
<dependency>
<groupId>com.google.cloud.spark</groupId>
<artifactId>spark-bigquery-with-dependencies_${scala.version}</artifactId>
<version>0.19.1</version>
<version>0.20.0</version>
</dependency>
```

### SBT

```sbt
libraryDependencies += "com.google.cloud.spark" %% "spark-bigquery-with-dependencies" % "0.19.1"
libraryDependencies += "com.google.cloud.spark" %% "spark-bigquery-with-dependencies" % "0.20.0"
```

## Building the Connector
2 changes: 1 addition & 1 deletion build.sbt
@@ -24,7 +24,7 @@ lazy val nettyTcnativeVersion = "2.0.34.Final"

lazy val commonSettings = Seq(
organization := "com.google.cloud.spark",
version := "0.19.2-SNAPSHOT",
version := "0.20.0",
scalaVersion := scala211Version,
crossScalaVersions := Seq(scala211Version, scala212Version)
)
