[SPARK-31351][DOC] Migration Guide Auditing for Spark 3.0 Release
### What changes were proposed in this pull request?
This PR audits the migration guides for the Spark 3.0 release:

- correct grammar errors
- clean up some items
- replace HTML tables with Markdown tables

### Why are the changes needed?
N/A

### Does this PR introduce any user-facing change?
No

### How was this patch tested?
Screenshots:

![screencapture-127-0-0-1-4000-sql-migration-guide-html-2020-04-04-21_36_29](https://user-images.githubusercontent.com/11567269/78467043-9477d800-76bd-11ea-8ab0-3d51ea5e9fa5.png)
![Screen Shot 2020-04-04 at 9 28 13 PM](https://user-images.githubusercontent.com/11567269/78467045-98a3f580-76bd-11ea-9e4b-927bf12e683a.png)
![Screen Shot 2020-04-04 at 9 28 02 PM](https://user-images.githubusercontent.com/11567269/78467046-98a3f580-76bd-11ea-8ea3-9f13cb8d200b.png)
![Screen Shot 2020-04-04 at 9 21 40 PM](https://user-images.githubusercontent.com/11567269/78467047-993c8c00-76bd-11ea-8c29-91afc68eb590.png)

Closes #28125 from gatorsmile/updateMigrationGuide3.0.

Authored-by: gatorsmile <gatorsmile@gmail.com>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
gatorsmile authored and HyukjinKwon committed Apr 8, 2020
1 parent 0fc859b commit a3d8394
Showing 5 changed files with 164 additions and 315 deletions.
6 changes: 3 additions & 3 deletions docs/core-migration-guide.md
@@ -26,7 +26,7 @@ license: |

- The `org.apache.spark.ExecutorPlugin` interface and related configuration have been replaced with
`org.apache.spark.plugin.SparkPlugin`, which adds new functionality. Plugins using the old
interface need to be modified to extend the new interfaces. Check the
interface must be modified to extend the new interfaces. Check the
[Monitoring](monitoring.html) guide for more details.

- Deprecated method `TaskContext.isRunningLocally` has been removed. Local execution was removed, and it always returned `false`.
@@ -35,6 +35,6 @@ license: |

- Deprecated method `AccumulableInfo.apply` has been removed because creating `AccumulableInfo` is disallowed.

- Event log file will be written as UTF-8 encoding, and Spark History Server will replay event log files as UTF-8 encoding. Previously Spark writes event log file as default charset of driver JVM process, so Spark History Server of Spark 2.x is needed to read the old event log files in case of incompatible encoding.
- Event log files will be written in UTF-8 encoding, and the Spark History Server will replay event log files in UTF-8 encoding. Previously, Spark wrote the event log file in the default charset of the driver JVM process, so the Spark History Server of Spark 2.x is needed to read old event log files in case of incompatible encoding.

- A new protocol for fetching shuffle blocks is used. It's recommended that external shuffle services be upgraded when running Spark 3.0 apps. Old external shuffle services can still be used by setting the configuration `spark.shuffle.useOldFetchProtocol` to `true`. Otherwise, Spark may run into errors with messages like `IllegalArgumentException: Unexpected message type: <number>`.
- A new protocol for fetching shuffle blocks is used. It's recommended that external shuffle services be upgraded when running Spark 3.0 apps. You can still use old external shuffle services by setting the configuration `spark.shuffle.useOldFetchProtocol` to `true`, as sketched below. Otherwise, Spark may run into errors with messages like `IllegalArgumentException: Unexpected message type: <number>`.
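As an illustration (not part of this commit), here is a minimal PySpark sketch of opting into the legacy fetch protocol when an old external shuffle service cannot be upgraded; the application name is made up.

```python
from pyspark.sql import SparkSession

# Hypothetical example: keep using a pre-3.0 external shuffle service by
# enabling the legacy shuffle fetch protocol before the session is created.
spark = (
    SparkSession.builder
    .appName("legacy-shuffle-example")  # illustrative name
    .config("spark.shuffle.useOldFetchProtocol", "true")
    .getOrCreate()
)
```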
31 changes: 31 additions & 0 deletions docs/css/main.css
@@ -2,6 +2,37 @@
Author's custom styles
========================================================================== */
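
/* Style tables generated from Markdown: the guides' HTML tables were replaced
   with Markdown tables in this change, so table elements need basic styling. */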

table {
margin: 15px 0;
padding: 0;
}

table tr {
border-top: 1px solid #cccccc;
background-color: white;
margin: 0;
padding: 0;
}

table tr:nth-child(2n) {
background-color: #F1F4F5;
}

table tr th {
font-weight: bold;
border: 1px solid #cccccc;
text-align: left;
margin: 0;
padding: 6px 13px;
}

table tr td {
border: 1px solid #cccccc;
text-align: left;
margin: 0;
padding: 6px 13px;
}

.navbar .brand {
height: 50px;
width: 110px;
78 changes: 18 additions & 60 deletions docs/pyspark-migration-guide.md
@@ -27,67 +27,25 @@ Many items of SQL migration can be applied when migrating PySpark to higher versions.
Please refer to [Migration Guide: SQL, Datasets and DataFrame](sql-migration-guide.html).

## Upgrading from PySpark 2.4 to 3.0
- In Spark 3.0, PySpark requires a pandas version of 0.23.2 or higher to use pandas-related functionality, such as `toPandas`, `createDataFrame` from a pandas DataFrame, and so on.

- Since Spark 3.0, PySpark requires a Pandas version of 0.23.2 or higher to use Pandas related functionality, such as `toPandas`, `createDataFrame` from Pandas DataFrame, etc.

- Since Spark 3.0, PySpark requires a PyArrow version of 0.12.1 or higher to use PyArrow related functionality, such as `pandas_udf`, `toPandas` and `createDataFrame` with "spark.sql.execution.arrow.enabled=true", etc.

- In PySpark, when creating a `SparkSession` with `SparkSession.builder.getOrCreate()`, if there is an existing `SparkContext`, the builder was trying to update the `SparkConf` of the existing `SparkContext` with configurations specified to the builder, but the `SparkContext` is shared by all `SparkSession`s, so we should not update them. Since 3.0, the builder comes to not update the configurations. This is the same behavior as Java/Scala API in 2.3 and above. If you want to update them, you need to update them prior to creating a `SparkSession`.

- In PySpark, when Arrow optimization is enabled, if Arrow version is higher than 0.11.0, Arrow can perform safe type conversion when converting Pandas.Series to Arrow array during serialization. Arrow will raise errors when detecting unsafe type conversion like overflow. Setting `spark.sql.execution.pandas.convertToArrowArraySafely` to true can enable it. The default setting is false. PySpark's behavior for Arrow versions is illustrated in the table below:
<table class="table">
<tr>
<th>
<b>PyArrow version</b>
</th>
<th>
<b>Integer Overflow</b>
</th>
<th>
<b>Floating Point Truncation</b>
</th>
</tr>
<tr>
<td>
version < 0.11.0
</td>
<td>
Raise error
</td>
<td>
Silently allows
</td>
</tr>
<tr>
<td>
version > 0.11.0, arrowSafeTypeConversion=false
</td>
<td>
Silent overflow
</td>
<td>
Silently allows
</td>
</tr>
<tr>
<td>
version > 0.11.0, arrowSafeTypeConversion=true
</td>
<td>
Raise error
</td>
<td>
Raise error
</td>
</tr>
</table>

- Since Spark 3.0, `createDataFrame(..., verifySchema=True)` validates `LongType` as well in PySpark. Previously, `LongType` was not verified and resulted in `None` in case the value overflows. To restore this behavior, `verifySchema` can be set to `False` to disable the validation.

- Since Spark 3.0, `Column.getItem` is fixed such that it does not call `Column.apply`. Consequently, if `Column` is used as an argument to `getItem`, the indexing operator should be used.
For example, `map_col.getItem(col('id'))` should be replaced with `map_col[col('id')]`.

- As of Spark 3.0 `Row` field names are no longer sorted alphabetically when constructing with named arguments for Python versions 3.6 and above, and the order of fields will match that as entered. To enable sorted fields by default, as in Spark 2.4, set the environment variable `PYSPARK_ROW_FIELD_SORTING_ENABLED` to "true" for both executors and driver - this environment variable must be consistent on all executors and driver; otherwise, it may cause failures or incorrect answers. For Python versions less than 3.6, the field names will be sorted alphabetically as the only option.
- In Spark 3.0, PySpark requires a PyArrow version of 0.12.1 or higher to use PyArrow-related functionality, such as `pandas_udf`, `toPandas` and `createDataFrame` with `spark.sql.execution.arrow.enabled=true`, and so on (a sketch follows this list).

- In PySpark, when creating a `SparkSession` with `SparkSession.builder.getOrCreate()`, if there is an existing `SparkContext`, the builder tried to update the `SparkConf` of the existing `SparkContext` with configurations specified to the builder; but the `SparkContext` is shared by all `SparkSession`s, so it should not be updated. In 3.0, the builder no longer updates the configurations. This is the same behavior as the Java/Scala API in 2.3 and above. If you want to update them, you need to do so prior to creating a `SparkSession` (see the sketch after this list).

- In PySpark, when Arrow optimization is enabled, if the Arrow version is higher than 0.11.0, Arrow can perform safe type conversion when converting `pandas.Series` to an Arrow array during serialization. Arrow raises errors when it detects an unsafe type conversion, such as overflow. You can enable this by setting `spark.sql.execution.pandas.convertToArrowArraySafely` to `true`; the default is `false` (see the sketch after this list). PySpark behavior for Arrow versions is illustrated in the following table:

| PyArrow version | Integer overflow | Floating point truncation |
| ---------------- | ---------------- | ------------------------- |
| 0.11.0 and below | Raise error | Silently allows |
| \> 0.11.0, arrowSafeTypeConversion=false | Silent overflow | Silently allows |
| \> 0.11.0, arrowSafeTypeConversion=true | Raise error | Raise error |

- In Spark 3.0, `createDataFrame(..., verifySchema=True)` validates `LongType` as well in PySpark. Previously, `LongType` was not verified, and the value resulted in `None` when it overflowed. To restore this behavior, set `verifySchema` to `False` to disable the validation (sketched after this list).

- In Spark 3.0, `Column.getItem` is fixed such that it does not call `Column.apply`. Consequently, if `Column` is used as an argument to `getItem`, the indexing operator should be used. For example, `map_col.getItem(col('id'))` should be replaced with `map_col[col('id')]` (sketched after this list).

- As of Spark 3.0, `Row` field names are no longer sorted alphabetically when constructing with named arguments for Python versions 3.6 and above, and the fields will be in the order in which they were entered. To enable sorted fields by default, as in Spark 2.4, set the environment variable `PYSPARK_ROW_FIELD_SORTING_ENABLED` to `true` for both executors and driver; this environment variable must be consistent on all executors and the driver, otherwise it may cause failures or incorrect answers (a sketch follows this list). For Python versions earlier than 3.6, the field names are always sorted alphabetically.
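The following sketches are not part of this commit; they illustrate the items above under stated assumptions, using made-up data and a default local session.

For the pandas and PyArrow version requirements, a minimal sketch of the Arrow-backed conversions they enable:

```python
import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# In Spark 3.0, these Arrow-backed conversions assume pandas >= 0.23.2 and
# PyArrow >= 0.12.1 are installed.
spark.conf.set("spark.sql.execution.arrow.enabled", "true")

pdf = pd.DataFrame({"id": [1, 2, 3]})  # illustrative data
sdf = spark.createDataFrame(pdf)       # pandas -> Spark DataFrame
roundtrip = sdf.toPandas()             # Spark DataFrame -> pandas
```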
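For the `SparkSession.builder.getOrCreate()` item, a sketch of supplying `SparkContext`-level configuration before the first session exists; the memory setting is purely illustrative:

```python
from pyspark.sql import SparkSession

# Since 3.0 the builder no longer pushes options into an existing SparkContext's
# SparkConf, so context-level settings must be in place before the first
# SparkSession (and its SparkContext) is created.
spark = (
    SparkSession.builder
    .config("spark.executor.memory", "2g")  # illustrative setting
    .getOrCreate()
)

# A later builder call against the already-running SparkContext will not change
# such settings; stop the context and create a new session if they must differ.
```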
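For the Arrow safe type conversion item and the table above, a sketch of enabling safe conversion so an overflow raises instead of being handled silently; the schema and value are illustrative:

```python
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, ByteType

spark = SparkSession.builder.getOrCreate()
spark.conf.set("spark.sql.execution.arrow.enabled", "true")

# With PyArrow > 0.11.0, request safe pandas -> Arrow conversion (default false).
spark.conf.set("spark.sql.execution.pandas.convertToArrowArraySafely", "true")

schema = StructType([StructField("b", ByteType())])
pdf = pd.DataFrame({"b": [1000]})  # 1000 does not fit in a signed byte
# With safe conversion enabled, the line below should raise an error rather
# than silently overflow:
# sdf = spark.createDataFrame(pdf, schema=schema)
```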
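For the `verifySchema` item, a sketch of the `LongType` validation; the value is illustrative:

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, LongType

spark = SparkSession.builder.getOrCreate()
schema = StructType([StructField("v", LongType())])
too_big = 2 ** 64  # does not fit in a 64-bit signed long

# In Spark 3.0, the default verifySchema=True rejects the overflowing value:
# spark.createDataFrame([(too_big,)], schema)  # raises an error in 3.0

# Disabling validation restores the old behavior described above, where the
# overflowing value ends up as None:
df = spark.createDataFrame([(too_big,)], schema, verifySchema=False)
```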
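For the `Column.getItem` item, a sketch of looking up a map column with a key taken from another column; the data is illustrative:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, create_map, lit

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1,), (2,)], ["id"]).withColumn(
    "map_col", create_map(lit(1), lit("a"), lit(2), lit("b"))
)

# Spark 2.4 allowed map_col.getItem(col("id")); in 3.0, use the indexing
# operator when the key is itself a Column.
looked_up = df.select(col("map_col")[col("id")].alias("value"))
```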
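For the `Row` field-ordering item, a sketch on Python 3.6+; the field names are made up:

```python
from pyspark.sql import Row

# Spark 3.0: fields keep the order in which the keyword arguments were passed.
r = Row(zebra=1, apple=2)
print(r.__fields__)  # ['zebra', 'apple'] in 3.0; ['apple', 'zebra'] in 2.4

# To restore the 2.4 alphabetical ordering, export the variable below before
# Spark starts, consistently on the driver and every executor:
#   PYSPARK_ROW_FIELD_SORTING_ENABLED=true
```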

## Upgrading from PySpark 2.3 to 2.4

