chore: Update scalafmt-core from 3.7.14 to 3.7.15 #800

Merged · 3 commits · Oct 25, 2023
3 changes: 3 additions & 0 deletions .git-blame-ignore-revs
@@ -3,3 +3,6 @@ a834cf94453ed2f3ab1b87818c2fd124fe87fa2a

# Scala Steward: Reformat with scalafmt 3.7.11
11269e71a3460ae21f2a96ac8416c0bdd3f1f3b0

+# Scala Steward: Reformat with scalafmt 3.7.15
+17f6ce5807fb3a91938824a285e30f786adea570
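
For context, the hashes listed in .git-blame-ignore-revs let git blame skip these pure-reformat commits. GitHub's blame view picks the file up automatically by name; locally it takes effect once the file is registered, e.g.:

    git config blame.ignoreRevsFile .git-blame-ignore-revs
    # or per invocation:
    git blame --ignore-revs-file .git-blame-ignore-revs <file>
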
2 changes: 1 addition & 1 deletion .scalafmt.conf
@@ -1,4 +1,4 @@
-version = 3.7.14
+version = 3.7.15
style = default
runner.dialect=scala212
maxColumn = 120
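
After the version bump in .scalafmt.conf, the reformat itself is re-applied by running the formatter; assuming the build uses the sbt-scalafmt plugin (not shown in this diff), that would be something like:

    sbt scalafmtAll scalafmtSbt

The @link → @@link rewrites in the Scala source below are the output of that reformat under scalafmt 3.7.15.
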
@@ -45,7 +45,7 @@ class ExcelDataSource extends DataSourceV2 with ReadSupport with WriteSupport wi
/* The string that represents the format that this data source provider uses */
override def shortName(): String = "excel"

-/** Creates a {@link DataSourceReader} to scan the data from this data source.
+/** Creates a {@@linkDataSourceReader} to scan the data from this data source.
*
* If this method fails (by throwing an exception), the action will fail and no Spark job will be submitted.
*
@@ -55,7 +55,7 @@ class ExcelDataSource extends DataSourceV2 with ReadSupport with WriteSupport wi
override def createReader(options: DataSourceOptions): DataSourceReader =
new ExcelDataSourceReader(sparkSession, options.asMap.asScala.toMap, options.paths.toSeq, None)

-/** Creates a {@link DataSourceReader} to scan the data from this data source.
+/** Creates a {@@linkDataSourceReader} to scan the data from this data source.
*
* If this method fails (by throwing an exception), the action will fail and no Spark job will be submitted.
*
@@ -67,14 +67,14 @@ class ExcelDataSource extends DataSourceV2 with ReadSupport with WriteSupport wi
override def createReader(schema: StructType, options: DataSourceOptions): DataSourceReader =
new ExcelDataSourceReader(sparkSession, options.asMap.asScala.toMap, options.paths.toSeq, Some(schema))

-/** Creates an optional {@link DataSourceWriter} to save the data to this data source. Data sources can return None if
+/** Creates an optional {@@linkDataSourceWriter} to save the data to this data source. Data sources can return None if
* there is no writing needed to be done according to the save mode.
*
* If this method fails (by throwing an exception), the action will fail and no Spark job will be submitted.
*
* @param writeUUID
* A unique string for the writing job. It's possible that there are many writing jobs running at the same time,
-* and the returned {@link DataSourceWriter} can use this job id to distinguish itself from other jobs.
+* and the returned {@@linkDataSourceWriter} can use this job id to distinguish itself from other jobs.
* @param schema
* the schema of the data to be written.
* @param mode
@@ -160,14 +160,14 @@ class ExcelDataSourceReader(
_pushedFilters
}

-/** Returns the filters that are pushed to the data source via {@link #pushFilters(Filter[])}.
+/** Returns the filters that are pushed to the data source via {@@link#pushFilters(Filter[])} .
*
* There are 3 kinds of filters:
* 1. pushable filters which don't need to be evaluated again after scanning. 2. pushable filters which still need
* to be evaluated after scanning, e.g. parquet row group filter. 3. non-pushable filters. Both case 1 and 2
* should be considered as pushed filters and should be returned by this method.
*
-* It's possible that there is no filters in the query and {@link #pushFilters(Filter[])} is never called, empty
+* It's possible that there is no filters in the query and {@@link#pushFilters(Filter[])} is never called, empty
* array should be returned for this case.
*/
override def pushedFilters(): Array[Filter] = _pushedFilters
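
The contract spelled out in the Scaladoc above can be illustrated with a small, self-contained sketch (a hypothetical ToyReader against the Spark 2.4-era DataSource V2 API, not code from this repository): pushFilters keeps what the source can evaluate during the scan and returns the remainder to Spark, while pushedFilters reports what was kept, or an empty array if pushFilters was never called.

import java.util.Collections
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.sources.{EqualTo, Filter}
import org.apache.spark.sql.sources.v2.reader.{InputPartition, SupportsPushDownFilters}
import org.apache.spark.sql.types.StructType

// Hypothetical reader, not part of this PR; SupportsPushDownFilters extends DataSourceReader.
class ToyReader extends SupportsPushDownFilters {
  private var pushed: Array[Filter] = Array.empty

  override def readSchema(): StructType = new StructType().add("id", "int")

  override def planInputPartitions(): java.util.List[InputPartition[InternalRow]] =
    Collections.emptyList[InputPartition[InternalRow]]()

  // Keep the filters this source can evaluate during the scan; return the rest,
  // which Spark will re-evaluate after scanning.
  override def pushFilters(filters: Array[Filter]): Array[Filter] = {
    val (supported, unsupported) = filters.partition {
      case _: EqualTo => true // this toy source only handles equality filters
      case _          => false
    }
    pushed = supported
    unsupported
  }

  // Must be empty when pushFilters was never called, i.e. the query had no filters.
  override def pushedFilters(): Array[Filter] = pushed
}
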
@@ -177,7 +177,7 @@ class ExcelDataSourceReader(
* Implementation should try its best to prune the unnecessary columns or nested fields, but it's also OK to do the
* pruning partially, e.g., a data source may not be able to prune nested fields, and only prune top-level columns.
*
-* Note that, data source readers should update {@link DataSourceReader#readSchema()} after applying column pruning.
+* Note that, data source readers should update {@@linkDataSourceReader#readSchema()} after applying column pruning.
*/
override def pruneColumns(requiredSchema: StructType): Unit = {
_requiredSchema = Some(requiredSchema)
@@ -216,7 +216,7 @@ class ExcelDataSourceReader(
}
}

-/** Returns a list of {@link InputPartition}s. Each {@link InputPartition} is responsible for creating a data reader
+/** Returns a list of {@@linkInputPartition} s. Each {@@linkInputPartition} is responsible for creating a data reader
* to output data of one RDD partition. The number of input partitions returned here is the same as the number of RDD
* partitions this scan outputs.
*
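
Finally, a hedged usage sketch (hypothetical path and object name, not taken from this repository) of how the pieces above fit together: the "excel" short name resolves to ExcelDataSource, Spark calls one of the createReader overloads, and each InputPartition returned by planInputPartitions becomes one partition of the resulting DataFrame's RDD.

import org.apache.spark.sql.SparkSession

object ExcelReadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("excel-read-sketch").getOrCreate()

    val df = spark.read
      .format("excel")            // resolved via ExcelDataSource.shortName()
      .load("/data/report.xlsx")  // hypothetical path

    // One RDD partition per InputPartition produced by planInputPartitions().
    println(s"partitions = ${df.rdd.getNumPartitions}")

    spark.stop()
  }
}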