
[SPARK-36526][SQL] DSV2 Index Support: Add supportsIndex interface #33754

Closed · wants to merge 7 commits (diff shown: changes from 5 of the 7 commits)
SupportsIndex.java
@@ -0,0 +1,84 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.sql.connector.catalog.index;

import java.util.Map;
import java.util.Properties;

import org.apache.spark.annotation.Evolving;
import org.apache.spark.sql.catalyst.analysis.IndexAlreadyExistsException;
import org.apache.spark.sql.catalyst.analysis.NoSuchIndexException;
import org.apache.spark.sql.catalyst.analysis.NoSuchTableException;
import org.apache.spark.sql.connector.catalog.CatalogPlugin;
import org.apache.spark.sql.connector.catalog.Identifier;
import org.apache.spark.sql.connector.expressions.NamedReference;

/**
* Catalog methods for working with indexes.

Contributor: Can we refine the classdoc?
Author: Fixed. Thanks.
*
* @since 3.3.0
*/
@Evolving
public interface SupportsIndex extends CatalogPlugin {

Contributor: Shall we follow SupportsPartitionManagement and make it extend Table?
Contributor: Never mind: an index has a unique name, so DROP INDEX does not need a table.
Contributor: Actually, not all databases make index names globally unique (see https://www.w3schools.com/sql/sql_ref_drop_index.asp). I think we can still make SupportsIndex extend Table if the SQL syntax is DROP INDEX index_name ON table_name.
Contributor: After thinking about it more, I think DROP INDEX index_name ON [TABLE] table_name is better, as it is more consistent with the CREATE INDEX syntax. It is also more flexible: the index name only needs to be unique within the table.
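The per-table uniqueness point can be illustrated with a small sketch. The `IndexRegistry` class below is hypothetical, not part of this PR: it models index names that are unique only within a table, which is why `DROP INDEX index_name ON table_name` needs both identifiers.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch (not part of this PR): index names are unique only
// within a table, so both the table and the index name identify an index.
class IndexRegistry {
    // table name -> set of index names owned by that table
    private final Map<String, Set<String>> byTable = new HashMap<>();

    // Returns false if this table already has an index with this name.
    boolean create(String table, String index) {
        return byTable.computeIfAbsent(table, t -> new HashSet<>()).add(index);
    }

    // DROP INDEX index_name ON table_name: both identifiers are required
    // to resolve the index unambiguously.
    boolean drop(String table, String index) {
        Set<String> names = byTable.get(table);
        return names != null && names.remove(index);
    }
}
```

Under this model the same index name can exist on two different tables, which a bare `DROP INDEX index_name` could not resolve.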


/**
* Creates an index.
*
* @param indexName the name of the index to be created
* @param indexType the type of the index to be created
* @param table the table on which the index is to be created
* @param columns the columns on which the index is to be created
* @param columnProperties the properties of the columns on which the index is to be created
* @param properties the properties of the index to be created
* @throws IndexAlreadyExistsException If the index already exists (optional)
* @throws UnsupportedOperationException If create index is not a supported operation
*/
void createIndex(String indexName,

Contributor: For a partitioned table, do we plan to support index creation at the table level (for all partitions), or at the individual partition level?
Author: This is up to the data source implementation. I think it makes more sense at the file level (each data file has an index file).
Contributor (@LuciferYang, Aug 24, 2021): I prefer to support index creation at the individual partition level. For existing data in a production environment, if we only support index creation at the table level, it is likely to be an impossible job for users.
Author: Sorry, I don't think I explained this clearly: index creation is actually done by the underlying data source, and it is up to that implementation at which level the index is created. For a file-based data source, I believe the index is created at the file level, not at the table or partition level.
Contributor: Thanks for the explanation.

String indexType,
Identifier table,
NamedReference[] columns,
Map<NamedReference, Properties>[] columnProperties,
Properties properties)
throws IndexAlreadyExistsException, UnsupportedOperationException;

Contributor: UnsupportedOperationException is not a checked Java exception; we don't need to put it in the throws clause.
Author: Removed.

/**
* Drops the index with the given name.
*
* @param indexName the name of the index to be dropped.
* @return true if the index is dropped
* @throws NoSuchIndexException If the index does not exist (optional)
* @throws UnsupportedOperationException If drop index is not a supported operation

Contributor: Ditto: no need to declare the unchecked UnsupportedOperationException in the throws clause.
*/
boolean dropIndex(String indexName) throws NoSuchIndexException, UnsupportedOperationException;

/**
* Checks whether an index exists.

Contributor (suggested change):
- * Checks whether an index exists.
+ * Checks whether an index exists in this table.
Author: Fixed.

*
* @param indexName the name of the index
* @return true if the index exists, false otherwise
*/
boolean indexExists(String indexName);

/**
* Lists all the indexes in a table.
*
* @param table the table to be checked on for indexes
* @throws NoSuchTableException If the table does not exist
*/
TableIndex[] listIndexes(Identifier table) throws NoSuchTableException;
}
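To make the contract above concrete, here is a minimal, self-contained model of its semantics. It is a sketch only: Spark's Identifier, NamedReference, and the dedicated exception classes are replaced with plain Strings and IllegalStateException so the example compiles on its own, and the table/column arguments of createIndex are omitted.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;

// Toy in-memory model of the SupportsIndex contract (hypothetical class,
// not Spark's API): create/drop/exists/list over a name -> type map.
class ToyIndexCatalog {
    private final Map<String, String> indexTypes = new LinkedHashMap<>();

    void createIndex(String indexName, String indexType, Properties properties) {
        if (indexTypes.containsKey(indexName)) {
            // stands in for IndexAlreadyExistsException
            throw new IllegalStateException("Index '" + indexName + "' already exists");
        }
        indexTypes.put(indexName, indexType);
    }

    boolean dropIndex(String indexName) {
        if (!indexTypes.containsKey(indexName)) {
            // stands in for NoSuchIndexException
            throw new IllegalStateException("Index '" + indexName + "' not found");
        }
        indexTypes.remove(indexName);
        return true;
    }

    boolean indexExists(String indexName) {
        return indexTypes.containsKey(indexName);
    }

    String[] listIndexes() {
        return indexTypes.keySet().toArray(new String[0]);
    }
}
```

A real connector would instead resolve the table via Identifier and surface TableIndex values from listIndexes; the sketch only shows the lifecycle the interface implies.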
TableIndex.java
@@ -0,0 +1,90 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.sql.connector.catalog.index;

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.spark.annotation.Evolving;
import org.apache.spark.sql.connector.catalog.Identifier;
import org.apache.spark.sql.connector.expressions.NamedReference;

/**
* Index in a table
*
* @since 3.3.0
*/
@Evolving
public final class TableIndex {
private String indexName;
private String indexType;
private Identifier table;
private NamedReference[] columns;
private Map<NamedReference, Properties> columnProperties = Collections.emptyMap();
private Properties properties;

public TableIndex(
String indexName,
String indexType,
Identifier table,
NamedReference[] columns,
Map<NamedReference, Properties> columnProperties,
Properties properties) {
this.indexName = indexName;
this.indexType = indexType;
this.table = table;
this.columns = columns;
this.columnProperties = columnProperties;
this.properties = properties;
}

/**
* @return the name of this index.
*/
public String indexName() { return indexName; }

/**
* @return the type of this index.
*/
public String indexType() { return indexType; }

/**
* @return the table this index is on.
*/
public Identifier table() { return table; }

/**
* @return the column(s) this index is on. Could be multiple columns (a multi-column index).
*/
public NamedReference[] columns() { return columns; }

Member: This is actually what the pandas API on Spark implemented; probably we should migrate to DSv2 eventually, in the very far future. cc @xinrong-databricks @ueshin @itholic FYI


/**
* @return the map from each column to its column properties.
*/
public Map<NamedReference, Properties> columnProperties() { return columnProperties; }

/**
* @return the index properties.
*/
public Properties properties() { return properties; }

Properties columnProperties(NamedReference column) { return columnProperties.get(column); }

Contributor: Do we need this API? People can just get all the column properties as a map and do whatever they want with them.
Author: Removed. Thanks!

}
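For a sense of how a connector might assemble the columnProperties argument, here is a small standalone sketch. Plain String column names stand in for Spark's NamedReference (not available outside a Spark classpath), and the "sortOrder" key is a hypothetical per-column option, not one defined by this PR.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

// Hypothetical example: building per-column index properties.
// String column names stand in for Spark's NamedReference here.
class ColumnPropertiesExample {
    static Map<String, Properties> buildColumnProperties() {
        Properties userIdProps = new Properties();
        userIdProps.setProperty("sortOrder", "asc"); // assumed option name

        Map<String, Properties> columnProperties = new HashMap<>();
        columnProperties.put("user_id", userIdProps);
        return columnProperties;
    }
}
```

java.util.Properties is used here because it is the type the TableIndex constructor above actually takes for per-column and index-level options.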
AlreadyExistException.scala

@@ -78,3 +78,6 @@ class PartitionsAlreadyExistException(message: String) extends AnalysisException

class FunctionAlreadyExistsException(db: String, func: String)
extends AnalysisException(s"Function '$func' already exists in database '$db'")

class IndexAlreadyExistsException(indexName: String, table: Identifier)
extends AnalysisException(s"Index '$indexName' already exists in table ${table.quoted}")
NoSuchItemException.scala

@@ -95,3 +95,6 @@ class NoSuchPartitionsException(message: String) extends AnalysisException(messa

class NoSuchTempFunctionException(func: String)
extends AnalysisException(s"Temporary function '$func' not found")

class NoSuchIndexException(indexName: String)
extends AnalysisException(s"Index '$indexName' not found")