[SPARK-28209][CORE][SHUFFLE] Proposed new shuffle writer API #25007

Closed
wants to merge 33 commits
Changes from 16 commits

Commits (33):
1957e82
[SPARK-25299] Introduce the new shuffle writer API (#5) (#520)
mccheah Mar 20, 2019
857552a
[SPARK-25299] Local shuffle implementation of the shuffle writer API …
mccheah Apr 3, 2019
d13037f
[SPARK-25299] Make UnsafeShuffleWriter use the new API (#536)
mccheah Apr 17, 2019
8f5fb60
[SPARK-25299] Use the shuffle writer plugin for the SortShuffleWriter…
mccheah Apr 15, 2019
e17c7ea
[SPARK-25299] Shuffle locations api (#517)
mccheah Apr 19, 2019
3f0c131
[SPARK-25299] Move shuffle writers back to being given specific parti…
mccheah Apr 19, 2019
f982df7
[SPARK-25299] Don't set map status twice in bypass merge sort shuffle…
mccheah Apr 19, 2019
6891197
[SPARK-25299] Propose a new NIO transfer API for partition writing. (…
mccheah May 24, 2019
7b44ed2
Remove shuffle location support.
mccheah Jun 27, 2019
df75f1f
Remove changes to UnsafeShuffleWriter
mccheah Jun 27, 2019
a8558af
Revert changes for SortShuffleWriter
mccheah Jun 27, 2019
806d7bb
Revert a bunch of other stuff
mccheah Jun 27, 2019
3167030
More reverts
mccheah Jun 27, 2019
70f59db
Set task contexts in failing test
mccheah Jun 28, 2019
3083d86
Fix style
mccheah Jun 28, 2019
4c3d692
Check for null on the block manager as well.
mccheah Jun 28, 2019
2421c92
Add task attempt id in the APIs
mccheah Jul 1, 2019
982f207
Address comments
mccheah Jul 8, 2019
594d1e2
Fix style
mccheah Jul 8, 2019
66aae91
Address comments.
mccheah Jul 12, 2019
8b432f9
Merge remote-tracking branch 'origin/master' into spark-shuffle-write…
mccheah Jul 17, 2019
9f597dd
Address comments.
mccheah Jul 18, 2019
86c1829
Restructure test
mccheah Jul 18, 2019
a7885ae
Add ShuffleWriteMetricsReporter to the createMapOutputWriter API.
mccheah Jul 19, 2019
9893c6c
Add more documentation
mccheah Jul 19, 2019
cd897e7
REfactor reading records from file in test
mccheah Jul 19, 2019
9f17b9b
Address comments
mccheah Jul 24, 2019
e53a001
Code tags
mccheah Jul 24, 2019
56fa450
Add some docs
mccheah Jul 24, 2019
b8b7b8d
Change mockito format in BypassMergeSortShuffleWriterSuite
mccheah Jul 25, 2019
2d29404
Remove metrics from the API.
mccheah Jul 29, 2019
06ea01a
Address more comments.
mccheah Jul 29, 2019
7dceec9
Args per line
mccheah Jul 30, 2019
31 changes: 31 additions & 0 deletions core/src/main/java/org/apache/spark/api/shuffle/ShuffleDataIO.java
@@ -0,0 +1,31 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.api.shuffle;
Contributor: Not sure if it is proper to add these interfaces here under o.a.s.api? It looks like most of the things under the api package are related to RDD functions. How about the package o.a.s.shuffle.api?

Contributor: +1


import org.apache.spark.annotation.Experimental;

/**
* :: Experimental ::
* An interface for launching Shuffle related components
*
* @since 3.0.0
*/
@Experimental
public interface ShuffleDataIO {
ShuffleExecutorComponents executor();
}
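
As a hedged illustration of how a plugin would hook into this entry point (the class names below are hypothetical and not part of this change), an implementation only has to hand back its executor-side components:

    // Hypothetical plugin entry point; LocalDiskShuffleDataIO and
    // LocalDiskShuffleExecutorComponents are illustrative names only.
    package com.example.shuffle;

    import org.apache.spark.api.shuffle.ShuffleDataIO;
    import org.apache.spark.api.shuffle.ShuffleExecutorComponents;

    public class LocalDiskShuffleDataIO implements ShuffleDataIO {
      @Override
      public ShuffleExecutorComponents executor() {
        // Return the executor-side half of the plugin tree.
        return new LocalDiskShuffleExecutorComponents();
      }
    }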
33 changes: 33 additions & 0 deletions core/src/main/java/org/apache/spark/api/shuffle/ShuffleExecutorComponents.java
@@ -0,0 +1,33 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.api.shuffle;

import org.apache.spark.annotation.Experimental;

/**
* :: Experimental ::
* An interface for building shuffle support for Executors
*
* @since 3.0.0
*/
@Experimental
public interface ShuffleExecutorComponents {
void initializeExecutor(String appId, String execId);

ShuffleWriteSupport writes();
Contributor: This should have a doc. At the very least, I'd mention that it's called once per ShuffleMapTask.

}
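
A minimal sketch of an executor-side implementation, assuming the plugin only needs the application and executor IDs for later bookkeeping (all class names here are illustrative, not part of this PR):

    // Hypothetical executor components; initializeExecutor runs once per executor
    // before any shuffle writes, and writes() exposes the write-side plugin surface.
    public class LocalDiskShuffleExecutorComponents implements ShuffleExecutorComponents {
      private String appId;
      private String execId;

      @Override
      public void initializeExecutor(String appId, String execId) {
        // Remember the IDs so later components can namespace their output.
        this.appId = appId;
        this.execId = execId;
      }

      @Override
      public ShuffleWriteSupport writes() {
        return new LocalDiskShuffleWriteSupport();
      }
    }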
37 changes: 37 additions & 0 deletions core/src/main/java/org/apache/spark/api/shuffle/ShuffleMapOutputWriter.java
@@ -0,0 +1,37 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.api.shuffle;

import java.io.IOException;

import org.apache.spark.annotation.Experimental;

/**
* :: Experimental ::
* An interface for creating and managing shuffle partition writers
*
* @since 3.0.0
*/
@Experimental
public interface ShuffleMapOutputWriter {
ShufflePartitionWriter getPartitionWriter(int partitionId) throws IOException;

void commitAllPartitions() throws IOException;

void abort(Throwable error) throws IOException;
Contributor: These should have some more docs, e.g. at least saying that one of these is created for the output of each ShuffleMapTask, and that the "partition" being referenced here is the reduce partition, so getPartitionWriter will get called once per reduce partition.

}
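
To make the lifecycle concrete, here is a hedged caller-side sketch (not code from this PR) of how a map task would drive one of these writers: one partition writer per reduce partition, then commit on success or abort on failure.

    // Illustrative driver loop; partitionedRecords[i] holds the already-partitioned
    // bytes destined for reduce partition i. Standard java.io imports assumed.
    void writeAllPartitions(ShuffleMapOutputWriter mapOutputWriter,
        byte[][] partitionedRecords) throws IOException {
      try {
        for (int reducePartition = 0; reducePartition < partitionedRecords.length; reducePartition++) {
          ShufflePartitionWriter partitionWriter =
              mapOutputWriter.getPartitionWriter(reducePartition);
          try (OutputStream out = partitionWriter.openStream()) {
            out.write(partitionedRecords[reducePartition]);
          }
        }
        mapOutputWriter.commitAllPartitions();
      } catch (IOException e) {
        // Give the plugin a chance to clean up partial output before rethrowing.
        mapOutputWriter.abort(e);
        throw e;
      }
    }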
44 changes: 44 additions & 0 deletions core/src/main/java/org/apache/spark/api/shuffle/ShufflePartitionWriter.java
@@ -0,0 +1,44 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.api.shuffle;

import java.io.IOException;
import java.io.OutputStream;

import org.apache.spark.annotation.Experimental;

/**
* :: Experimental ::
* An interface for giving streams / channels for shuffle writes.
Contributor: Nit: should we omit "channel"? There's nothing else in the API referencing it.

*
* @since 3.0.0
*/
@Experimental
public interface ShufflePartitionWriter {

/**
* Opens and returns an underlying {@link OutputStream} that can write bytes to the underlying
* data store.
*/
OutputStream openStream() throws IOException;

/**
* Get the number of bytes written by this writer's stream returned by {@link #openStream()}.
*/
long getNumBytesWritten();
}
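
A minimal illustrative implementation backed by a local file, assuming the plugin tracks bytes by wrapping the returned stream (the class name is hypothetical):

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.FilterOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    public class LocalFilePartitionWriter implements ShufflePartitionWriter {
      private final File partitionFile;
      private long numBytesWritten = 0L;

      public LocalFilePartitionWriter(File partitionFile) {
        this.partitionFile = partitionFile;
      }

      @Override
      public OutputStream openStream() throws IOException {
        // Count every byte that passes through so getNumBytesWritten() can report it.
        return new FilterOutputStream(new FileOutputStream(partitionFile)) {
          @Override
          public void write(int b) throws IOException {
            out.write(b);
            numBytesWritten++;
          }
        };
      }

      @Override
      public long getNumBytesWritten() {
        return numBytesWritten;
      }
    }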
36 changes: 36 additions & 0 deletions core/src/main/java/org/apache/spark/api/shuffle/ShuffleWriteSupport.java
@@ -0,0 +1,36 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.api.shuffle;

import java.io.IOException;

import org.apache.spark.annotation.Experimental;

/**
* :: Experimental ::
* An interface for deploying a shuffle map output writer
*
* @since 3.0.0
*/
@Experimental
public interface ShuffleWriteSupport {
@gcz2022 (Jul 8, 2019): Since ShuffleWriteSupport only contains one function, why do we add this layer? (Also, xxSupport is ambiguous.) Could we make ShuffleExecutorComponents.writes return a ShuffleMapOutputWriter directly?

Comment: Found another reason to remove this Write/ReadSupport layer: although it hasn't been proposed yet, ReadSupport directly contains partition-level functions (https://github.com/palantir/spark/blob/62c2664f1f298889357c6ebeb9b6f08962c94ceb/core/src/main/java/org/apache/spark/api/shuffle/ShuffleReadSupport.java#L31-L38). On the WriteSupport path, however, that is what the layer one level lower (the partition-level writer) does.

Contributor Author: It's important to keep interfaces minimal, and to keep each interface responsible for a single set of functionality. Since ShuffleExecutorComponents is eventually also going to support lifecycle operations, particularly stopExecutor, I'd like to keep that method separate from the methods that create the writers.

Contributor Author: I think there might be a world where we can coalesce ShuffleWriteSupport and ShuffleReadSupport into a more generic ShuffleIO, although we'd probably want to reconsider the naming of the interface at the root of the plugin tree, which is currently ShuffleDataIO.

Comment: Do you mean that ShuffleWriteSupport can contain more semantics/fields than ShuffleMapOutputWriter? Can you name some of them? Maybe it's because the name xxSupport is ambiguous; I think everything in xxSupport can be put into the corresponding xxer.

Contributor Author: I thought about this a bit more. One motivation for separating the write APIs from the read APIs is to pass in only the subsection of the plugin tree that is applicable in each case. So I only want to pass write-specific functionality to SortShuffleWriter, and only pass read-specific shuffle functionality to the reader side.

But I hold this conviction pretty loosely. We can change this - let me know.

Comment: @mccheah Yeah, separating write from read is pretty natural. Here we think we could maybe remove the WriteSupport layer (only the ShuffleMapOutputWriter is enough), and on the read side the current ReadSupport can be replaced with ShuffleBlockReader.

Contributor Author: That doesn't separate the concerns as I described. The only layer above ShuffleWriteSupport is ShuffleExecutorComponents. If we remove ShuffleWriteSupport, we make ShuffleExecutorComponents responsible for createMapOutputWriter. But presumably ShuffleExecutorComponents would also have createPartitionReader, meaning the ShuffleExecutorComponents passed to BypassMergeSortShuffleWriter would then have both read and write methods accessible in the writer code.

I would like BypassMergeSortShuffleWriter to only be able to call createMapOutputWriter, and nothing else.

Comment: Can the MapOutputWriter be created before being passed to BypassMergeSortShuffleWriter (i.e. the MapOutputWriter as a constructor parameter)?

Comment: A comment left here, @mccheah.

  ShuffleMapOutputWriter createMapOutputWriter(
      int shuffleId,
      int mapId,
      int numPartitions) throws IOException;
}
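
On the write path, a shuffle writer would obtain its per-task output writer from this interface. A hedged sketch of both sides follows; the LocalDisk* names and the caller-side variables are illustrative, not part of this PR.

    // Hypothetical implementation: hand out one map output writer per map task.
    public class LocalDiskShuffleWriteSupport implements ShuffleWriteSupport {
      @Override
      public ShuffleMapOutputWriter createMapOutputWriter(
          int shuffleId,
          int mapId,
          int numPartitions) throws IOException {
        return new LocalDiskShuffleMapOutputWriter(shuffleId, mapId, numPartitions);
      }
    }

    // Caller side (e.g. inside a shuffle writer), with illustrative variable names:
    ShuffleMapOutputWriter mapOutputWriter =
        writeSupport.createMapOutputWriter(shuffleId, mapId, numPartitions);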
53 changes: 53 additions & 0 deletions core/src/main/java/org/apache/spark/api/shuffle/SupportsTransferTo.java
@@ -0,0 +1,53 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.api.shuffle;

import java.io.IOException;

import org.apache.spark.annotation.Experimental;

/**
* :: Experimental ::
* Indicates that partition writers can transfer bytes directly from input byte channels to
* output channels that stream data to the underlying shuffle partition storage medium.
* <p>
* This API is separated out for advanced users because it only needs to be used for
* specific low-level optimizations. The idea is that the returned channel can transfer bytes
* from the input file channel out to the backing storage system without copying data into
* memory.
* <p>
* Most shuffle plugin implementations should use {@link ShufflePartitionWriter} instead.
*
* @since 3.0.0
*/
@Experimental
public interface SupportsTransferTo extends ShufflePartitionWriter {

/**
* Opens and returns a {@link TransferrableWritableByteChannel} for transferring bytes from
* input byte channels to the underlying shuffle data store.
*/
TransferrableWritableByteChannel openTransferrableChannel() throws IOException;

/**
* Returns the number of bytes written either by this writer's output stream opened by
* {@link #openStream()} or the byte channel opened by {@link #openTransferrableChannel()}.
*/
@Override
long getNumBytesWritten();
}
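
The intended call pattern, sketched under the assumption that callers fall back to the stream path when the optimization is not offered (copyFileViaStream is an assumed helper and standard java.io/java.nio imports are omitted):

    void copyPartitionFromFile(ShufflePartitionWriter writer, File inputFile) throws IOException {
      if (writer instanceof SupportsTransferTo) {
        // Zero-copy path: let the plugin pull bytes straight from the file channel.
        try (FileChannel in = new FileInputStream(inputFile).getChannel();
             TransferrableWritableByteChannel out =
                 ((SupportsTransferTo) writer).openTransferrableChannel()) {
          out.transferFrom(in, 0L, inputFile.length());
        }
      } else {
        // Fallback path: copy through the ordinary OutputStream.
        try (OutputStream out = writer.openStream()) {
          copyFileViaStream(inputFile, out);  // assumed helper, not part of this PR
        }
      }
    }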
54 changes: 54 additions & 0 deletions core/src/main/java/org/apache/spark/api/shuffle/TransferrableWritableByteChannel.java
@@ -0,0 +1,54 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.api.shuffle;

import java.io.Closeable;
import java.io.IOException;

import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import org.apache.spark.annotation.Experimental;

/**
* :: Experimental ::
* Represents an output byte channel that can copy bytes from input file channels to some
* arbitrary storage system.
* <p>
* This API is provided for advanced users who can transfer bytes from a file channel to
* some output sink without copying data into memory. Most users should not need to use
* this functionality; this is primarily provided for the built-in shuffle storage backends
* that persist shuffle files on local disk.
* <p>
* For a simpler alternative, see {@link ShufflePartitionWriter}.
*
* @since 3.0.0
*/
@Experimental
public interface TransferrableWritableByteChannel extends Closeable {

/**
* Copy all bytes from the source readable byte channel into this byte channel.
*
Contributor: Though you mention "copy all", it's probably worth repeating in this comment that this differs from FileChannel.transferTo(), in that this will block until all bytes have been transferred.

* @param source File to transfer bytes from. Do not call anything on this channel other than
* {@link FileChannel#transferTo(long, long, WritableByteChannel)}.
* @param transferStartPosition Start position of the input file to transfer from.
* @param numBytesToTransfer Number of bytes to transfer from the given source.
*/
void transferFrom(FileChannel source, long transferStartPosition, long numBytesToTransfer)
throws IOException;
}
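
A minimal illustrative implementation over a plain WritableByteChannel; per the review comment above, the important contract is that it keeps calling FileChannel.transferTo until every requested byte has been copied (the class name is hypothetical):

    import java.io.IOException;
    import java.nio.channels.FileChannel;
    import java.nio.channels.WritableByteChannel;

    public class DelegatingTransferrableWritableByteChannel
        implements TransferrableWritableByteChannel {

      private final WritableByteChannel delegate;

      public DelegatingTransferrableWritableByteChannel(WritableByteChannel delegate) {
        this.delegate = delegate;
      }

      @Override
      public void transferFrom(
          FileChannel source, long transferStartPosition, long numBytesToTransfer)
          throws IOException {
        long transferred = 0L;
        // Unlike a single transferTo call, loop until the full range has been written.
        while (transferred < numBytesToTransfer) {
          transferred += source.transferTo(
              transferStartPosition + transferred, numBytesToTransfer - transferred, delegate);
        }
      }

      @Override
      public void close() throws IOException {
        delegate.close();
      }
    }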