[FLINK-16960][runtime] Add PipelinedRegion interface #11647
Conversation
Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community review your pull request.
Automated Checks: Last check on commit 94bb5d5 (Mon Apr 06 12:00:29 UTC 2020). Warnings:
Mention the bot in a comment to re-run the automated checks.
Review Progress: Please see the Pull Request Review Guide for a full explanation of the review process. The bot is tracking the review progress through labels, which are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
Bot commands: The @flinkbot bot supports the following commands:
flink-runtime/src/main/java/org/apache/flink/runtime/topology/PipelinedRegion.java
 *
 * @return Iterable over pipelined regions in this topology
 */
default Iterable<PipelinedRegion<VID, RID, V, R>> getAllPipelinedRegions() {
It's ugly. In the SchedulingStrategy, one will have to write:
private final SchedulingTopology<?, ?> schedulingTopology;
...
Iterable<? extends PipelinedRegion<ExecutionVertexID, IntermediateResultPartitionID, ?, ?>> allPipelinedRegions = schedulingTopology.getAllPipelinedRegions();
PipelinedRegion<ExecutionVertexID, IntermediateResultPartitionID, ?, ?> next = allPipelinedRegions.iterator().next();
to iterate over the pipelined regions.
I think we can have an ExecutionPipelinedRegion interface which inherits the PipelinedRegion interface, similar to Topology -> SchedulingTopology. The SchedulingStrategy should operate on ExecutionPipelinedRegion instead.
Note that we already have a LogicalPipelinedRegion and I think we should also rework it. Then this method does not need to be default, since LogicalTopology can also return logical regions (we already have this method implementation).
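A rough sketch of the suggested hierarchy; the names beyond those mentioned above are hypothetical, and the signatures are heavily simplified compared to the real generically typed Flink interfaces:

```java
import java.util.List;

// Simplified stand-in for the generic PipelinedRegion interface.
interface PipelinedRegion<V> {
    Iterable<V> getVertices();
}

// Execution-level specialization, analogous to Topology -> SchedulingTopology;
// the SchedulingStrategy would operate on this type directly.
interface ExecutionPipelinedRegion extends PipelinedRegion<Integer> { }

public class HierarchySketch {
    public static void main(String[] args) {
        // A region containing three (hypothetical) execution vertex ids.
        ExecutionPipelinedRegion region = () -> List.of(1, 2, 3);
        // Callers can still treat it as a plain PipelinedRegion.
        PipelinedRegion<Integer> base = region;
        int count = 0;
        for (int v : base.getVertices()) {
            count++;
        }
        System.out.println(count + " vertices");
    }
}
```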
I pushed an update. Here is an example of how to use the API:
final SchedulingPipelinedRegion<?, ?> pr = schedulingTopology.getAllPipelinedRegions().iterator().next();
final SchedulingExecutionVertex<?, ?> vertex = pr.getVertex(null);
final Iterable<? extends SchedulingResultPartition<?, ?>> consumedResults = vertex.getConsumedResults();
final IntermediateDataSetID resultId = consumedResults.iterator().next().getResultId();
I am not too fond of the wildcard type parameters. One way to get rid of them is to remove the generically typed Topology
hierarchy. If code duplication of the pipelined region computation is a concern, the best idea I have at the moment is to translate the different topologies into an abstract graph structure on which we can run a connected components algorithm. The downsides of this approach are:
- must maintain two translation algorithms (for logical and execution level)
- performance penalty due to additional graph traversal
See below for a PoC (untested):
import org.apache.flink.shaded.curator4.com.google.common.graph.Graph;
import org.apache.flink.shaded.curator4.com.google.common.graph.GraphBuilder;
import org.apache.flink.shaded.curator4.com.google.common.graph.MutableGraph;

import java.util.Collections;
import java.util.HashSet;
import java.util.IdentityHashMap;
import java.util.Map;
import java.util.Set;

public final class PipelinedRegionComputeUtil2 {
public static Graph<SchedulingExecutionVertex2> toGraph(SchedulingTopology2 topology) {
final MutableGraph<SchedulingExecutionVertex2> graph = GraphBuilder.directed()
.allowsSelfLoops(false)
.build();
for (SchedulingExecutionVertex2 consumer : topology.getVertices()) {
graph.addNode(consumer);
for (SchedulingResultPartition2 resultPartition : consumer.getConsumedResults()) {
if (!resultPartition.getResultType().isPipelined()) {
continue;
}
for (SchedulingExecutionVertex2 producer : resultPartition.getConsumers()) {
graph.putEdge(producer, consumer);
}
}
}
return graph;
}
public static <V> Set<Set<V>> connectedComponents(Graph<V> graph) {
final Map<V, Set<V>> vertexToRegion = new IdentityHashMap<>();
for (V vertex : graph.nodes()) {
Set<V> currentRegion = new HashSet<>();
currentRegion.add(vertex);
vertexToRegion.put(vertex, currentRegion);
for (V producer : graph.predecessors(vertex)) {
// Producers are visited before consumers (nodes are added in topological
// order in toGraph), so the producer's region already exists and already
// contains the producer itself.
final Set<V> producerRegion = vertexToRegion.get(producer);
if (currentRegion != producerRegion) {
final Set<V> smallerSet;
final Set<V> largerSet;
if (currentRegion.size() < producerRegion.size()) {
smallerSet = currentRegion;
largerSet = producerRegion;
} else {
smallerSet = producerRegion;
largerSet = currentRegion;
}
for (V v : smallerSet) {
vertexToRegion.put(v, largerSet);
}
largerSet.addAll(smallerSet);
currentRegion = largerSet;
}
}
}
return uniqueRegions(vertexToRegion);
}
private static <V> Set<Set<V>> uniqueRegions(final Map<V, Set<V>> vertexToRegion) {
final Set<Set<V>> distinctRegions = Collections.newSetFromMap(new IdentityHashMap<>());
distinctRegions.addAll(vertexToRegion.values());
return distinctRegions;
}
}
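Since the connectedComponents step above does not depend on Flink types at all, the same merge-the-smaller-region idea can be shown as a minimal standalone sketch, with a plain predecessor map in place of the Guava Graph (names hypothetical, illustrative only):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public final class RegionSketch {

    /** Groups vertices into connected components, given each vertex's predecessors. */
    public static <V> Set<Set<V>> connectedComponents(Map<V, List<V>> predecessors) {
        final Map<V, Set<V>> vertexToRegion = new HashMap<>();
        for (Map.Entry<V, List<V>> entry : predecessors.entrySet()) {
            final V vertex = entry.getKey();
            Set<V> currentRegion = vertexToRegion
                .computeIfAbsent(vertex, v -> new HashSet<>(Collections.singleton(v)));
            for (V producer : entry.getValue()) {
                // Create the producer's singleton region lazily, so no particular
                // visiting order is required.
                final Set<V> producerRegion = vertexToRegion
                    .computeIfAbsent(producer, p -> new HashSet<>(Collections.singleton(p)));
                if (currentRegion != producerRegion) {
                    // Merge the smaller region into the larger one.
                    final Set<V> smaller =
                        currentRegion.size() < producerRegion.size() ? currentRegion : producerRegion;
                    final Set<V> larger = smaller == currentRegion ? producerRegion : currentRegion;
                    for (V v : smaller) {
                        vertexToRegion.put(v, larger);
                    }
                    larger.addAll(smaller);
                    currentRegion = larger;
                }
            }
        }
        // Regions are deduplicated by identity, since merged vertices share one Set.
        final Set<Set<V>> distinct = Collections.newSetFromMap(new IdentityHashMap<>());
        distinct.addAll(vertexToRegion.values());
        return distinct;
    }
}
```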
non-generic topology classes:
/**
* Topology of {@link SchedulingExecutionVertex}.
*/
public interface SchedulingTopology2 {
Iterable<SchedulingExecutionVertex2> getVertices();
/**
* Looks up the {@link SchedulingExecutionVertex} for the given {@link ExecutionVertexID}.
*
* @param executionVertexId identifying the respective scheduling vertex
* @return Optional containing the respective scheduling vertex or none if the vertex does not exist
*/
Optional<SchedulingExecutionVertex2> getVertex(ExecutionVertexID executionVertexId);
/**
* Looks up the {@link SchedulingExecutionVertex} for the given {@link ExecutionVertexID}.
*
* @param executionVertexId identifying the respective scheduling vertex
* @return The respective scheduling vertex
* @throws IllegalArgumentException If the vertex does not exist
*/
default SchedulingExecutionVertex2 getVertexOrThrow(ExecutionVertexID executionVertexId) {
return getVertex(executionVertexId).orElseThrow(
() -> new IllegalArgumentException("can not find vertex: " + executionVertexId));
}
/**
* Looks up the {@link SchedulingResultPartition} for the given {@link IntermediateResultPartitionID}.
*
* @param intermediateResultPartitionId identifying the respective scheduling result partition
* @return Optional containing the respective scheduling result partition or none if the partition does not exist
*/
Optional<SchedulingResultPartition2> getResultPartition(IntermediateResultPartitionID intermediateResultPartitionId);
/**
* Looks up the {@link SchedulingResultPartition} for the given {@link IntermediateResultPartitionID}.
*
* @param intermediateResultPartitionId identifying the respective scheduling result partition
* @return The respective scheduling result partition
* @throws IllegalArgumentException If the partition does not exist
*/
default SchedulingResultPartition2 getResultPartitionOrThrow(IntermediateResultPartitionID intermediateResultPartitionId) {
return getResultPartition(intermediateResultPartitionId).orElseThrow(
() -> new IllegalArgumentException("can not find partition: " + intermediateResultPartitionId));
}
}
/**
* Scheduling representation of {@link ExecutionVertex}.
*/
public interface SchedulingExecutionVertex2 {
ExecutionVertexID getId();
Iterable<SchedulingResultPartition2> getConsumedResults();
Iterable<SchedulingResultPartition2> getProducedResults();
/**
* Gets the state of the execution vertex.
*
* @return state of the execution vertex
*/
ExecutionState getState();
/**
* Get {@link InputDependencyConstraint}.
*
* @return input dependency constraint
*/
InputDependencyConstraint getInputDependencyConstraint();
}
/**
* Representation of {@link IntermediateResultPartition}.
*/
public interface SchedulingResultPartition2 {
IntermediateResultPartitionID getId();
ResultPartitionType getResultType();
SchedulingExecutionVertex2 getProducer();
Iterable<SchedulingExecutionVertex2> getConsumers();
/**
* Gets id of the intermediate result.
*
* @return id of the intermediate result
*/
IntermediateDataSetID getResultId();
/**
* Gets the {@link ResultPartitionState}.
*
* @return result partition state
*/
ResultPartitionState getState();
}
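The getVertexOrThrow/getResultPartitionOrThrow defaults above follow a common Optional-unwrapping pattern: the interface only requires the Optional-returning lookup, and the throwing variant comes for free. A minimal, hypothetical map-backed illustration:

```java
import java.util.Map;
import java.util.Optional;

// Hypothetical generic lookup mirroring the getVertex / getVertexOrThrow pair.
interface Lookup<K, T> {
    Optional<T> get(K key);

    // The default method turns the Optional-based lookup into a throwing one,
    // so implementations only need to provide get().
    default T getOrThrow(K key) {
        return get(key).orElseThrow(
            () -> new IllegalArgumentException("can not find: " + key));
    }
}

public class LookupDemo {
    public static void main(String[] args) {
        Map<String, Integer> backing = Map.of("v1", 1);
        Lookup<String, Integer> lookup = key -> Optional.ofNullable(backing.get(key));
        System.out.println(lookup.getOrThrow("v1"));
    }
}
```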
The POC should work, and I think it's fine to maintain two translation algorithms for the logical and execution graphs.
The main concern is the time needed to translate the graph and the GC pressure it causes, especially for large-scale jobs.
- For jobs with hundreds of millions of edges (e.g. a 10000x10000 map reduce), it already takes tens of seconds to build the ExecutionGraph, and it might take tens of seconds more to create the translated graph.
- Besides that, hundreds of millions of temporary edge instances must be created and kept alive at the same time in the translated graph. This increases the JM memory requirement, and an OOM might happen otherwise. It may also cause GC issues, since the instances are no longer needed once the regions are built.
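For a rough sense of scale, here is a back-of-the-envelope calculation for the 10000x10000 example above. The all-to-all connection pattern is assumed, and the per-edge object size is a guess, not a measured value:

```java
public class EdgeCountEstimate {
    public static void main(String[] args) {
        long producers = 10_000;
        long consumers = 10_000;
        // In an all-to-all pattern every consumer reads from every producer.
        long edges = producers * consumers; // 100 million edges
        // Assume ~48 bytes per temporary edge object (object header plus two
        // references and padding; an assumption for illustration only).
        long approxBytes = edges * 48L;
        System.out.println(edges + " edges, roughly "
            + (approxBytes >> 30) + " GiB of temporary objects");
    }
}
```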
 * Base topology for all logical and execution topologies.
 * A topology consists of {@link Vertex} and {@link Result}.
 */
public interface SimpleTopology<VID extends VertexID, RID extends ResultID,
How about naming it BaseTopology? I feel that in Flink, Simple is usually used for a simple implementation of an interface, such as SimpleSlotProvider or SimpleCounter.
done
/**
 * Pipelined region on logical level, i.e., {@link JobVertex} level.
 */
public interface ILogicalPipelinedRegion<V extends LogicalVertex<V, R>, R extends LogicalResult<V, R>> extends PipelinedRegion<JobVertexID, IntermediateDataSetID, V, R> {
Maybe name the interface LogicalPipelinedRegion to be aligned with the other topology related interfaces (without the 'I' prefix)? We can rename the current LogicalPipelinedRegion to DefaultLogicalPipelinedRegion.
done
LGTM.
What is the purpose of the change
This adds the PipelinedRegion interface to org.apache.flink.runtime.topology.Topology.
Brief change log
Verifying this change
This change is a trivial rework / code cleanup without any test coverage.
Does this pull request potentially affect one of the following parts:
@Public(Evolving): (yes / no)
Documentation