
[FLINK-16960][runtime] Add PipelinedRegion interface #11647

Closed
wants to merge 2 commits

Conversation


@GJL GJL commented Apr 6, 2020

What is the purpose of the change

This adds the PipelinedRegion interface to org.apache.flink.runtime.topology.Topology.

Brief change log

  • See commit

Verifying this change

This change is a trivial rework / code cleanup without any test coverage.

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): (yes / no)
  • The public API, i.e., is any changed class annotated with @Public(Evolving): (yes / no)
  • The serializers: (yes / no / don't know)
  • The runtime per-record code paths (performance sensitive): (yes / no / don't know)
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't know)
  • The S3 file system connector: (yes / no / don't know)

Documentation

  • Does this pull request introduce a new feature? (yes / no)
  • If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)

@GJL GJL requested a review from zhuzhurk April 6, 2020 11:58

flinkbot commented Apr 6, 2020

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit 94bb5d5 (Mon Apr 06 12:00:29 UTC 2020)

Warnings:

  • No documentation files were touched! Remember to keep the Flink docs up to date!

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.


The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier


flinkbot commented Apr 6, 2020

CI report:

Bot commands

The @flinkbot bot supports the following commands:
  • @flinkbot run travis re-run the last Travis build
  • @flinkbot run azure re-run the last Azure build

*
* @return Iterable over pipelined regions in this topology
*/
default Iterable<PipelinedRegion<VID, RID, V, R>> getAllPipelinedRegions() {
@GJL GJL (Member Author) commented Apr 6, 2020

It's ugly. In the SchedulingStrategy, one will have to write:

private final SchedulingTopology<?, ?> schedulingTopology;
...
Iterable<? extends PipelinedRegion<ExecutionVertexID, IntermediateResultPartitionID, ?, ?>> allPipelinedRegions = schedulingTopology.getAllPipelinedRegions();
PipelinedRegion<ExecutionVertexID, IntermediateResultPartitionID, ?, ?> next = allPipelinedRegions.iterator().next();

to iterate over the pipelined regions.

@zhuzhurk zhuzhurk (Contributor) commented Apr 7, 2020

I think we can have an ExecutionPipelinedRegion interface which inherits the PipelinedRegion interface.
Similar to Topology -> SchedulingTopology.
The SchedulingStrategy should operate on ExecutionPipelinedRegion instead.

Note that we already have a LogicalPipelinedRegion and I think we should also rework it. Then this method does not need to be default since LogicalTopology can also return logical regions (we already have this method implementation).
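The specialization zhuzhurk suggests could be sketched as follows. This is a minimal, hypothetical illustration: the class names besides PipelinedRegion/ExecutionPipelinedRegion, the single-type-parameter base interface, and all method signatures are illustrative stand-ins, not the actual Flink interfaces.

```java
import java.util.Arrays;
import java.util.List;

public class RegionHierarchyDemo {

	// Simplified stand-in for the generic base interface.
	public interface PipelinedRegion<V> {
		Iterable<V> getVertices();
	}

	// Execution-level vertex; a single id accessor keeps the demo small.
	public interface SchedulingExecutionVertex {
		String getId();
	}

	// The specialization: fixes the vertex type so callers need no wildcards,
	// mirroring the Topology -> SchedulingTopology relationship.
	public interface ExecutionPipelinedRegion
			extends PipelinedRegion<SchedulingExecutionVertex> {
	}

	public static class SimpleExecutionRegion implements ExecutionPipelinedRegion {
		private final List<SchedulingExecutionVertex> vertices;

		public SimpleExecutionRegion(List<SchedulingExecutionVertex> vertices) {
			this.vertices = vertices;
		}

		@Override
		public Iterable<SchedulingExecutionVertex> getVertices() {
			return vertices;
		}
	}

	public static void main(String[] args) {
		SchedulingExecutionVertex v = () -> "v1";
		ExecutionPipelinedRegion region = new SimpleExecutionRegion(Arrays.asList(v));
		// Callers see a concrete vertex type instead of `? extends ...` wildcards.
		for (SchedulingExecutionVertex vertex : region.getVertices()) {
			System.out.println(vertex.getId()); // prints v1
		}
	}
}
```

A SchedulingStrategy written against ExecutionPipelinedRegion would then avoid the wildcard-heavy declarations shown in the comment above.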

@GJL GJL (Member Author) commented Apr 8, 2020

I pushed an update. Here is an example on how to use the API:

final SchedulingPipelinedRegion<?, ?> pr = schedulingTopology.getAllPipelinedRegions().iterator().next();
final SchedulingExecutionVertex<?, ?> vertex = pr.getVertex(null);
final Iterable<? extends SchedulingResultPartition<?, ?>> consumedResults = vertex.getConsumedResults();
final IntermediateDataSetID resultId = consumedResults.iterator().next().getResultId();

I am not too fond of the wildcard type parameters. One way to get rid of them is to remove the generically typed Topology hierarchy. If code duplication of the pipelined region computation is a concern, the best idea I have at the moment is to translate the different topologies into an abstract graph structure on which we can run a connected components algorithm. The downsides of this approach are:

  • must maintain two translation algorithms (for logical and execution level)
  • performance penalty due to additional graph traversal

See below for a PoC (untested):

import org.apache.flink.shaded.curator4.com.google.common.graph.Graph;
...
public final class PipelinedRegionComputeUtil2 {

	public static Graph<SchedulingExecutionVertex2> toGraph(SchedulingTopology2 topology) {
		final MutableGraph<SchedulingExecutionVertex2> graph = GraphBuilder.directed()
			.allowsSelfLoops(false)
			.build();

		for (SchedulingExecutionVertex2 consumer : topology.getVertices()) {
			graph.addNode(consumer);

			for (SchedulingResultPartition2 resultPartition : consumer.getConsumedResults()) {

				if (!resultPartition.getResultType().isPipelined()) {
					continue;
				}

				for (SchedulingExecutionVertex2 producer : resultPartition.getConsumers()) {
					graph.putEdge(producer, consumer);
				}
			}
		}

		return graph;
	}

	public static <V> Set<Set<V>> connectedComponents(Graph<V> graph) {
		final Map<V, Set<V>> vertexToRegion = new IdentityHashMap<>();

		for (V vertex : graph.nodes()) {
			// Reuse an existing region if this vertex was already reached as a
			// producer earlier; creating a fresh set here would split a region
			// that has already been merged.
			Set<V> currentRegion = vertexToRegion.computeIfAbsent(
				vertex, v -> new HashSet<>(Collections.singleton(v)));

			for (V producer : graph.predecessors(vertex)) {
				// The producer may not have been visited yet, so create its
				// region on first sight instead of risking a NullPointerException.
				final Set<V> producerRegion = vertexToRegion.computeIfAbsent(
					producer, p -> new HashSet<>(Collections.singleton(p)));

				if (currentRegion != producerRegion) {
					final Set<V> smallerSet;
					final Set<V> largerSet;
					if (currentRegion.size() < producerRegion.size()) {
						smallerSet = currentRegion;
						largerSet = producerRegion;
					} else {
						smallerSet = producerRegion;
						largerSet = currentRegion;
					}
					for (V v : smallerSet) {
						vertexToRegion.put(v, largerSet);
					}
					largerSet.addAll(smallerSet);
					currentRegion = largerSet;
				}
			}
		}

		return uniqueRegions(vertexToRegion);
	}

	private static <V> Set<Set<V>> uniqueRegions(final Map<V, Set<V>> vertexToRegion) {
		final Set<Set<V>> distinctRegions = Collections.newSetFromMap(new IdentityHashMap<>());
		distinctRegions.addAll(vertexToRegion.values());
		return distinctRegions;
	}
}

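The merge step of the PoC can be exercised standalone. The sketch below is an illustrative variant that uses plain JDK collections instead of Guava's graph types (so it runs without Flink's shaded dependencies) and represents the graph as a hypothetical predecessor map; the smaller-into-larger merge is the same idea as in PipelinedRegionComputeUtil2 above.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.IdentityHashMap;
import java.util.Map;
import java.util.Set;

public class ConnectedComponentsDemo {

	// predecessors: vertex -> its producers over pipelined edges only.
	public static <V> Set<Set<V>> connectedComponents(Map<V, Set<V>> predecessors) {
		final Map<V, Set<V>> vertexToRegion = new HashMap<>();

		for (V vertex : predecessors.keySet()) {
			Set<V> currentRegion = vertexToRegion.computeIfAbsent(
				vertex, v -> new HashSet<>(Collections.singleton(v)));

			for (V producer : predecessors.get(vertex)) {
				Set<V> producerRegion = vertexToRegion.computeIfAbsent(
					producer, p -> new HashSet<>(Collections.singleton(p)));

				if (currentRegion != producerRegion) {
					// Merge the smaller region into the larger one, so each
					// vertex is moved O(log n) times over the whole run.
					Set<V> smaller = currentRegion.size() < producerRegion.size()
						? currentRegion : producerRegion;
					Set<V> larger = smaller == currentRegion ? producerRegion : currentRegion;
					for (V v : smaller) {
						vertexToRegion.put(v, larger);
					}
					larger.addAll(smaller);
					currentRegion = larger;
				}
			}
		}

		// De-duplicate by identity: many vertices share the same region set.
		Set<Set<V>> regions = Collections.newSetFromMap(new IdentityHashMap<>());
		regions.addAll(vertexToRegion.values());
		return regions;
	}

	public static void main(String[] args) {
		// a -> b over a pipelined edge, c isolated => regions {a, b} and {c}.
		Map<String, Set<String>> preds = new HashMap<>();
		preds.put("a", Collections.emptySet());
		preds.put("b", Collections.singleton("a"));
		preds.put("c", Collections.emptySet());
		System.out.println(connectedComponents(preds).size()); // prints 2
	}
}
```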
non-generic topology classes:


/**
 * Topology of {@link SchedulingExecutionVertex}.
 */
public interface SchedulingTopology2 {

	Iterable<SchedulingExecutionVertex2> getVertices();

	/**
	 * Looks up the {@link SchedulingExecutionVertex} for the given {@link ExecutionVertexID}.
	 *
	 * @param executionVertexId identifying the respective scheduling vertex
	 * @return Optional containing the respective scheduling vertex or none if the vertex does not exist
	 */
	Optional<SchedulingExecutionVertex2> getVertex(ExecutionVertexID executionVertexId);

	/**
	 * Looks up the {@link SchedulingExecutionVertex} for the given {@link ExecutionVertexID}.
	 *
	 * @param executionVertexId identifying the respective scheduling vertex
	 * @return The respective scheduling vertex
	 * @throws IllegalArgumentException If the vertex does not exist
	 */
	default SchedulingExecutionVertex2 getVertexOrThrow(ExecutionVertexID executionVertexId) {
		return getVertex(executionVertexId).orElseThrow(
				() -> new IllegalArgumentException("can not find vertex: " + executionVertexId));
	}

	/**
	 * Looks up the {@link SchedulingResultPartition} for the given {@link IntermediateResultPartitionID}.
	 *
	 * @param intermediateResultPartitionId identifying the respective scheduling result partition
	 * @return Optional containing the respective scheduling result partition or none if the partition does not exist
	 */
	Optional<SchedulingResultPartition> getResultPartition(IntermediateResultPartitionID intermediateResultPartitionId);

	/**
	 * Looks up the {@link SchedulingResultPartition} for the given {@link IntermediateResultPartitionID}.
	 *
	 * @param intermediateResultPartitionId identifying the respective scheduling result partition
	 * @return The respective scheduling result partition
	 * @throws IllegalArgumentException If the partition does not exist
	 */
	default SchedulingResultPartition getResultPartitionOrThrow(IntermediateResultPartitionID intermediateResultPartitionId) {
		return getResultPartition(intermediateResultPartitionId).orElseThrow(
				() -> new IllegalArgumentException("can not find partition: " + intermediateResultPartitionId));
	}
}
/**
 * Scheduling representation of {@link ExecutionVertex}.
 */
public interface SchedulingExecutionVertex2 {

	ExecutionVertexID getId();

	Iterable<SchedulingResultPartition2> getConsumedResults();

	Iterable<SchedulingResultPartition2> getProducedResults();

	/**
	 * Gets the state of the execution vertex.
	 *
	 * @return state of the execution vertex
	 */
	ExecutionState getState();

	/**
	 * Get {@link InputDependencyConstraint}.
	 *
	 * @return input dependency constraint
	 */
	InputDependencyConstraint getInputDependencyConstraint();
}
/**
 * Representation of {@link IntermediateResultPartition}.
 */
public interface SchedulingResultPartition2 {

	IntermediateResultPartitionID getId();

	ResultPartitionType getResultType();

	SchedulingExecutionVertex2 getProducer();

	Iterable<SchedulingExecutionVertex2> getConsumers();

	/**
	 * Gets id of the intermediate result.
	 *
	 * @return id of the intermediate result
	 */
	IntermediateDataSetID getResultId();

	/**
	 * Gets the {@link ResultPartitionState}.
	 *
	 * @return result partition state
	 */
	ResultPartitionState getState();
}

@zhuzhurk zhuzhurk (Contributor) commented Apr 9, 2020

The POC should work. And I think it's fine to maintain 2 translation algorithms for logical and execution graphs.
The main concern is the time to translate the graph and the GC caused by it, especially for large scale jobs.

  • For jobs with hundreds of millions of edges (e.g. a 10000x10000 map-reduce), it can take tens of seconds to build the ExecutionGraph, and it might take another tens of seconds to create the translated graph.
  • Besides that, hundreds of millions of temporary edge instances must exist at the same time in the translated graph. This increases the JobManager memory requirement, and otherwise an OOM might happen. It may also cause GC issues, since the edges are no longer needed once the regions are built.
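A rough back-of-envelope check of the scale concern above. The ~48 bytes per edge is an assumption (object header, two references, padding); the real footprint depends on the JVM and the edge representation.

```java
public class EdgeMemoryEstimate {
	public static void main(String[] args) {
		long producers = 10_000L;
		long consumers = 10_000L;
		long edges = producers * consumers; // all-to-all: 100_000_000 edges
		long bytesPerEdge = 48L;            // assumed per-edge heap cost
		long gib = (edges * bytesPerEdge) >> 30;
		System.out.println(edges); // prints 100000000
		System.out.println(gib);   // prints 4, i.e. roughly 4 GiB held at once
	}
}
```

Even a modest per-edge cost thus translates into gigabytes of short-lived heap for a 10000x10000 job, which is why a second materialized graph is a real concern.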

* Base topology for all logical and execution topologies.
* A topology consists of {@link Vertex} and {@link Result}.
*/
public interface SimpleTopology<VID extends VertexID, RID extends ResultID,
Contributor:

How about naming it BaseTopology?
I feel that in Flink, Simple is usually used for a simple implementation of an interface, such as SimpleSlotProvider or SimpleCounter.

Member Author:

done

/**
* Pipelined region on logical level, i.e., {@link JobVertex} level.
*/
public interface ILogicalPipelinedRegion<V extends LogicalVertex<V, R>, R extends LogicalResult<V, R>> extends PipelinedRegion<JobVertexID, IntermediateDataSetID, V, R> {
@zhuzhurk zhuzhurk (Contributor) commented Apr 9, 2020

Maybe name the interface LogicalPipelinedRegion to align with the other topology-related interfaces (without the 'I' prefix)?
We can rename the current LogicalPipelinedRegion to DefaultLogicalPipelinedRegion.

Member Author:

done

@zhuzhurk zhuzhurk (Contributor) left a comment:

LGTM.
