Conversation

Contributor

@HuangXingBo HuangXingBo commented Mar 11, 2020

What is the purpose of the change

This pull request optimizes FlattenRowCoder and ArrowCoder to return their results as generators instead of lists, eliminating unnecessary function calls

Brief change log

  • Add PassThroughLengthPrefixCoderImpl and PassThroughLengthPrefixCoder
  • Change the result of FlattenRowCoder and ArrowCoder to generator
  • Change the func of Operations to map
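The generator change above can be sketched roughly as follows. This is a minimal illustration of the before/after shape, not the actual PyFlink coder code; the function names and the `deque`-based stream are simplifying assumptions:

```python
from collections import deque

def decode_all_as_list(stream, decode_one):
    # Before: eagerly materialize every decoded row into a list,
    # paying one extra function call and append per row.
    results = []
    while stream:
        results.append(decode_one(stream.popleft()))
    return results

def decode_all_as_generator(stream, decode_one):
    # After: yield rows lazily; the consumer pulls one row at a time,
    # avoiding the intermediate list entirely.
    while stream:
        yield decode_one(stream.popleft())

eager = decode_all_as_list(deque([1, 2, 3]), lambda x: x * 2)
lazy = list(decode_all_as_generator(deque([1, 2, 3]), lambda x: x * 2))
```

Both variants produce the same rows; the generator simply defers the work until each row is consumed.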

Verifying this change

This change added tests and can be verified as follows:

  • This is a performance improvement, so the existing tests are sufficient

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): (no)
  • The public API, i.e., is any changed class annotated with @Public(Evolving): (no)
  • The serializers: (no)
  • The runtime per-record code paths (performance sensitive): (no)
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
  • The S3 file system connector: (no)

Documentation

  • Does this pull request introduce a new feature? (no)
  • If yes, how is the feature documented? (not applicable)

How this patch was tested

Test Code

import time

from pyflink.table import DataTypes
from pyflink.table.udf import udf

# Note: PrintTableSink and MultiRowColumnTableSource are PyFlink test
# utilities, and t_env is an initialized TableEnvironment.

@udf(input_types=[DataTypes.INT(False)], result_type=DataTypes.INT(False))
def inc(x):
    return x

t_env.register_function("inc", inc)

# num_rows = 100000000
num_rows = 100000
num_columns = 10

select_list = ["inc(c%s)" % i for i in range(num_columns)]
t_env.register_table_sink(
    "sink",
    PrintTableSink(
        ["c%s" % i for i in range(num_columns)],
        [DataTypes.INT(False)] * num_columns))

t_env.from_table_source(MultiRowColumnTableSource(num_rows, num_columns)) \
    .select(','.join(select_list)) \
    .insert_into("sink")

beg_time = time.time()
t_env.execute("perf_test")
print("consume time: " + str(time.time() - beg_time))

Test Results

num rows, num columns | Consume Time (Before) | Consume Time (After)
--- | --- | ---
100M, 1 | 711s | 441s
100M, 10 | 1454s | 1221s

@flinkbot
Collaborator

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit 9cb59ec (Wed Mar 11 08:01:22 UTC 2020)

Warnings:

  • No documentation files were touched! Remember to keep the Flink docs up to date!

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.

Details
The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

@flinkbot
Collaborator

flinkbot commented Mar 11, 2020

CI report:

Bot commands

The @flinkbot bot supports the following commands:
  • @flinkbot run travis re-run the last Travis build
  • @flinkbot run azure re-run the last Azure build

…wCoder to generator to eliminate unnecessary function calls
data_out_stream = self.data_out_stream
for value in iter_value:
    self.write_null_mask(value, data_out_stream)
    for i in range(self._filed_count):

Could you rename _filed_count to _field_count? It's not directly related to this PR, however it would be great if we can correct the typo as it's used in this PR.

while in_stream.size() > 0:
    yield self.create_result(in_stream, nested)

def create_result(self, in_stream: create_InputStream, nested: bool) -> List:

rename to _decode_one_row_from_stream?

arrays = [create_array(cols[i], self._schema.types[i]) for i in range(0, len(self._schema))]
return pa.RecordBatch.from_arrays(arrays, self._schema)

def _create_result(self, in_stream: create_InputStream) -> List:

rename to _decode_one_row_from_stream


def _create_result(self, in_stream: create_InputStream) -> List:
    self._resettable_io.set_input_bytes(in_stream.read_all(True))
    # there is only arrow batch in the underlying input stream

Could you help to correct the comments here?
there is only arrow batch -> there is only one arrow batch

self._value_coder.encode_to_stream(value, out, nested)

def decode_from_stream(self, in_stream: create_InputStream, nested: bool) -> Any:
    return self._value_coder.decode_from_stream(in_stream, False)

decode_from_stream(in_stream, False) -> decode_from_stream(in_stream, nested) ?

return 'ArrowCoder[%s]' % self._schema


class CustomLengthPrefixCoder(LengthPrefixCoder):

What about rename it to PassThroughLengthPrefixCoder to reflect that it does nothing for the prefixed length?


class CustomLengthPrefixCoder(LengthPrefixCoder):
    """
    CustomLengthPrefixCoder will replace LengthPrefixCoder in Beam for performance optimization.
@dianfu dianfu Mar 13, 2020

Update the comment as follows?
Coder which doesn't prefix the length of the encoded object as the length prefix will be handled by the wrapped value coder.
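The pass-through idea discussed here can be sketched as follows. This is a minimal illustration with hypothetical class shapes and an `encode_to_bytes` helper, not the actual Beam or PyFlink implementation:

```python
import struct

class LengthPrefixCoder:
    """Writes a 4-byte big-endian length before the encoded payload."""

    def __init__(self, value_coder):
        self._value_coder = value_coder

    def encode(self, value, out):
        payload = self._value_coder.encode_to_bytes(value)
        out += struct.pack(">i", len(payload))  # the length prefix
        out += payload

class PassThroughLengthPrefixCoder(LengthPrefixCoder):
    """Skips the length prefix; the wrapped value coder handles framing."""

    def encode(self, value, out):
        out += self._value_coder.encode_to_bytes(value)

class IntCoder:
    """Hypothetical value coder: fixed 4-byte big-endian integers."""

    def encode_to_bytes(self, value):
        return value.to_bytes(4, byteorder="big")

prefixed = bytearray()
LengthPrefixCoder(IntCoder()).encode(7, prefixed)

passthrough = bytearray()
PassThroughLengthPrefixCoder(IntCoder()).encode(7, passthrough)
```

The pass-through variant writes only the payload bytes, which avoids both the extra prefix bytes and the extra call when the wrapped coder already knows its own framing.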

…wCoder to generator to eliminate unnecessary function calls-fix-1
@HuangXingBo
Copy link
Contributor Author

Thanks a lot for @dianfu's review. I have addressed the comments in the latest commit.

…wCoder to generator to eliminate unnecessary function calls-fix-2
@dianfu dianfu left a comment

@HuangXingBo Thanks for the update. LGTM.

@dianfu dianfu merged commit d6038cc into apache:master Mar 13, 2020
@dianfu dianfu changed the title [FLINK-16524][python] Optimize the result of FlattenRowCoder and ArrowCoder to generator to eliminate unnecessary function calls [FLINK-16524][python] Optimize the execution of Python UDF to use generator to eliminate unnecessary function calls Mar 13, 2020
liuzhixing1006 pushed a commit to liuzhixing1006/flink that referenced this pull request Mar 19, 2020
