
[FLINK-14584][python] Support complex data types in Python user-defined functions #10086

Closed
wants to merge 11 commits into from

Conversation

HuangXingBo (Contributor)

What is the purpose of the change

This PR adds support for ArrayType, MapType, MultisetType and DecimalType in Python user-defined functions.

Brief change log


  • Add DecimalSerializer, BigDecSerializer, BinaryArraySerializer, BinaryMapSerializer
  • Add DecimalCoder, ArrayCoder, MapCoder, MultisetCoder, CollectionCoder
  • Fix bugs in CharCoder and TinyintCoder

Verifying this change

This change added tests and can be verified as follows:

  • BigDecSerializerTest, BinaryArraySerializerTest, BinaryMapSerializerTest, DecimalSerializerTest
  • test_array_coder, test_map_coder, test_multiset_coder, test_decimal_coder in coders_test_common.py
  • test_all_data_types in test_udf.py

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): (no)
  • The public API, i.e., is any changed class annotated with @Public(Evolving): (no)
  • The serializers: (no)
  • The runtime per-record code paths (performance sensitive): (no)
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  • The S3 file system connector: (no)

Documentation

  • Does this pull request introduce a new feature? (no)
  • If yes, how is the feature documented? (not applicable)

@flinkbot (Collaborator)

flinkbot commented Nov 5, 2019

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit 2746453 (Tue Nov 05 08:48:00 UTC 2019)

Warnings:

  • No documentation files were touched! Remember to keep the Flink docs up to date!
  • This pull request references an unassigned Jira ticket. According to the code contribution guide, tickets need to be assigned before starting with the implementation work.

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.


The bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands

The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

@flinkbot (Collaborator)

flinkbot commented Nov 5, 2019

CI report:

Bot commands

The @flinkbot bot supports the following commands:
  • @flinkbot run travis re-run the last Travis build

@WeiZhong94 WeiZhong94 (Contributor) left a comment

@HuangXingBo Thanks for your PR! It looks good overall; here are a few comments.

def decode_from_stream(self, in_stream, nested):
    size = in_stream.read_bigendian_int32()
    elements = [self._elem_coder.decode_from_stream(in_stream, nested)
                if not not in_stream.read_byte() else None for _ in range(size)]
Contributor

remove "not not"?

for _ in range(size):
    key = self._key_coder.decode_from_stream(in_stream, nested)
    is_null = not not in_stream.read_byte()
    if is_null:
Contributor

use in_stream.read_byte() directly or bool(in_stream.read_byte())?
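The equivalence behind both suggestions can be shown in plain Python: `not not x`, `bool(x)`, and using `x` directly in a condition all agree for a byte value. A small sketch (the stream class is a hypothetical stand-in, not the real coder API):

```python
class FakeInputStream:
    """Hypothetical stand-in for the coder's input stream."""

    def __init__(self, data):
        self._data = list(data)

    def read_byte(self):
        return self._data.pop(0)


stream = FakeInputStream([0, 1])
null_flag, present_flag = stream.read_byte(), stream.read_byte()

# The three spellings are equivalent; `not not` is just the least readable.
assert (not not null_flag) == bool(null_flag) == False
assert (not not present_flag) == bool(present_flag) == True
```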

return map_value

def __repr__(self):
    return 'MapCoderImpl[%s]' % ' : '.join([str(self._key_coder), str(self._value_coder)])
Contributor

use repr() instead of str()?
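For background on this suggestion: `str()` gives an object's loose display form, while `repr()` is meant to be unambiguous, which matters when the child coders nest. A toy illustration (`DummyCoder` is hypothetical, not one of the PR's classes):

```python
class DummyCoder:
    """Hypothetical child coder with distinct str/repr forms."""

    def __str__(self):
        return "dummy"

    def __repr__(self):
        return "DummyCoder[dummy]"


key_coder, value_coder = DummyCoder(), DummyCoder()

# The str() form loses the coder type; the repr() form keeps it, which is
# why repr() is the better building block for a composite __repr__.
assert 'MapCoderImpl[%s]' % ' : '.join(
    [str(key_coder), str(value_coder)]) == 'MapCoderImpl[dummy : dummy]'
assert 'MapCoderImpl[%s]' % ' : '.join(
    [repr(key_coder), repr(value_coder)]) == 'MapCoderImpl[DummyCoder[dummy] : DummyCoder[dummy]]'
```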


def encode_to_stream(self, value, out_stream, nested):
    dict_value = self.multiset_to_dict(value)
    out_stream.write_bigendian_int32(len(dict_value))
Contributor

This part duplicates MapCoderImpl; can we reuse it?
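One way to act on this suggestion is to let the multiset coder delegate to a map coder of element-to-count, since a multiset is exactly that mapping. A hypothetical sketch (both classes are simplified stand-ins, not the PR's coders):

```python
from collections import Counter


class MapCoderSketch:
    """Simplified stand-in for MapCoderImpl: (de)serializes a dict."""

    def encode(self, mapping):
        return sorted(mapping.items())

    def decode(self, pairs):
        return dict(pairs)


class MultisetCoderSketch:
    """Reuses the map coder instead of duplicating its encode loop."""

    def __init__(self):
        self._map_coder = MapCoderSketch()

    def encode(self, elements):
        # A multiset is a map from element to occurrence count.
        return self._map_coder.encode(Counter(elements))

    def decode(self, encoded):
        counts = self._map_coder.decode(encoded)
        return [e for e, c in counts.items() for _ in range(c)]


coder = MultisetCoderSketch()
assert sorted(coder.decode(coder.encode([1, 1, 2]))) == [1, 1, 2]
```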

class DecimalCoderImpl(StreamCoderImpl):

    def __init__(self, precision, scale):
        decimal.getcontext().prec = precision
Contributor

Maybe we should hold an individual context object here, swap it in as the current context at the beginning of encode/decode, and restore the user's context at the end of encode/decode?
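The concern is that `decimal.getcontext().prec = precision` in `__init__` mutates the interpreter-wide context and can surprise user code. Python's `decimal.localcontext()` implements exactly the swap-and-restore the reviewer describes; a sketch under that assumption (class name hypothetical):

```python
import decimal


class DecimalCoderSketch:
    """Hypothetical sketch: keep a private context rather than
    mutating decimal.getcontext() at construction time."""

    def __init__(self, precision, scale):
        self._context = decimal.Context(prec=precision)
        self._scale = scale

    def normalize(self, value):
        # localcontext() swaps in our context and restores the caller's
        # context on exit, even if an exception is raised.
        with decimal.localcontext(self._context):
            return decimal.Decimal(value) + 0  # rounding happens here


user_precision = decimal.getcontext().prec
coder = DecimalCoderSketch(precision=4, scale=2)
assert str(coder.normalize("3.14159")) == "3.142"
assert decimal.getcontext().prec == user_precision  # user's context untouched
```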

        self._elem_coder = elem_coder
        super(ArrayCoder, self).__init__(elem_coder)

    def _impl_coder(self):
Contributor

How about override _create_impl directly?


def test_decimal_coder(self):
    from decimal import Decimal
    coder = DecimalCoder()
Contributor

How about test with different precision?

* Currently Python doesn't support BinaryArray natively, so we can't use BaseArraySerializer in blink directly.
*/
@Internal
public class BinaryArraySerializer<K> extends BaseArraySerializer {
Contributor

If we extend BaseArraySerializer, it seems the type parameter "K" is unnecessary?

Contributor

The name "BinaryArraySerializer" is not accurate; maybe "PythonBaseArraySerializer" is better?

Contributor Author

Makes sense. What about naming it BaseArraySerializer?

* Currently Python doesn't support BinaryMap natively, so we can't use BaseMapSerializer in blink directly.
*/
@Internal
public class BinaryMapSerializer<K, V> extends BaseMapSerializer {
Contributor

ditto

@HuangXingBo (Contributor Author)

Thanks a lot for the review, @WeiZhong94. I have addressed the comments in the latest commit.

@hequn8128 hequn8128 (Contributor) left a comment

@HuangXingBo Thanks a lot for the PR. Some quick feedback below. Will leave more later.

'multiset_param is wrong value %s !' % multiset_param
return multiset_param

def create_multiset_func():
Contributor

It's not good to use a UDF to create the input data, because the data will be passed to multiset_func directly within Python, i.e., the serialize method of the Java serializer cannot be tested.

.insert_into("Results")
self.t_env.execute("test")
actual = source_sink_utils.results()
self.assert_equals(actual,
["1,null,1,true,32767,-2147483648,1.23,1.98932,"
"[102, 108, 105, 110, 107],pyflink,2014-09-13"])
"[102, 108, 105, 110, 107],pyflink,2014-09-13,"
"[1, 2, 3],{1=flink, 2=pyflink},{1=2, 2=1},"
Contributor

For multiset, we output a map? What's the behavior in Java/Scala?
I think we can add multiset support in Python later if we find there is a need. Python has no type corresponding to a multiset, and a user can use a map type to achieve the same effect.

Contributor Author

Yes. Neither array, set, nor map can represent a multiset very conveniently in Python. We can support multiset in Python later if we find a good structure to express it.
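For readers following along: the closest Python idiom to a multiset is `collections.Counter`, which is itself a dict from element to count, so a map-shaped output like {1=2, 2=1} converts losslessly. A small illustration, not part of the PR:

```python
from collections import Counter

# A multiset modeled as element -> count, matching the map-shaped
# output discussed above ({1=2, 2=1} style).
multiset = Counter([1, 1, 2])
assert multiset == {1: 2, 2: 1}

# Converting back to a flat list of elements is lossless.
assert sorted(multiset.elements()) == [1, 1, 2]
```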

@@ -70,6 +89,43 @@ public static TypeSerializer toBlinkTypeSerializer(LogicalType logicalType) {
return logicalType.accept(new LogicalTypeToProtoTypeConverter());
}

/**
* Convert LogicalType to conversion class.
Contributor

This class is only used for flink planner. Maybe change the comment to: "Convert LogicalType to conversion class for flink planner"?

public TypeSerializer visit(ArrayType arrayType) {
LogicalType elementType = arrayType.getElementType();
TypeSerializer<?> elementTypeSerializer = elementType.accept(this);
Class<?> elementClass = LogicalTypeToConversionClassConverter.INSTANCE.visit(elementType);
Contributor

It seems we don't need to add the LogicalTypeToConversionClassConverter class. We can use TypeConversions to convert the LogicalType to the array TypeInformation and then convert to the serializer.

Contributor Author

As we discussed offline, logicalType does not contain class information, so we can't use LogicalTypeToConversionClassConverter to get the correct conversion class information of DateType, TimeType, TimestampType and ArrayType.

def __init__(self, elem_coder):
    self._elem_coder = elem_coder

def encode_to_stream(self, value, out_stream, nested):
Contributor

It is more efficient to use a null mask to handle null values rather than adding a boolean flag for every element. However, the per-element flag matches the behavior of the Java side.
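To make the trade-off concrete: a per-element flag costs one byte per element, while a null bitmask costs one bit per element, rounded up to whole bytes. A rough pure-Python sketch of the two layouts (helper names hypothetical, payload limited to small ints for simplicity):

```python
def encode_with_flags(values):
    # One flag byte per element: 1 = present, 0 = null.
    out = bytearray()
    for v in values:
        out.append(1 if v is not None else 0)
        if v is not None:
            out.append(v)
    return bytes(out)


def encode_with_null_mask(values):
    # One bit per element, packed into ceil(n / 8) leading mask bytes.
    mask = bytearray((len(values) + 7) // 8)
    body = bytearray()
    for i, v in enumerate(values):
        if v is None:
            mask[i // 8] |= 1 << (i % 8)
        else:
            body.append(v)
    return bytes(mask) + bytes(body)


data = [1, None, 3, None, 5, 6, 7, 8]
assert len(encode_with_flags(data)) == 8 + 6      # 8 flag bytes + 6 payload bytes
assert len(encode_with_null_mask(data)) == 1 + 6  # 1 mask byte + 6 payload bytes
```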

@@ -319,6 +319,30 @@ def date_func(date_param):
'date_param is wrong value %s !' % date_param
return date_param

def array_func(array_param):
assert array_param == [1, 2, 3], \
Contributor

Can we test nested arrays?

}

@Override
public BaseArray deserialize(BaseArray reuse, DataInputView source) throws IOException {
Contributor

Can we reuse the BaseArray here?

/**
* {@link TypeSerializerSnapshot} for {@link BaseArraySerializer}.
*/
public static final class BaseArraySerializerSnapshot implements TypeSerializerSnapshot<BaseArray> {
Contributor

We can reuse the BaseArraySerializerSnapshot in the base class.

Contributor Author

The method testSnapshotConfigurationAndReconfigure of SerializerTestBase will test the class of the serializer, so we can't reuse BaseArraySerializerSnapshot directly.

@hequn8128 hequn8128 (Contributor) left a comment

@HuangXingBo Some more comments.


@Override
public void serialize(BaseMap map, DataOutputView target) throws IOException {
BinaryMap binaryMap = (BinaryMap) map;
Contributor

The BaseMap map can be a GenericMap. Take a look at the public BaseMap copy(BaseMap from) method in the base class (org.apache.flink.table.runtime.typeutils.BaseMapSerializer).

}

@Override
protected BaseMap[] getTestData() {
Contributor

Also test GenericMap?

* {@link TypeSerializerSnapshot} for {@link BaseArraySerializer}.
*/
public static final class BaseArraySerializerSnapshot implements TypeSerializerSnapshot<BaseArray> {
private static final int CURRENT_VERSION = 3;
Contributor

Current version to 1?

* {@link TypeSerializerSnapshot} for {@link BaseMapSerializer}.
*/
public static final class BaseMapSerializerSnapshot implements TypeSerializerSnapshot<BaseMap> {
private static final int CURRENT_VERSION = 3;
Contributor

Version 1?

* for performance reasons in Python deserialization.
*/
@Internal
public class DecimalSerializer extends TypeSerializer<Decimal> {
Contributor

Extends from the blink DecimalSerializer?

Contributor Author

DecimalSerializer is a final class, so we can't extend the blink DecimalSerializer.

* performance reasons in Python deserialization.
*/
@Internal
public class BigDecSerializer extends TypeSerializerSingleton<BigDecimal> {
Contributor

Extends from the BigDecSerializer in flink-core?

Contributor Author

ditto

t = self.t_env.from_elements(
[(1, None, 1, True, 32767, -2147483648, 1.23, 1.98932,
bytearray(b'flink'), 'pyflink', datetime.date(2014, 9, 13))],
bytearray(b'flink'), 'pyflink', datetime.date(2014, 9, 13),
[1, 2, 3], {1: 'flink', 2: 'pyflink'}, decimal.Decimal('1000000000000000000.05'))],
Contributor

Test decimal with (38,18)?
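A value with 20 integer digits and 18 fractional digits exercises the upper bound of the DECIMAL(38, 18) type's 38 significant digits. A plain-Python sketch of what such a round-trip check asserts (no Flink APIs involved; the value is an arbitrary boundary example):

```python
import decimal

# 20 integer digits + 18 fractional digits = 38 significant digits,
# the upper end of the DECIMAL(38, 18) type under discussion.
value = decimal.Decimal("12345678901234567890.123456789012345678")
assert len(str(value).replace(".", "")) == 38

# A string round trip (as a serializer effectively performs) must be
# lossless at full precision.
assert decimal.Decimal(str(value)) == value
```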

@HuangXingBo (Contributor Author)

Thanks a lot for the review, @hequn8128. I have addressed the comments in the latest commit.

@hequn8128 (Contributor)

@HuangXingBo Thanks a lot for the update. Will merge this once travis passed.

hequn8128 pushed a commit to hequn8128/flink that referenced this pull request Dec 6, 2019
@hequn8128 hequn8128 closed this in f208354 Dec 7, 2019