
[SPARK-14922][SPARK-17732][SQL] ALTER TABLE DROP PARTITION should support comparators #19691

Closed
wants to merge 14 commits

Conversation

DazhuangSu

What changes were proposed in this pull request?

This pr is inspired by @dongjoon-hyun.

Quote from #15704:

What changes were proposed in this pull request?
This PR aims to support comparators, e.g. '<', '<=', '>', '>=', again in Apache Spark 2.0 for backward compatibility.
Spark 1.6

scala> sql("CREATE TABLE sales(id INT) PARTITIONED BY (country STRING, quarter STRING)")
res0: org.apache.spark.sql.DataFrame = [result: string]

scala> sql("ALTER TABLE sales DROP PARTITION (country < 'KR')")
res1: org.apache.spark.sql.DataFrame = [result: string]

Spark 2.0

scala> sql("CREATE TABLE sales(id INT) PARTITIONED BY (country STRING, quarter STRING)")
res0: org.apache.spark.sql.DataFrame = []

scala> sql("ALTER TABLE sales DROP PARTITION (country < 'KR')")
org.apache.spark.sql.catalyst.parser.ParseException:
mismatched input '<' expecting {')', ','}(line 1, pos 42)
After this PR, it's supported.
How was this patch tested?
Pass the Jenkins test with a newly added testcase.

#16036 points out that using an int literal in DROP PARTITION fails after #15704 is applied.
The reason for the failure in #15704 is that AlterTableDropPartitionCommand distinguishes BinaryComparison from EqualTo with the following code:

private def isRangeComparison(expr: Expression): Boolean = {
  expr.find(e => e.isInstanceOf[BinaryComparison] && !e.isInstanceOf[EqualTo]).isDefined
}

This PR resolves the problem by classifying the drop condition while parsing the SQL.
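The classification this description refers to can be sketched in plain Scala. This is a self-contained toy model with hypothetical case classes and a made-up `isExactSpec` helper, not Spark's actual catalyst AST:

```scala
// Toy stand-in for the parsed comparisons (hypothetical; Spark's real
// classes live in org.apache.spark.sql.catalyst.expressions).
sealed trait Comparison
case class EqualTo(attr: String, value: String) extends Comparison
case class LessThan(attr: String, value: String) extends Comparison

// Deciding at parse time whether a PARTITION(...) clause is an exact
// spec (all '=') or a range filter means the command downstream no
// longer has to re-inspect the expression tree as isRangeComparison did.
def isExactSpec(clause: Seq[Comparison]): Boolean =
  clause.forall {
    case _: EqualTo => true
    case _          => false
  }
```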

How was this patch tested?

New test case introduced in #15704

@gatorsmile
Member

cc @dongjoon-hyun

@dongjoon-hyun
Member

Thank you for pinging me, @gatorsmile .

@gatorsmile
Member

ok to test

@SparkQA

SparkQA commented Nov 14, 2017

Test build #83828 has finished for PR 19691 at commit 85fdb46.

  • This patch fails Scala style tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Nov 14, 2017

Test build #83831 has finished for PR 19691 at commit f18caeb.

  • This patch fails Scala style tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Nov 14, 2017

Test build #83832 has finished for PR 19691 at commit f79c6f4.

  • This patch fails due to an unknown error code, -9.
  • This patch merges cleanly.
  • This patch adds no public classes.

@DazhuangSu
Author

Jenkins, retest this please

@SparkQA

SparkQA commented Nov 14, 2017

Test build #83838 has finished for PR 19691 at commit 8728d3b.

  • This patch fails Scala style tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Nov 14, 2017

Test build #83839 has finished for PR 19691 at commit 9832ec5.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@DazhuangSu
Author

@gatorsmile @dongjoon-hyun
Could you give me some advice please?

@gatorsmile
Member

ok to test

expression(pVal) match {
  case EqualNullSafe(_, _) =>
    throw new ParseException("'<=>' operator is not allowed in partition specification.", ctx)
  case cmp @ BinaryComparison(UnresolvedAttribute(name :: Nil), constant: Literal) =>
Member


Still the same question here. Does the constant have to be on the right side?

Contributor


Hive supports them only on the right side. So it makes sense to have the same here I think.

Member


If we support only the right side, it seems useful to print an explicit error message like left-side literal not supported ....?


@gatorsmile
Member

@dongjoon-hyun @maropu @mgaido91 Could you review this PR? I think this command is pretty useful to end users.

@SparkQA

SparkQA commented Apr 8, 2018

Test build #89023 has finished for PR 19691 at commit 9832ec5.

  • This patch fails due to an unknown error code, -9.
  • This patch merges cleanly.
  • This patch adds no public classes.

@maropu
Member

maropu commented Apr 8, 2018

retest this please

@maropu
Member

maropu commented Apr 8, 2018

ok

throw new ParseException("Invalid partition filter specification", ctx)
}
}
if (parts.isEmpty) {
Contributor


wouldn't it be better to return the Seq[Expression] as it is? We need it in that form later (in listPartitionsByFilter), and this way we can avoid using null, which is a good thing too...

Contributor


why aren't we returning parts? this if seems pretty useless

Author


you're right. I will change this.

}
}.distinct

if (normalizedSpecs.isEmpty && partitionSet.isEmpty) {
Contributor


can't we just return partitionSet ++ normalizedSpecs? I think it is wrong to use intersect; we should drop all of them, shouldn't we?

Author

@DazhuangSu DazhuangSu Apr 16, 2018


@mgaido91 I tried this command in Hive, and Hive only dropped the intersection of the two partition filters.

case EqualNullSafe(_, _) =>
  throw new ParseException("'<=>' operator is not allowed in partition specification.", ctx)
case cmp @ BinaryComparison(UnresolvedAttribute(name :: Nil), constant: Literal) =>
  cmp.withNewChildren(Seq(AttributeReference(name, StringType)(), constant))
Member


Is it ok to pass all types of literals here?

Member


Either way, we might need tests for non int-literal cases.

case EqualNullSafe(_, _) =>
  throw new ParseException("'<=>' operator is not allowed in partition specification.", ctx)
case cmp @ BinaryComparison(UnresolvedAttribute(name :: Nil), constant: Literal) =>
  cmp.withNewChildren(Seq(AttributeReference(name, StringType)(), constant))
Contributor


What if the partition column is not of String type?

Author


OK. I'll work on this these days.

@SparkQA

SparkQA commented Apr 8, 2018

Test build #89029 has finished for PR 19691 at commit 9832ec5.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@mgaido91
Contributor

@DazhuangSu are you still working on this?

@DazhuangSu
Author

@mgaido91 Sorry, a little busy recently.
The pr is almost ready. Will update soon.

@mgaido91
Contributor

thanks @DazhuangSu

case cmp @ BinaryComparison(UnresolvedAttribute(name :: Nil), constant: Literal) =>
  cmp
case bc @ BinaryComparison(constant: Literal, _) =>
  throw new ParseException("Literal " + constant
Contributor


nit: use s"" and this can be a 1-line statement

Author


Sorry, I was careless. Will fix this.

  throw new ParseException("Literal " + constant
    + " is supported only on the right side.", ctx)
case _ =>
  throw new ParseException("Invalid partition filter specification", ctx)
Contributor


it would be useful to output to the user which expression was invalid and why

* Create a partition specification map without optional values
* and a partition filter specification.
*/
protected def visitPartition(
Contributor


can we avoid this method? I find it quite confusing (I mean it is a bit weird to return a tuple with a Map and a Seq of different things....) We can add a new parameter to AlterTableDropPartitionCommand and use the other two methods directly...

Author


I tried to add a new parameter to AlterTableDropPartitionCommand earlier, but it was kind of hard.
Think about a SQL like the one below:

DROP PARTITION(partitionVal1, expression1), PARTITION(partitionVal2, expression2)

All of the partitions that need to be dropped are:
(partitionVal1 intersect expression1) union (partitionVal2 intersect expression2)

Using one tuple tells us that partitionVal1 and expression1 come from the same partitionSpec, so we should use intersect. Likewise, different tuples mean that (partitionVal1 intersect expression1) and (partitionVal2 intersect expression2) come from different partitionSpecs, so we should use union.

If we don't use a tuple, it would be difficult to tell these occasions apart and to decide between intersect and union when partitionVal1 meets expression1/expression2.

Any ideas to replace this tuple?

Contributor


I see what you mean now. Yes, I have no better idea indeed. Thanks.
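The semantics agreed on in this thread, intersect within a PARTITION(...) clause and union across clauses, can be sketched in plain Scala. The helper name `partitionsToDrop` is hypothetical and partitions are modeled as plain maps:

```scala
type Partition = Map[String, String]

// Each tuple is one PARTITION(...) clause: the partitions matched by its
// partitionVals and the partitions matched by its comparison expressions.
// Within a clause the two match sets intersect; across clauses they union.
def partitionsToDrop(clauses: Seq[(Set[Partition], Set[Partition])]): Set[Partition] =
  clauses
    .map { case (bySpec, byExpr) => bySpec intersect byExpr }
    .foldLeft(Set.empty[Partition])(_ union _)
```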


val partitions = catalog.listPartitionsByFilter(
  table.identifier, Seq(parts)).map(_.spec)
if (partitions.isEmpty && !ifExists) {
Contributor


why do we need this check here?

Author


There are two occasions when we get an empty seq here from the expression filters in one DROP PARTITION sql:

  1. there is at least one filter, but there are no partitions matching it.
  2. there are no filters.

If we don't add this check, this may be confusing later, because in the first occasion we should use the intersect with normalizedPartitionSpec, but in the second occasion we shouldn't use intersect because that would return an empty result.

With this check we can treat them in different ways:

  1. disregard normalizedPartitionSpec and throw an exception directly.
  2. return Seq.empty.

Contributor


sorry, I don't really get what you mean. If we have no filters we are returning an empty Seq (the check is at line 539). So here we are in case 1, i.e. there is a filter and it returns no partitions. If we avoid this if, my understanding is that we return partitions - which is empty - to partitionSet. Then toDrop also would be empty. The result is that we call dropPartitions with an empty Seq and it will throw the AnalysisException (instead of doing it here). So I think this is useless. Am I wrong?

PS all these operations are becoming quite complex as inline statements. I think that creating some methods for handling the different parts could improve readability. What do you think?

Author


Let me explain these two occasions more clearly, with two sqls as examples (the useless_expression means there are no partitions for the expression):
ALTER TABLE DROP PARTITION(partitionVal1, useless_expression)
ALTER TABLE DROP PARTITION(partitionVal1)

The first sql should drop the partitions partitionVal1 intersect useless_expression, which is empty.
The second sql should drop the partition partitionVal1.

If we return Seq.empty to partitionSet for both sqls, it will be impossible to tell them apart later.

Contributor


yes, but in the first case toDrop would be empty, in the second case it would contain partitionVal1. So when it is passed later to dropPartitions, this method checks if it is empty or not.

Author


I'm a little confused. If we return Seq.empty to partitionSet for both cases, then the code goes to line 570 in both cases.
How can we return empty for the first case to toDrop and return partitionVal1 for the second case at this line?

Contributor


oh I see now.... well, this is getting very involved... can we split the cases into different methods? I think we have 4 cases like:

 if (partition._1.isEmpty && !partition._2.isEmpty) {
   // extract from partition._2
 } else if (!partition._1.isEmpty && partition._2.isEmpty) {
   // extract from partition._1
 } else if (!partition._1.isEmpty && !partition._2.isEmpty) {
   // intersect
 } else {
   // return empty seq
 }

Maybe with some comments to explain when each of these cases can happen. Thanks.

Author


Sure. I will make these codes more readable.
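The four branches suggested above can be pulled into one small method. This is a hypothetical sketch (`resolveDrop`, `fromSpec`, and `fromExprs` are made-up names standing in for the real extraction helpers):

```scala
type Spec = Map[String, String]

// spec  ~ partition._1 (the exact partitionSpec, possibly empty)
// exprs ~ partition._2 (the comparison expressions, possibly empty)
def resolveDrop[E](spec: Spec,
                   exprs: Seq[E],
                   fromSpec: Spec => Set[Spec],
                   fromExprs: Seq[E] => Set[Spec]): Set[Spec] =
  (spec.isEmpty, exprs.isEmpty) match {
    case (true, false)  => fromExprs(exprs)                          // expressions only
    case (false, true)  => fromSpec(spec)                            // exact spec only
    case (false, false) => fromSpec(spec) intersect fromExprs(exprs) // both: intersect
    case (true, true)   => Set.empty                                 // nothing specified
  }
```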

}
val dataType = table.partitionSchema.apply(attrName).dataType
expr.withNewChildren(Seq(AttributeReference(attrName, dataType)(),
  Cast(Literal(value.toString), dataType)))
Contributor


why do we need to cast a new Literal? can't we just use constant?

Author


The constant's dataType may be different from the partition's dataType, and the difference may cause problems when the expression compares them later.

Contributor


Shouldn't we throw an AnalysisException if they have different datatypes? I think converting something to string and back to the desired datatype is not a good approach and it may cause issues.
@gatorsmile what do you think?

Author

@DazhuangSu DazhuangSu May 31, 2018


I think we can't throw an AnalysisException in all situations, e.g.
CREATE TABLE tbl_x (a INT) PARTITIONED BY (p LONG)
ALTER TABLE tbl_x DROP PARTITION (p >= 1)
In this case, the partition's dataType is LONG for sure, but the constant's dataType is INT.

I think it's reasonable to support this situation at least.

Contributor


I agree, but this case definitely doesn't need to go through converting to string, creating back a string literal, and casting to long. I think the cast is performed automatically, or if it is not, we can just add the cast on the incoming constant. Do you agree?

Author


Ok, I get your point.
I just ran a quick test. It threw an exception "java.lang.RuntimeException: Unsupported literal type class org.apache.spark.unsafe.types.UTF8String" at this line when I ran:
ALTER TABLE table_a PARTITION(a < 'test')
So a one-line change in literals.scala is needed:
the method def apply(v: Any): Literal (literals.scala: line 52) only supports String, not UTF8String, for now.

Contributor


no, that change is not needed. Why create a new literal from the value? We can use the parsed literal. We don't have to change the Literal class.

Author


Using the parsed constant without casting it to the partition's dataType throws an exception:

java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Long
at scala.runtime.BoxesRunTime.unboxToLong(BoxesRunTime.java:105)
at scala.math.Ordering$Long$.compare(Ordering.scala:264)
at scala.math.Ordering$class.gteq(Ordering.scala:91)
at scala.math.Ordering$Long$.gteq(Ordering.scala:264)
at org.apache.spark.sql.catalyst.expressions.GreaterThanOrEqual.nullSafeEval(predicates.scala:710)
at org.apache.spark.sql.catalyst.expressions.BinaryExpression.eval(Expression.scala:423)

for the case
CREATE TABLE tbl_x (a INT) PARTITIONED BY (p LONG)
ALTER TABLE tbl_x DROP PARTITION (p >= 1)
that I mentioned above

Contributor


I see, but what about Cast(constant, dataType)?

Author


lol. you are right. I will update
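The ClassCastException in that stack trace can be reproduced without Spark, since Scala's Ordering.Long unboxes both operands to Long. Widening the constant first, which is roughly what Cast(constant, dataType) achieves in this INT-versus-LONG case, makes the comparison work:

```scala
// The constant as parsed from "p >= 1": a boxed java.lang.Integer.
val constant: Any = 1

// Comparing it as a Long goes through BoxesRunTime.unboxToLong, which
// only accepts a boxed Long -- exactly the exception in the stack trace.
val throwsCce: Boolean =
  try { Ordering.Long.gteq(constant.asInstanceOf[Long], 1L); false }
  catch { case _: ClassCastException => true }

// Widening the Int constant to the partition column's type first works.
val widenedOk: Boolean = Ordering.Long.gteq(constant.asInstanceOf[Int].toLong, 1L)
```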

@SparkQA

SparkQA commented May 30, 2018

Test build #91308 has finished for PR 19691 at commit 182449b.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented May 31, 2018

Test build #91352 has finished for PR 19691 at commit d725fc9.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Jun 1, 2018

Test build #91393 has finished for PR 19691 at commit defc9f1.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Jun 5, 2018

Test build #91473 has finished for PR 19691 at commit 6b18939.

  • This patch fails due to an unknown error code, -9.
  • This patch merges cleanly.
  • This patch adds no public classes.

}
val dataType = table.partitionSchema.apply(attrName).dataType
expr.withNewChildren(Seq(AttributeReference(attrName, dataType)(),
  Cast(constant, dataType)))
Contributor


nit: can we add the cast only when needed, i.e. when dataType != constant.dataType?

  extractFromPartitionSpec(partition._1, table, resolver)
} else if (!partition._1.isEmpty && !partition._2.isEmpty) {
  // This drop condition has both partitionSpecs and expressions.
  extractFromPartitionFilter(partition._2, catalog, table, resolver).intersect(
Contributor


I think this may be quite inefficient if we have a lot of partitions. What about converting the partitionSpec into EqualTo expressions and adding them as conditions? It would be great IMO if we could achieve this by enforcing in the syntax that we have either all partitionSpecs or all expressions. So if we have all partition = value, we have a partitionSpec, while if at least one is a comparison different from =, we have all expressions (including the =s). What do you think?

Author

@DazhuangSu DazhuangSu Jun 5, 2018


Yeah, I agree. And the hard part may be how to convert a partitionSpec to an EqualTo.
I think it's better to let the AstBuilder handle this. If so, we may have to have two AlterTableDropPartitionCommand instances in ddl.scala, one for all partitionSpecs and one for all expressions.
But it may be a bit weird.

Contributor


why? Isn't it enough something like:

((partitionVal (',' partitionVal)*) | (expression (',' expression)*))

?

Author


I mean how to better define AlterTableDropPartitionCommand in ddl.scala. It needs to handle both
AlterTableDropPartitionCommand(tableName: TableIdentifier, partitions: Seq[Seq[Expression]], ifExists: Boolean, purge: Boolean, retainData: Boolean)
and
AlterTableDropPartitionCommand(tableName: TableIdentifier, partitions: Seq[TablePartitionSpec], ifExists: Boolean, purge: Boolean, retainData: Boolean)
Maybe distinguish the different cases inside the method?

Contributor


I think we can (must) just have a single: AlterTableDropPartitionCommand( tableName: TableIdentifier, partitionSpecs: Seq[TablePartitionSpec], partitionExprs: Seq[Seq[Expression]], ifExists: Boolean, purge: Boolean, retainData: Boolean). Indeed, we might have something like:

alter table foo drop partition (year=2017, month=12), partition(year=2018, month < 3);

where we have both a partition spec and an expression specification.

Author


hi, @mgaido91 there is one problem after I changed the syntax:
when I run the sql DROP PARTITION (p >= 2) it throws
org.apache.spark.sql.AnalysisException: cannot resolve 'p' given input columns: []
I'm trying to find a way to figure it out.

By the way, is a syntax like ((partitionVal (',' partitionVal)*) | (expression (',' expression)*)) legal? I wrote an antlr4 syntax test, but it didn't work as I expected.

Besides, I was wrong that day. I think the if conditions won't be inefficient if there are a lot of partitions; they may be inefficient if there are a lot of dropPartitionSpecs, which I don't think can happen easily.

Contributor


@DazhuangSu sorry, I missed your last comment somehow.

Why do you say it would not be inefficient if you have a lot of partitions? I think it would be! Imagine that you partition per year and day, and you want to drop the first 6 months of this year. The spec would be something like (year = 2018, day < 2018-07-01). Imagine we have a 10-year history. With the current implementation, we would get back basically all the partitions from the filter, i.e. roughly 3,650, and then it will intersect those. Anyway, my understanding is that such a case would not even work properly, as it would try to drop the intersect of:

Seq(Seq("year"-> "2018", "day" -> "2018-01-01", ...)).intersect(Seq(Map("year"->"2018")))

which would result in an empty Seq, so we would drop nothing. Moreover, I saw no test for this case in the tests. Can we add tests for this use case, and can we add support for it if my understanding that it is not working is right? Thanks

Author


@mgaido91 I understand your point, yes it would be inefficient. I will work on this soon

Contributor


thank you @DazhuangSu
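The reviewer's suggestion, folding the exact spec into equality predicates so one combined filter does the work instead of listing by expression and intersecting afterwards, can be sketched like this (hypothetical helpers `specAsPredicate` and `combinedFilter`; partitions modeled as maps):

```scala
type Partition = Map[String, String]

// A partitionSpec such as (year = 2018) becomes an equality predicate.
def specAsPredicate(spec: Map[String, String]): Partition => Boolean =
  p => spec.forall { case (k, v) => p.get(k).contains(v) }

// One combined filter: spec equalities AND the comparison expression,
// evaluated in a single pass rather than list-then-intersect.
def combinedFilter(spec: Map[String, String],
                   expr: Partition => Boolean): Partition => Boolean =
  p => specAsPredicate(spec)(p) && expr(p)
```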

@HyukjinKwon
Member

ok to test

@SparkQA

SparkQA commented Jul 16, 2018

Test build #93052 has finished for PR 19691 at commit 6b18939.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@MKervo

MKervo commented Aug 20, 2018

Could someone merge this please? :)

@maropu
Member

maropu commented Aug 21, 2018

@DazhuangSu Can you resolve the conflict?

@DazhuangSu
Author

@maropu ok

@maropu
Member

maropu commented Aug 29, 2018

@HyukjinKwon can you trigger again?

@mgaido91
Contributor

@DazhuangSu are you still working on this? There is this comment and also another nit which need to be addressed from the last review... Meanwhile I am not sure if someone else has other comments on this.

@HyukjinKwon
Member

ok to test

@HyukjinKwon
Member

Could anyone take over this then?

@maropu
Member

maropu commented Aug 30, 2018

@DazhuangSu Are u there?

@SparkQA

SparkQA commented Aug 30, 2018

Test build #95451 has finished for PR 19691 at commit 6b18939.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@mgaido91
Contributor

if @DazhuangSu is not active anymore on this I can take it over, but let's wait for his answer.

@DazhuangSu
Author

DazhuangSu commented Aug 30, 2018

@mgaido91
Sorry guys. little busy recently.
I will resolve the failed tests this weekend first.

@maropu
Member

maropu commented Sep 4, 2018

@DazhuangSu still busy?

@DazhuangSu
Author

@maropu
Sorry, I don't really have much time this month.
I can close this pr and somebody else can continue on this problem.

@maropu
Member

maropu commented Sep 5, 2018

ok @mgaido91 can u take this over?

@mgaido91
Contributor

mgaido91 commented Sep 5, 2018

@DazhuangSu @maropu sure, thanks, I'll submit a PR for this soon. Thanks.
