
[Bug] [seatunnel-connector-flink-fake] Type is not supported: BigInteger #2114

Closed · Fixed by #2118
gleiyu opened this issue Jul 2, 2022 · 2 comments
Labels: bug, connectors-v1 (SeaTunnel connectors, include sink, source)

Comments

gleiyu (Contributor) commented Jul 2, 2022

Search before asking

  • I had searched in the issues and found no similar issues.

What happened

A TableException is thrown when running the SeaTunnel config below. If I remove the result_table_name parameter, the job runs fine.

SeaTunnel Version

2.1.2

SeaTunnel Config

{
  "env": {
    "job.name": "dev_sample"
  },
  "source": [
    {
      "mock_data_schema": [
        {
          "name": "col_3",
          "type": "bigint"
        }
      ],
      "mock_data_size": "3000",
      "plugin_name": "FakeSource",
      "result_table_name": "table_54"
    }
  ],
  "transform": [],
  "sink": [
    {
      "limit": "10000",
      "plugin_name": "ConsoleSink",
      "source_table_name": "table_54"
    }
  ]
}

Running Command

LocalFlinkExample

Error Exception

ERROR Seatunnel: Exception StackTrace:java.lang.RuntimeException: Execute Flink task error
	at org.apache.seatunnel.core.flink.command.FlinkTaskExecuteCommand.execute(FlinkTaskExecuteCommand.java:84)
	at org.apache.seatunnel.core.base.Seatunnel.run(Seatunnel.java:39)
	at org.apache.seatunnel.example.flink.LocalFlinkExample.main(LocalFlinkExample.java:40)
Caused by: org.apache.flink.table.api.TableException: Type is not supported: BigInteger
	at org.apache.flink.table.calcite.FlinkTypeFactory$.org$apache$flink$table$calcite$FlinkTypeFactory$$typeInfoToSqlTypeName(FlinkTypeFactory.scala:389)
	at org.apache.flink.table.calcite.FlinkTypeFactory.createTypeFromTypeInfo(FlinkTypeFactory.scala:67)
	at org.apache.flink.table.calcite.FlinkTypeFactory$$anonfun$buildLogicalRowType$1.apply(FlinkTypeFactory.scala:212)
	at org.apache.flink.table.calcite.FlinkTypeFactory$$anonfun$buildLogicalRowType$1.apply(FlinkTypeFactory.scala:202)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
	at org.apache.flink.table.calcite.FlinkTypeFactory.buildLogicalRowType(FlinkTypeFactory.scala:202)
	at org.apache.flink.table.calcite.FlinkTypeFactory.buildLogicalRowType(FlinkTypeFactory.scala:185)
	at org.apache.flink.table.catalog.QueryOperationCatalogViewTable.lambda$createCalciteTable$3(QueryOperationCatalogViewTable.java:69)
	at org.apache.flink.table.catalog.QueryOperationCatalogViewTable.getRowType(QueryOperationCatalogViewTable.java:114)
	at org.apache.calcite.prepare.CalciteCatalogReader.getTable(CalciteCatalogReader.java:130)
	at org.apache.calcite.prepare.CalciteCatalogReader.getTableForMember(CalciteCatalogReader.java:229)
	at org.apache.calcite.prepare.CalciteCatalogReader.getTableForMember(CalciteCatalogReader.java:83)
	at org.apache.calcite.tools.RelBuilder.scan(RelBuilder.java:1094)
	at org.apache.calcite.tools.RelBuilder.scan(RelBuilder.java:1123)
	at org.apache.flink.table.plan.QueryOperationConverter$SingleRelVisitor.visit(QueryOperationConverter.java:284)
	at org.apache.flink.table.plan.QueryOperationConverter$SingleRelVisitor.visit(QueryOperationConverter.java:140)
	at org.apache.flink.table.operations.CatalogQueryOperation.accept(CatalogQueryOperation.java:68)
	at org.apache.flink.table.plan.QueryOperationConverter.defaultMethod(QueryOperationConverter.java:137)
	at org.apache.flink.table.plan.QueryOperationConverter.defaultMethod(QueryOperationConverter.java:118)
	at org.apache.flink.table.operations.utils.QueryOperationDefaultVisitor.visit(QueryOperationDefaultVisitor.java:92)
	at org.apache.flink.table.operations.CatalogQueryOperation.accept(CatalogQueryOperation.java:68)
	at org.apache.flink.table.calcite.FlinkRelBuilder.tableOperation(FlinkRelBuilder.scala:121)
	at org.apache.flink.table.api.internal.BatchTableEnvImpl.translate(BatchTableEnvImpl.scala:568)
	at org.apache.flink.table.api.internal.BatchTableEnvImpl.translate(BatchTableEnvImpl.scala:552)
	at org.apache.flink.table.api.bridge.java.internal.BatchTableEnvironmentImpl.toDataSet(BatchTableEnvironmentImpl.scala:106)
	at org.apache.seatunnel.flink.util.TableUtil.tableToDataSet(TableUtil.java:50)
	at org.apache.seatunnel.flink.batch.FlinkBatchExecution.fromSourceTable(FlinkBatchExecution.java:105)
	at org.apache.seatunnel.flink.batch.FlinkBatchExecution.start(FlinkBatchExecution.java:70)
	at org.apache.seatunnel.core.flink.command.FlinkTaskExecuteCommand.execute(FlinkTaskExecuteCommand.java:81)
	... 2 more
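
For context on the trace: the failure happens while the registered source table is converted back to a DataSet for the sink (FlinkBatchExecution.fromSourceTable → TableUtil.tableToDataSet), because the legacy batch planner's FlinkTypeFactory has no SQL type mapping for java.math.BigInteger. Below is a minimal standalone sketch that should hit the same limitation, assuming Flink 1.13 with the legacy DataSet bridge on the classpath; the class name and the reuse of col_3/table_54 are illustrative only, not SeaTunnel code.

import java.math.BigInteger;
import java.util.Collections;

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.BatchTableEnvironment;
import org.apache.flink.types.Row;

public class BigIntegerTableRepro {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        BatchTableEnvironment tEnv = BatchTableEnvironment.create(env);

        // A long field maps to SQL BIGINT; registering it and reading it back works.
        DataSet<Row> longRows = env.fromCollection(
                Collections.singletonList(Row.of(1L)),
                Types.ROW_NAMED(new String[]{"col_3"}, Types.LONG));
        tEnv.createTemporaryView("table_54_ok", longRows);
        Table ok = tEnv.from("table_54_ok");
        tEnv.toDataSet(ok, Row.class).print();

        // A java.math.BigInteger field has no SQL type in the legacy planner,
        // so converting the registered table back to a DataSet is expected to
        // throw the same "TableException: Type is not supported: BigInteger".
        DataSet<Row> bigIntRows = env.fromCollection(
                Collections.singletonList(Row.of(BigInteger.ONE)),
                Types.ROW_NAMED(new String[]{"col_3"}, Types.BIG_INT));
        tEnv.createTemporaryView("table_54", bigIntRows);
        Table bad = tEnv.from("table_54");
        tEnv.toDataSet(bad, Row.class).print();
    }
}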

Flink or Spark Version

Flink 1.13

Java or Scala Version

1.8

Screenshots

No response

Are you willing to submit PR?

  • Yes I am willing to submit a PR!

Code of Conduct

  • I agree to follow this project's Code of Conduct.

ruanwenjun (Member) commented:

You need to use BigInteger rather than bigint.

ruanwenjun (Member) commented:

> You need to use BigInteger rather than bigint.

Sorry, I checked the doc again: you can use both bigint and biginteger. There is a bug in the fake source.
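
If the fake source resolves each mock_data_schema type string to a Flink TypeInformation before building the row, the bug would sit in that mapping: whatever "bigint"/"biginteger" resolve to has to be a type the Table API planner supports. The snippet below is only a hypothetical sketch of that idea (the names are made up and the actual fix is whatever lands in #2118); resolving either spelling to Types.BIG_INT (java.math.BigInteger) would reproduce the reported TableException once the result table is read back.

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;

// Hypothetical helper, not the actual FakeSource code: resolves a
// mock_data_schema type string to a Flink TypeInformation.
public class MockDataTypeMapping {

    static TypeInformation<?> toTypeInformation(String type) {
        switch (type.toLowerCase()) {
            case "int":
                return Types.INT;
            // Resolve both spellings to Types.LONG (SQL BIGINT), which the
            // Table API supports; Types.BIG_INT would trip FlinkTypeFactory.
            case "bigint":
            case "biginteger":
                return Types.LONG;
            case "string":
                return Types.STRING;
            default:
                throw new IllegalArgumentException("Unsupported mock data type: " + type);
        }
    }
}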

ruanwenjun added the connectors-v1 (SeaTunnel connectors, include sink, source) label on Jul 3, 2022
gleiyu closed this as completed on Jul 3, 2022