Search before asking
I had searched in the issues and found no similar issues.
What happened
I am developing a multi-table synchronization plugin and am currently running unit tests. I have noticed that a configuration that works on the Zeta engine does not pass the test cases in Spark mode.
Here is my configuration:
There may be an issue with my configuration: I should not have configured two identical indices. However, this configuration runs without problems on the Zeta engine, while it throws an error in Spark mode.
I believe that if configuring two identical tables is a valid use case in user-defined partitioning scenarios, then Spark needs to be changed; otherwise, the Zeta engine should be modified to reject such configurations.
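The reporter's actual configuration was not captured in this extract. As a hedged illustration only, a minimal SeaTunnel job config of the kind described (the same target index configured twice in one job) might look like the following; all hosts, index names, and schema fields are placeholders, not the reporter's values:

```hocon
# Hypothetical sketch — NOT the original config from this issue.
# Shows a multi-sink job where the same index appears twice,
# which reportedly runs on Zeta but fails on Spark.
env {
  parallelism = 1
  job.mode = "BATCH"
}

source {
  FakeSource {
    result_table_name = "fake"
    schema = {
      fields {
        id = "int"
        name = "string"
      }
    }
  }
}

sink {
  Elasticsearch {
    hosts = ["localhost:9200"]
    index = "my_index"
  }
  Elasticsearch {
    hosts = ["localhost:9200"]
    index = "my_index"   # same index configured a second time
  }
}
```

Under this sketch, the question in the issue is whether duplicating a sink table like this is legal (and Spark should accept it) or illegal (and Zeta should reject it).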
SeaTunnel Version
2.3.8 dev
SeaTunnel Config
Running Command
Error Exception
Zeta or Flink or Spark Version
No response
Java or Scala Version
No response
Screenshots
No response
Are you willing to submit PR?
Code of Conduct