kafka-connect-hdfs upon thrift server, instead of hive metastore #116
@lakeofsand I believe this enhancement proposal is now obsolete, given that we have the JDBC Sink Connector, which can do this directly. Feel free to reopen if you are talking about something other than the Thrift server for Spark.
It is not exactly the same as the JDBC Sink Connector. What is needed is support for syncing with Hive through the Spark Thrift Server, not through the Hive metastore service.
@lakeofsand the Spark Thrift Server is akin to the HiveServer2 implementation and as such has no state to sync: http://spark.apache.org/docs/latest/sql-programming-guide.html#running-the-thrift-jdbcodbc-server I'm not sure what the current implementation is lacking, but if you can lay out an example, that would be helpful.
Sorry for my poor explanation. Let me put it this way: today kafka-connect-hdfs uses the HiveMetaStore class to perform Hive actions, for example adding partitions when new data arrives. It relies on 'org.apache.hadoop.hive.metastore.*' and therefore needs a Hive metastore service running in the cluster. In our Spark 1.6 cluster there is no Hive metastore service, so we would have to deploy a new one just for kafka-connect-hdfs, which is heavyweight and not worth it. So we added a thin implementation, 'Hive2Thrift', built only on 'java.sql.*'; it can do the same thing but needs nothing beyond the standard 'java.sql.*' API and a Spark Thrift Server. I am not an expert, but in our Spark cluster it is really not worth deploying a heavyweight metastore service.
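The 'Hive2Thrift' idea described above could be sketched roughly as follows. This is a hypothetical illustration, not code from the connector: the class and method names (`Hive2ThriftSketch`, `buildAddPartitionDdl`, `addPartition`) and the JDBC URL are assumptions. The point is that adding a partition can be expressed as plain HiveQL DDL and sent over a standard `java.sql` connection to the Spark Thrift Server's HiveServer2-compatible endpoint, with no `org.apache.hadoop.hive.metastore.*` classes involved.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class Hive2ThriftSketch {

    // Build the HiveQL DDL that registers a new partition. This is plain SQL,
    // so it needs no hive-metastore client classes on the classpath.
    public static String buildAddPartitionDdl(String table, String partitionSpec,
                                              String location) {
        return "ALTER TABLE " + table
                + " ADD IF NOT EXISTS PARTITION (" + partitionSpec + ")"
                + " LOCATION '" + location + "'";
    }

    // Execute the DDL over a plain java.sql connection to the Spark Thrift
    // Server (a hive2 JDBC URL such as jdbc:hive2://host:10000/default;
    // the Hive JDBC driver must be on the classpath).
    public static void addPartition(String jdbcUrl, String table,
                                    String partitionSpec, String location)
            throws Exception {
        try (Connection conn = DriverManager.getConnection(jdbcUrl);
             Statement stmt = conn.createStatement()) {
            stmt.execute(buildAddPartitionDdl(table, partitionSpec, location));
        }
    }

    public static void main(String[] args) {
        // Example partition spec and location, matching the connector's
        // typical /topics/<topic>/<partition> layout.
        System.out.println(buildAddPartitionDdl(
                "logs", "dt='2016-01-01'", "/topics/logs/dt=2016-01-01"));
    }
}
```

The trade-off is that DDL-over-JDBC covers the common operations (add partition, create table) but offers less fine-grained control than the metastore Thrift API.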
@lakeofsand so are you suggesting an architectural change here to remove the connector's HiveMetastore dependency for those HDFS instances that have no Hive service associated with them? I'll reopen this, but I think we need more details, because that's a pretty non-trivial change.
Maybe there is no need for an 'architectural change'.
But I can't find an appropriate way to override 'alterSchema()'.
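For the schema-change part mentioned above, a DDL-based counterpart to a metastore schema update might look like the sketch below. This is a hypothetical illustration: the class and method names (`AlterSchemaSketch`, `buildReplaceColumnsDdl`) are invented here, and `ALTER TABLE ... REPLACE COLUMNS` only works for tables with native serdes, so it is not a drop-in replacement for every storage format the connector supports.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

public class AlterSchemaSketch {

    // Build a REPLACE COLUMNS statement from an ordered column -> Hive type
    // map (LinkedHashMap preserves column order, which matters for Hive DDL).
    public static String buildReplaceColumnsDdl(String table,
                                                Map<String, String> columns) {
        StringJoiner cols = new StringJoiner(", ");
        for (Map.Entry<String, String> e : columns.entrySet()) {
            cols.add(e.getKey() + " " + e.getValue());
        }
        return "ALTER TABLE " + table + " REPLACE COLUMNS (" + cols + ")";
    }

    public static void main(String[] args) {
        Map<String, String> cols = new LinkedHashMap<>();
        cols.put("id", "bigint");
        cols.put("name", "string");
        System.out.println(buildReplaceColumnsDdl("logs", cols));
    }
}
```

The resulting statement would then be executed over the same `java.sql` connection as the partition DDL, which is presumably why hooking it in cleanly requires a way to override the existing metastore-based schema-update path.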
In some Spark clusters there is no Hive metastore deployed, only a Thrift server on top of the Spark engine.
We should consider supporting kafka-connect-hdfs in this scenario.
I tried modifying it locally; with not that much change, it works well.
(But so far, schema change is a little difficult.)