Getting issue in using spark connector #5653
-
I am running Spark version 3.3.1 and nebula-spark-connector-3.0.0.jar. I have placed the jar file in SPARK_CLASSPATH.

Code -

Error -
Replies: 6 comments 32 replies
-
@wey-gu - Would you please help me?
-
The released version of the spark connector doesn't support Spark 3 yet; please use this one instead:
@Nicole00 shall we release a version with Spark 3 support ASAP?
-
It says:

```
java.util.NoSuchElementException: key not found: operateType
```

```python
df = self.ctxt.spark_session.read.format(
    "com.vesoft.nebula.connector.NebulaDataSource").option(
    "type", "vertex").option(
    "spaceName", self.ctxt.infra_config.space).option(
    "label", "cases").option(
    "returnCols", "*").option(
    "metaAddress", self.ctxt.infra_config.meta_svc).option(
    "partitionNumber", 1).option(
    "user", self.ctxt.infra_config.username).option(
    "passwd", self.ctxt.infra_config.password).option(
    "operateType", "read").load()
```

Could you try this? I didn't get time to try out pyspark with spark-3, where slightly different options are needed. Also, for a reader going through metad/storaged, user/password are not needed, as those are for graphd authentication.
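To illustrate the point about authentication, here is a minimal sketch of the same vertex read without the `user`/`passwd` options. The meta address, space name, and tag name below are placeholders (not values from this thread), and the exact option names may differ between connector releases:

```python
# Hedged sketch: reading NebulaGraph vertices through the meta service only.
# "127.0.0.1:9559", "my_space", and "cases" are placeholder values; option
# names follow the nebula-spark-connector DataSource API and may vary
# between releases.
df = (
    spark.read.format("com.vesoft.nebula.connector.NebulaDataSource")
    .option("type", "vertex")
    .option("spaceName", "my_space")          # placeholder space name
    .option("label", "cases")                 # placeholder tag name
    .option("returnCols", "*")
    .option("metaAddress", "127.0.0.1:9559")  # placeholder metad address
    .option("partitionNumber", 1)
    # No "user"/"passwd" here: reads go through metad/storaged, which do
    # not use graphd authentication.
    .load()
)
df.show()
```

This is a sketch of the connector configuration, not a definitive recipe; it requires a running Spark session and a reachable NebulaGraph cluster.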
-
issue created: vesoft-inc/nebula-spark-connector#114
-
Hi @wey-gu, I tried your suggestion and am now getting another error. For your information, we are running PySpark on one cluster and Nebula on another cluster.

```
File "/opt/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 184, in load
```
-
@wey-gu - I tried the write operation with graphd and metad but am getting the following error:

```
Traceback (most recent call last):
```
Could you please add the `overwrite` write mode? The error says:

```
TableProvider implementation com.vesoft.nebula.connector.NebulaDataSource cannot be written with ErrorIfExists mode, please use Append or Overwrite modes instead.
```

https://sparkbyexamples.com/spark/spark-write-modes-explained
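For context, Spark's default save mode is `errorifexists`, which is exactly what the error above says the connector rejects. A minimal sketch of passing an accepted mode via the standard `DataFrameWriter.mode()` call, with placeholder option values (none of the addresses or names below come from this thread, and the connector's required write options may differ by release):

```python
# Hedged sketch: select an accepted Spark save mode explicitly, since the
# default ("errorifexists") is rejected by the connector per the error
# above. All option values are placeholders.
(
    df.write.format("com.vesoft.nebula.connector.NebulaDataSource")
    .mode("append")                            # or "overwrite"
    .option("type", "vertex")
    .option("spaceName", "my_space")           # placeholder space name
    .option("label", "cases")                  # placeholder tag name
    .option("metaAddress", "127.0.0.1:9559")   # placeholder metad address
    .option("graphAddress", "127.0.0.1:9669")  # placeholder graphd address
    .option("user", "root")                    # writes go through graphd,
    .option("passwd", "nebula")                # so graphd auth applies
    .save()
)
```

`mode("append")`/`mode("overwrite")` are core Spark `DataFrameWriter` API; whether the connector supports each mode for a given write depends on the connector version, so this is an illustration of the mechanism rather than a verified fix.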