Description
After adding the config `spark.gluten.sql.native.writer.enabled true`, an error occurred: "The file path is not local when writing data with parquet format in velox runtime!"
Does Velox not support writing Hive tables on HDFS yet?
```cpp
void VeloxParquetDatasource::init(const std::unordered_map<std::string, std::string>& sparkConfs) {
  if (strncmp(filePath_.c_str(), "file:", 5) == 0) {
    sink_ = dwio::common::FileSink::create(filePath_, {.pool = pool_.get()});
  } else {
    throw std::runtime_error(
        "The file path is not local when writing data with parquet format in velox runtime!");
  }
  // .........
}
```