
Documented the need for hadoop config after insert-segment-to-db #3402

Merged (1 commit, Aug 28, 2016)

Conversation

ashishawasthi (Contributor)

A default file system in 'core-site.xml' is required after running insert-segment-to-db when segments are stored in HDFS, because the tool saves all segment paths as relative URIs.

While running insert-segment-to-db, io.druid.storage.hdfs.HdfsDataSegmentFinder rewrites each segment's absolute path to a relative one.

When a segment is later loaded via that relative path, io.druid.storage.hdfs.HdfsDataSegmentPuller.getSegmentFiles calls path.getFileSystem(config). In the absence of "fs.defaultFS" in the Hadoop configuration, org.apache.hadoop.fs.FileSystem resolves the path against the local file system, even when the type in the segment metadata is set to hdfs.
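As a sketch of the documented fix, setting fs.defaultFS in core-site.xml tells Hadoop's FileSystem which scheme to resolve relative paths against, so the segment paths are looked up in HDFS rather than on local disk. The namenode host and port below are placeholders, not values from this PR:

```xml
<!-- core-site.xml: placeholder namenode address; adjust for your cluster -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode-host:8020</value>
  </property>
</configuration>
```

This file must be on the classpath of the Druid processes that load segments, so that the Hadoop Configuration picks it up when path.getFileSystem(config) is called.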

fjy (Contributor) commented Aug 28, 2016

👍

@ashishawasthi do you mind filling out the CLA located at http://druid.io/community/cla.html

ashishawasthi (Contributor, Author)

@fjy submitted CLA

@fjy fjy added this to the 0.9.2 milestone Aug 28, 2016
@fjy fjy merged commit 6b40bf8 into apache:master Aug 28, 2016
seoeun25 pushed a commit to seoeun25/incubator-druid that referenced this pull request Feb 25, 2022