Allow to stripe the data location over multiple locations #1356
Comments
Hi @kimchy, I was wondering: after setting the data location to multiple locations, can I change them later, and do these locations contain the same copy of the data?
You can change them later, but it requires restarting the node. The locations do not share the same copy; the data is striped across them, à la RAID 0.
What is the expected failure mode if a disk dies or otherwise becomes inaccessible? Will ES continue to write to the remaining volumes? Will the data on the failed node be recognized and recovered by the cluster?
This functionality is quite interesting, because it can potentially improve the IO throughput of ES on machines with several disks. But there is a lack of documentation on this. What is the pattern of distribution between the locations? Is one shard split over them? Or can one shard only go to one data.path?
According to the v2.0 breaking changes, a specific shard goes to a certain data path.
Yes, that's correct. A shard will live entirely on one data path. Multiple shards are distributed across the different data paths.
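The placement heuristic described in this thread (a whole file or shard goes to the single location with the greatest free space) can be sketched as follows. This is an illustration only, not Elasticsearch's actual code; `pick_data_path` and the injectable `free_bytes` hook are hypothetical names:

```python
import shutil

def pick_data_path(paths, free_bytes=lambda p: shutil.disk_usage(p).free):
    """Pick the data path with the greatest free space.

    `paths` is a list of directory paths; `free_bytes` reports free
    bytes for a path (defaults to the real filesystem, but can be
    injected for testing). Nothing is replicated: each file lands on
    exactly one path, RAID-0 style.
    """
    return max(paths, key=free_bytes)

# Example with fake free-space numbers (hypothetical values):
free = {"/mnt/a": 100, "/mnt/b": 300, "/mnt/c": 200}
print(pick_data_path(["/mnt/a", "/mnt/b", "/mnt/c"], free_bytes=free.get))
# → /mnt/b
```

Note this is a greedy per-file choice; it balances usage over time but gives no redundancy, which is why a single dead disk loses the shards stored on it.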
Allow striping the data location over multiple locations. The striping is simple: whole files are placed in one of the locations, and the location with the greatest free space is chosen for each file. Note that there are no multiple copies of the same data; in that respect, it's similar to RAID 0. Though simple, it should provide a good solution for people who don't want to mess with RAID setups and the like. Here is how it is configured:
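The original config snippet did not survive here; based on Elasticsearch's documented `path.data` setting, the comma-separated form in `elasticsearch.yml` looks like this (the mount paths are placeholders):

```yaml
path.data: /mnt/first,/mnt/second
```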
Or in an array format:
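The array-format snippet is also missing here; per the documented `path.data` setting, it would look like this (again with placeholder paths):

```yaml
path.data:
  - /mnt/first
  - /mnt/second
```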