
Disk-based shard allocation does not relocate shards unless path.data is absolute #45176

Closed
DaveCTurner opened this issue Aug 3, 2019 · 3 comments · Fixed by #45179

Labels: >bug · :Core/Infra/Settings (Settings infrastructure and APIs) · :Distributed/Allocation (All issues relating to the decision making around placing a shard, both master logic & on the nodes)

Comments

@DaveCTurner (Contributor)

If path.data is set to a relative path then clusterInfo.getDataPath(shardRouting) also returns a relative path. However, the usage returned by getDiskUsage() reports an absolute path. This mismatch effectively disables the disk-based shard allocator because of this condition:

if (dataPath == null || usage.getPath().equals(dataPath) == false) {
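A minimal standalone sketch (not Elasticsearch code, with hypothetical path values) of why the comparison above fails whenever path.data is relative: the configured path stays relative while the disk usage probe reports an absolute one, so the equality check never succeeds.

```java
import java.nio.file.Paths;

// Hypothetical demonstration of the relative-vs-absolute path mismatch.
public class PathMismatchDemo {
    public static void main(String[] args) {
        // What a relative path.data setting would yield for the shard's data path.
        String dataPath = "data/nodes/0";
        // What the disk usage stats would report for the same location (absolute).
        String usagePath = Paths.get("data/nodes/0").toAbsolutePath().toString();

        // The decider compares the two strings; a relative configuration always
        // fails the comparison, so the disk threshold logic is silently skipped.
        if (dataPath == null || usagePath.equals(dataPath) == false) {
            System.out.println("paths differ -> disk-based allocation effectively disabled");
        }
    }
}
```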

@DaveCTurner DaveCTurner added >bug :Distributed/Allocation All issues relating to the decision making around placing a shard (both master logic & on the nodes) labels Aug 3, 2019
@elasticmachine (Collaborator)

Pinging @elastic/es-distributed

@jasontedor (Member)

jasontedor commented Aug 4, 2019

I think we should normalize all the paths to absolute, normalized paths in the environment at startup. Adding the core/infra label too.
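A minimal sketch of that idea, assuming a hypothetical helper rather than the actual Environment code: resolve every configured data path to an absolute, normalized form once at startup so that later comparisons are consistent regardless of how path.data was written.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical startup-time normalization of configured data paths.
public class PathNormalizationSketch {
    static Path[] normalizeDataPaths(String[] configuredPaths) {
        Path[] normalized = new Path[configuredPaths.length];
        for (int i = 0; i < configuredPaths.length; i++) {
            // toAbsolutePath() resolves relative entries against the working
            // directory; normalize() removes "." and ".." segments.
            normalized[i] = Paths.get(configuredPaths[i]).toAbsolutePath().normalize();
        }
        return normalized;
    }

    public static void main(String[] args) {
        for (Path p : normalizeDataPaths(new String[] { "data", "./other/../data2" })) {
            System.out.println(p);
        }
    }
}
```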

@jasontedor jasontedor added the :Core/Infra/Settings Settings infrastructure and APIs label Aug 4, 2019
@elasticmachine (Collaborator)

Pinging @elastic/es-core-infra
