NoSuchFileException: /opt/fonsview/3RD/elasticsearch/data/stsc_p2p/nodes/0/indices/prs_sysinfo_20161011/3/translog/translog.ckp #20854
Comments
What version are you running, and what led to this failure? Do you have logs you can provide?

Version: 2.2.0
I think we fixed this in 2.3 or 2.3.1. Can you upgrade to the latest release and see if the index recovers?
OK, I will try.
@s1monw I have upgraded to 2.4.0, but a new problem has come up.
I have read https://discuss.elastic.co/t/risk-associated-with-action-write-consistency-and-index-recovery-initial-shards-for-cluster-recovery-with-a-single-node/50211 and set "index.recovery.initial_shards": 1, but it didn't help.
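For reference, a minimal sketch of where that setting lives, assuming it is applied node-wide through elasticsearch.yml on a single-node ES 2.x cluster (placement and comment are illustrative, not taken from the thread):

    # elasticsearch.yml
    # allow a primary shard to be recovered even when only this one copy exists
    index.recovery.initial_shards: 1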
What is the problem? These shards are unassigned but should be assigned at some point. How many unassigned shards do you have? Do they initialize?
The problem is that many shards are unassigned. My cluster has only one node.
Yes, these shards are primary shards, which should be assigned to my node.
I have 495 unassigned shards. How can I see whether they are initializing?
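One way to watch this, assuming the node listens on localhost:9200, is the cat and cluster-health APIs:

    # per-shard state (look for INITIALIZING vs UNASSIGNED in the state column)
    curl -XGET 'localhost:9200/_cat/shards?v'

    # aggregate counts, including initializing_shards and unassigned_shards
    curl -XGET 'localhost:9200/_cluster/health?pretty'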
They should initialize one after another. You have 4 initializing, and that is the default value. You can raise the limit with:

    curl -XPUT localhost:9200/_cluster/settings -d '{
      "transient" : {
        "cluster.routing.allocation.node_concurrent_recoveries" : 10
      }
    }'
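An editorial aside, hedged: since a lone node restarting recovers primaries from local disk, the cap of 4 observed here may actually come from cluster.routing.allocation.node_initial_primaries_recoveries, whose default in 2.x is 4 (node_concurrent_recoveries defaults to 2). If so, that is the setting to raise instead:

    curl -XPUT localhost:9200/_cluster/settings -d '{
      "transient" : {
        "cluster.routing.allocation.node_initial_primaries_recoveries" : 10
      }
    }'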
I tried it, but it didn't help.
And the same log messages keep appearing, as below:
Do you see any exceptions in the log files?
I have emailed the log to you; please help me see what's wrong.
@lizhecao It looks like your shards are recovering, just slowly. There are no exceptions in what you showed above.
How long will a shard recovery take? I have waited for a long time.
Are there any methods to speed it up? @clintongormley @s1monw
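One knob that often matters here, offered as a sketch rather than thread-confirmed advice: recovery throughput is throttled by indices.recovery.max_bytes_per_sec (40mb by default in 2.x), which can be raised dynamically; the 100mb value below is an illustrative choice:

    curl -XPUT localhost:9200/_cluster/settings -d '{
      "transient" : {
        "indices.recovery.max_bytes_per_sec" : "100mb"
      }
    }'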
@lizhecao Are you still seeing this issue? If so, please provide details and reopen the issue.
@colings86 Thanks for the help, but I can't provide details now because that environment has since been lost. My solution was to copy an existing translog.ckp in place of the lost ckp file.
I have hit an issue like #16495 (Broken translog on most indexes: NoSuchFileException elasticsearch/data/dev-cluster/nodes/0/indices/logstash-2016.01.04/2/translog/translog-226.ckp),
but I can't understand how to solve it without upgrading. What does copying and pasting the ckp file mean? Can anyone show me what to do in detail?
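For what it's worth, a sketch of the workaround the earlier commenter described, using the paths from the #16495 report as an example. Treat this as a last resort: overwriting checkpoint files can silently discard un-replayed operations, so stop the node and back up the shard directory first.

    # run with the node stopped; paths are from the #16495 report and are illustrative
    cd elasticsearch/data/dev-cluster/nodes/0/indices/logstash-2016.01.04/2/translog
    cp translog.ckp translog.ckp.bak      # keep a backup of the existing checkpoint
    cp translog.ckp translog-226.ckp      # recreate the missing checkpoint file named in the exception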