IndexShardGatewayRecoveryException: [<index name>][4] failed to fetch index version after copying it over #4798
Comments
The shard might be retried in order to recover. Are you using multiple data path locations by any chance? There is a bug related to that which was fixed in 0.90.10: #4674
Yep, I'm using multiple data paths.
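For anyone hitting this, it is worth confirming how many data path locations the node is actually using before assuming the #4674 bug applies. A minimal sketch; the config file location and the localhost:9200 address are assumptions about a typical install:

    # Check the static configuration for multiple data paths
    # (multiple locations are usually given as a comma-separated list,
    #  e.g. path.data: /mnt/disk1/es,/mnt/disk2/es).
    grep -n '^path\.data' /etc/elasticsearch/elasticsearch.yml

    # The same information is visible at runtime via the node info API;
    # look for "path.data" in the per-node settings.
    curl -s 'localhost:9200/_nodes/settings?pretty'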
Side note: you didn't have to delete the shard, you could have just deleted the segments.gen file.
I deleted all the segments.gen files.
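For anyone trying the same thing, the segments.gen files sit inside each shard's index directory under the data path. A minimal sketch for locating (and, if you decide to, removing) them; /var/lib/elasticsearch is a placeholder for whatever path.data points at, and the node should be stopped and backed up before touching anything:

    # List every per-shard segments.gen file under the data path.
    find /var/lib/elasticsearch -type f -name 'segments.gen' -print

    # Only after taking a backup and stopping the node, the same command can delete them:
    # find /var/lib/elasticsearch -type f -name 'segments.gen' -delete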
Hi, I get this error even after deleting the segments.gen files and restarting. This is the exception found in my log.
Looks like you're trying to read a newer index with an older version of Elasticsearch. This is not supported.
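A quick way to check for such a mismatch is to compare the version of the running node with the version the index was created on. A rough sketch; the host is a placeholder, and whether index.version.created appears in the settings output depends on the release in use:

    # Version of the node that is failing to open the shard.
    curl -s 'localhost:9200/?pretty'                       # look at version.number

    # Version the index was created with (substitute the real index name).
    curl -s 'localhost:9200/<index>/_settings?pretty'      # look for index.version.created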
Hi, I'm getting the error below while starting titan.
[2019-02-18 05:50:06,396][WARN ][cluster.action.shard ] [Balthakk] [titan][0] sending failed shard for [titan][0], node[SYu95jcuTMWZiIigBLDoaA], [P], s[INITIALIZING], indexUUID [otSBLuRsTpWQffWOpuvesQ], reason [Failed to start shard, message [IndexShardGatewayRecoveryException[[titan][0] failed to fetch index version after copying it over]; nested: IndexShardGatewayRecoveryException[[titan][0] shard allocated for local recovery (post api), should exist, but doesn't, current files: [_1hsgx_es090_0.tip, _1gvnj_es090_0.doc, _1hsi1.si, _1hsi1_es090_0.pos, _1hsh7.si, _1hsh7_es090_0.doc, _td5b_12u.del, _1eo61_es090_0.tim, _1hcvy_es090_0.pos, _td5b_es090_0.tim, _1hsgx_Lucene45_0.dvm, _1hshh_Lucene45_0.dvm, _1hsg3.nvm, _16h2e_es090_0.doc, _1hshr_es090_0.blm, _1hsg3_Lucene45_0.dvm, _1hsg3.fnm, _1hsih.si, _1hsi1.fnm, _1hsg3.fdt, _1hsi1_es090_0.doc, _1hcvy_es090_0.blm, _1hsig.si, _1hshh_es090_0.tip, _1hcvy_es090_0.doc, _1hsi1_es090_0.blm, _1eo61_Lucene45_0.dvm, _td5b.fdt, _1hsic.cfs, _1hsif.cfe, _1hsh7_es090_0.pos, _16h2e_es090_0.blm, _1g5kq.fnm, _1hshh_es090_0.blm, _1hsif.cfs, _1hsig.cfe, _1hsg3_Lucene45_0.dvd, _1hsgx.si, _1g5kq.fdt, _1hsh7_Lucene45_0.dvm, _1hsg3.fdx, _1hsi1.fdx, _1hsh7.fdx, _td5b_es090_0.blm, _td5b_es090_0.tip, _1eo61.si, _16h2e.fnm, _1g5kq_es090_0.blm, _1hcvy.si, _1hcvy.nvm, _1hsg3_es090_0.blm, _1gvnj_es090_0.blm, _1g5kq_Lucene45_0.dvm, _1gvnj_6d.del, _1hsie.cfs, _1hcvy_Lucene45_0.dvd, _1hsh7_es090_0.tim, _1hshh_es090_0.tim, _1hshr_Lucene45_0.dvm, segments.gen, _1gvnj_es090_0.tip, _1hsi1_Lucene45_0.dvd, _1gvnj.fnm, _1hshh.fdx, _td5b_Lucene45_0.dvm, _1eo61_Lucene45_0.dvd, _1hsib.fnm, _1eo61_g8.del, _1hsih.cfe, _1eo61.nvd, _1gvnj.si, _1hsgx.nvd, _1hsib_Lucene45_0.dvd, _16h2e_es090_0.tim, _1gvnj_Lucene45_0.dvd, _1hsib.fdx, _1hsgx_es090_0.pos, _1hsgx_es090_0.blm, _1hshh_es090_0.doc, _16h2e_13x.del, _1hcvy_4b.del, _1eo61.fdt, _1hsib_es090_0.tim, _1hsih.cfs, _16h2e.si, _1hsgx.nvm, _1hsib.si, _1hshr_Lucene45_0.dvd, _1gvnj.fdx, _1hsif.si, _1hsie.cfe, _td5b.nvd, _1hshh.fnm, _td5b.fnm, _1hsic.si, _1hsic.cfe, _16h2e.fdx, _1hsib.nvm, _1hshr_es090_0.tip, _1hsg3_es090_0.doc, _1gvnj_es090_0.tim, _16h2e.nvd, _1g5kq.nvm, _1hshh.si, _1hsh7_es090_0.blm, _1hshh_es090_0.pos, _1hsh7_es090_0.tip, _1hshr.fdt, _1hsgx_es090_0.tim, _1hsi1.nvd, _1hshh.nvd, _1gvnj.nvm, _1hcvy_Lucene45_0.dvm, _1hcvy.nvd, _1eo61_es090_0.pos, _16h2e.fdt, _1hsh7.fdt, _1eo61_es090_0.tip, _1hsid.cfs, _1hsgx.fdx, _1hsib_es090_0.tip, _1hsg3_es090_0.tip, _1eo61.fnm, _1hshr_es090_0.doc, _1hshr_es090_0.pos, _1hshh.fdt, _1hsh7.fnm, _16h2e.nvm, _1hsi1_es090_0.tim, _1hsib_es090_0.pos, _1hsib_Lucene45_0.dvm, _1g5kq_es090_0.doc, _1hsgx_Lucene45_0.dvd, _1gvnj.fdt, _1hshh_Lucene45_0.dvd, _1eo61.nvm, _1hcvy_es090_0.tip, _1hsgx.fdt, _1hsi1.nvm, _1hshr_es090_0.tim, _1hcvy_es090_0.tim, _td5b.nvm, _1hsg3.si, _1hsgx.fnm, _1hcvy.fdt, _1eo61_es090_0.blm, _1hsi1.fdt, _1hsib.fdt, _1hsh7.nvm, _1g5kq_es090_0.tip, _1hsg3_es090_0.tim, _1g5kq.si, _1hsie.si, _1hshr.si, _16h2e_es090_0.tip, _16h2e_Lucene45_0.dvm, _1hsi1_es090_0.tip, _1gvnj_Lucene45_0.dvm, _1hsib_es090_0.blm, _td5b.fdx, _1hsi1_Lucene45_0.dvm, _1g5kq_Lucene45_0.dvd, _1hsgx_es090_0.doc, write.lock, _1g5kq.nvd, _td5b_es090_0.pos, _1hshr.fdx, _td5b.si, _1hsig.cfs, _1g5kq_es090_0.pos, _1hshr.fnm, _16h2e_es090_0.pos, _1g5kq_es090_0.tim, _1g5kq_9e.del, _16h2e_Lucene45_0.dvd, _1eo61.fdx, _1hshr.nvm, _1hsh7_Lucene45_0.dvd, _1hsid.cfe, _1hshr.nvd, _td5b_Lucene45_0.dvd, _1eo61_es090_0.doc, _1gvnj.nvd, _1hsh7.nvd, _1hsg3.nvd, _td5b_es090_0.doc, _1hsg3_es090_0.pos, _1g5kq.fdx, _1hsib_es090_0.doc, _1hcvy.fdx, _1hshh.nvm, _1hsid.si, _1hsib.nvd, _1hcvy.fnm, _1gvnj_es090_0.pos]]; nested: FileNotFoundException[segments_wzv]; ]]
Any help much appreciated. Thanks,
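The interesting part of that stack trace is the nested FileNotFoundException[segments_wzv]: the shard directory contains a segments.gen but not the segments_N generation file it points at. A minimal sketch for confirming that on disk, assuming the default per-shard layout of that era; the data path and node ordinal are placeholders:

    # Shard 0 of index "titan": list which segments_* files actually exist.
    ls -l /var/lib/elasticsearch/*/nodes/0/indices/titan/0/index/ | grep segments

    # If the referenced segments_N file is genuinely gone, the shard cannot be recovered
    # locally and has to be restored from a replica, a snapshot, or by reindexing.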
I started getting this warning/error after a full cluster restart.
The problem I have with this warning is that it never stops (apparently it goes into an infinite loop trying to recover the broken shard and can't).
I have the logging level set to WARN, so my log files are growing very fast (this warning is logged roughly 10 times per second on every node); one way to quiet that logger is sketched below.
context:
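One way to keep that warning from flooding the logs while the shard is stuck in the retry loop is to raise the level of the logger that emits it (cluster.action.shard). A rough sketch; whether the dynamic logger.* cluster setting is honoured, and the exact logging.yml keys, depend on the release in use and should be checked against its documentation:

    # Dynamically raise the logger level on releases that support logger.* cluster settings.
    curl -XPUT 'localhost:9200/_cluster/settings' -d '{
      "transient": { "logger.cluster.action.shard": "ERROR" }
    }'

    # Or set it statically in config/logging.yml and restart:
    #   logger:
    #     cluster.action.shard: ERROR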