
IndexShardGatewayRecoveryException: [<index name>][4] failed to fetch index version after copying it over #4798

Closed
karol-gwaj opened this issue Jan 19, 2014 · 7 comments


@karol-gwaj

I started getting this warning/error after full cluster restart:

[2014-01-19 19:15:15,239][WARN ][cluster.action.shard     ] [<node name>] [<index name>][4] sending failed shard for [<index name>][4], node[FgQg7A4HRdSuCKK-EHjNdQ], [P], s[INITIALIZING], indexUUID [gRKAbB7AQYGhgZRfi6pgzQ], reason [Failed to start shard, message [IndexShardGatewayRecoveryException[[<index name>][4] failed to fetch index version after copying it over]; nested: IndexShardGatewayRecoveryException[[<index name>][4] shard allocated for local recovery (post api), should exist, but doesn't, current files: [_7cn.fdx, _agr_es090_0.tim, _agr.nvm, _b4g.nvd, _b4g.fdt, _7cn.nvd, _7cn_es090_0.pos, _b4g.fdx, _agr.nvd, _b4g_es090_0.pay, _b4g_es090_0.pos, _b4g.nvm, _b4g.si, _checksums-1390157207161, _91s_es090_0.tim, _b4g_es090_0.doc, _7cn.nvm, _7cn_es090_0.tip, _agr_es090_0.pos, _7cn.si, _7cn_es090_0.blm, _7cn.fnm, _agr_es090_0.pay, _7cn_12.del, _b4g_es090_0.tip, _91s.si, _91s.nvm, _agr.fnm, _agr_es090_0.doc, _7cn_es090_0.pay, _91s_es090_0.tip, _agr.fdt, _91s_es090_0.blm, _agr.fdx, _agr_es090_0.tip, _7cn_es090_0.doc, _91s.fdt, segments_7i, _91s_d.del, _91s_es090_0.doc, _7cn_es090_0.tim, segments.gen, _91s_es090_0.pay, _agr.si, _7cn.fdt, _91s.fdx, _91s.fnm, _b4g_es090_0.tim, _b4g.fnm, _agr_d.del, _b4g_es090_0.blm, _91s.nvd, _agr_es090_0.blm, _91s_es090_0.pos]]; nested: FileNotFoundException[segments_7k]; ]]
[2014-01-19 19:15:15,437][WARN ][indices.cluster          ] [<node name>] [<index name>][4] failed to start shard
org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [<index name>][4] failed to fetch index version after copying it over
        at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:136)
        at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:174)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
Caused by: org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [<index name>][4] shard allocated for local recovery (post api), should exist, but doesn't, current files: [_7cn.fdx, _agr_es090_0.tim, _agr.nvm, _b4g.nvd, _b4g.fdt, _7cn.nvd, _7cn_es090_0.pos, _b4g.fdx, _agr.nvd, _b4g_es090_0.pay, _b4g_es090_0.pos, _b4g.nvm, _b4g.si, _checksums-1390157207161, _91s_es090_0.tim, _b4g_es090_0.doc, _7cn.nvm, _7cn_es090_0.tip, _agr_es090_0.pos, _7cn.si, _7cn_es090_0.blm, _7cn.fnm, _agr_es090_0.pay, _7cn_12.del, _b4g_es090_0.tip, _91s.si, _91s.nvm, _agr.fnm, _agr_es090_0.doc, _7cn_es090_0.pay, _91s_es090_0.tip, _agr.fdt, _91s_es090_0.blm, _agr.fdx, _agr_es090_0.tip, _7cn_es090_0.doc, _91s.fdt, segments_7i, _91s_d.del, _91s_es090_0.doc, _7cn_es090_0.tim, segments.gen, _91s_es090_0.pay, _agr.si, _7cn.fdt, _91s.fdx, _91s.fnm, _b4g_es090_0.tim, _b4g.fnm, _agr_d.del, _b4g_es090_0.blm, _91s.nvd, _agr_es090_0.blm, _91s_es090_0.pos]
        at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:115)
        ... 4 more
Caused by: java.io.FileNotFoundException: segments_7k
        at org.elasticsearch.index.store.Store$StoreDirectory.openInput(Store.java:469)
        at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:324)
        at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:404)
        at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:843)
        at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:694)
        at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:400)
        at org.elasticsearch.common.lucene.Lucene.readSegmentInfos(Lucene.java:114)
        at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:106)
        ... 4 more
[the same warning and stack trace are logged again at 19:15:15,438 and 19:15:15,515, and keep repeating]

The problem I have with this warning is that it never stops: the node apparently goes into an infinite loop trying to recover the broken shard and can't. I have the logging level set to WARN, so my log files are growing very fast (this warning is logged roughly 10 times per second on every node).

Context:

  • es version 0.90.9
  • 3 nodes cluster
  • 10 shards per index with 2 replicas
  • local gateway
@kimchy
Member

kimchy commented Jan 19, 2014

The shard might be retried in order to recover. Are you using multiple data path locations by any chance? There is a bug related to that which was fixed in 0.90.10: #4674

@karol-gwaj
Author

Yep, I'm using multiple data paths.
For now I fixed this error manually (by deleting the broken shard), but I will upgrade my cluster to 0.90.10 just in case, and we will see if it happens again.
Thanks

@kimchy
Member

kimchy commented Jan 19, 2014

Side note: you didn't have to delete the shard; you could have just deleted the segments.gen files in it.
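That tip can be sketched roughly as follows. This is a hypothetical helper, not part of Elasticsearch; `data_dir` would be your node's `path.data` location, and the node should be stopped (and the data backed up) before deleting anything:

```python
import os

def delete_segments_gen(data_dir):
    """Remove stale segments.gen files under an Elasticsearch data directory.

    Without segments.gen, Lucene falls back to discovering the latest
    segments_N file by listing the directory instead of trusting the
    (possibly stale) generation recorded in segments.gen.
    """
    removed = []
    for root, _dirs, files in os.walk(data_dir):
        for name in files:
            if name == "segments.gen":
                path = os.path.join(root, name)
                os.remove(path)
                removed.append(path)
    return removed
```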

@kimchy kimchy closed this as completed Jan 19, 2014
@dieend

dieend commented Oct 20, 2014

I deleted all segments.gen files, and it worked. Thanks.

@Bowrna

Bowrna commented Dec 17, 2014

Hi

I get this error even after deleting the segments.gen files and restarting.

This is the exception found in my log.

17-12-2014 15:30:31" "WARNING" "30" "[ES_NODE_NAME] [812843][0] failed to start shard" "org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [812843][0] failed to fetch index version after copying it over
    at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:136)
    at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:189)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
Caused by: org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [812843][0] shard allocated for local recovery (post api), should exist, but doesn't, current files: [_k.si, segments_u, _m.si, segments.gen, _m.cfe, _checksums-1418381938786, _k.cfs, _m.cfs, _k.cfe, write.lock]
    at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:115)
    ... 4 more
Caused by: org.apache.lucene.index.IndexFormatTooNewException: Format version is not supported (resource: MMapIndexInput(path=""/home/likewise-open/ZOHOCORP/bowrna-1819/elasticsearch/elasticSearchData/alarmcentral/nodes/0/indices/812843/0/index/segments.gen"")): -3 (needs to be between -2 and -2)
    at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:782)
    at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:694)
    at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:400)
    at org.elasticsearch.common.lucene.Lucene.readSegmentInfos(Lucene.java:114)
    at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:106)
    ... 4 more

@clintongormley

@Bowrna

IndexFormatTooNewException

Looks like you're trying to read a newer index with an older version of Elasticsearch. This is not supported.
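For reference, the "-3 (needs to be between -2 and -2)" in the exception above is the format id at the start of segments.gen. A minimal sketch of inspecting it, assuming the Lucene 4.x on-disk layout (a leading 4-byte big-endian integer); `segments_gen_format` is a hypothetical helper, not a Lucene or Elasticsearch API:

```python
import struct

def segments_gen_format(path):
    """Read the leading 4-byte big-endian int of a Lucene segments.gen file.

    Assumption about the layout: the file starts with a format id (e.g. -2
    for older formats; -3 marks a newer, checksummed format that older
    readers reject with IndexFormatTooNewException).
    """
    with open(path, "rb") as f:
        (fmt,) = struct.unpack(">i", f.read(4))
    return fmt
```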

@ghost

ghost commented Feb 18, 2019

Hi,

I'm getting the below error while starting Titan.

[2019-02-18 05:50:06,396][WARN ][cluster.action.shard ] [Balthakk] [titan][0] sending failed shard for [titan][0], node[SYu95jcuTMWZiIigBLDoaA], [P], s[INITIALIZING], indexUUID [otSBLuRsTpWQffWOpuvesQ], reason [Failed to start shard, message [IndexShardGatewayRecoveryException[[titan][0] failed to fetch index version after copying it over]; nested: IndexShardGatewayRecoveryException[[titan][0] shard allocated for local recovery (post api), should exist, but doesn't, current files: [_1hsgx_es090_0.tip, _1gvnj_es090_0.doc, _1hsi1.si, _1hsi1_es090_0.pos, _1hsh7.si, _1hsh7_es090_0.doc, _td5b_12u.del, _1eo61_es090_0.tim, _1hcvy_es090_0.pos, _td5b_es090_0.tim, _1hsgx_Lucene45_0.dvm, _1hshh_Lucene45_0.dvm, _1hsg3.nvm, _16h2e_es090_0.doc, _1hshr_es090_0.blm, _1hsg3_Lucene45_0.dvm, _1hsg3.fnm, _1hsih.si, _1hsi1.fnm, _1hsg3.fdt, _1hsi1_es090_0.doc, _1hcvy_es090_0.blm, _1hsig.si, _1hshh_es090_0.tip, _1hcvy_es090_0.doc, _1hsi1_es090_0.blm, _1eo61_Lucene45_0.dvm, _td5b.fdt, _1hsic.cfs, _1hsif.cfe, _1hsh7_es090_0.pos, _16h2e_es090_0.blm, _1g5kq.fnm, _1hshh_es090_0.blm, _1hsif.cfs, _1hsig.cfe, _1hsg3_Lucene45_0.dvd, _1hsgx.si, _1g5kq.fdt, _1hsh7_Lucene45_0.dvm, _1hsg3.fdx, _1hsi1.fdx, _1hsh7.fdx, _td5b_es090_0.blm, _td5b_es090_0.tip, _1eo61.si, _16h2e.fnm, _1g5kq_es090_0.blm, _1hcvy.si, _1hcvy.nvm, _1hsg3_es090_0.blm, _1gvnj_es090_0.blm, _1g5kq_Lucene45_0.dvm, _1gvnj_6d.del, _1hsie.cfs, _1hcvy_Lucene45_0.dvd, _1hsh7_es090_0.tim, _1hshh_es090_0.tim, _1hshr_Lucene45_0.dvm, segments.gen, _1gvnj_es090_0.tip, _1hsi1_Lucene45_0.dvd, _1gvnj.fnm, _1hshh.fdx, _td5b_Lucene45_0.dvm, _1eo61_Lucene45_0.dvd, _1hsib.fnm, _1eo61_g8.del, _1hsih.cfe, _1eo61.nvd, _1gvnj.si, _1hsgx.nvd, _1hsib_Lucene45_0.dvd, _16h2e_es090_0.tim, _1gvnj_Lucene45_0.dvd, _1hsib.fdx, _1hsgx_es090_0.pos, _1hsgx_es090_0.blm, _1hshh_es090_0.doc, _16h2e_13x.del, _1hcvy_4b.del, _1eo61.fdt, _1hsib_es090_0.tim, _1hsih.cfs, _16h2e.si, _1hsgx.nvm, _1hsib.si, _1hshr_Lucene45_0.dvd, _1gvnj.fdx, _1hsif.si, _1hsie.cfe, _td5b.nvd, 
_1hshh.fnm, _td5b.fnm, _1hsic.si, _1hsic.cfe, _16h2e.fdx, _1hsib.nvm, _1hshr_es090_0.tip, _1hsg3_es090_0.doc, _1gvnj_es090_0.tim, _16h2e.nvd, _1g5kq.nvm, _1hshh.si, _1hsh7_es090_0.blm, _1hshh_es090_0.pos, _1hsh7_es090_0.tip, _1hshr.fdt, _1hsgx_es090_0.tim, _1hsi1.nvd, _1hshh.nvd, _1gvnj.nvm, _1hcvy_Lucene45_0.dvm, _1hcvy.nvd, _1eo61_es090_0.pos, _16h2e.fdt, _1hsh7.fdt, _1eo61_es090_0.tip, _1hsid.cfs, _1hsgx.fdx, _1hsib_es090_0.tip, _1hsg3_es090_0.tip, _1eo61.fnm, _1hshr_es090_0.doc, _1hshr_es090_0.pos, _1hshh.fdt, _1hsh7.fnm, _16h2e.nvm, _1hsi1_es090_0.tim, _1hsib_es090_0.pos, _1hsib_Lucene45_0.dvm, _1g5kq_es090_0.doc, _1hsgx_Lucene45_0.dvd, _1gvnj.fdt, _1hshh_Lucene45_0.dvd, _1eo61.nvm, _1hcvy_es090_0.tip, _1hsgx.fdt, _1hsi1.nvm, _1hshr_es090_0.tim, _1hcvy_es090_0.tim, _td5b.nvm, _1hsg3.si, _1hsgx.fnm, _1hcvy.fdt, _1eo61_es090_0.blm, _1hsi1.fdt, _1hsib.fdt, _1hsh7.nvm, _1g5kq_es090_0.tip, _1hsg3_es090_0.tim, _1g5kq.si, _1hsie.si, _1hshr.si, _16h2e_es090_0.tip, _16h2e_Lucene45_0.dvm, _1hsi1_es090_0.tip, _1gvnj_Lucene45_0.dvm, _1hsib_es090_0.blm, _td5b.fdx, _1hsi1_Lucene45_0.dvm, _1g5kq_Lucene45_0.dvd, _1hsgx_es090_0.doc, write.lock, _1g5kq.nvd, _td5b_es090_0.pos, _1hshr.fdx, _td5b.si, _1hsig.cfs, _1g5kq_es090_0.pos, _1hshr.fnm, _16h2e_es090_0.pos, _1g5kq_es090_0.tim, _1g5kq_9e.del, _16h2e_Lucene45_0.dvd, _1eo61.fdx, _1hshr.nvm, _1hsh7_Lucene45_0.dvd, _1hsid.cfe, _1hshr.nvd, _td5b_Lucene45_0.dvd, _1eo61_es090_0.doc, _1gvnj.nvd, _1hsh7.nvd, _1hsg3.nvd, _td5b_es090_0.doc, _1hsg3_es090_0.pos, _1g5kq.fdx, _1hsib_es090_0.doc, _1hcvy.fdx, _1hshh.nvm, _1hsid.si, _1hsib.nvd, _1hcvy.fnm, _1gvnj_es090_0.pos]]; nested: FileNotFoundException[segments_wzv]; ]]
[2019-02-18 05:50:06,396][WARN ][cluster.action.shard ] [Balthakk] [titan][0] received shard failed for [titan][0], node[SYu95jcuTMWZiIigBLDoaA], [P], s[INITIALIZING], indexUUID [otSBLuRsTpWQffWOpuvesQ], reason [Failed to start shard, message [IndexShardGatewayRecoveryException[[titan][0] failed to fetch index version after copying it over]; nested: IndexShardGatewayRecoveryException[[titan][0] shard allocated for local recovery (post api), should exist, but doesn't, current files: [_1hsgx_es090_0.tip, _1gvnj_es090_0.doc, _1hsi1.si, _1hsi1_es090_0.pos, _1hsh7.si, _1hsh7_es090_0.doc, _td5b_12u.del, _1eo61_es090_0.tim, _1hcvy_es090_0.pos, _td5b_es090_0.tim, _1hsgx_Lucene45_0.dvm, _1hshh_Lucene45_0.dvm, _1hsg3.nvm, _16h2e_es090_0.doc, _1hshr_es090_0.blm, _1hsg3_Lucene45_0.dvm, _1hsg3.fnm, _1hsih.si, _1hsi1.fnm, _1hsg3.fdt, _1hsi1_es090_0.doc, _1hcvy_es090_0.blm, _1hsig.si, _1hshh_es090_0.tip, _1hcvy_es090_0.doc, _1hsi1_es090_0.blm, _1eo61_Lucene45_0.dvm, _td5b.fdt, _1hsic.cfs, _1hsif.cfe, _1hsh7_es090_0.pos, _16h2e_es090_0.blm, _1g5kq.fnm, _1hshh_es090_0.blm, _1hsif.cfs, _1hsig.cfe, _1hsg3_Lucene45_0.dvd, _1hsgx.si, _1g5kq.fdt, _1hsh7_Lucene45_0.dvm, _1hsg3.fdx, _1hsi1.fdx, _1hsh7.fdx, _td5b_es090_0.blm, _td5b_es090_0.tip, _1eo61.si, _16h2e.fnm, _1g5kq_es090_0.blm, _1hcvy.si, _1hcvy.nvm, _1hsg3_es090_0.blm, _1gvnj_es090_0.blm, _1g5kq_Lucene45_0.dvm, _1gvnj_6d.del, _1hsie.cfs, _1hcvy_Lucene45_0.dvd, _1hsh7_es090_0.tim, _1hshh_es090_0.tim, _1hshr_Lucene45_0.dvm, segments.gen, _1gvnj_es090_0.tip, _1hsi1_Lucene45_0.dvd, _1gvnj.fnm, _1hshh.fdx, _td5b_Lucene45_0.dvm, _1eo61_Lucene45_0.dvd, _1hsib.fnm, _1eo61_g8.del, _1hsih.cfe, _1eo61.nvd, _1gvnj.si, _1hsgx.nvd, _1hsib_Lucene45_0.dvd, _16h2e_es090_0.tim, _1gvnj_Lucene45_0.dvd, _1hsib.fdx, _1hsgx_es090_0.pos, _1hsgx_es090_0.blm, _1hshh_es090_0.doc, _16h2e_13x.del, _1hcvy_4b.del, _1eo61.fdt, _1hsib_es090_0.tim, _1hsih.cfs, _16h2e.si, _1hsgx.nvm, _1hsib.si, _1hshr_Lucene45_0.dvd, _1gvnj.fdx, _1hsif.si, _1hsie.cfe, _td5b.nvd, 
_1hshh.fnm, _td5b.fnm, _1hsic.si, _1hsic.cfe, _16h2e.fdx, _1hsib.nvm, _1hshr_es090_0.tip, _1hsg3_es090_0.doc, _1gvnj_es090_0.tim, _16h2e.nvd, _1g5kq.nvm, _1hshh.si, _1hsh7_es090_0.blm, _1hshh_es090_0.pos, _1hsh7_es090_0.tip, _1hshr.fdt, _1hsgx_es090_0.tim, _1hsi1.nvd, _1hshh.nvd, _1gvnj.nvm, _1hcvy_Lucene45_0.dvm, _1hcvy.nvd, _1eo61_es090_0.pos, _16h2e.fdt, _1hsh7.fdt, _1eo61_es090_0.tip, _1hsid.cfs, _1hsgx.fdx, _1hsib_es090_0.tip, _1hsg3_es090_0.tip, _1eo61.fnm, _1hshr_es090_0.doc, _1hshr_es090_0.pos, _1hshh.fdt, _1hsh7.fnm, _16h2e.nvm, _1hsi1_es090_0.tim, _1hsib_es090_0.pos, _1hsib_Lucene45_0.dvm, _1g5kq_es090_0.doc, _1hsgx_Lucene45_0.dvd, _1gvnj.fdt, _1hshh_Lucene45_0.dvd, _1eo61.nvm, _1hcvy_es090_0.tip, _1hsgx.fdt, _1hsi1.nvm, _1hshr_es090_0.tim, _1hcvy_es090_0.tim, _td5b.nvm, _1hsg3.si, _1hsgx.fnm, _1hcvy.fdt, _1eo61_es090_0.blm, _1hsi1.fdt, _1hsib.fdt, _1hsh7.nvm, _1g5kq_es090_0.tip, _1hsg3_es090_0.tim, _1g5kq.si, _1hsie.si, _1hshr.si, _16h2e_es090_0.tip, _16h2e_Lucene45_0.dvm, _1hsi1_es090_0.tip, _1gvnj_Lucene45_0.dvm, _1hsib_es090_0.blm, _td5b.fdx, _1hsi1_Lucene45_0.dvm, _1g5kq_Lucene45_0.dvd, _1hsgx_es090_0.doc, write.lock, _1g5kq.nvd, _td5b_es090_0.pos, _1hshr.fdx, _td5b.si, _1hsig.cfs, _1g5kq_es090_0.pos, _1hshr.fnm, _16h2e_es090_0.pos, _1g5kq_es090_0.tim, _1g5kq_9e.del, _16h2e_Lucene45_0.dvd, _1eo61.fdx, _1hshr.nvm, _1hsh7_Lucene45_0.dvd, _1hsid.cfe, _1hshr.nvd, _td5b_Lucene45_0.dvd, _1eo61_es090_0.doc, _1gvnj.nvd, _1hsh7.nvd, _1hsg3.nvd, _td5b_es090_0.doc, _1hsg3_es090_0.pos, _1g5kq.fdx, _1hsib_es090_0.doc, _1hcvy.fdx, _1hshh.nvm, _1hsid.si, _1hsib.nvd, _1hcvy.fnm, _1gvnj_es090_0.pos]]; nested: FileNotFoundException[segments_wzv]; ]]
[2019-02-18 05:50:06,400][WARN ][indices.cluster ] [Balthakk] [titan][2] failed to start shard
org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [titan][2] failed to fetch index version after copying it over
at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:135)
at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:132)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [titan][2] shard allocated for local recovery (post api), should exist, but doesn't, current files: [_1g1al_es090_0.pos, _1gsgh.nvm, _11x35_p8.del, _11x35_es090_0.doc, _1fqze_es090_0.tip, _1fqze_Lucene45_0.dvd, _6wiw_es090_0.doc, _6wiw_es090_0.blm, _1gsfm.fdx, _1fqze.si, _1gsgq_es090_0.blm, _1gsgh.fdt, _1gsgq.si, _imcl_rm.del, _1gsha_es090_0.pos, _1gshk.fnm, _1gpnq_es090_0.blm, _1gsha.si, _1gsha_Lucene45_0.dvm, _1atf8.nvd, _1gsha.fdx, _imcl_es090_0.blm, _imcl_es090_0.tip, _1gshk.fdx, _1atf8_es090_0.doc, _imcl_es090_0.doc, _1g1al.fnm, _1gsha_es090_0.blm, _11x35_es090_0.blm, _1gsha.fnm, _11x35_es090_0.tip, _1gpnq_es090_0.tip, _6wiw_es090_0.tim, _1gsgh.nvd, _1g1al_es090_0.doc, _1gpnq.si, _1gsh0_es090_0.tip, _imcl.nvm, _1fqze.fdt, _1gsfm.fnm, _1gpnq.fnm, _11x35_es090_0.tim, _6wiw.fdt, _11x35_es090_0.pos, _1g1al.nvm, _1gshk_es090_0.doc, _1gsgh.fdx, _imcl.fnm, _1gsh0.nvd, _1gpnq_Lucene45_0.dvm, segments.gen, _1gsh0.fnm, _1gsha_Lucene45_0.dvd, _1gpnq_es090_0.pos, _1gsfm_es090_0.tim, _1atf8_es090_0.tim, _1gsfm.nvd, _1gshk_Lucene45_0.dvd, _1gpnq.fdx, _1gshk.fdt, _1atf8_es090_0.tip, _1gsgq_Lucene45_0.dvd, _1atf8_Lucene45_0.dvm, _1gsgh_Lucene45_0.dvm, _1atf8.nvm, _1gsgh.si, _1g1al_Lucene45_0.dvd, _11x35.fdt, _1fqze.fdx, _1gsha_es090_0.tim, _1atf8_es090_0.blm, _imcl_es090_0.tim, _1gpnq.nvd, _6wiw.si, _1gsfm_Lucene45_0.dvd, _1gsgq_es090_0.tim, _1gsfm_es090_0.tip, _1gsh0.fdx, _imcl.fdt, _1gsgh.fnm, _1gshk.nvm, _1fqze.nvd, _1atf8_Lucene45_0.dvd, _1gshk_Lucene45_0.dvm, _1g1al_es090_0.tim, _11x35.nvm, _1gsfm_es090_0.blm, _1g1al.si, _imcl.nvd, _11x35.nvd, _1gsh0.si, _1fqze_es090_0.doc, _1g1al.nvd, write.lock, _1gsgq_Lucene45_0.dvm, _1gsh0.fdt, _1gsh0_es090_0.doc, _1gshk.si, _1fqze_es090_0.pos, _imcl_Lucene45_0.dvm, _1gsfm_es090_0.doc, _1fqze.nvm, _1g1al.fdx, _1g1al_es090_0.blm, _1gsh0_es090_0.pos, _imcl.fdx, _1atf8.si, _1gsha.fdt, _1gshk_es090_0.tim, _1gpnq.nvm, _1gsh0_es090_0.blm, _1fqze_7m.del, 
_1gsgq.nvm, _1gshk_es090_0.blm, _1gsgh_es090_0.tim, _6wiw.fdx, _1fqze.fnm, _1fqze_Lucene45_0.dvm, _1gsh0_Lucene45_0.dvm, _1gpnq_es090_0.tim, _6wiw_es090_0.tip, _1gsgq.fdt, _1atf8.fdt, _1gpnq_Lucene45_0.dvd, _1gsfm.nvm, _11x35.fnm, _1gsgh_Lucene45_0.dvd, _1gsgq.nvd, _1g1al_Lucene45_0.dvm, _1gsgh_es090_0.blm, _imcl_es090_0.pos, _1gshk.nvd, _11x35.fdx, _1gsgq.fdx, _1atf8_es090_0.pos, _imcl_Lucene45_0.dvd, _1gsh0_es090_0.tim, _1gsfm.fdt, _1gsgq_es090_0.doc, _6wiw.nvm, _1gsh0.nvm, _11x35_Lucene45_0.dvm, _1gshk_es090_0.pos, _1gsgh_es090_0.doc, _6wiw_es090_0.pos, _1fqze_es090_0.blm, _1gsfm_es090_0.pos, _1g1al.fdt, _1gshk_es090_0.tip, _11x35_Lucene45_0.dvd, _1gsgh_es090_0.pos, _1fqze_es090_0.tim, _1gsfm.si, _1g1al_43.del, _1gsgq.fnm, _11x35.si, _6wiw_Lucene45_0.dvm, _1gsfm_Lucene45_0.dvm, _imcl.si, _1atf8.fdx, _6wiw.fnm, _1atf8.fnm, _1gsfm_1.del, _1gsha_es090_0.doc, _6wiw_g8.del, _1gsha.nvm, _1atf8_oo.del, _1gsgq_es090_0.pos, _6wiw_Lucene45_0.dvd, _1gsha.nvd, _1gsha_es090_0.tip, _1gsh0_Lucene45_0.dvd, _1gsgq_es090_0.tip, _1g1al_es090_0.tip, _1gpnq.fdt, _1gpnq_es090_0.doc, _1gsgh_es090_0.tip, _6wiw.nvd]
at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:114)
... 4 more
Caused by: java.io.FileNotFoundException: segments_wjr
at org.elasticsearch.index.store.Store$StoreDirectory.openInput(Store.java:513)
at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:114)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:329)
at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:416)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:864)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:710)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:412)
at org.elasticsearch.common.lucene.Lucene.readSegmentInfos(Lucene.java:121)
at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:105)
... 4 more

Any help much appreciated.

Thanks,
Mani
