Snapshot restore failed after a successful snapshot with ES 5.1.1 #22513

Closed
JeffreyZZ opened this issue Jan 10, 2017 · 1 comment · Fixed by #22577
Labels: >bug, :Distributed/Snapshot/Restore

Comments

JeffreyZZ commented Jan 10, 2017

Repro Steps

  1. Download Elasticsearch 5.1.1
  2. Install the Azure repository plugin with the command .\elasticsearch-plugin.bat install repository-azure
  3. Add the following Azure Storage settings to elasticsearch.yml
    cloud.azure.storage.my_account1.account: xxx
    cloud.azure.storage.my_account1.key: xxx
    cloud.azure.storage.my_account1.default: true
  4. Start Elasticsearch with elasticsearch.bat
  5. Index a single document like the following
    PUT twitter/tweet/1
    {
      "user" : "kimchy",
      "post_date" : "2009-11-15T14:12:12",
      "message" : "trying out Elasticsearch"
    }
  6. Create a repository
    PUT /_snapshot/test-20170111
    {
      "type": "azure",
      "settings": {
        "account": "my_account1",
        "container": "test-20170111"
      }
    }
  7. Take a snapshot
    PUT /_snapshot/test-20170111/backup01?wait_for_completion=true
  8. After the snapshot completes, delete all the indices on the cluster
  9. Restore the snapshot
    POST /_snapshot/test-20170111/backup01/_restore?wait_for_completion=true

Expected
Restore succeeds and the cluster has one index named twitter, which contains one document.

Actual
Restore failed with the following error messages in the log.
[2017-01-09T15:56:51,920][WARN ][o.e.i.c.IndicesClusterStateService] [MRed28V] [[twitter][2]] marking and sending shard failed due to [failed recovery]
org.elasticsearch.indices.recovery.RecoveryFailedException: [twitter][2]: Recovery failed on {MRed28V}{MRed28V9SVCOhAKpfxh01Q}{qty_A70hSGisRYnY2jT4VA}{127.0.0.1}{127.0.0.1:9300}
at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1512) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:458) [elasticsearch-5.1.1.jar:5.1.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
Caused by: org.elasticsearch.index.shard.IndexShardRecoveryException: failed recovery
at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:300) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:233) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1244) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1508) ~[elasticsearch-5.1.1.jar:5.1.1]
... 4 more
Caused by: org.elasticsearch.index.snapshots.IndexShardRestoreFailedException: restore failed
at org.elasticsearch.index.shard.StoreRecovery.restore(StoreRecovery.java:406) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromRepository$4(StoreRecovery.java:235) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:258) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:233) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1244) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1508) ~[elasticsearch-5.1.1.jar:5.1.1]
... 4 more
Caused by: org.elasticsearch.index.snapshots.IndexShardRestoreFailedException: failed to restore snapshot [backup05/Zl8a5zPGSR6NsKbHIegyDw]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.restoreShard(BlobStoreRepository.java:914) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.restore(StoreRecovery.java:401) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromRepository$4(StoreRecovery.java:235) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:258) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:233) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1244) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1508) ~[elasticsearch-5.1.1.jar:5.1.1]
... 4 more
Caused by: org.elasticsearch.index.snapshots.IndexShardRestoreFailedException: Failed to recover index
at org.elasticsearch.repositories.blobstore.BlobStoreRepository$RestoreContext.restore(BlobStoreRepository.java:1600) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.restoreShard(BlobStoreRepository.java:912) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.restore(StoreRecovery.java:401) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromRepository$4(StoreRecovery.java:235) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:258) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:233) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1244) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1508) ~[elasticsearch-5.1.1.jar:5.1.1]
... 4 more
Caused by: org.apache.lucene.index.CorruptIndexException: verification failed (hardware problem?) : expected=1ck4qa4 actual=null footer=null writtenLength=0 expectedLength=130 (resource=name [segments_1], length [130], checksum [1ck4qa4], writtenBy [5.0.0]) (resource=VerifyingIndexOutput(segments_1))
at org.elasticsearch.index.store.Store$LuceneVerifyingIndexOutput.verify(Store.java:1120) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.store.Store.verify(Store.java:450) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository$RestoreContext.restoreFile(BlobStoreRepository.java:1662) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository$RestoreContext.restore(BlobStoreRepository.java:1597) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.restoreShard(BlobStoreRepository.java:912) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.restore(StoreRecovery.java:401) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromRepository$4(StoreRecovery.java:235) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:258) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:233) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1244) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1508) ~[elasticsearch-5.1.1.jar:5.1.1]
... 4 more
[2017-01-09T15:56:51,926][WARN ][o.e.c.a.s.ShardStateAction] [MRed28V] [twitter][2] received shard failed for shard id [[twitter][2]], allocation id [Ac1iHjlcSjuT61MdlaIz5w], primary term [0], message [failed recovery], failure [RecoveryFailedException[[twitter][2]: Recovery failed on {MRed28V}{MRed28V9SVCOhAKpfxh01Q}{qty_A70hSGisRYnY2jT4VA}{127.0.0.1}{127.0.0.1:9300}]; nested: IndexShardRecoveryException[failed recovery]; nested: IndexShardRestoreFailedException[restore failed]; nested: IndexShardRestoreFailedException[failed to restore snapshot [backup05/Zl8a5zPGSR6NsKbHIegyDw]]; nested: IndexShardRestoreFailedException[Failed to recover index]; nested: CorruptIndexException[verification failed (hardware problem?) : expected=1ck4qa4 actual=null footer=null writtenLength=0 expectedLength=130 (resource=name [segments_1], length [130], checksum [1ck4qa4], writtenBy [5.0.0]) (resource=VerifyingIndexOutput(segments_1))]; ]
org.elasticsearch.indices.recovery.RecoveryFailedException: [twitter][2]: Recovery failed on {MRed28V}{MRed28V9SVCOhAKpfxh01Q}{qty_A70hSGisRYnY2jT4VA}{127.0.0.1}{127.0.0.1:9300}
at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1512) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:458) [elasticsearch-5.1.1.jar:5.1.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
Caused by: org.elasticsearch.index.shard.IndexShardRecoveryException: failed recovery
at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:300) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:233) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1244) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1508) ~[elasticsearch-5.1.1.jar:5.1.1]
... 4 more
Caused by: org.elasticsearch.index.snapshots.IndexShardRestoreFailedException: restore failed
at org.elasticsearch.index.shard.StoreRecovery.restore(StoreRecovery.java:406) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromRepository$4(StoreRecovery.java:235) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:258) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:233) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1244) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1508) ~[elasticsearch-5.1.1.jar:5.1.1]
... 4 more
Caused by: org.elasticsearch.index.snapshots.IndexShardRestoreFailedException: failed to restore snapshot [backup05/Zl8a5zPGSR6NsKbHIegyDw]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.restoreShard(BlobStoreRepository.java:914) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.restore(StoreRecovery.java:401) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromRepository$4(StoreRecovery.java:235) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:258) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:233) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1244) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1508) ~[elasticsearch-5.1.1.jar:5.1.1]
... 4 more
Caused by: org.elasticsearch.index.snapshots.IndexShardRestoreFailedException: Failed to recover index
at org.elasticsearch.repositories.blobstore.BlobStoreRepository$RestoreContext.restore(BlobStoreRepository.java:1600) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.restoreShard(BlobStoreRepository.java:912) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.restore(StoreRecovery.java:401) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromRepository$4(StoreRecovery.java:235) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:258) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:233) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1244) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1508) ~[elasticsearch-5.1.1.jar:5.1.1]
... 4 more
Caused by: org.apache.lucene.index.CorruptIndexException: verification failed (hardware problem?) : expected=1ck4qa4 actual=null footer=null writtenLength=0 expectedLength=130 (resource=name [segments_1], length [130], checksum [1ck4qa4], writtenBy [5.0.0]) (resource=VerifyingIndexOutput(segments_1))
at org.elasticsearch.index.store.Store$LuceneVerifyingIndexOutput.verify(Store.java:1120) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.store.Store.verify(Store.java:450) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository$RestoreContext.restoreFile(BlobStoreRepository.java:1662) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository$RestoreContext.restore(BlobStoreRepository.java:1597) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.restoreShard(BlobStoreRepository.java:912) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.restore(StoreRecovery.java:401) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromRepository$4(StoreRecovery.java:235) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:258) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:233) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1244) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1508) ~[elasticsearch-5.1.1.jar:5.1.1]
... 4 more
[2017-01-09T15:56:51,943][INFO ][o.e.c.r.a.AllocationService] [MRed28V] Cluster health status changed from [YELLOW] to [RED] (reason: [shards failed [[twitter][2]] ...]).
[2017-01-09T15:56:52,031][WARN ][o.e.i.c.IndicesClusterStateService] [MRed28V] [[twitter][3]] marking and sending shard failed due to [failed recovery]
org.elasticsearch.indices.recovery.RecoveryFailedException: [twitter][3]: Recovery failed on {MRed28V}{MRed28V9SVCOhAKpfxh01Q}{qty_A70hSGisRYnY2jT4VA}{127.0.0.1}{127.0.0.1:9300}
at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1512) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:458) [elasticsearch-5.1.1.jar:5.1.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
Caused by: org.elasticsearch.index.shard.IndexShardRecoveryException: failed recovery
at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:300) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:233) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1244) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1508) ~[elasticsearch-5.1.1.jar:5.1.1]
... 4 more
Caused by: org.elasticsearch.index.snapshots.IndexShardRestoreFailedException: restore failed
at org.elasticsearch.index.shard.StoreRecovery.restore(StoreRecovery.java:406) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromRepository$4(StoreRecovery.java:235) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:258) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:233) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1244) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1508) ~[elasticsearch-5.1.1.jar:5.1.1]
... 4 more
Caused by: org.elasticsearch.index.snapshots.IndexShardRestoreFailedException: failed to restore snapshot [backup05/Zl8a5zPGSR6NsKbHIegyDw]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.restoreShard(BlobStoreRepository.java:914) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.restore(StoreRecovery.java:401) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromRepository$4(StoreRecovery.java:235) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:258) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:233) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1244) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1508) ~[elasticsearch-5.1.1.jar:5.1.1]
... 4 more
Caused by: org.elasticsearch.index.snapshots.IndexShardRestoreFailedException: Failed to recover index
at org.elasticsearch.repositories.blobstore.BlobStoreRepository$RestoreContext.restore(BlobStoreRepository.java:1600) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.restoreShard(BlobStoreRepository.java:912) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.restore(StoreRecovery.java:401) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromRepository$4(StoreRecovery.java:235) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:258) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:233) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1244) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1508) ~[elasticsearch-5.1.1.jar:5.1.1]
... 4 more
Caused by: org.apache.lucene.index.CorruptIndexException: verification failed (hardware problem?) : expected=jll51r actual=null footer=null writtenLength=0 expectedLength=405 (resource=name [_0.cfe], length [405], checksum [jll51r], writtenBy [6.3.0]) (resource=VerifyingIndexOutput(_0.cfe))
at org.elasticsearch.index.store.Store$LuceneVerifyingIndexOutput.verify(Store.java:1120) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.store.Store.verify(Store.java:450) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository$RestoreContext.restoreFile(BlobStoreRepository.java:1662) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository$RestoreContext.restore(BlobStoreRepository.java:1597) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.restoreShard(BlobStoreRepository.java:912) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.restore(StoreRecovery.java:401) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromRepository$4(StoreRecovery.java:235) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:258) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.StoreRecovery.recoverFromRepository(StoreRecovery.java:233) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.restoreFromRepository(IndexShard.java:1244) ~[elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1508) ~[elasticsearch-5.1.1.jar:5.1.1]
... 4 more


ppf2 commented Jan 12, 2017

Linking PR: #22577

The workaround is to explicitly set the chunk_size setting to 64m, matching the intended default chunk_size value, e.g.:

PUT _snapshot/my_azure_repo
{
    "type": "azure",
    "settings": {
        "container": "container_name",
        "base_path": "some_path",
        "chunk_size": "64m"
    }
}

abeyad pushed a commit that referenced this issue Jan 12, 2017
Before, the default chunk size for Azure repositories was
-1 bytes, which meant that if the chunk_size was not set on
the Azure repository, nor as a node setting, then no data
files would get written as part of the snapshot (because
the BlobStoreRepository's PartSliceStream does not know
how to process negative chunk sizes).

This commit fixes the default chunk size for Azure repositories
to be the same as the maximum chunk size.  This commit also
adds tests for both the Azure and Google Cloud repositories to
ensure only valid chunk sizes can be set.

Closes #22513
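
For illustration, here is a minimal, self-contained Java sketch of the failure mode the commit message describes. It is not Elasticsearch's actual PartSliceStream or BlobStoreRepository code; the class, method names, and arithmetic are hypothetical, chosen only to show why a default chunk size of -1 bytes results in no data being written, while an explicit positive chunk size (such as the 64m workaround above) writes the full file.

// Hypothetical sketch; NOT Elasticsearch source. Names and arithmetic are illustrative only.
public class NegativeChunkSizeDemo {

    // Number of parts a file of fileLength bytes is split into for a given chunk size.
    static long numberOfParts(long fileLength, long chunkSize) {
        if (chunkSize <= 0) {
            // With the old default of -1 bytes, the part count comes out non-positive.
            return fileLength / chunkSize; // e.g. 130 / -1 = -130
        }
        return (fileLength + chunkSize - 1) / chunkSize; // ceiling division
    }

    // Total bytes the snapshot would upload for a file of the given length.
    static long bytesWritten(long fileLength, long chunkSize) {
        long written = 0;
        // A non-positive part count means this per-part copy loop never runs,
        // so the blob in the repository stays empty.
        for (long part = 0; part < numberOfParts(fileLength, chunkSize); part++) {
            written += Math.min(chunkSize, fileLength - written);
        }
        return written;
    }

    public static void main(String[] args) {
        // segments_1 in the log above is 130 bytes long.
        System.out.println(bytesWritten(130, -1));                // 0   (empty blob)
        System.out.println(bytesWritten(130, 64L * 1024 * 1024)); // 130 (whole file written)
    }
}

Running the sketch prints 0 for the buggy -1 default and 130 once a positive chunk size is used, which mirrors the restore verification failure above (writtenLength=0 against expectedLength=130, actual=null checksum).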
@clintongormley added the :Distributed/Snapshot/Restore label and removed the :Plugin Cloud Azure label on Feb 14, 2018