
Getting {"acknowledged":true}, but files don't show up in S3. #257

Closed
darylyu opened this issue Dec 12, 2015 · 10 comments


darylyu commented Dec 12, 2015

We wanted to try using the cloud-aws plugin for backups of our Elasticsearch data. I'm using the plugin for ES 1.7. I only get {"acknowledged":true} when I run this:

curl -XPUT 'http://localhost:9200/_snapshot/ph_es_snapshot?wait_for_completion=true' -d '{
    "type": "s3",
    "settings": {
        "bucket": "projectname_bucket",
        "region": "us-east-1",
        "base_path": "/es_snapshots/projectname",
        "access_key": "<our_s3_access_key>",
        "secret_key": "<our_s3_secret_key>"
    }
}'

ES and the plugin are running inside a Docker container.

When I run: curl localhost:9200/_snapshot/projectname/

I get back the data included in my PUT above.

When I run: curl localhost:9200/_snapshot/project_name/_all

I get: {"snapshots":[]}
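Worth noting: a repository PUT returning {"acknowledged":true} only confirms the settings were stored in the cluster state; nothing is written to S3 at that point. A minimal sketch of how to make the repository actually touch the bucket, assuming the repository name ph_es_snapshot from the PUT above and an example snapshot name (the _verify endpoint is available from ES 1.4 onward):

```shell
# Ask the nodes to write and delete a test file in the bucket; bad credentials,
# a missing bucket, or a blocked network path fail loudly here instead of silently.
curl -XPOST 'http://localhost:9200/_snapshot/ph_es_snapshot/_verify'

# Registering a repository uploads nothing by itself; an actual snapshot must be
# created against it ("snapshot_1" is an example name, not from the original post).
curl -XPUT 'http://localhost:9200/_snapshot/ph_es_snapshot/snapshot_1?wait_for_completion=true'
```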

@dadoonet
Member

Did you mean: curl localhost:9200/_snapshot/ph_es_snapshot/ and curl localhost:9200/_snapshot/ph_es_snapshot/_all?

Any chance you could test it without docker? Can you also check elasticsearch logs?

dadoonet self-assigned this Dec 30, 2015

mfcarbo commented Mar 26, 2016

I have a similar issue. I get the acknowledged message but no files and no errors. I am using the instance's IAM role and am able to copy files to the bucket/prefix. I am using ES 1.6 and the 2.6.0 plugin, trying to back up so I can upgrade.

PUT /_snapshot/backup
{
  "type": "s3",
  "settings": {
    "bucket": "xxxx-xxxx-xxxx-appdata",
    "region": "us-east-1",
    "base_path": "ELASTIC",
    "server_side_encryption": "true"
    }
}

GET _snapshot/backup/_all
{
   "snapshots": []
}

elasticsearch log:

[2016-03-26 07:28:24,679][INFO ][repositories             ] [ip-xxxxx-development_svc] put repository [backup]
[2016-03-26 07:33:14,706][INFO ][repositories             ] [ip-xxxxxxxx-development_svc] update repository [backup]
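The log above shows only repository registration and update events; a snapshot that actually uploads files would also log snapshot start and completion messages. A minimal sketch, assuming the repository name "backup" from the PUT above and a hypothetical snapshot name:

```shell
# Creating a named snapshot is what triggers the upload to S3; the repository
# PUT alone only stores settings in the cluster state.
curl -XPUT 'localhost:9200/_snapshot/backup/snapshot_1?wait_for_completion=true'

# Then list what the repository contains; it should no longer be empty.
curl 'localhost:9200/_snapshot/backup/_all'
```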


xuzha commented Mar 26, 2016

Did you create the bucket before creating the repo?


mfcarbo commented Mar 26, 2016

Yes, another team creates the buckets.


mfcarbo commented Mar 31, 2016

I have added this to the YAML, but there are still no files in S3 and no errors.

cloud:
    aws:
        s3:
            protocol: https
            proxy:
                port: 3128
                host: awsproxy
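When the request is acknowledged but nothing reaches S3 and nothing is logged, raising the AWS SDK log level can surface silent proxy or credential failures. A sketch for the ES 1.x logging.yml; the exact logger names are an assumption and may differ by plugin version:

```yaml
# logging.yml -- surface S3 client activity that is otherwise swallowed
logger:
  com.amazonaws: DEBUG
  cloud.aws: DEBUG
```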


dadoonet commented Jul 4, 2016

@mfcarbo @darylyu Did you manage to solve your issue?


darylyu commented Jul 7, 2016

Nope. Ended up using a different tool.


dadoonet commented Jul 7, 2016

@darylyu Did you back up your shards on S3 "manually", like with rsync or such? I'm curious about what you used.


mfcarbo commented Jul 7, 2016

@dadoonet I have not been able to solve it. In the short term I may need to set up an NFS mount and do a filesystem backup, but this is not ideal in AWS. I am playing around with the v5 alpha next sprint and will see if the backup works with it.

dadoonet removed their assignment Sep 5, 2016

wtfiwtz commented Oct 11, 2016

Looks like you need a snapshot name! e.g. PUT /_snapshot/s3_repository/snapshot1

After that, the upload sprang to life...

Using Node.js on Lambda:

'use strict';

const http = require('http');

/**
 * Pass the data to send as `event.data`, and the request options as
 * `event.options`. For more information see the HTTPS module documentation
 * at https://nodejs.org/api/https.html.
 *
 * Will succeed with the response body.
 */
exports.handler = (event, context, callback) => {
    const req = http.request(event.options, (res) => {
        let body = '';
        console.log('Status:', res.statusCode);
        console.log('Headers:', JSON.stringify(res.headers));
        res.setEncoding('utf8');
        res.on('data', (chunk) => body += chunk);
        res.on('end', () => {
            console.log('Successfully processed HTTP response');
            // If we know it's JSON, parse it
            if (res.headers['content-type'] === 'application/json') {
                body = JSON.parse(body);
            }
            callback(null, body);
        });
    });
    req.on('error', callback);
    req.write(JSON.stringify(event.data));
    req.end();
};

... with this JSON:

{
  "options": {
    "host": "elastic-load-balancer.ap-southeast-2.elb.amazonaws.com",
    "port": 9200,
    "path": "/_snapshot/s3_repository/snapshot1",
    "method": "PUT"
  },
  "data": {
    "type": "s3",
    "settings": {
      "bucket": "production-es-snapshots",
      "region": "ap-southeast-2",
      "protocol": "https",
      "chunk_size": "100m"
    }
  }
}

This document on logging was also useful:

https://www.elastic.co/blog/elasticsearch-logging-secrets

Cheers,
Nigel (@wtfiwtz)

darylyu closed this as not planned (won't fix, can't repro, duplicate, stale) Aug 3, 2022