
feat(storage): support gs:// copy/move dests #1416

Merged · 1 commit · Jul 12, 2016
Conversation

zbjornson
Contributor

Fixes #1395 -- supports gs:// strings for specifying destination buckets.

I'm not totally thrilled with what I did on line 391, but I don't see a better way to create a new Bucket instance from File.js.

Testing the string format with GS_URL_REGEXP.exec is both more specific and faster than using destination.indexOf('gs://').
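For illustration, here is a rough sketch of how a regex-based check can parse a destination in one step. The pattern below is an assumption for illustration only; the actual GS_URL_REGEXP in this PR may differ.

```javascript
// Hypothetical stand-in for the PR's GS_URL_REGEXP; the real pattern may differ.
// Captures the bucket name and object path from a "gs://bucket/path" string.
var GS_URL_REGEXP = /^gs:\/\/([a-z0-9_.\-]+)\/(.+)$/;

function parseGsUrl(destination) {
  var match = GS_URL_REGEXP.exec(destination);
  if (match === null) {
    // Not a gs:// URL -- treat it as a plain file name in the same bucket.
    return null;
  }
  return { bucket: match[1], file: match[2] };
}

console.log(parseGsUrl('gs://other-bucket/my-image-copy.png'));
// → { bucket: 'other-bucket', file: 'my-image-copy.png' }
```

Unlike destination.indexOf('gs://'), a failed match immediately rules out the gs:// form, and the capture groups hand back the bucket and file name in a single pass.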

@googlebot googlebot added the cla: yes This human has signed the Contributor License Agreement. label Jul 10, 2016
@zbjornson
Contributor Author

Any preferences for how to address the lint error (line length >80 chars)? I could break before the callback, move the destination string to a variable, or shorten some names -- all of which break the pattern of the other usage examples.

@stephenplusplus
Contributor

Thanks! Will check this out tomorrow.

@stephenplusplus stephenplusplus added the api: storage Issues related to the Cloud Storage API. label Jul 10, 2016
* // If you pass in a string starting with "gs://" for the destination, the
* // file is copied to the other bucket and under the new name provided.
* //-
* file.copy('gs://other-bucket/my-image-copy.png', function(err, copiedFile, apiResponse) {


@stephenplusplus
Contributor

stephenplusplus commented Jul 11, 2016

LGTM. Would you mind adding a system test? You can add one after this, "it should copy an existing file with a GS URL" (or something similar)

If you haven't run the system tests before, we have a guide on setting the right env vars: https://github.com/GoogleCloudPlatform/gcloud-node/blob/master/CONTRIBUTING.md#system-tests

Also, running the test will create a bucket and file under your account (then delete them), so it's understandable if you'd rather not run the tests. If you do, make sure you change the test you're working on to read:

it.only('...', function() {})

The .only will prevent the other tests from running when you run npm run system-test.

@coveralls

coveralls commented Jul 12, 2016


Coverage remained the same at 100.0% when pulling 0e8cf7b on zbjornson:1395-gs-urls into 0a6da60 on GoogleCloudPlatform:master.

@zbjornson
Contributor Author

Thanks for your review.

On the system test, how would you like to handle file/bucket cleanup? You have a special deleteBucket function that addresses eventual consistency on line 69. I could move that up a scope so that it's accessible to the individual tests, or not delete the new bucket and let the after-hook do the cleanup.
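For reference, a deleteBucket helper that retries around eventual consistency might look roughly like this. This is a hypothetical sketch, not the actual helper from the test file, which may differ in retry count and backoff.

```javascript
// Hypothetical sketch: retry bucket.delete() because a just-emptied
// bucket can briefly still appear non-empty (eventual consistency).
function deleteBucket(bucket, callback) {
  var attemptsLeft = 5;

  function tryDelete() {
    bucket.delete(function(err) {
      if (err && attemptsLeft-- > 0) {
        // Back off briefly, then retry the delete.
        setTimeout(tryDelete, 500);
        return;
      }
      callback(err);
    });
  }

  tryDelete();
}
```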

Also, is it unnecessary to assert on the contents of the other bucket?

it.only('should copy to another bucket given a gs:// URL', function(done) {
  var opts = { destination: 'CloudLogo' };
  bucket.upload(FILES.logo.path, opts, function(err, file) {
    assert.ifError(err);

    var otherBucket = storage.bucket(generateName());
    otherBucket.create(function (err) {
      assert.ifError(err);

      var destPath = 'gs://' + otherBucket.name + '/CloudLogoCopy';
      file.copy(destPath, function(err, copiedFile) {
        assert.ifError(err);

        otherBucket.getFiles(function (err, files) { // Necessary? >>
          assert.ifError(err);

          assert.equal(files.length, 1);
          var newFile = files[0];

          assert.equal(newFile.name, "CloudLogoCopy"); // << Necessary?

          async.parallel([
            file.delete.bind(file),
            newFile.delete.bind(newFile),
            otherBucket.delete.bind(otherBucket) // Needs eventual consistency protection
          ], done);
        });

      });
    });
  });
});

@stephenplusplus
Contributor

For bucket cleanup, you can let the after hook take care of it.

And I like that you're checking that the new bucket received the file 👍

I'll just have some linting notes, but LGTM.

destName = destination;
var parsedDestination = GS_URL_REGEXP.exec(destination);
if (parsedDestination !== null && parsedDestination.length === 3) {
  destBucket = this.storage.bucket(parsedDestination[1]);


@zbjornson
Contributor Author

Sorry about all that, my linter apparently decided to retire.

assert.strictEqual(files.length, 1);
var newFile = files[0];

assert.equal(newFile.name, 'CloudLogoCopy');


@stephenplusplus
Contributor

Haha, I'm sorry that I have to even comment on this little stuff. I wish it could just be auto-formatted.

@zbjornson
Contributor Author

Squashed together... managed to swamp Travis though 😢

@stephenplusplus
Contributor

No problem. I canceled the queued, old builds.

@coveralls

coveralls commented Jul 12, 2016


Coverage remained the same at 100.0% when pulling d387283 on zbjornson:1395-gs-urls into 4bce495 on GoogleCloudPlatform:master.

@stephenplusplus
Contributor

Perfect! We intend to get a release out shortly with support for Bigtable (currently held up on #1424), so this will ship with that. Thanks very much :)
