
AWS S3 plugin also works with Google Cloud Storage (direct uploading) #460

Closed
ogtfaber opened this issue Dec 20, 2017 · 21 comments

@ogtfaber
Contributor

No description provided.

@goto-bus-stop
Contributor

Not that I'm aware of, at least! We're not working on it at Transloadit. Would you like to implement a plugin for that, perhaps? :D

@arturi
Contributor

arturi commented Dec 20, 2017

Hi! We are working on docs on how to make plugins, and since Uppy is flexible like that, it should be fairly easy to implement, so if you’d like to give it a try, we are here to help.

@ogtfaber
Contributor Author

OK, we've implemented the URL signing for Google. It seems the AwsS3 plugin works perfectly with Google's resumable uploads. Maybe rename the plugin, or add this to the documentation?
Thanks for making this awesome package!

@arturi
Contributor

arturi commented Dec 23, 2017

Thank you! Added this to todo, we’ll think about naming 👌 Could you send some of your usage/signing our way? If there’s something we could use in docs, for example. Thanks!

@arturi arturi changed the title Is anyone working on Google Cloud Storage support (for direct uploading) ? AWS S3 plugin also works with Google Cloud Storage (direct uploading) Dec 23, 2017
@goto-bus-stop
Contributor

I don't think we need to rename it, since Google Cloud Storage probably just copied the API from S3 to make it easier to switch between the two. Adding an example to the docs seems great, though. We could do one for DigitalOcean's new object storage thing too, which also mimics S3's API: https://www.digitalocean.com/products/spaces/
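As a sketch of what that would look like with uppy-server's env-based S3 options (the same ones used later in this thread), pointed at a DigitalOcean Spaces region endpoint; all values here are placeholders:

```shell
# Placeholder credentials and bucket; the endpoint selects the
# S3-compatible backend (here DigitalOcean Spaces, nyc3 region).
export UPPYSERVER_AWS_KEY="SPACES_ACCESS_KEY"
export UPPYSERVER_AWS_SECRET="SPACES_SECRET_KEY"
export UPPYSERVER_AWS_BUCKET="my-space-name"
export UPPYSERVER_AWS_ENDPOINT="https://nyc3.digitaloceanspaces.com"
```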

@ghost

ghost commented Feb 12, 2018

@ogtfaber, any news on how to do this with Google?

@ghost

ghost commented Feb 12, 2018

@ogtfaber I have a problem with the signature:

<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>
The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method.
</Message>
<StringToSign>
PUT
image/jpeg
1518399402
/mybucket.appspot.com/7d5e4aad1e3a737fb8d2c59571fdb980.jpg
</StringToSign>
</Error>

googleapis/google-cloud-ruby#1964

@rajascript

@johnunclesam Did you find any solution for this signature problem?

@jimyaghi

jimyaghi commented Jun 10, 2018

I've been banging my head against the wall for about a week trying to make Uppy and uppy-server work with Google Cloud Storage, and it won't. The only way I've managed it is to do a native POST upload with my own custom signing server. Even this breaks with the AwsS3 plugin, because GCS returns the wrong content-type.

I'm really trying not to have to write a whole bunch of custom code to enable large uploads on the front end. My platform is all based on GCS. So if anyone has got uppy-server to work via interoperability with Google Cloud Storage, please share the steps.

Note: I've already done the CORS and interoperability steps while messing around with FineUploader.

Here's what I'm seeing in my most recent attempts with the AwsS3 plugin + uppy-server + GCS interoperability:

Request

Request URL: https://storage.googleapis.com/_[BUCKET_NAME]_
Request Method: POST

Request Body

------WebKitFormBoundaryczUeAxXc3kTN0hfA
Content-Disposition: form-data; name="acl"

public-read
------WebKitFormBoundaryczUeAxXc3kTN0hfA
Content-Disposition: form-data; name="key"

blursample.png
------WebKitFormBoundaryczUeAxXc3kTN0hfA
Content-Disposition: form-data; name="success_action_status"

201
------WebKitFormBoundaryczUeAxXc3kTN0hfA
Content-Disposition: form-data; name="content-type"

image/png
------WebKitFormBoundaryczUeAxXc3kTN0hfA
Content-Disposition: form-data; name="bucket"

_[BUCKET_NAME]_
------WebKitFormBoundaryczUeAxXc3kTN0hfA
Content-Disposition: form-data; name="X-Amz-Algorithm"

AWS4-HMAC-SHA256
------WebKitFormBoundaryczUeAxXc3kTN0hfA
Content-Disposition: form-data; name="X-Amz-Credential"

GOOGRRXSJZVFQMEGWXIM36VP/20180610/us-east-1/s3/aws4_request
------WebKitFormBoundaryczUeAxXc3kTN0hfA
Content-Disposition: form-data; name="X-Amz-Date"

20180610T165659Z
------WebKitFormBoundaryczUeAxXc3kTN0hfA
Content-Disposition: form-data; name="Policy"

eyJleHBpcmF0aW9uIjoiMjAxOC0wNi0xMFQxNzowMTo1OVoiLCJjb25kaXRpb25zIjpbeyJhY2wiOiJwdWJsaWMtcmVhZCJ9LHsia2V5IjoiYmx1cnNhbXBsZS5wbmcifSx7InN1Y2Nlc3NfYWN0aW9uX3N0YXR1cyI6IjIwMSJ9LHsiY29udGVudC10eXBlIjoiaW1hZ2UvcG5nIn0seyJidWNrZXQiOiJ5bG1lbWJfYXR0YWNobWVudHMifSx7IlgtQW16LUFsZ29yaXRobSI6IkFXUzQtSE1BQy1TSEEyNTYifSx7IlgtQW16LUNyZWRlbnRpYWwiOiJHT09HUlJYU0paVkZRTUVHV1hJTTM2VlAvMjAxODA2MTAvdXMtZWFzdC0xL3MzL2F3czRfcmVxdWVzdCJ9LHsiWC1BbXotRGF0ZSI6IjIwMTgwNjEwVDE2NTY1OVoifV19
------WebKitFormBoundaryczUeAxXc3kTN0hfA
Content-Disposition: form-data; name="X-Amz-Signature"

dbad2160f44e22fc97c3a64b488c3f231f17d3ef3117d3ced934fc6503be4f61
------WebKitFormBoundaryczUeAxXc3kTN0hfA
Content-Disposition: form-data; name="file"; filename="blursample.png"
Content-Type: image/png


------WebKitFormBoundaryczUeAxXc3kTN0hfA--

Response (from GCS)

<Error>
<Code>
AccessDenied
</Code>
<Message>
Access denied.
</Message>
<Details>
Anonymous caller does not have storage.objects.create access to ylmemb_attachments.
</Details>
</Error>

It doesn't look like it has any idea about the AWS-style POST form.
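When a policy-based POST fails like this, one quick check is to base64-decode the Policy form field and compare each condition with the form fields actually sent (and the bucket with the upload URL). A small sketch with an illustrative policy, shaped like the one above but with placeholder values:

```javascript
// Illustrative policy document, shaped like the one uppy-server produces.
const policy = {
  expiration: '2018-06-10T17:01:59Z',
  conditions: [
    { acl: 'public-read' },
    { key: 'blursample.png' },
    { success_action_status: '201' },
    { 'content-type': 'image/png' },
    { bucket: 'my-bucket' },
  ],
};
const encoded = Buffer.from(JSON.stringify(policy)).toString('base64');

// Decode the Policy field back to JSON, as you would with a captured request.
const decoded = JSON.parse(Buffer.from(encoded, 'base64').toString('utf8'));
const conditions = Object.assign({}, ...decoded.conditions);
// Every condition must match the corresponding multipart form field, and the
// bucket must match the upload URL, or the storage backend rejects the POST.
```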

Here's what my env config looks like for uppy-server:

export NODE_ENV="${NODE_ENV:-development}"
export DEPLOY_ENV="${DEPLOY_ENV:-production}"
export UPPYSERVER_PORT=3020
export UPPYSERVER_DOMAIN="localhost"
export UPPYSERVER_SELF_ENDPOINT="localhost:3020"

export UPPYSERVER_PROTOCOL="http"
export UPPYSERVER_DATADIR="/tmp"
export UPPYSERVER_SECRET="secret"

export UPPYSERVER_AWS_KEY="GOOG*****************"   # these are filled out in real life
export UPPYSERVER_AWS_SECRET="yJuRL**************+c"
export UPPYSERVER_AWS_BUCKET="_[BUCKET_NAME]_"
export UPPYSERVER_AWS_ENDPOINT="https://storage.googleapis.com/"

Here's how uppy is instantiated in React code:

componentWillMount() {
  this.uppy = new Uppy({
    autoProceed: true,
    id: "uppy"
  }).use(AwsS3, {
    host: "http://localhost:3020"
  }).run();
}

Not trying to do anything over-the-top crazy here, just trying to get the basics working with GCS. What am I missing?

~j

@danielmahon

@jimyaghi did you ever get this working?

@jimyaghi

No, unfortunately I didn't manage it, and I switched to another library, which also gave me trouble, but I think I got that one working. It's been a while, though, and I can't remember what the other library was. It looks like support for GCS in upload libraries relies very much on its ability to emulate S3, the more popular of the two.

@danielmahon

danielmahon commented Nov 30, 2018

Got this to work!

Needed to add responseHeader to the cors.json config, which is used to set CORS on the bucket via gsutil:

[
  {
    "origin": ["https://localhost:5000"],
    "method": ["GET", "PUT"],
    "responseHeader": ["Content-Type"],
    "maxAgeSeconds": 3000
  },
  {
    "origin": ["*"],
    "method": ["GET"],
    "maxAgeSeconds": 3000
  }
]
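Assuming the JSON above is saved as cors.json, it can be applied and verified with gsutil (the bucket name is a placeholder):

```shell
# Apply the CORS config to the bucket, then read it back to confirm.
gsutil cors set cors.json gs://my-uppy-bucket
gsutil cors get gs://my-uppy-bucket
```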

I'm currently signing my own URLs with a "custom" companion (below), but I will try this again with the AWS companion and see if it works...

uppy-companion-google.js

import { Storage } from '@google-cloud/storage';

// Check for required env variables
if (!process.env.GOOGLE_APPLICATION_CREDENTIALS) {
  throw new Error(
    'Missing Google Cloud credentials, please set the GOOGLE_APPLICATION_CREDENTIALS environment variable to your credentials.json location'
  );
}
if (!process.env.COMPANION_GOOGLE_BUCKET) {
  throw new Error(
    'Missing bucket, please set the COMPANION_GOOGLE_BUCKET environment variable'
  );
}

// Create new storage client
const storage = new Storage();

// Express middleware to return a signed url
const getSignedUrl = (bucket, options = {}) => ({ body, headers }, res) => {
  // Get bucket reference from env variable
  const myBucket = storage.bucket(
    bucket || process.env.COMPANION_GOOGLE_BUCKET
  );
  // Get file reference
  const file = myBucket.file(body.filename);

  // Merge config with default
  const config = {
    action: 'write',
    contentType: body.contentType,
    expires: Date.now() + 1000 * 60 * 60, // 1 hour from now
    ...options,
  };

  //-
  // Generate a URL to allow write permissions. This means anyone with this
  // URL can send a PUT request with new data that will overwrite the file.
  //-

  file.getSignedUrl(config).then(function(data) {
    res.json({
      method: 'put',
      url: data[0],
      fields: {},
      headers: { 'content-type': body.contentType },
    });
  });
};

export { getSignedUrl };

express app

const { getSignedUrl } = require('./uppy-companion-google');
...
app.use('/getSignedUrl', cors(), bodyParser.json(), getSignedUrl());
...

react app

...
this.uppy.use(AwsS3, {
      limit: 1,
      timeout: 1000 * 60 * 60,
      getUploadParameters(file) {
        // Send a request to our signing endpoint.
        return fetch(process.env.REACT_APP_GRAPHQL_ENDPOINT + '/getSignedUrl', {
          method: 'post',
          // Send and receive JSON.
          headers: {
            accept: 'application/json',
            'content-type': 'application/json',
          },
          body: JSON.stringify({
            filename: file.name,
            contentType: file.type,
          }),
        }).then(response => response.json());
      },
    });
...


@hoangvubrvt

Thank you @danielmahon
You saved my day.

@rajivchodisetti

@danielmahon we are on GCP; do resumable uploads to GCS also work? We also need to upload the data via a proxy (https_proxy/http_proxy); does that work?

@kvz
Member

kvz commented Mar 11, 2019

/cc @ifedapoolarewaju

@rajivchodisetti

@danielmahon Can you please confirm whether you were referring to GCS multipart upload, rather than uploading to GCS in one shot.

If it's GCS multipart upload, I will go ahead and give it a try, as I have a requirement to upload files up to 10 GB in size through the browser directly to GCS.

@danielmahon

@rajivchodisetti I haven't personally used it with resumable uploads yet, so I could be wrong, but I don't think it would be a problem, as the Google Cloud client supports it; you would just need to make sure to set up Uppy properly as well. You could probably also use the Tus version. I am currently using the AWS plugin for image/media uploads to Google, and the Tus plugin for video uploads to Vimeo.

@kvz
Member

kvz commented May 22, 2019

Since it's been reported to work, I'll close this issue; feel free to re-open, however!

@ahmadissa
Contributor

Thanks mate @danielmahon

@Johnrobmiller

Johnrobmiller commented Sep 13, 2021

I don't think we need to rename it, since Google Cloud Storage probably just copied the API from S3 to make it easier to switch between the two. Adding an example to the docs seems great, though. We could do one for DigitalOcean's new object storage thing too, which also mimics S3's API: https://www.digitalocean.com/products/spaces/

Public names need to be designed for users (and by "users" I mean the developers who code with Uppy). Will the user know anything about any of these technical details when they first read the name? Probably not. Therefore, technical details like this are nothing more than distractions and red herrings.

From the user's point of view, if something has a name that communicates "this-is-for-thing-A", then the user can and should assume that it is not for thing B, even if this assumption turns out to be wrong. Really, if it works for both "A" and "B", then the name should communicate "this-is-both-for-thing-A-and-B".

With this said, the name should be changed.

When it comes to design problems, we should think like designers, which requires empathy and emotional intelligence. We should definitely not be thinking like engineers, even if that's what we all are.

@SiestaMadokaist

is there any reference on how to do multipart upload to gcs?
