S3 Provider fails to upload #1618

Closed
brandonhilkert opened this issue Jun 5, 2017 · 27 comments

@brandonhilkert

  • Version: 18.3.0
  • Electron-Updater: 2.0.0
  • Target: OSX/Windows

I'm using the S3 provider. I can confirm my ENV vars are set properly by successfully uploading via the command line using the aws tool. However, when I go to publish I get:

Error: Cannot cleanup:

Error #1 --------------------------------------------------------------------------------
AccessDenied: Access Denied
    at Request.extractError (/Users/bhilkert/Dropbox/code/bark-desktop/node_modules/electron-publisher-s3/node_modules/aws-sdk/lib/services/s3.js:539:35)
    at Request.callListeners (/Users/bhilkert/Dropbox/code/bark-desktop/node_modules/electron-publisher-s3/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
    at Request.emit (/Users/bhilkert/Dropbox/code/bark-desktop/node_modules/electron-publisher-s3/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
    at Request.emit (/Users/bhilkert/Dropbox/code/bark-desktop/node_modules/electron-publisher-s3/node_modules/aws-sdk/lib/request.js:682:14)
    at Request.transition (/Users/bhilkert/Dropbox/code/bark-desktop/node_modules/electron-publisher-s3/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/Users/bhilkert/Dropbox/code/bark-desktop/node_modules/electron-publisher-s3/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /Users/bhilkert/Dropbox/code/bark-desktop/node_modules/electron-publisher-s3/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/Users/bhilkert/Dropbox/code/bark-desktop/node_modules/electron-publisher-s3/node_modules/aws-sdk/lib/request.js:38:9)
    at Request.<anonymous> (/Users/bhilkert/Dropbox/code/bark-desktop/node_modules/electron-publisher-s3/node_modules/aws-sdk/lib/request.js:684:12)
    at Request.callListeners (/Users/bhilkert/Dropbox/code/bark-desktop/node_modules/electron-publisher-s3/node_modules/aws-sdk/lib/sequential_executor.js:115:18)
    at Request.emit (/Users/bhilkert/Dropbox/code/bark-desktop/node_modules/electron-publisher-s3/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
    at Request.emit (/Users/bhilkert/Dropbox/code/bark-desktop/node_modules/electron-publisher-s3/node_modules/aws-sdk/lib/request.js:682:14)
    at Request.transition (/Users/bhilkert/Dropbox/code/bark-desktop/node_modules/electron-publisher-s3/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/Users/bhilkert/Dropbox/code/bark-desktop/node_modules/electron-publisher-s3/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /Users/bhilkert/Dropbox/code/bark-desktop/node_modules/electron-publisher-s3/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/Users/bhilkert/Dropbox/code/bark-desktop/node_modules/electron-publisher-s3/node_modules/aws-sdk/lib/request.js:38:9)
    at Request.<anonymous> (/Users/bhilkert/Dropbox/code/bark-desktop/node_modules/electron-publisher-s3/node_modules/aws-sdk/lib/request.js:684:12)
    at Request.callListeners (/Users/bhilkert/Dropbox/code/bark-desktop/node_modules/electron-publisher-s3/node_modules/aws-sdk/lib/sequential_executor.js:115:18)
    at callNextListener (/Users/bhilkert/Dropbox/code/bark-desktop/node_modules/electron-publisher-s3/node_modules/aws-sdk/lib/sequential_executor.js:95:12)
    at IncomingMessage.onEnd (/Users/bhilkert/Dropbox/code/bark-desktop/node_modules/electron-publisher-s3/node_modules/aws-sdk/lib/event_listeners.js:256:13)
    at emitNone (events.js:91:20)
    at IncomingMessage.emit (events.js:188:7)
From previous event:
    at Request.promise (/Users/bhilkert/Dropbox/code/bark-desktop/node_modules/electron-publisher-s3/node_modules/aws-sdk/lib/request.js:776:12)
    at /Users/bhilkert/Dropbox/code/bark-desktop/node_modules/electron-publisher-s3/src/uploader.ts:86:153
    at Timeout.tryRun [as _onTimeout] (/Users/bhilkert/Dropbox/code/bark-desktop/node_modules/electron-publisher-s3/src/uploader.ts:198:9)
    at ontimeout (timers.js:386:14)
    at tryOnTimeout (timers.js:250:5)
    at Timer.listOnTimeout (timers.js:214:5)
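
For reference, an S3 publish block in package.json looks something like the following sketch; the bucket name and region here are placeholders, not values taken from this report:

"publish": {
    "provider": "s3",
    "bucket": "your-release-bucket",
    "region": "us-east-1"
}
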
@develar
Member

develar commented Jun 5, 2017

Could you please specify the version of electron-publisher-s3? And try using the latest electron-builder?

@brandonhilkert
Author

I originally missed the Cannot cleanup line, which made me realize it might need more than just PutObject permissions. I allowed all actions and it worked. Can you confirm the minimal permissions for the S3 provider?

electron-publisher-s3: 18.5.0

@brandonhilkert
Author

Closing because the increased permissions seem to have resolved it.

@develar
Member

develar commented Jun 11, 2017

Reopened. The question "Can you confirm the minimal permissions for the S3 provider?" is not answered; we must do something smart to save users' time.

@develar develar reopened this Jun 11, 2017
@kasprownik

Same problem here, what are the minimal working permissions?

@dsagal

dsagal commented Jul 12, 2017

In case this helps anyone, I had a similar symptom that was caused by using a non-default AWS profile. The aws tool uses the AWS_DEFAULT_PROFILE variable, but the SDK ignores it and uses only AWS_PROFILE. If you use a non-default profile, you need to set the latter. (See also here.)

@romanrev

romanrev commented Jul 12, 2017

@dsagal and @develar that's what we ended up with after going through the CloudTrail logs for the requests issued by electron-builder:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAppS3Releases",
            "Effect": "Allow",
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:GetObjectVersion",
                "s3:ListMultipartUploadParts",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::release-bucket/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:ListBucketMultipartUploads"
            ],
            "Resource": [
                "arn:aws:s3:::release-bucket"
            ]
        }
    ]
}

Notice the *ObjectAcl actions that we need to allow: that's because electron-builder adds an "x-amz-acl": "public-read" header with each upload, trying to mark every object as publicly readable. I am going to open another issue #1822 to ask the developers to make that optional, since one can also achieve the same effect with an appropriately crafted S3 bucket policy.
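
As later comments in this thread show, the ACL can also be controlled from the publish configuration; a sketch, assuming the acl option is available in your electron-builder version (bucket name is a placeholder):

"publish": {
    "provider": "s3",
    "bucket": "release-bucket",
    "acl": "private"
}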

@develar
Member

develar commented Jul 12, 2017

@romanrev Thanks (official docs: http://docs.aws.amazon.com/AmazonS3/latest/dev/mpuAndPermissions.html)

@romanrev

Thanks @develar, yes, that's what we looked at after we saw the CreateMultipartUpload call :-)

@emilbruckner

To everyone coming from the docs who is as stupid as I am:

You have to replace the bucket name in these permissions with your own bucket's name …

@jacob-qurika

I needed s3:GetBucketLocation as well.

@PriscilaAlves

I also needed s3:GetBucketLocation. This should probably be referred to in the documentation.
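
For reference, s3:GetBucketLocation is a bucket-level action, so it belongs in the second statement of the policy above; a sketch with a placeholder bucket name:

{
    "Effect": "Allow",
    "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads"
    ],
    "Resource": [
        "arn:aws:s3:::release-bucket"
    ]
}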

@JM-Mendez

@kalokiston @PriscilaAlves

Do you have your bucket in a region besides the default US East (N. Virginia)?

I didn't need to add the s3:GetBucketLocation permission. But I'm using the default region, so I'm wondering if it's just falling back to that region.

@mlynch

mlynch commented Oct 19, 2018

Thanks for the policy example! What do you all use for your Principal fields? I can't create a bucket policy without one, and it seems like the two entries should have different values for that field.

@JM-Mendez

JM-Mendez commented Oct 19, 2018

@mlynch you have two options

  1. use * and allow everyone access to your bucket (not recommended)
  2. create an IAM user and use its AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY

Then add the user to the policy. I named my IAM user ci_server:

"Sid": "Stmt123456789",
"Effect": "Allow",
"Principal": {
    "AWS": "arn:aws:iam::1234567890:user/ci_server"
 },

walkthrough: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html
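
Putting that Principal together with the object-level actions from the policy earlier in this thread, a full bucket-policy statement looks roughly like this sketch (the Sid, account ID, user name, and bucket name are placeholders):

{
    "Sid": "AllowCiServerUploads",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::1234567890:user/ci_server"
    },
    "Action": [
        "s3:AbortMultipartUpload",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:GetObjectVersion",
        "s3:ListMultipartUploadParts",
        "s3:PutObject",
        "s3:PutObjectAcl"
    ],
    "Resource": "arn:aws:s3:::release-bucket/*"
}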

@mlynch

mlynch commented Oct 19, 2018

Thanks, though there are two operations here. It seems you want to allow read access to everyone (so users can see and download new versions), but only write access to an IAM user, correct? That would mean two separate principal options?

@JM-Mendez

@mlynch If you want to give users direct access to your bucket, then yeah, you'd need to set up those two levels of access. But I recommend using CloudFront.

The data transfer pricing is cheaper than having users download directly from S3. And when you set up CloudFront, it'll offer to add a policy to the bucket for you.

Here are some articles that led me to this conclusion:

https://medium.com/devopslinks/this-is-how-i-reduced-my-cloudfront-bills-by-80-a7b0dfb24128

https://www.expatsoftware.com/articles/2009/01/cloudfront-costs-compared-to-s3.html

@mlynch

mlynch commented Oct 19, 2018

Awesome, thanks so much! Also probably better download performance for users around the globe, I'd imagine.

@JM-Mendez

You're welcome :-).

Yeah you'll get way better download performance since your app will be distributed across the edge network, and will be much closer to your users.

One thing to keep in mind: distributing to CloudFront is not immediate. I've seen an average of 20-30 minutes for full availability. So if you want to test publishing, use a minio server as the guide suggests. The CloudFront URL is static, so it doesn't affect publishing.
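
A sketch of what that local test setup can look like, assuming your electron-builder version supports the endpoint option of the s3 provider (the endpoint URL and bucket are placeholders for a local minio instance):

"publish": {
    "provider": "s3",
    "endpoint": "http://127.0.0.1:9000",
    "bucket": "test-update-bucket"
}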

@dariocravero

dariocravero commented Nov 27, 2018

Thanks for the policy guide! I wanted to add that apart from setting the policy, I also had to uncheck the Block new public ACLs and uploading public objects (Recommended) option in the Permissions tab for it to let me upload; otherwise I kept getting Access Denied.

(screenshot of the S3 Block public access settings)

EDIT: I also had to uncheck the second option, Remove public access granted through public ACLs (Recommended); otherwise the app wasn't able to check for updates.

@erikjalevik

erikjalevik commented Feb 8, 2019

@JM-Mendez, if I use CloudFront for serving my updates, how do I configure the electron-updater inside my app to use the CloudFront URL instead of trying to access S3 directly?

Using autoUpdater.setFeedURL seems to do the trick, but then a deprecation message appears in the log: "Feed url Deprecated. Do not use it."

What is the correct way?

@JM-Mendez

@erikjalevik this is how I did it. Are you sure you're calling setFeedURL and not getFeedURL? The latter is the one that's deprecated, according to the source code:

getFeedURL(): string | null | undefined {
  return "Deprecated. Do not use it."
}

setFeedURL(options: PublishConfiguration | AllPublishOptions | string) {
  const runtimeOptions = this.createProviderRuntimeOptions()
  // https://github.com/electron-userland/electron-builder/issues/1105
  let provider: Provider<any>
  if (typeof options === "string") {
    provider = new GenericProvider({provider: "generic", url: options}, this, {
      ...runtimeOptions,
      isUseMultipleRangeRequest: isUrlProbablySupportMultiRangeRequests(options),
    })
  }
  else {
    provider = createClient(options, this, runtimeOptions)
  }
  this.clientPromise = Promise.resolve(provider)
}
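
An alternative that avoids calling setFeedURL at runtime is to build the app with a generic publish configuration that points at the CloudFront domain, so electron-updater fetches the update metadata (e.g. latest-mac.yml) from there; a sketch with a placeholder domain:

"publish": {
    "provider": "generic",
    "url": "https://dxxxxxxxxxxxxx.cloudfront.net"
}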

@kersten

kersten commented Sep 16, 2019

Sorry to bother you again with this issue, but I can't upload to S3; I always get the same error as the issue opener.

@laurenschroeder

I was able to get this working for a private bucket after setting ACL to 'private':

 "publish": {    
      "provider": "s3",    
      "bucket": "bucket-name",    
      "region": "us-east-2",    
      "acl": "private"    
    }

@MajesticMug

I was able to get this working for a private bucket after setting ACL to 'private':

 "publish": {    
      "provider": "s3",    
      "bucket": "bucket-name",    
      "region": "us-east-2",    
      "acl": "private"    
    }

This worked for me, thanks!

@jackhodkinson

I have set up my IAM user to have full AWS access and set up the bucket to allow ACLs; however, I still get the following error when I try to build unless acl is set to private as mentioned above.

AccessControlListNotSupported: The bucket does not allow ACLs

Setting acl to private makes the build work; however, when I try to use the autoUpdater in my main process, I see an error because my AWS keys are not included in the build.

In my index.ts file in the main process I have:

import { dialog } from "electron"
import { autoUpdater } from "electron-updater"

autoUpdater.on('update-downloaded', (event) => {
  const dialogOpts = {
    type: 'info' as const,
    buttons: ['Restart', 'Later'],
    title: 'Application Update',
    message: String(process.platform === 'win32' ? event.releaseNotes : event.releaseName),
    detail: 'A new version has been downloaded. Restart the application to apply the updates.'
  }

  dialog.showMessageBox(dialogOpts).then((returnValue) => {
    if (returnValue.response === 0) autoUpdater.quitAndInstall()
  })
})

autoUpdater.on('error', (message) => {
  console.error('There was a problem updating the application')
  console.error(message)
})

When I run this I get a 403:

HttpError: 403 Forbidden
"method: GET url: https://<bucketName>.s3.amazonaws.com/latest-mac.yml?noCache=...

Data:
  <?xml version=\"1.0\" encoding=\"UTF-8\"?>
    <Error><Code>AccessDenied</Code><Message>Access Denied</Message><Reques...
  File "app:///node_modules/builder-util-runtime/out/httpExecutor.js", line 14, in createHttpError
  File "app:///node_modules/builder-util-runtime/out/httpExecutor.js", line 147, in IncomingMessage.<anonymous>
...

I suppose we could get around this by baking the AWS keys into the main process, but I don't love the idea of shipping credentials in the build. Alternatively, I've seen people discuss using a server to pass credentials to the app, but I don't need a server for anything else, so that seems excessive.

Are the only options here either to make the bucket completely public or to bake some limited credentials into the app to access the bucket? Are there any best practices here?
