
blockmap Forbidden - autoUpdater - private S3 bucket #4030

Closed
stuartcusackie opened this issue Jul 9, 2019 · 32 comments


stuartcusackie commented Jul 9, 2019

  • Version: 20.44.4
  • Target: Windows

I am getting this error after removing public access from my S3 bucket through the AWS console. It also happens when I grant full public access to the bucket and use 'acl': 'private' in my publish config.

cannot download differentially, fallback to full download: 
Error: Cannot download "https://mybucket.s3.amazonaws.com/releases/electron/win/MyApp%20Setup%200.0.6.exe.blockmap", status 403: Forbidden

The actual Setup file seems to download fine.

The bucket is private, but I have given my AWS user FullS3Access and it still doesn't work.

Could this be the problem?
I can see that the AWS region is missing from my autoUpdater blockmap requests:

Download block maps (old: "https://mybucket.s3.amazonaws.com/releases/electron/win/MyApp%20Setup%200.0.6.exe.blockmap", new: https://mybucket.s3.amazonaws.com/releases/electron/win/MyApp%20Setup%200.0.7.exe.blockmap)

The files are actually located at:
https://mybucket.s3-eu-west-1.amazonaws.com/releases/electron/win/MyApp+Setup+0.0.6.exe.blockmap

So the region is being added correctly to my autoUpdater.requestHeaders but I have no control over the blockmap request urls.

I am using aws4 to sign my request headers.

@stuartcusackie stuartcusackie changed the title downloadBlockMap Forbidden on aws4-S3 downloadBlockMap Forbidden - autoUpdater - aws4-S3 Jul 9, 2019
@stuartcusackie stuartcusackie changed the title downloadBlockMap Forbidden - autoUpdater - aws4-S3 blockmap Forbidden - autoUpdater - aws4-S3 Jul 9, 2019
@stuartcusackie stuartcusackie changed the title blockmap Forbidden - autoUpdater - aws4-S3 blockmap Forbidden - autoUpdater - private S3 bucket (aws4) Jul 9, 2019
@stuartcusackie stuartcusackie changed the title blockmap Forbidden - autoUpdater - private S3 bucket (aws4) blockmap Forbidden - autoUpdater - private S3 bucket Jul 9, 2019
Author

stuartcusackie commented Jul 9, 2019

I can see somebody made a change to the blockmap headers here:
#3536

I don't think it takes an S3 region into account though. Is there some other way to configure this?

@mayankvadia

You have to add the region and channel name, @stuartcusackie

Author

stuartcusackie commented Jul 10, 2019

@mvkanha I already have the region set in my builder config. Do I really need the channel??

I have tried adding a channel and I still have the same problem. This is how I am doing it:

builder: {
  appId: 'com.myname.myapp',
  productName: 'MyApp',
  // here I am allowing channels
  generateUpdatesFilesForAllChannels: true,
  win: {
    icon: 'src-electron/icons/icon.ico'
  },
  nsis: {
    oneClick: false,
    allowToChangeInstallationDirectory: true
  },
  publish: [{
    provider: 's3',
    // here's my region
    region: 'eu-west-1',
    bucket: 'mybucket',
    path: '/releases/win/',
    acl: 'private'
  }]
}

And in my electron-main.js file:
autoUpdater.channel = 'latest'

But I am getting the same problem:
Error: Cannot download https://mybucket.s3.amazonaws.com/releases/win/MyApp%20Setup%200.0.11.exe.blockmap status 403: Forbidden

The region is still missing from the Blockmap request.


mayankvadia commented Jul 10, 2019

@stuartcusackie

I think you should add an endpoint to your configuration.
Here is my configuration:

"publish": [
			{
				"provider": "s3",
				"bucket": "mybucket",
				"region": "us-east-2",
				"path": "/",
				"endpoint": "https://s3.us-east-2.amazonaws.com",
				"channel": "latest"
			}
		] 

Member

develar commented Jul 10, 2019

Obviously, if the bucket is private, electron-updater cannot access it.

Author

stuartcusackie commented Jul 10, 2019

@develar You actually can access a private bucket by updating the autoUpdater request headers using the aws4 signing package.

Example (inspired by #2355):

const { autoUpdater } = require('electron-updater');
const aws4 = require('aws4');

autoUpdater.on('checking-for-update', () => {
  let opts = {
    service: 's3',
    region: 'eu-west-1',
    method: 'GET',
    // define your host variable elsewhere
    host: host,
    // define your latest.yml path elsewhere
    path: latest_yml_path
  };
  aws4.sign(opts, {
    // define access credential variables elsewhere
    accessKeyId: access_key_id,
    secretAccessKey: secret_access_key
  });
  autoUpdater.requestHeaders = opts.headers
})

autoUpdater.on('update-available', (info) => {
  // define your release path variable elsewhere
  let update_path = `/${release_path}/${info.path}`;
  let opts = {
    service: 's3',
    region: 'eu-west-1',
    method: 'GET',
    host: host,
    path: update_path
  };
  aws4.sign(opts, {
    accessKeyId: access_key_id,
    secretAccessKey: secret_access_key
  });
  autoUpdater.requestHeaders = opts.headers
  autoUpdater.downloadUpdate()
})

This works except for the problem with the region in the Blockmap request. I will try what @mvkanha has said and set an endpoint in my publish config...

Member

develar commented Jul 10, 2019

@stuartcusackie A PR will be accepted; it should be easy to find the error and fix it.

@develar develar reopened this Jul 10, 2019
@develar develar closed this as completed Jul 10, 2019
Author

stuartcusackie commented Jul 10, 2019

@develar Thanks. I will try to find where the region can be added in the autoUpdater code.

@mvkanha Adding the endpoint and channel didn't work. I get an aws error when I do that:
SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided

publish: [{
  provider: 's3',
  region: 'eu-west-1',
  bucket: 'mybucket',
  endpoint: 'https://s3.eu-west-1.amazonaws.com',
  channel: 'latest',
  path: '/releases/win/',
  acl: 'private'
}]

@stuartcusackie
Author

Maybe the region isn't the problem.

I wonder if the blockmap requests need two more aws signing requests (one for the old blockmap and one for the new)...

@justinwaite

Did you ever get this figured out @stuartcusackie?

@stuartcusackie
Author

@justinwaite Unfortunately, I did not.


david-wallace-croft commented Nov 2, 2019

I have this problem as well. It happens even if I have differentialPackage set to false such that there are no blockmaps uploaded to S3. Is there a way to disable the requests for blockmaps?


yangfan0356 commented Feb 19, 2021

Maybe the region isn't the problem.

I wonder if the blockmap requests need two more aws signing requests (one for the old blockmap and one for the new)...

@stuartcusackie

I guess you are right. I checked the server log for the download events, and there are two requests, one for the old blockmap file and one for the new. Both are denied with "SignatureDoesNotMatch" (perhaps the updater reused the signed headers we set for the actual EXE, even though the path part of the blockmap requests is not the same). I am not sure how to inject correct headers for downloading the blockmaps.

@mmaietta
Collaborator

Is there a way to disable the requests for blockmaps?

I took a quick look at the code; this is where the requests for the blockmaps are made:
https://github.com/electron-userland/electron-builder/blob/master/packages/electron-updater/src/NsisUpdater.ts#L175-L177
You can comment out the corresponding post-compile line in your node_modules and give it a try; I'm not sure how it will impact differential downloads. The function would then always return false, and I didn't see .blockmap string constants elsewhere in the code.

Here's where it uses the headers:
https://github.com/electron-userland/electron-builder/blob/master/packages/electron-updater/src/NsisUpdater.ts#L144-L147

I wonder if the blockmap requests need two more aws signing requests (one for the old blockmap and one for the new)...

Happy to review a PR on this if someone is willing to give it a stab 🙂
These are our Dev Env Setup instructions: https://github.com/electron-userland/electron-builder/blob/master/CONTRIBUTING.md#to-setup-a-local-dev-environment
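
Side note: newer electron-updater versions also expose a disableDifferentialDownload flag on the updater; if your installed version has it (check your version's AppUpdater typings), setting it should skip the blockmap requests entirely without patching node_modules:

const { autoUpdater } = require('electron-updater');

// assumption: the installed electron-updater version exposes this flag
autoUpdater.disableDifferentialDownload = true; // always do a full download, no .blockmap requests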

@bansari-electroscan

Facing the same issue. @stuartcusackie were you able to make it work?

@stuartcusackie
Author

Facing the same issue. @stuartcusackie were you able to make it work?

@bansari-electroscan I did not. I'm still using a public bucket.


ktc1016 commented Sep 11, 2021

I was able to make it work on my private bucket by adding "channel" as one of the attributes in the "dev-app-update.yml" file.

I'm still figuring out the connection between this and the production app-update.yml, but I hope this will also work on your project.

Update

Disabling the autoDownload feature made it work for me. My app instead asks the user whether they want to update the app.

I hope this also fixes your issue:

autoUpdater.autoDownload = false;
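
For context, a minimal sketch of that ask-the-user flow, assuming Electron's dialog module (button labels and message text are placeholders):

const { dialog } = require('electron');
const { autoUpdater } = require('electron-updater');

autoUpdater.autoDownload = false;

autoUpdater.on('update-available', async (info) => {
  // ask the user before downloading anything
  const { response } = await dialog.showMessageBox({
    type: 'question',
    buttons: ['Download', 'Later'],
    message: `Version ${info.version} is available. Download it now?`
  });
  if (response === 0) {
    autoUpdater.downloadUpdate();
  }
});

autoUpdater.on('update-downloaded', () => {
  autoUpdater.quitAndInstall();
});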


leena-prabhat commented Sep 21, 2021

@ktc1016 were you able to make it work?
Setting autoUpdate.autoDownload = false; has no effect. After calling autoUpdate.downloadUpdate();, the update process starts downloading the blockmap file. After that returns 403 Forbidden, the full file is downloaded.


ktc1016 commented Sep 25, 2021

@leena-prabhat after reviewing my detailed logs, it looks like the 403 Forbidden error message was just captured by my try-catch statements. Therefore, my solution did not fix the original issue, but I'm still able to use a private bucket for this.

I'll update my comment to avoid further confusion.

@code-drunk-debug-sober

Hey, has anyone got anything or any combination to work? I tried multiple scenarios, but none succeeded. I keep hitting SignatureDoesNotMatch ("The request signature we calculated does not match the signature you provided. Check your key and signing method."), even after setting the feed URL.


qube13 commented Mar 14, 2022

It is not a solution to this problem but a workaround. We developed a Lambda function that returns a signedUrl in the Location header; the autoUpdater automatically redirects to the signedUrl.

// in electron app
autoUpdater.setFeedURL({
  provider: 'generic',
  url: `https://my-lambda-function.com/`,
  useMultipleRangeRequest: false,
});
// api key to access the lambda function via Api Gateway
autoUpdater.requestHeaders = { 'x-api-key': API_KEY };

in lambda function:

// create a signedUrl based on the pathParameters
return {
  statusCode: 302,
  headers: { Location: signedUrl },
}
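
For reference, a minimal sketch of what such a Lambda handler could look like (assumptions: aws-sdk v2, an API Gateway proxy route like /{proxy+}, and the requested path mapping directly to the S3 object key; BUCKET_NAME and BUCKET_REGION are placeholder environment variables):

const AWS = require('aws-sdk');

const s3 = new AWS.S3({ signatureVersion: 'v4', region: process.env.BUCKET_REGION });

exports.handler = async (event) => {
  // event.pathParameters.proxy holds the requested file, e.g. "latest.yml" or "MyApp Setup 1.0.0.exe"
  const key = decodeURIComponent(event.pathParameters.proxy);
  const signedUrl = s3.getSignedUrl('getObject', {
    Bucket: process.env.BUCKET_NAME,
    Key: key,
    Expires: 60 // seconds the pre-signed URL stays valid
  });
  // 302 redirect; the autoUpdater follows the Location header to the signed URL
  return {
    statusCode: 302,
    headers: { Location: signedUrl }
  };
};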


linchen1010 commented Apr 13, 2022

I also had this issue during development. It's not a solution, but I ended up using aws-sdk to download the file myself whenever electron-updater finds an updated version in my private S3 bucket.
Note:
This will not utilize the blockmap to download the updated app, so it might take a few more seconds to download your app.
You will also need to run the installer manually to trigger that update; I did it with child_process.spawn (see the sketch after the download function below).

First, disable autoDownload:

autoUpdater.autoDownload = false;

import * as AWS from 'aws-sdk';
import fs from 'fs';
import aws4 from 'aws4';
// missing from the original snippet: the updater itself
import { autoUpdater } from 'electron-updater';

autoUpdater.on('checking-for-update', () => {
    let opts = {
        service: 's3',
        region: s3_region,
        host: s3_host,
        path: latest_yml_path
    };

    aws4.sign(opts, {
        accessKeyId,
        secretAccessKey,
        sessionToken
    });
    autoUpdater.requestHeaders = opts.headers
})

autoUpdater.on('update-available', (updateInfo) => {
    AWS.config.update({
        accessKeyId,
        secretAccessKey,
        sessionToken
    });
    const s3 = new AWS.S3();
    // getVersion: helper defined elsewhere that parses the version out of the update file name
    const version = getVersion(updateInfo.path);
    const params = {
        Bucket: `${your_s3_bucket}`,
        Key: `${path_to_your_file}`
    }
    download(s3, params);
})

and the download looks something like this:

const download = (s3, params) => {
  s3.getObject(params, async (err, data) => {
      if (err) {
          throw new Error(err)
      }
      // filePath: destination for the downloaded installer, defined elsewhere
      await fs.promises.writeFile(filePath, data.Body);
      console.info(`${filePath} has been downloaded!`);
      autoUpdater.emit('update-downloaded', filePath);
  });
}
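
And a minimal sketch of launching the downloaded installer manually with child_process.spawn, as mentioned above (filePath is the .exe written by download()):

import { spawn } from 'child_process';

const runInstaller = (filePath) => {
  // detach so the installer keeps running after the app quits
  const child = spawn(filePath, [], { detached: true, stdio: 'ignore' });
  child.unref();
};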


rohanrk-tricon commented Sep 23, 2022

Were you able to find out how to achieve this for a private S3 bucket? It's still not working for me. @stuartcusackie @develar @mayankvadia @justinwaite

@bansari-electroscan

@rohanrk-tricon Did you find any solution for this? I tried public and private buckets and played around with the request options. Nothing works. It's always a signature mismatch error. :(

Collaborator

mmaietta commented Nov 4, 2022

Not sure where it is failing tbh. Is the electron distributable being signed/modified after electron-builder produces the app-update file pre-upload to s3? That would change the signature of the file after the update hash has been generated.


rohanrk-tricon commented Nov 5, 2022

@bansari-electroscan @mmaietta I was finally able to make it work.
The publish inside your build object in package.json should look like this.

  "build": {
    "publish": {
      "provider": "s3",
      "bucket": "<your-bucket-name>",
      "acl": null,
      "channel": "latest",
      "path": "/",
      "endpoint": "https://s3.<region>.amazonaws.com"
    }
  },

This is my AppUpdater class which performs all auto-update events

// imports assumed by the class below: electron-updater's autoUpdater and electron-log (matches log.transports.file.level)
import { autoUpdater } from 'electron-updater';
import log from 'electron-log';
import aws4 from 'aws4';

class AppUpdater {
  constructor() {
    log.transports.file.level = 'info';
    autoUpdater.logger = log;

    // To check for updates
    // To get the latest.yml file we are authenticating the request with getOptions func
    autoUpdater.logger.info('Auto update initiated...');
    autoUpdater.on('checking-for-update', () => {
      autoUpdater.logger?.info('checking for updates...');
      autoUpdater.requestHeaders = getOptions('latest.yml').headers;
    });

    // To get the latest exe file we are authenticating the request with getOptions func
    // options.path contains name of latest exe
    autoUpdater.on('update-available', (options) => {
      autoUpdater.logger?.info('update available event is triggered');
      autoUpdater.requestHeaders = getOptions(options.path).headers;
    });

    autoUpdater.on('update-downloaded', async (event, arg) => {
      autoUpdater.logger?.info('update download triggered');
    });

    // Set the feed URL of our s3 bucket
    autoUpdater.setFeedURL(`${process.env.AWS_FEED_URL}`);

    // Checks for updates and notifies the user
    autoUpdater.checkForUpdatesAndNotify().catch((err) => {
      autoUpdater.logger = err;
    });
  }
}

getOptions() will attach the required authentication headers (derived from accessKeyId and secretAccessKey) to each call to the S3 bucket.
You need to keep all AWS-related info in an env file.

 // util function to authenticate our requests to s3
const getOptions = (path: string) => {
  const options = {
    service: process.env.AWS_SERVICE,
    region: process.env.REGION,
    method: 'GET',
    // define your host variable elsewhere
    host:
      process.env.AWS_S3_BUCKET +
      '.s3.' +
      process.env.REGION +
      '.amazonaws.com',
    // define your latest.yml path elsewhere
    path,
  };
  // autoUpdater.logger?.info('options--', options);

  aws4.sign(options, {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  });
  // autoUpdater.logger?.info('options2--', options);
  return options;
};

This is how I have set my env file which will hold all aws info

  AWS_ACCESS_KEY_ID: '<your access key id>',
  AWS_SECRET_ACCESS_KEY: '<your secret access key>',
  REGION: '<region>',
  OUTPUT: 'json',
  AWS_S3_BUCKET: '<your-bucket-name>',
  AWS_SERVICE: 's3',
  AWS_FEED_URL: 'https://<your-bucket-name>.s3.<region>.amazonaws.com',

You can see what the auto-updater logs at this file path:

C:\Users\user\AppData\Roaming\Electron\logs\main.log

Note: it will be easier if you enter all the sensitive info directly in the code as strings until you have tested it; later you can move it to the env file.
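
For completeness, a minimal sketch of wiring the class above into the main process (the './app-updater' module path is a placeholder for wherever the class is exported from):

import { app } from 'electron';
import AppUpdater from './app-updater'; // placeholder path to the AppUpdater class above

app.whenReady().then(() => {
  new AppUpdater(); // kicks off checkForUpdatesAndNotify() with signed request headers
});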


code-drunk-debug-sober commented Nov 5, 2022


@rohanrk-tricon Thanks for the great info and sorry for jumping in. Even I am in a similar dilemma. Can you please also share the S3 bucket policy that you have set for it ?


rohanrk-tricon commented Nov 5, 2022

@code-drunk-debug-sober

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAppS3Releases",
            "Effect": "Allow",
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:GetObjectVersion",
                "s3:ListMultipartUploadParts",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::release-bucket/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:ListBucketMultipartUploads"
            ],
            "Resource": [
                "arn:aws:s3:::release-bucket"
            ]
        }
    ]
}

Also check whether you have enabled CORS (Cross-Origin Resource Sharing).


EliveltonRepolho commented Jan 7, 2023

Hello @rohanrk-tricon, can you post your CORS config?

I am getting the error Error: net::ERR_CERT_COMMON_NAME_INVALID and I'm not sure what else I have to do. I have a bucket with public access enabled (not sure if that is really needed), created a user with the policy you posted in the comment above, and set the following CORS:

[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET",
            "HEAD"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": [
            "Content-Range",
            "Content-Length",
            "ETag"
        ]
    }
]

One question about the policy you just sent: are you attaching it to a user or setting it directly on the bucket? When set directly on the bucket it says a Principal is needed. I am asking because I'm not sure what I may be doing wrong.

and these are my options logging:
[screenshot of the logged options]

Thanks in advance

@reddybhavanish

I am getting a 403 Forbidden error after following all these steps.

@ricopollantecs

Any updates on this?

@fabiobsantosprogrow

We are still facing the same problem with version 6.1.4.
I don't understand why this issue is closed when people are still having issues...
We will try the manual download from the private S3 bucket (#4030), but I would like some guidance on how to do this with the auto-updater without extra work.
