
getSignedURL() in Google Cloud Function produces link that works for several days, then returns "SignatureDoesNotMatch" error #244

Closed
colinjstief opened this issue Jun 20, 2018 · 88 comments
Labels
api: storage Issues related to the googleapis/nodejs-storage API. type: question Request for information or clarification. Not an issue.

Comments

@colinjstief

colinjstief commented Jun 20, 2018

Environment details

  • OS: Heroku dyno (Ubuntu 16.04)
  • Node.js version: 8.6
  • npm version: 6.0.0
  • @google-cloud/storage version: 1.6.0

Steps to reproduce

Repo of relevant code:
https://github.com/colinjstief/getSignedUrl-example

Lots of this code is copied/pasted straight from the official example (link here). The code in my repo runs as a Google Cloud Function and produces a link that works for a few days (maybe 7?) and then seems to expire with a "SignatureDoesNotMatch" 403 response. The full message reads:

"The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method."

The URL seems valid, including this query parameter: '...&Expires=16730323200&Signature=...'

Both the initial request and the later request (after the apparent expiration) are performed the same way: either through an axios.get() or simply by pasting into the browser bar. Both work at first, then both fail later.

Same issue seems to be reported here googleapis/google-cloud-node#1976, here #144, and here firebase/functions-samples#360

@JustinBeckwith JustinBeckwith added the triage me I really want to be triaged. label Jun 20, 2018
@stephenplusplus
Contributor

Thank you for reporting. Are you able to share the entire link that works, then eventually fails? It should probably be looked at by someone on the Storage team (cc @frankyn).

@stephenplusplus stephenplusplus added type: question Request for information or clarification. Not an issue. and removed triage me I really want to be triaged. labels Jun 20, 2018
@frankyn
Member

frankyn commented Jun 20, 2018

IIUC, the issue here is that the URL expires when it shouldn't, given the config defined at index.js#L28

@stephenplusplus
Contributor

That looks right and matches the expires timestamp that we use in the constructed URL: 16730323200. Is there anything else that would help debug from the Storage API-side? Unless you think there's something we could be doing improperly from the code, outside of the Expires parameter?

@frankyn
Member

frankyn commented Jun 20, 2018

I wrote a shorter Node script to generate the blobToSign. Given the version used in the example, I modified client library v1.6.0 to console.log the blobToSign at line#1703:

GET


16730352000
/bucket/image.jpg

Shortened script:

// Imports the Google Cloud client library
const Storage = require('@google-cloud/storage');

// Your Google Cloud Platform project ID
const projectId = 'project-id';

// Creates a client
const storage = new Storage({
  projectId: projectId,
});

// The name of the bucket
const bucketName = 'bucket';

const bucket = storage.bucket(bucketName);
const file = bucket.file('image.jpg');

const CONFIG = {
  action: 'read',
  expires: '03-01-2500',
};

file.getSignedUrl(CONFIG, function(err, url) {
  if (err) {
    console.error(err);
    return;
  }
  console.log(url);
});

I notice no metadata (contentType) is used when creating the blobToSign, so this should produce similar output, apart from the different service accounts, bucket, and object names. @stephenplusplus does this look right?

@frankyn
Member

frankyn commented Jun 20, 2018

@colinjstief could you share the signed URL or provide the value inside <StringToSign> in the error message?

@frankyn
Member

frankyn commented Jun 20, 2018

@colinjstief one more request. Could you recreate this signed URL using my script above and the service account landlinc-stage@appspot.gserviceaccount.com? I want to compare.

@colinjstief
Author

Away from computer right now, will post this evening.

@colinjstief
Author

Seems like I need a little more in that snippet related to credentials? Just running it via node on my local machine and it's throwing a "Cannot sign data without client_email" error.

@frankyn
Member

frankyn commented Jun 26, 2018

@stephenplusplus is setting GOOGLE_APPLICATION_CREDENTIALS the only way to define client_email? The error message is a bit confusing at the moment, as it's not actionable.

@colinjstief Instead of running the shorter script, could you re-upload the same object you did before with the same name? The goal would be to regenerate the same signed URL you provided earlier and compare the versions.

@frankyn
Member

frankyn commented Jun 27, 2018

Could you update the logic to use the same unique identifier as the prior signed URL?

Technically, given all the same parameters, the getSignedUrl method will generate the same signed URL. It doesn't use the current time, and it doesn't maintain state for the signed URL in the GCS backend.

@frankyn
Member

frankyn commented Jun 27, 2018

One difference between this version and the prior one, besides the different Signature, is that @ is now %40 (percent-encoded). Did you update the version of the Node.js storage client library?

Given the different signature, I wonder if the private key for the service account was refreshed, maybe? Will try to reproduce the error and report back.

@colinjstief
Author

colinjstief commented Jun 27, 2018

That is strange. No, I just checked the change history and I’m 99% sure the library wasn’t updated.

@BorisBeast

Hello,
I have the exact same problem (signed URLs expire after some time, even though the 'expires' property is set to a date far in the future). This is the relevant code (initialization is slightly different from the example @colinjstief provided):

admin.initializeApp(functions.config().firebase);

exports.onFileUpload = functions.storage.object().onFinalize(data => {
    const bucket = data.bucket;
    const filePath = data.name;
    const destBucket = admin.storage().bucket(bucket);
    const file = destBucket.file(filePath);

    return file.getSignedUrl({
        action: 'read',
        expires: '03-09-2491'
    })
    .then(url => {
        return url[0];
    })
    .then(
        // save url to db and do other stuff ...
    )
});

@google-cloud/storage version is the same: 1.6.0

These are two generated URLs for the exact same file (just re-uploaded and overwritten a couple of days later). Currently both links work (even though the signatures are different), but the first one will probably stop working about a week from now, the second one after two weeks. That estimate is based on previous observations.

https://storage.googleapis.com/vennfx-web-ng.appspot.com/1524742000212_ForexAdvantages.jpg?GoogleAccessId=vennfx-web-ng%40appspot.gserviceaccount.com&Expires=16447017600&Signature=FDo8YPo0TkkNqhe60Igdf5YNSY%2FDDtijImLE3Ah0W2AP8JOxhABBFQAnUCFyvsvupZ4hOJhKb48QZzvKakVplTLAolZswrrmNaWe9OOSH9vnDm5cvkd0J7QCOd%2F7Bo7XrQmCtKrdDQdtc9tnVp6Hp5Bs0vQgmOucfaMD0UU0utn4uy0m5KRzsZiagHPzzBqkx9jiF%2BwdqQBAFttZEV4dQUQNB6g4OnuuwN7%2FALutR%2FZSzMlPmftMKxccntCpu2FO0goeXOMfTFyn6SaFRokVrssddzzcSbHcH%2B%2BvA74yWzWeiFWcy3k0kvmn4YPotWlDpr1SJ3PC8J4qYI05MMVsCg%3D%3D

https://storage.googleapis.com/vennfx-web-ng.appspot.com/1524742000212_ForexAdvantages.jpg?GoogleAccessId=vennfx-web-ng%40appspot.gserviceaccount.com&Expires=16447017600&Signature=t2OiQH4ulovSDNBy88EccKuczuHf3pcO4wvDnYmYIux0tnJoP92w4wKYCL6RTuLf8B93bwSZMgERVg5edCUns8kdDt7Ira1OjhjDWw%2BiBlyzdUXCibbeMiv8V44BBLQH5q%2Br32DA7rgX%2BcSWrolgl03pKAZaJoYfSy95TmhyT8eQAYvk8LZRlLwYRt8Q247eYGcWGlLX9gSpgSzrhLvDOQZTkqcqKuOmpvLdSoU73O3OuJf7y2txSOsLU7d%2FG7O%2B7u4eQxg2XZdJbUlJcZRNCZ8AxWKGkDzEZMnAYcaGZ1%2F%2BYlVXF6q60WjYEFB7UAbsi0hEKgl2V2aWyY3ji6pTIw%3D%3D

@colinjstief
Author

@frankyn I see that the first URL that @BorisBeast generated is now expired. Mine both seem to be working, but I expect that to change in the next few days. Will keep checking.

@shaibt

shaibt commented Jul 8, 2018

Have the exact same problem.
In a Firebase function, using the GCS Node.js API v1.7.0, I upload an image to GCS and immediately ask for a signed URL that expires far in the future (~7 years):

let gcsBucket = gcs.bucket(bucketname);
let options = { destination: gcsFilename, metadata: { metadata: metadata, contentType: 'image/jpeg' } };
gcsBucket.upload(mainImageLocalUrl, options).then((data) => {
  let file = data[0];
  let config = { action: 'read', expires: '01-01-2025' };
  return file.getSignedUrl(config);
});

Everything works fine (the URL is accessible by clients), but at some point (after a few days, ~10 days) it "expires" and returns:
<Error>
  <Code>SignatureDoesNotMatch</Code>
  <Message>
    The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method.
  </Message>
  <StringToSign>
    GET 1735689600 /XXXXXXXXX.appspot.com/some_directory%2FFAD0E0EA-61AE-4689-AB22-189E02FD2710.jpg
  </StringToSign>
</Error>

URLs created more recently are available and working as expected.
Random examples (as of 8am GMT on July 8th):
URLs created on June 22nd are returning the above error
URLs created on June 29th are still working and allowing access to files

@colinjstief
Author

@frankyn Both of the URLs created for this thread are now broken and throwing the same error.

@frankyn
Member

frankyn commented Jul 9, 2018

Hi all,

Apologies for the delay:
I created a new service account and then generated a signed URL for a GET request.
After deleting the service account key from the Google Cloud Console, I received the same error message you all have been seeing:

<Error>
  <Code>SignatureDoesNotMatch</Code>
  <Message>
    The request signature we calculated does not match the signature you provided. Check your Google 
    secret key and signing method.
  </Message>
  <StringToSign>
    GET 1531165081 /coderfrank/51f7251dc8561126be000027_736.jpg
  </StringToSign>
</Error>

I'll follow up internally with the Cloud Functions team. My current guess is that the service account key used to provision your Cloud Function environment is deleted after a certain amount of time (5-7 days). That key is used to generate the signed URL, and deleting it renders the URL useless (SignatureDoesNotMatch).

I appreciate everyone's patience on this issue.

@colinjstief
Author

Not a problem, thanks for your work on this and I hope you got some time off last week for the holiday.

@BorisBeast

@frankyn, I'm sorry to bother you, but I don't understand one thing. Is a new service account key generated each time I call getSignedURL()? If not, why is the signature different every time, but the links still work?

@shaibt

shaibt commented Jul 9, 2018

Hi @frankyn,

So would a possible workaround (for now) be to use an explicit service account in cloud functions (using an account key JSON file) instead of the default GCP authentication for the GCS SDK and the cloud function running it?

@frankyn
Member

frankyn commented Jul 9, 2018

re: @BorisBeast, at the moment I'm only guessing until I get a better understanding of the Cloud Functions environment. When I do, I'll have an answer to your question.

re: @shaibt, if my guess is correct, then potentially yes: including the service account in your deployed Cloud Function could be a workaround, but I would not recommend it as a best practice yet.

@frankyn
Member

frankyn commented Jul 10, 2018

Partial follow-up: the GCF team pointed me to documentation located at Docs

GCP-managed keys. These keys are used by Cloud Platform services such as App Engine and Compute Engine. These keys cannot be downloaded. Google will keep the keys and automatically rotate them on an approximately weekly basis.

So the Google-managed key is rotated on a weekly basis, which makes sense given the behavior everyone has encountered. I'm looking into a best practice next; for now, as @shaibt mentioned, a user-managed service account is a workaround for this behavior.

@schankam

schankam commented Nov 23, 2018

@shaibt @colinjstief Hey guys, since I was not really satisfied with the outcome of this, I kind of looked for a "clean" way to handle this (from my point of view), and here is what I came up with:

I am using the Firebase CLI to switch between my environments with the firebase use command. I then use firebase functions:config:set sa.key='<YOUR-SERVICE-ACCOUNT.JSON-DATA>'

Variables set this way are scoped to the currently used environment, so it's easy to switch between different configurations. This way, I also don't have any files stored on my disk or in version control, and there's no need to upload a file to the Cloud Function either.

I then initialize my firebase admin SDK this way:

import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

const serviceAccount = functions.config().sa.key;

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
  databaseURL: JSON.parse(process.env.FIREBASE_CONFIG).databaseURL, 
  storageBucket: JSON.parse(process.env.FIREBASE_CONFIG).storageBucket, 
});

// And then later in my code when I need it...
const storage = admin.storage();

And it works like a charm! :)

This is the cleanest way I found to deal with this environment / private key storage safety concern.

taherbert added a commit to CambriaSolutions/MDHS-PolicySearch that referenced this issue Nov 24, 2018
For details on why there's an issue: googleapis/nodejs-storage#244 (comment)
@mqln

mqln commented Dec 6, 2018

@schankam I love your solution, but I haven't been able to get the CLI to play nice!

How did you actually get your service-account JSON file into the CLI? Copy-pasting it into functions:config:set generates too many weird artifacts and escape characters, and then it cannot be read in the actual functions/index.js

Thanks for your answer, this problem has been killing me!
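For what it's worth, one commonly suggested way to avoid the escaping artifacts is to let the shell pass the file contents through unmodified, e.g. `firebase functions:config:set sa.key="$(cat service-account.json)"` (behavior may vary by CLI version), and then parse defensively on the reading side. A minimal sketch with a stand-in value in place of `functions.config().sa.key` (the account name and key are hypothetical placeholders):

```javascript
// Stand-in for functions.config().sa.key, which may arrive as a JSON string.
const raw = JSON.stringify({
  type: 'service_account',
  client_email: 'my-sa@my-project.iam.gserviceaccount.com', // hypothetical account
  private_key: '-----BEGIN PRIVATE KEY-----\\n...\\n-----END PRIVATE KEY-----\\n', // truncated placeholder, not a real key
});

// Parse only if it is a string, so the same code works whether the CLI
// stored it as a string or as a nested object.
const serviceAccount = typeof raw === 'string' ? JSON.parse(raw) : raw;
console.log(serviceAccount.client_email);
```

The parsed object is what you would hand to admin.credential.cert() in schankam's setup.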

@amitbravo

amitbravo commented Jan 7, 2019

Below is my Cloud Function code: when an image is uploaded, a thumbnail is created, a signed URL is generated for the thumbnail, and the URL is saved to the database.
My thumbnails' signed URLs for "noticeboard" never expire, but gallery thumbnails expire within a week. I don't know what I'm doing wrong.
@google-cloud/storage: "^2.3.4",
firebase-admin: "^6.4.0",
firebase-functions: "^2.1.0",
firebase-tools: "^6.2.2",

'use strict';
const functions = require('firebase-functions');
const admin = require("firebase-admin");
admin.initializeApp();

const {Storage} = require('@google-cloud/storage');

const projectId = 'myprojectid';

// Instantiates a client
const gcs = new Storage({
  projectId: projectId,
});

const path = require('path');
const sharp = require('sharp');
const THUMB_MAX_WIDTH = 300;
const THUMB_MAX_HEIGHT = 300;




exports.thumbnailGen =  functions.storage.object().onFinalize(async (object) => {
  const fileBucket = object.bucket; // The Storage bucket that contains the file.
  const filePath = object.name; // File path in the bucket.
  //console.log('filePath',filePath)
  console.log('object',object)
  const contentType = object.contentType; // File content type.


  const fileDir = path.dirname(filePath); //iadd
  console.log('fileDir',fileDir)
  const lastDir = fileDir.split("/");  //iadd
  console.log('lastDir',lastDir);



  // Exit if this is triggered on a file that is not an image.
  if (!contentType.startsWith('image/')) {
    console.log('This is not an image.');
    return null;
  }

  // Get the file name.
  const fileName = path.basename(filePath);
  // Exit if the image is already a thumbnail.
  //console.log('fileName',fileName)
  if (fileName.startsWith('thumb_')) {
    console.log('Already a Thumbnail.');
    return null;
  }
  /* iadd */
  const filenameWithoutExt = fileName.split('.').shift();
  //console.log('get filenameWithoutExt: ',filenameWithoutExt);
  let exp = /^(.+)_([0-9]+)$/;
  let thekey = filenameWithoutExt.replace(exp, '$1');
  console.log('thekey',thekey)
  // Download file from bucket.
  const bucket = gcs.bucket(fileBucket);

  const metadata = {
    contentType: contentType,
  };
  // We add a 'thumb_' prefix to thumbnails file name. That's where we'll upload the thumbnail.
  const thumbFileName = `thumb_${fileName}`;
  const thumbFilePath = path.join(fileDir, thumbFileName);
  // Create write stream for uploading thumbnail
  const thumbnailUploadStream = bucket.file(thumbFilePath).createWriteStream({metadata});

  let theimageindex = filenameWithoutExt.replace(exp, '$2');
  //console.log('theimageindex',theimageindex)
  /* iadd */

  // Create Sharp pipeline for resizing the image and use pipe to read from bucket read stream
  const pipeline = sharp();
  pipeline
    .resize(THUMB_MAX_WIDTH, THUMB_MAX_HEIGHT)
    .jpeg({ quality: 100 })
    .png({ quality: 100 })
    .max()
    .pipe(thumbnailUploadStream);

  bucket.file(filePath).createReadStream().pipe(pipeline);

  const config = {action: 'read', expires: '03-01-2025' }

  const streamAsPromise = new Promise((resolve, reject) =>
    thumbnailUploadStream.on('finish', resolve).on('error', reject));

    let onlyFileName =  filenameWithoutExt.split('@'); // returns at 2nd index somthing like -L7QEvCNSEJqJAy9DPu5_0 , image without extension but with _0 or _1 etc.
    let createDownloadablepath =  thekey.split('@');

    let thumbFileUrl

    let thumbFile

    let updatedatabasepath = {};

    if (lastDir[1] === 'noticeboard') {

        return streamAsPromise
        .then(() => {
          console.log('Thumbnail created successfully');
          thumbFile = bucket.file(thumbFilePath);
          return thumbFile.getSignedUrl(config)
        })
        .then((results)=>{
          thumbFileUrl = results[0];
          console.log('Got Signed URLs.',thumbFileUrl);
          return admin.database().ref('/'+fileDir+'/'+createDownloadablepath[0]+'/'+createDownloadablepath[1]+'/thumbnails/'+onlyFileName[1]).update({thumbnailURL : thumbFileUrl })
        })
        .then(() => {
          console.log('All done successfully');
          return null
        })

    }

   else if (lastDir[1] === 'gallery' ) {



      return streamAsPromise
        .then(() => {
          console.log('Thumbnail created successfully');
          thumbFile = bucket.file(thumbFilePath);
          return thumbFile.getSignedUrl(config)
        })
        .then((results)=>{
          thumbFileUrl = results[0];
          console.log('Got Signed URLs. upto 03-01-2500',thumbFileUrl);
       	  updatedatabasepath['/'+fileDir+'/'+createDownloadablepath[0]+'/'+createDownloadablepath[1]+'/thumbnails/'+onlyFileName[1]+'/thumbnailURL'] = thumbFileUrl;
          updatedatabasepath['/'+fileDir+'/'+createDownloadablepath[0]+'/thumbnailURL'] = thumbFileUrl;
          return admin.database().ref().update(updatedatabasepath)
       }).then(() => {
       console.log('All done successfully');
       return null
       })

    }

    else {

      return streamAsPromise
      .then(() => {
        console.log('Thumbnail created successfully');
        return null
      })

    }

}); //exports.thumbnailGen

@danejordan

danejordan commented Jan 16, 2019

Hey all, was @mcdonamp 's solution of using iam.signBlobRequest the accepted solution to this problem?

In my case I'm not able to get any access whatsoever using all the techniques outlined above (see details in my SO post), and I cannot find any documentation for iam; all I found was the HTTP API endpoint and a mention here as a bucket property.

UPDATE: Turns out I was having the same problem as this issue, where adding contentType to the options param of getSignedUrl, as well as setting the matching Content-Type header when requesting said resource, fixes the problem.
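That fix makes sense if the content type participates in the signed string, as the <StringToSign> blocks quoted above suggest. A small sketch (the five-field layout is assumed from those error messages, not taken from the library's source): signing with one content type while the request carries another means the server reconstructs a different string, so the signatures cannot match:

```javascript
// Assumed V2 layout: METHOD \n Content-MD5 \n Content-Type \n expires \n resource
const stringToSign = (method, contentType, expires, resource) =>
  [method, '', contentType, expires, resource].join('\n');

// String the client signs, with contentType included in the options...
const signedWith = stringToSign('PUT', 'image/jpeg', 1735689600, '/bucket/photo.jpg');
// ...versus the string the server reconstructs when the Content-Type header is omitted.
const serverSees = stringToSign('PUT', '', 1735689600, '/bucket/photo.jpg');

console.log(signedWith === serverSees); // false => SignatureDoesNotMatch
```

Hence the rule of thumb: whatever contentType goes into getSignedUrl's options must also be sent verbatim as the Content-Type header of the actual request.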

@tdkehoe

tdkehoe commented Mar 27, 2019

I'm having the same issue: my Firebase Storage signed download URLs expire after about three days. I see that this discussion is closed, but I don't see any answer marked as solving the problem.

I have the content_type set to audio/mp3. I've read this entire thread, but I don't understand whether there's a solution to this problem. Here's my code, which is mostly copied from the documentation:

const http = require('http');
const {Storage} = require('@google-cloud/storage');
const storage = new Storage();
const bucket = storage.bucket('my-app.appspot.com');
// longLanguage, pronunciation, wordFileType, options, and oedAudioURL are defined elsewhere
var file = bucket.file('Audio/' + longLanguage + '/' + pronunciation + '/' + wordFileType);

const config = {
    action: 'read',
    expires: '03-17-2025',
    content_type: 'audio/mp3'
  };

  function oedPromise() {
    return new Promise(function(resolve, reject) {
      http.get(oedAudioURL, function(response) {
        response.pipe(file.createWriteStream(options))
        .on('error', function(error) {
          console.error(error);
          reject(error);
        })
        .on('finish', function() {
          file.getSignedUrl(config, function(err, url) {
            if (err) {
              console.error(err);
              return;
            } else {
              resolve(url)
            }
          });
        });
      });
    });
  }

@iekuzmuk

iekuzmuk commented Dec 4, 2019

Any update here? I have had the same problem for more than a month now, and it is the third time that no photos are displayed in my app after 10-12 days. Every time it happens, I try to regenerate the signed URLs, but that takes a while for 50,000 images :/
Is there a fix to make the signed URL expiration longer than 2 weeks?

Hi, did you solve it? I don't have 50,000, but it would be great to regenerate a couple of dozen.

@iekuzmuk

iekuzmuk commented Dec 4, 2019

As I see it, the solution is to explicitly specify the service account. But that shouldn't be the problem; the Cloud Function uses the default service account.

If that service account has the necessary permissions, it should work fine.

@google-cloud-label-sync google-cloud-label-sync bot added the api: storage Issues related to the googleapis/nodejs-storage API. label Jan 31, 2020
@alexanderkjeldaas

Is there a cost associated with repeatedly doing a file.getSignedUrl on the server-side to check if a given URL is still valid?

@shaibt

shaibt commented Feb 23, 2020

As I see it, the solution is to explicitly specify the service account. But that shouldn't be the problem; the Cloud Function uses the default service account.

If that service account has the necessary permissions, it should work fine.

The default service account keys get rotated every once in a while (~7 days), so a URL signed with a certain key can become invalid a few days later, after the keys change. The solution is to use an explicit service account whose key is controlled by the dev and doesn't change.
Also note that v4 signed URLs are valid for at most 7 days, no matter which service account you are using.
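Since v4 URLs cap out at 7 days (604,800 seconds of relative expiry) regardless of the account, it can be worth validating requested expirations up front rather than discovering the cap at request time. A minimal sketch (the helper name is mine, not the library's):

```javascript
const SEVEN_DAYS_SECONDS = 7 * 24 * 60 * 60; // 604800, the v4 maximum

// Validate an absolute expiry (ms since epoch) before asking for a v4 URL.
function checkV4Expiry(expiresAtMs, nowMs = Date.now()) {
  const seconds = Math.floor((expiresAtMs - nowMs) / 1000);
  if (seconds <= 0) throw new Error('expiry is in the past');
  if (seconds > SEVEN_DAYS_SECONDS) {
    throw new Error(`v4 URLs max out at ${SEVEN_DAYS_SECONDS}s; got ${seconds}s`);
  }
  return seconds;
}

console.log(checkV4Expiry(Date.now() + 3600 * 1000)); // roughly 3600
```

Anything longer-lived than a week has to use a different mechanism (e.g. the download-token approach mentioned later in this thread).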

@xaphod

xaphod commented Oct 6, 2020

I'm disappointed that we're being bitten by this issue near the end of 2020, more than two years after it was reported (and we're using v4 URLs that expire after only one day: we were bitten by this inside of a day).

That this issue still exists makes it feel like the different Google teams are not working well with each other. I understand the reasons given by @asciimike and that's understandable for a small period of time, but not a year, and certainly not as many years as it has been. Assuming that Google wants projects to use Firebase in production, then this issue needs to be solved. Why is it closed?

Suggestion in the meantime: Firebase documentation should be amended to avoid the serviceAccountId / IAM method of authentication for Firebase Functions, as this issue is going to continue to cost many people hours and hours of their time, and serious embarrassment / public loss of confidence.

@asciimike

I no longer work at Firebase, so I'll unsubscribe from the issue and leave it in the hands of @schmidt-sebastian and @paulb777 (or let them delegate)

@iekuzmuk

iekuzmuk commented Oct 13, 2020 via email

@paulb777

paulb777 commented Oct 20, 2020

This is a closed issue, so probably best to open a new issue and refer back to this one.

@schmidt-sebastian

Long-lived URL creation is currently not supported via @google-cloud/storage or firebase-admin. Only the Mobile Web, iOS, and Android SDKs support these long-lived URLs, as all requests against them are validated by the Firebase Storage backends. We are aware that this is a feature gap that we need to close.

@xaphod

xaphod commented Oct 20, 2020

@schmidt-sebastian In order to avoid the collateral damage discussed in the later parts of this thread, documentation improvements could be made. For example, guidance to avoid the serviceAccountId / IAM method of initializing firebase apps in Functions if you are using Firebase Storage. It is not at all clear that this issue exists, and the cost of discovering late that your links are all invalidated before their expiryDate can be high. The likelihood of discovering this late is also high: if you use short expiration dates (1 day) and you never run a test at exactly the unknown window of that 1 day inside the window when server keys are re-rolled, then you'll never hit this. Totally possible to get to production and then have customer-facing red-face issues.

@hiranya911

There is an obscure way of generating long-lived download URLs using this library. See https://stackoverflow.com/questions/42956250/get-download-url-from-file-uploaded-with-cloud-functions-for-firebase/43764656#43764656

We've had a few discussions about making this part of the SDK API surface, so developers can use it easily. We should try to revive that discussion, and it could be the long term solution for this problem.

@schmidt-sebastian

+cc @egilmorez for docs

@mckrava

mckrava commented Oct 26, 2020

Super helpful answers, but what should I do if I already have a production app with hundreds of broken links? Regenerate all of them with @hiranya911's method? Or is there any other way, as regeneration would be a very difficult operation in my case?

(╯°□°)╯︵ ┻━┻

@frankyn
Member

frankyn commented Oct 26, 2020

Apologies @mckrava but that's the only method at this time.

@rbokajr

rbokajr commented Jan 20, 2021

@schmidt-sebastian In order to avoid the collateral damage discussed in the later parts of this thread, documentation improvements could be made. For example, guidance to avoid the serviceAccountId / IAM method of initializing firebase apps in Functions if you are using Firebase Storage. It is not at all clear that this issue exists, and the cost of discovering late that your links are all invalidated before their expiryDate can be high. The likelihood of discovering this late is also high: if you use short expiration dates (1 day) and you never run a test at exactly the unknown window of that 1 day inside the window when server keys are re-rolled, then you'll never hit this. Totally possible to get to production and then have customer-facing red-face issues.

We're squarely in this position now, and finding out about this honestly sort of angers me. We've been chasing our tails on this forever, assuming our users were crazy. Never did we realize that our links, despite the expiry time we gave them, still expire early.

@acrdlph

acrdlph commented Sep 22, 2021

Thanks for the extensive research done by all here!

After considering all the options I decided that having the signed URLs refreshed regularly with a scheduled cloud function is the best solution for me.

Pro: avoids the security and management issues with having a user-generated service account
Con: requires a big read/write operation that goes through all documents that need a signed file URL

// v4 signedUrls have maximum expiry 1 week & v2 signedUrls (which theoretically could have longer expiry)
// get invalidated because Google rotates the keys for the managed service account that
// does the signing every two weeks. Hence, we have to renew the signed link regularly.
exports.refreshV4SignedFieldReportImageLink = functions.pubsub
  .schedule("every 160 hours")
  // slightly less than a week - and importantly: less than the expiration set below
  .onRun(async (context) => {
    return db
      .collection("fieldReports")
      .get()
      .then(async (fieldReportsCollection) => {
        try {
          await Promise.all(
            fieldReportsCollection.docs.map(async (fieldReportDoc) => {
              const targetId = fieldReportDoc.data().targetId;
              const imageUrl = fieldReportDoc.data().imageUrl;
              if (imageUrl) {
                // we only want to refresh the signed link for field reports that have an image
                const newSignedUrl = await getSignedUrl(
                  `fieldReportImages/${targetId}`
                );
                return fieldReportDoc.ref.set(
                  { imageUrl: newSignedUrl },
                  { merge: true }
                );
              } else {
                return null;
              }
            })
          );
        } catch (err) {
          throw new Error(`Unable to refresh signed links: ${err}`);
        }
        return null;
      });
  });
