feat: retries for resumable bucket.upload and file.save #1511
Conversation
src/bucket.ts
Outdated
@@ -3737,7 +3737,7 @@ class Bucket extends ServiceObject {
        .pipe(writable)
        .on('error', err => {
          if (
-           isMultipart &&
+           (isMultipart || options.resumable) &&
This will always evaluate to true, right? Because each upload is either multipart or resumable. I think this should be checking for (isMultipart || err.message) or something like that. We want to retry all multipart uploads, but resumable uploads only when it is a certain kind of error (because otherwise gcs-resumable-upload will handle it).
In that case maybe we don't need this check at all. Certain errors are explicitly handled down at the gcs-resumable-upload level here. This basically tells Gaxios to let gcs-resumable-upload handle everything. However, during URL creation only certain codes are handled at that same level and the rest bubble up. I think we should handle anything that bubbles up and meets our retryableFn criteria at this level. What do you think?
If we try to retry a 500 error downstream and it never works out, it will be bubbled up here and then we will send it back downstream. This creates exponentially more retries than maxRetries.
It actually doesn't cause an exponential increase in retries because the error that occurs when retries are exhausted is not being checked for in the retry function (it only has an error message and no error code). As a result the code at this level does not send it back downstream. Everything appears to get handled correctly. I will circle up with you to show you what I mean in case I missed anything obvious.
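The behavior discussed in this thread can be sketched as a predicate over the bubbled-up error. This is a minimal illustration only: `shouldRetry`, `UploadError`, and the set of retryable status codes are hypothetical names chosen for the sketch, not the library's actual API.

```typescript
// Illustrative sketch: retry every multipart error, but retry a resumable
// error only when it carries a retryable status code. An error produced
// after gcs-resumable-upload exhausts its own retries has only a message
// and no code, so it fails this check and is not sent back downstream.
interface UploadError {
  code?: number;
  message: string;
}

// Hypothetical set of retryable codes, for illustration.
const RETRYABLE_CODES = [408, 429, 500, 502, 503, 504];

function shouldRetry(isMultipart: boolean, err: UploadError): boolean {
  if (isMultipart) {
    return true; // multipart uploads are always retried at this level
  }
  // Resumable: only retry errors that bubbled past gcs-resumable-upload
  // with a retryable status code attached.
  return err.code !== undefined && RETRYABLE_CODES.includes(err.code);
}
```

Under this sketch, an exhausted-retries error such as `{message: 'Retry limit exceeded'}` returns `false`, which matches the observation above that retries do not multiply.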
test/file.ts
Outdated
@@ -4118,7 +4118,7 @@ describe('File', () => {
        await file.save(DATA, options);
        throw Error('unreachable');
      } catch (e) {
-       assert.strictEqual(e.message, 'first error');
+       assert.strictEqual(e.message, 'unreachable');
We shouldn't test it this way. If we're checking that it retries, we shouldn't throw the "unreachable" error (because it will be reachable) and should instead assert the retryCount.