bug-1881575: support storing crashes in gcs #977
Conversation
Force-pushed from f42a75f to 3670126 (Compare)
This is tied to the wrong bug. This PR is one step in updating crash ingestion to use GCS instead of S3 for storage. The Socorro processor also has a separate crash storage for exporting crash data to Telemetry. I don't think we have a general bug for switching from S3 to GCS as part of the GCP migration. We need to create a new bug.

Filed https://bugzilla.mozilla.org/show_bug.cgi?id=1881575 and updated this PR to point there instead.

Thank you! Sorry it took me ages to notice.
Force-pushed from dc5b852 to 5d3f78b (Compare)
project="test", | ||
) | ||
else: | ||
self.client = storage.Client() |
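For context, here is a minimal, self-contained sketch of what this conditional client setup might look like; the class name and the emulator check are my own illustrative assumptions, not necessarily the PR's actual code:

```python
import os

from google.cloud import storage


class GcsCrashStorage:
    """Hypothetical sketch of a GCS-backed crash storage client."""

    def __init__(self):
        if os.environ.get("STORAGE_EMULATOR_HOST"):
            # google-cloud-storage routes requests to the emulator when
            # STORAGE_EMULATOR_HOST is set; a dummy project id suffices.
            self.client = storage.Client(project="test")
        else:
            # In production, project and credentials are discovered from
            # the environment (e.g. GOOGLE_APPLICATION_CREDENTIALS).
            self.client = storage.Client()
```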
Unlike for pubsub in #974 (comment), the default retry and timeout behavior for google-cloud-storage is better defined. We use two network methods in this class: `Client.get_bucket` and `Blob.upload_from_string`, both of which have a default timeout of 60 seconds. `Blob.upload_from_string` is expected not to retry because the client assumes it is not an idempotent action. `Client.get_bucket` may retry, and I think that's fine given there is a default timeout set and a default retry timeout of 120 seconds. Overall, a single file upload is bounded to (bucket retry timeout) + (bucket RPC timeout) + (blob upload RPC timeout) => 120 + 60 + 60 => 240 seconds.
tl;dr I think this client sets sane defaults for retry and timeout, and we shouldn't mess with them unless/until we see an issue.
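For reference, a sketch of what making those defaults explicit could look like if we ever do need to tune them; the bucket and blob names here are made up, and the values mirror the library's documented defaults rather than project-specific choices:

```python
from google.cloud import storage
from google.cloud.storage.retry import DEFAULT_RETRY

client = storage.Client()

# Client.get_bucket: 60-second per-RPC timeout by default; DEFAULT_RETRY
# governs retries, with a total retry deadline of 120 seconds.
bucket = client.get_bucket("example-bucket", timeout=60, retry=DEFAULT_RETRY)

# Blob.upload_from_string: 60-second timeout by default, and retry=None
# makes the "no retry, since uploads aren't assumed idempotent" behavior
# explicit.
blob = bucket.blob("v1/raw_crash/example-crash-id")
blob.upload_from_string(b"payload", timeout=60, retry=None)
```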
There's one thing that should get fixed, one thing we should figure out, and I think we should redo the tests (other than the permissions one) to use the emulator rather than mocking everything out.
Other than that, this looks good!
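As an illustration of the emulator approach, a test could point google-cloud-storage at a fake GCS server via STORAGE_EMULATOR_HOST and exercise the real client end to end; this is a hypothetical sketch (bucket and blob names invented), not the PR's actual test code:

```python
import os

from google.cloud import storage


def test_upload_roundtrip():
    # Assumes an emulator such as fake-gcs-server is running and
    # STORAGE_EMULATOR_HOST points at it, e.g. "http://gcs-emulator:8001".
    assert os.environ.get("STORAGE_EMULATOR_HOST")

    client = storage.Client(project="test")
    bucket = client.create_bucket("test-bucket")

    blob = bucket.blob("raw_crash/some-crash-id")
    blob.upload_from_string(b"payload")

    assert bucket.blob("raw_crash/some-crash-id").download_as_bytes() == b"payload"
```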
I don't understand the changes you made in the tests and I'm puzzled about the new problem with markus not closing the statsd socket. Can you explain more about what you're thinking?
@@ -36,6 +33,7 @@ CRASHMOVER_CRASHSTORAGE_ENDPOINT_URL=http://localstack:4566
CRASHMOVER_CRASHSTORAGE_REGION=us-east-1
CRASHMOVER_CRASHSTORAGE_ACCESS_KEY=foo
CRASHMOVER_CRASHSTORAGE_SECRET_ACCESS_KEY=foo
# Used for S3 and GCS
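If the intent is a single bucket-name setting shared by both backends, reading it might look like the following; CRASHMOVER_CRASHSTORAGE_BUCKET_NAME is my guess at the variable this comment annotates, since the diff cuts off here:

```python
import os

# A single bucket-name setting (hypothetical variable name) serves
# whichever backend (S3 or GCS) is configured, so the migration doesn't
# require renaming the variable.
bucket_name = os.environ["CRASHMOVER_CRASHSTORAGE_BUCKET_NAME"]
```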
That is a much better idea than what I was thinking. 👍
Looks good!