Migrate all ISRG owned storage to GCP cloud storage #68
Apple can provide a GCP account in its global manifest. Our ingestor global manifest will have both an AWS account and a GCP account. Maybe we can make it generic by adding fields to the data share processor specific manifests that detail the protocol to use (AWS or GCP) to access the bucket, and the account to use (AWS or GCP). For example:
Copied from https://docs.google.com/document/d/1MdfM3QT63ISU70l63bwzTrxr93Z7Tv7EDjLfammzo6Q/edit?ts=5f90e408#heading=h.f7uu5u7i5wfc with two extra keys: {"ingestion-account","ingestion-protocol"}
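A minimal sketch of what such a data share processor manifest entry might look like, assuming the two proposed keys above; the bucket URL, account value, and `format` number are hypothetical illustrations, not the finalized schema:

```json
{
  "format": 1,
  "ingestion-bucket": "gs://example-ingestor-us-ct-ingestion",
  "ingestion-protocol": "GCP",
  "ingestion-account": "ingestion-writer@example-project.iam.gserviceaccount.com"
}
```

For an AWS peer, `ingestion-protocol` would be `"AWS"` and `ingestion-account` an IAM role ARN instead of a service account email.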
I agree that the data share processor needs to advertise what kind of storage it uses. I might nest the structure a little. I don't think we need … Correspondingly, we need to amend the schema for the ingestion server global manifest so its …
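One way to nest the structure as suggested here would be to group the bucket address, protocol, and identity under a single object per bucket; all field names below other than the two keys proposed earlier are hypothetical:

```json
{
  "format": 1,
  "ingestion-bucket": {
    "bucket": "gs://example-ingestor-us-ct-ingestion",
    "protocol": "GCP",
    "account": "ingestion-writer@example-project.iam.gserviceaccount.com"
  }
}
```

Nesting keeps each bucket's address, protocol, and identity together, so adding a second bucket (e.g. for peer validations) doesn't require a parallel set of top-level keys.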
Per discussion in #68, ISRG is moving its storage to GCP Cloud Storage buckets. This commit removes all the AWS resource management from Terraform, and makes guesses as to how we will exchange the necessary parameters with peers.
This commit moves ISRG's storage from S3 to GCP, and sets up the peer test env so that one has its ingestion and peer validation buckets in S3 and the other has all its storage in GCS. To that end, this adds new modules cloud_storage_aws and cloud_storage_gcp which are responsible for creating storage buckets and policies for the respective platforms. This also includes bumps to the format versions of various manifests, though they will change some more before we can finalize format = 1 of various documents. Finally, facilitator and workflow-manager now accept the argument "default" for their various identity parameters, indicating that no special role assumption or service account impersonation should take place. Addresses #68 and #160.
#181 mostly addresses this, but we need to make sure the manifest format changes that PR implemented are OK with everyone. I will drive that discussion when everyone is back from Thanksgiving break next week.
We did a successful integration test today and Apple signed off on the format changes. This is donezo.
We need to move our entire deployment, including all the buckets we use as mailboxes, to GCP for budget reasons. This will entail some protocol changes because we will have to figure out how Apple will authenticate, and what parameters we need to discover from them to be able to configure the ingestion buckets (probably a GCP service account).