Delete copies of CONUS hourly data on object storage #417
Comments
I'm currently deleting the version on
It's deleting at a rate of about 6 TB per hour.
I forgot, there is one more copy! Before we had the
I have the rclone remote for this bucket specified in
@amsnyder shall I delete this as well, or would you like someone else to have the experience?
You can go ahead and delete the copy in
Okay, I'll fire off the batch job to remove it.
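As a rough sketch of what such a cleanup job might look like with rclone (the remote name `rsignellbucket2` appears in the thread, but the bucket path `conus404-hourly` and the flag values here are assumptions for illustration, not the actual job that was run):

```shell
# Hypothetical cleanup sketch; verify the path with `rclone ls` first.
# Dry run: list what would be deleted without removing anything.
rclone delete --dry-run rsignellbucket2:conus404-hourly

# Actual removal; higher --transfers/--checkers raise parallelism so a
# multi-terabyte bucket (deleting at roughly 6 TB/hour here) finishes sooner.
rclone delete --transfers 32 --checkers 64 rsignellbucket2:conus404-hourly

# Clean up the now-empty directory markers as well.
rclone rmdirs rsignellbucket2:conus404-hourly
```

`rclone delete` removes files but leaves directory structure, which is why the `rmdirs` pass follows; `rclone purge` would do both in one step but skips filters.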
@amsnyder, @alaws-USGS and hytest crew:
As described in more detail in this notebook, there are currently 3 copies of the CONUS404 hourly data on object storage:
Since this data also exists on caldera, I suggest we delete the copy on the AWS S3 bucket nhgf-development (saving USGS about $1600/month) as well as the one on the RENCI pod rsignellbucket2, an allocation that was meant to be used before the USGS pod was acquired.
Any objections?
If not I will delete the dataset from the RENCI pod rsignellbucket2, freeing up the space for other use.
Someone with permissions for nhgf-development would need to delete the copy there.
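For scale, the figures in the thread can be sanity-checked with a bit of arithmetic. The ~$0.023/GB-month S3 Standard rate and the ~70 TB dataset size below are assumptions used for illustration; only the ~$1600/month cost and the 6 TB/hour deletion rate come from the thread.

```python
# Back-of-envelope check of the thread's figures.
S3_USD_PER_GB_MONTH = 0.023      # assumed S3 Standard price, not from the thread
DATASET_TB = 70                  # assumed size, for illustration only
DELETE_RATE_TB_PER_HOUR = 6      # from the thread

monthly_cost = DATASET_TB * 1000 * S3_USD_PER_GB_MONTH
hours_to_delete = DATASET_TB / DELETE_RATE_TB_PER_HOUR

print(f"~${monthly_cost:,.0f}/month")             # close to the ~$1600 quoted
print(f"~{hours_to_delete:.0f} hours to delete")  # roughly half a day at 6 TB/hour
```

At these assumed numbers the storage cost lands near the quoted ~$1600/month, and the deletion would take on the order of half a day, consistent with a single batch job.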