# goblob

`goblob` is a tool for migrating Cloud Foundry blobs from one blobstore to another. Presently it supports migrating from an NFS blobstore to an S3-compatible one or to Azure blob storage.
Download the [latest release](https://github.com/pivotal-cf/goblob/releases).
To install from source, the requirements are:

- [Go](https://golang.org)
- [glide](https://glide.sh)
```
mkdir -p $GOPATH/src/github.com/pivotal-cf/goblob
git clone git@github.com:pivotal-cf/goblob.git $GOPATH/src/github.com/pivotal-cf/goblob
cd $GOPATH/src/github.com/pivotal-cf/goblob
glide install
GOARCH=amd64 GOOS=linux go install github.com/pivotal-cf/goblob/cmd/goblob
```
The tool is a Go binary, which must be executed on the NFS VM that you intend to migrate. It provides the following commands:
| Command | Description |
|---|---|
| `goblob migrate [OPTIONS]` | Migrate NFS blobstore to S3-compatible blobstore |
| `goblob migrate2azure [OPTIONS]` | Migrate NFS blobstore to Azure blob storage |
For each option you use, add `--` before the option name in the command you want to execute.
### goblob migrate [OPTIONS]
`concurrent-uploads`
: Number of concurrent uploads (default: 20)

`exclude`
: Directory to exclude (may be given more than once)

`blobstore-path`
: The path to the root of the NFS blobstore, e.g. `/var/vcap/store/shared`

`s3-endpoint`
: The endpoint of the S3-compatible blobstore

`s3-accesskey`
: The access key to use with the S3-compatible blobstore

`s3-secretkey`
: The secret key to use with the S3-compatible blobstore

`region`
: The region to use with the S3-compatible blobstore

`buildpacks-bucket-name`
: The bucket containing buildpacks

`droplets-bucket-name`
: The bucket containing droplets

`packages-bucket-name`
: The bucket containing packages

`resources-bucket-name`
: The bucket containing resources

`use-multipart-uploads`
: Whether to use multi-part uploads

`disable-ssl`
: Whether to disable SSL when uploading blobs

`insecure-skip-verify`
: Skip server SSL certificate verification
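As an illustration, a complete invocation against an S3-compatible store might look like the following; the endpoint, region, credential variables, and bucket names are placeholders for your own environment:

```
goblob migrate --blobstore-path /var/vcap/store/shared \
  --s3-endpoint https://s3.amazonaws.com \
  --region us-east-1 \
  --s3-accesskey $AWS_ACCESS_KEY_ID \
  --s3-secretkey $AWS_SECRET_ACCESS_KEY \
  --buildpacks-bucket-name cf-buildpacks \
  --droplets-bucket-name cf-droplets \
  --packages-bucket-name cf-packages \
  --resources-bucket-name cf-resources
```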
### goblob migrate2azure [OPTIONS]
For example:

```
goblob migrate2azure --blobstore-path /var/vcap/store/shared \
  --azure-storage-account $storage_account_name \
  --azure-storage-account-key $storage_account_key \
  --cloud-name AzureCloud \
  --buildpacks-bucket-name cf-buildpacks \
  --droplets-bucket-name cf-droplets \
  --packages-bucket-name cf-packages \
  --resources-bucket-name cf-resources
```
`concurrent-uploads`
: Number of concurrent uploads (default: 20)

`exclude`
: Directory to exclude (may be given more than once)

`blobstore-path`
: The path to the root of the NFS blobstore, e.g. `/var/vcap/store/shared`

`azure-storage-account`
: Azure storage account name

`azure-storage-account-key`
: Azure storage account key

`cloud-name`
: Cloud name; available names are: AzureCloud, AzureChinaCloud, AzureGermanCloud, AzureUSGovernment

`buildpacks-bucket-name`
: The container for buildpacks

`droplets-bucket-name`
: The container for droplets

`packages-bucket-name`
: The container for packages

`resources-bucket-name`
: The container for resources
- If your S3 service uses an SSL certificate signed by your own CA, then before applying changes in Ops Manager to switch to S3, make sure the root CA certificate that signed the endpoint certificate is a BOSH trusted certificate. You will also need to update the Ops Manager CA certs: place the CA certificate in `/usr/local/share/ca-certificates`, run `update-ca-certificates`, and restart `tempest-web` (a sketch of these commands appears after this list). You will need to add this certificate back each time you upgrade Ops Manager. In PCF 1.9+, Ops Manager lets you replace its own SSL certificate and have that persist across upgrades.
- Update the Ops Manager File Storage config to point at the S3 blobstore, using the buckets (cc-buildpacks-, cc-droplets-, cc-packages-, cc-resources-).
- Click Apply Changes in Ops Manager.
- Once changes are applied, re-run `goblob` to migrate any files which were created after the initial migration.
- Validate that apps can be restaged and pushed.
- Turn off the BOSH resurrector (`bosh vm resurrection off`).
- In the IaaS console (e.g. AWS EC2, vCenter console, etc.), terminate all the CC VM jobs (Cloud Controller, Cloud Controller Worker, and Clock Global) plus the NFS server, ensuring the attached disks are removed as well. Note that your CF API will stop being available at this point (running apps should continue to be available, though). This step is required to ensure the removal of the NFS mount from these jobs.
- Run `bosh cck` against the cf deployment to check for any errors with the BOSH state. It should ask whether you want to delete references to the missing CC/NFS jobs, which you want to do.
- Go back to Ops Manager, update your ERT configuration to zero NFS instances, and re-add your desired instance counts for the CC jobs.
- Click Apply Changes in Ops Manager. After this deploy is finished, your CF API availability will resume.
- Turn the BOSH resurrector back on (`bosh vm resurrection on`) if it isn't re-enabled after your re-deploy. A consolidated sketch of these BOSH steps appears after this list.
- Starting with PCF 1.12, Ops Manager no longer allows you to select the S3 blobstore configuration for ERT/PAS (instead of the internal NFS option) without also deleting the NFS server VM. This means you should shut down the Cloud Controller VMs before you switch your configuration, as you will not have a chance to run a post-configuration-switch migration once the NFS server is gone.
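As referenced in the first step above, trusting a custom CA on the Ops Manager VM might look roughly like this sketch; the certificate filename is a placeholder, and the exact restart mechanism for tempest-web can vary by Ops Manager version:

```
# Run on the Ops Manager VM. "my-root-ca.crt" is a placeholder for your CA certificate;
# update-ca-certificates only picks up .crt files in this directory.
sudo cp my-root-ca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
# Restart the Ops Manager web process so it picks up the new trust store.
sudo service tempest-web restart
```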
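And here is the BOSH side of the steps consolidated into one sketch, assuming the same v1-style BOSH CLI used above and a placeholder path to your cf deployment manifest:

```
bosh vm resurrection off        # turn off the resurrector before terminating VMs
# ...terminate the CC and NFS VMs (and their disks) in the IaaS console...
bosh deployment /path/to/cf.yml # target the cf deployment manifest (placeholder path)
bosh cck                        # resolve the missing CC/NFS VM references when prompted
# ...zero NFS instances and restore CC instance counts in Ops Manager, then Apply Changes...
bosh vm resurrection on         # re-enable the resurrector after the re-deploy
```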
To run the tests you will need Docker and the Minio image:

- Install Docker
- Pull the Minio image with `docker pull minio/minio`

To run all of the tests in a Docker container:

```
./testrunner
```

To continually run the tests during development, start a local Minio server:

```
docker run -p 9000:9000 -e "MINIO_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE" -e "MINIO_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" minio/minio server /tmp
```

and, in a separate terminal, run the test suite continuously with Ginkgo:

```
MINIO_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE MINIO_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY ginkgo watch -r
```