diff --git a/docs/guides/akamai/solutions/iot-firmware-upgrades-with-obj-and-cdn/index.md b/docs/guides/akamai/solutions/iot-firmware-upgrades-with-obj-and-cdn/index.md index a64e432afc8..9d27616f3e3 100644 --- a/docs/guides/akamai/solutions/iot-firmware-upgrades-with-obj-and-cdn/index.md +++ b/docs/guides/akamai/solutions/iot-firmware-upgrades-with-obj-and-cdn/index.md @@ -39,7 +39,7 @@ This solution creates a streamlined delivery pipeline that allows developers to ### Systems and Components -- **Linode Object Storage:** An S3 compatible Object Storage bucket +- **Linode Object Storage:** An Amazon S3-compatible Object Storage bucket - **Linode VM:** A Dedicated 16GB Linode virtual machine diff --git a/docs/guides/akamai/solutions/observability-with-datastream-and-multiplexing/index.md b/docs/guides/akamai/solutions/observability-with-datastream-and-multiplexing/index.md index 25db3010fa6..bc20b3e8980 100644 --- a/docs/guides/akamai/solutions/observability-with-datastream-and-multiplexing/index.md +++ b/docs/guides/akamai/solutions/observability-with-datastream-and-multiplexing/index.md @@ -58,7 +58,7 @@ Coupling cloud-based multiplexing with DataStream edge logging allows you to con ### Integration and Migration Effort -The multiplexing solution in this guide does not require the migration of any application-critical software or data. This solution exists as a location-agnostic, cloud-based pipeline between your edge delivery infrastructure and log storage endpoints (i.e. s3 buckets, Google Cloud Storage, etc.). +The multiplexing solution in this guide does not require the migration of any application-critical software or data. This solution exists as a location-agnostic, cloud-based pipeline between your edge delivery infrastructure and log storage endpoints (i.e. Amazon S3-compatible buckets, Google Cloud Storage, etc.). Using the following example, you can reduce your overall egress costs by pointing your cloud multiplexing architecture to Akamai’s Object Storage rather than a third-party object storage solution. diff --git a/docs/guides/akamai/solutions/observability-with-datastream-and-trafficpeak/index.md b/docs/guides/akamai/solutions/observability-with-datastream-and-trafficpeak/index.md index 90f165e5e29..5a5538a12c4 100644 --- a/docs/guides/akamai/solutions/observability-with-datastream-and-trafficpeak/index.md +++ b/docs/guides/akamai/solutions/observability-with-datastream-and-trafficpeak/index.md @@ -87,4 +87,4 @@ Below is a high-level diagram and walkthrough of a DataStream and TrafficPeak ar - **VMs:** Compute Instances used to run TrafficPeak’s log ingest and processing software. Managed by Akamai. -- **Object Storage:** S3 compatible object storage used to store log data from TrafficPeak. Managed by Akamai. \ No newline at end of file +- **Object Storage:** Amazon S3-compatible object storage used to store log data from TrafficPeak. Managed by Akamai. \ No newline at end of file diff --git a/docs/guides/applications/big-data/getting-started-with-pytorch-lightning/index.md b/docs/guides/applications/big-data/getting-started-with-pytorch-lightning/index.md index b4e757551b3..b4b39f21a17 100644 --- a/docs/guides/applications/big-data/getting-started-with-pytorch-lightning/index.md +++ b/docs/guides/applications/big-data/getting-started-with-pytorch-lightning/index.md @@ -51,7 +51,7 @@ An optimized pipeline consists of a set of one or more "gold images". These beco Lightning code is configured to include multiple data loader steps to train neural networks. 
Depending on the desired training iterations and epochs, configured code can optionally store numerous intermediate storage objects and spaces. This allows for the isolation of training and validation steps for further testing, validation, and feedback loops. -Throughout the modeling process, various storage spaces are used for staging purposes. These spaces might be confined to the Linux instance running PyTorch Lightning. Alternatively, they can have inputs sourced from static or streaming objects located either within or outside the instance. Such sourced locations can include various URLs, local Linode volumes, Linode (or other S3 buckets), or external sources. This allows instances to be chained across multiple GPU instances if desired. +Throughout the modeling process, various storage spaces are used for staging purposes. These spaces might be confined to the Linux instance running PyTorch Lightning. Alternatively, they can have inputs sourced from static or streaming objects located either within or outside the instance. Such sourced locations can include various URLs, local Linode volumes, Linode (or other Amazon S3-compatible buckets), or external sources. This allows instances to be chained across multiple GPU instances if desired. This introduces an additional stage in the pipeline between and among instances for high-volume or large tensor data source research. @@ -91,7 +91,7 @@ Several storage profiles work for the needs of modeling research, including: - **Mounted Linode Volumes**: Up to eight logical disk volumes ranging from 10 GB to 80 TB can be optionally added to any Linode. Volumes are mounted and unmounted either manually or programmatically. Volumes may be added, deleted, and/or backed-up during the research cycle. Volume storage costs are optional. -- **Linode Object Storage**: Similar to CORS S3 storage, Linode Object Storage emulates AWS or DreamHost S3 storage, so S3 objects can be migrated to Linode and behave similarly. Standard S3 buckets can be imported, stored, or deleted as needed during the research cycle. Object storage costs are optional. +- **Linode Object Storage**: Similar to CORS S3 storage, Linode Object Storage emulates AWS or DreamHost S3 storage, so Amazon S3-compatible objects can be migrated to Linode and behave similarly. Standard S3 buckets can be imported, stored, or deleted as needed during the research cycle. Object storage costs are optional. - **External URL Code Calls**: External networked data sources are subject to the data flow charges associated with the Linode GPU or other instance cost. diff --git a/docs/guides/applications/configuration-management/terraform/secrets-management-with-terraform/index.md b/docs/guides/applications/configuration-management/terraform/secrets-management-with-terraform/index.md index d29c704993f..34a13805a9c 100644 --- a/docs/guides/applications/configuration-management/terraform/secrets-management-with-terraform/index.md +++ b/docs/guides/applications/configuration-management/terraform/secrets-management-with-terraform/index.md @@ -173,7 +173,7 @@ As of the writing of this guide, **sensitive information used to generate your T ### Remote Backends -Terraform [*backends*](https://www.terraform.io/docs/backends/index.html) allow the user to securely store their state in a remote location. For example, a key/value store like [Consul](https://www.consul.io/), or an S3 compatible bucket storage like [Minio](https://www.minio.io/). This allows the Terraform state to be read from the remote store. 
Because the state only ever exists locally in memory, there is no worry about storing secrets in plain text. +Terraform [*backends*](https://www.terraform.io/docs/backends/index.html) allow the user to securely store their state in a remote location. For example, a key/value store like [Consul](https://www.consul.io/), or an Amazon S3-compatible bucket storage like [Minio](https://www.minio.io/). This allows the Terraform state to be read from the remote store. Because the state only ever exists locally in memory, there is no worry about storing secrets in plain text. Some backends, like Consul, also allow for state locking. If one user is applying a state, another user cannot make any changes. diff --git a/docs/guides/applications/messaging/linode-object-storage-with-mastodon/index.md b/docs/guides/applications/messaging/linode-object-storage-with-mastodon/index.md index a67f02ac9fe..d5f7a2f743b 100644 --- a/docs/guides/applications/messaging/linode-object-storage-with-mastodon/index.md +++ b/docs/guides/applications/messaging/linode-object-storage-with-mastodon/index.md @@ -34,13 +34,13 @@ Mastodon by default stores its media attachments locally. Every upload is saved If your Mastodon instance stays below a certain size and traffic level, these image uploads might not cause issues. But as your Mastodon instance grows, the local storage approach can cause difficulties. Media stored in this way is often difficult to manage and a burden on your server. -But object storage, by contrast, excels when it comes to storing static files — like Mastodon's media attachments. An S3-compatible object storage bucket can more readily store a large number of static files and scale appropriately. +But object storage, by contrast, excels when it comes to storing static files — like Mastodon's media attachments. An Amazon S3-compatible object storage bucket can more readily store a large number of static files and scale appropriately. To learn more about the features of object storage generally and Linode Object Storage more particularly, take a look at our [Linode Object Storage overview](/docs/products/storage/object-storage/). ## How to Use Linode Object Storage with Mastodon -The rest of this guide walks you through setting up a Mastodon instance to use Linode Object Storage for storing its media attachments. Although the guide uses Linode Object Storage, the steps should also provide an effective model for using other S3-compatible object storage buckets with Mastodon. +The rest of this guide walks you through setting up a Mastodon instance to use Linode Object Storage for storing its media attachments. Although the guide uses Linode Object Storage, the steps should also provide an effective model for using other Amazon S3-compatible object storage buckets with Mastodon. The tutorial gives instructions for creating a new Mastodon instance, but the instructions should also work for most existing Mastodon instances regardless of whether it was installed on Docker or from source. Additionally, the tutorial includes steps for migrating existing, locally-stored Mastodon media to the object storage instance. @@ -195,7 +195,7 @@ At this point, your Mastodon instance is ready to start storing media on your Li If you are adding object storage to an existing Mastodon instance, likely already have content stored locally. And likely you want to migrate that content to your new Linode Object Storage bucket. 
-To do so, you can use a tool for managing S3 storage to copy local contents to your remote object storage bucket. For instance, AWS has a command-line S3 tool that should be configurable for Linode Object Storage. +To do so, you can use a tool for managing Amazon S3-compatible storage to copy local contents to your remote object storage bucket. For instance, AWS has a command-line S3 tool that should be configurable for Linode Object Storage. However, this guide uses the powerful and flexible [rclone](https://rclone.org/s3/). `rclone` operates on a wide range of storage devices and platforms, not just S3, and it is exceptional for syncing across storage mediums. @@ -239,6 +239,6 @@ Perhaps the simplest way to verify your Mastodon configuration is by making a po You Mastodon instance now has its media storage needs being handled by object storage. And with that your server has become more scalable and prepared for an expanding user base. -The links below provide additional information on how the setup between Mastodon and an S3-compatible storage works. +The links below provide additional information on how the setup between Mastodon and Amazon S3-compatible storage works. To keep learning about Mastodon, be sure to take a look at the official [Mastodon blog](https://blog.joinmastodon.org/) and the [Mastodon discussion board](https://discourse.joinmastodon.org/). diff --git a/docs/guides/development/version-control/using-gitlab-runners-with-linode-object-storage/index.md b/docs/guides/development/version-control/using-gitlab-runners-with-linode-object-storage/index.md index ae11312b24c..8239f52c2e7 100644 --- a/docs/guides/development/version-control/using-gitlab-runners-with-linode-object-storage/index.md +++ b/docs/guides/development/version-control/using-gitlab-runners-with-linode-object-storage/index.md @@ -395,7 +395,7 @@ test-job-1: By default, cached files are stored locally alongside your GitLab Runner Manager. But that option may not be the most efficient, especially as your GitLab pipelines become more complicated and your projects' storage needs expand. -To remedy this, you can adjust your GitLab Runner configuration to use an S3-compatible object storage solution, like [Linode Object Storage](/docs/products/storage/object-storage/get-started/). +To remedy this, you can adjust your GitLab Runner configuration to use an Amazon S3-compatible object storage solution, like [Linode Object Storage](/docs/products/storage/object-storage/get-started/). These next steps show you how you can integrate a Linode Object Storage bucket with your GitLab Runner to store cached resources from CI/CD jobs. diff --git a/docs/guides/kubernetes/deploy-minio-on-kubernetes-using-kubespray-and-ansible/index.md b/docs/guides/kubernetes/deploy-minio-on-kubernetes-using-kubespray-and-ansible/index.md index f69537f3524..3314919eccc 100644 --- a/docs/guides/kubernetes/deploy-minio-on-kubernetes-using-kubespray-and-ansible/index.md +++ b/docs/guides/kubernetes/deploy-minio-on-kubernetes-using-kubespray-and-ansible/index.md @@ -20,7 +20,7 @@ external_resources: ## What is Minio? -Minio is an open source, S3 compatible object store that can be hosted on a Linode. Deployment on a Kubernetes cluster is supported in both standalone and distributed modes. This guide uses [Kubespray](https://github.com/kubernetes-incubator/kubespray) to deploy a Kubernetes cluster on three servers running Ubuntu 16.04. Kubespray comes packaged with Ansible playbooks that simplify setup on the cluster. 
Minio is then installed in standalone mode on the cluster to demonstrate how to create a service. +Minio is an open source, Amazon S3-compatible object store that can be hosted on a Linode. Deployment on a Kubernetes cluster is supported in both standalone and distributed modes. This guide uses [Kubespray](https://github.com/kubernetes-incubator/kubespray) to deploy a Kubernetes cluster on three servers running Ubuntu 16.04. Kubespray comes packaged with Ansible playbooks that simplify setup on the cluster. Minio is then installed in standalone mode on the cluster to demonstrate how to create a service. ## Before You Begin @@ -411,6 +411,6 @@ Persistent Volumes(PV) are an abstraction in Kubernetes that represents a unit o ![Minio Login Screen](minio-login-screen.png) -1. Minio has similar functionality to S3: file uploads, creating buckets, and storing other data. +1. Minio has similar functionality to Amazon S3: file uploads, creating buckets, and storing other data. ![Minio Browser](minio-browser.png) \ No newline at end of file diff --git a/docs/guides/platform/migrate-to-linode/migrate-from-aws-s3-to-linode-object-storage/index.md b/docs/guides/platform/migrate-to-linode/migrate-from-aws-s3-to-linode-object-storage/index.md index 72eeabd5e4d..e187a1f5adb 100644 --- a/docs/guides/platform/migrate-to-linode/migrate-from-aws-s3-to-linode-object-storage/index.md +++ b/docs/guides/platform/migrate-to-linode/migrate-from-aws-s3-to-linode-object-storage/index.md @@ -13,7 +13,7 @@ external_resources: - '[Linode Object Storage guides & tutorials](/docs/guides/platform/object-storage/)' --- -Linode Object Storage is an S3-compatible service used for storing large amounts of unstructured data. This guide includes steps on how to migrate up to 100TB of static content from AWS S3 to Linode Object Storage using rclone, along with how to monitor your migration using rclone’s WebUI GUI. +Linode Object Storage is an Amazon S3-compatible service used for storing large amounts of unstructured data. This guide includes steps on how to migrate up to 100TB of static content from AWS S3 to Linode Object Storage using rclone, along with how to monitor your migration using rclone’s WebUI GUI. ## Migration Considerations @@ -37,7 +37,7 @@ Linode Object Storage is an S3-compatible service used for storing large amounts There are two architecture options for completing a data migration from AWS S3 to Linode Object Storage. One of these architectures is required to be in place prior to initiating the data migration: -**Architecture 1:** Utilizes an EC2 instance running rclone in the same region as the source S3 bucket. Data is transferred internally from the S3 bucket to the EC2 instance and then over the public internet from the EC2 instance to the target Linode Object Storage bucket. +**Architecture 1:** Utilizes an EC2 instance running rclone in the same region as the source AWS S3 bucket. Data is transferred internally from the AWS S3 bucket to the EC2 instance and then over the public internet from the EC2 instance to the target Linode Object Storage bucket. - **Recommended for:** speed of transfer, users with AWS platform familiarity @@ -53,7 +53,7 @@ Rclone generally performs better when placed closer to the source data being cop 1. A source AWS S3 bucket with the content to be transferred. -1. An AWS EC2 instance running rclone in the same region as the source S3 bucket. The S3 bucket communicates with the EC2 instance via VPC Endpoint within the AWS region. 
Your IAM policy should allow S3 access only via your VPC Endpoint. +1. An AWS EC2 instance running rclone in the same region as the source AWS S3 bucket. The AWS S3 bucket communicates with the EC2 instance via VPC Endpoint within the AWS region. Your IAM policy should allow S3 access only via your VPC Endpoint. 1. Data is copied across the public internet from the AWS EC2 instance to a target Linode Object Storage bucket. This results in egress (outbound traffic) being calculated by AWS. @@ -93,7 +93,7 @@ Rclone generally performs better when placed closer to the source data being cop - Secret key - Region ID -- If using Architecture 1, there must be a [VPC gateway endpoint created](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html) for S3 in the same VPC where your EC2 instance is deployed. This should be the same region as your S3 bucket. +- If using Architecture 1, there must be a [VPC gateway endpoint created](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html) for S3 in the same VPC where your EC2 instance is deployed. This should be the same region as your AWS S3 bucket. - An **existing Linode Object Storage bucket** with: @@ -194,7 +194,7 @@ Rclone generally performs better when placed closer to the source data being cop #### Rclone Copy Command Breakdown -- `aws:aws-bucket-name/`: The AWS remote provider and source S3 bucket. Including the slash at the end informs the `copy` command to include everything within the bucket. +- `aws:aws-bucket-name/`: The AWS remote provider and source AWS S3 bucket. Including the slash at the end informs the `copy` command to include everything within the bucket. - `linode:linode-bucket-name/`: The Linode remote provider and target Object Storage bucket. diff --git a/docs/guides/platform/migrate-to-linode/migrate-from-azure-blob-storage-to-linode-object-storage/index.md b/docs/guides/platform/migrate-to-linode/migrate-from-azure-blob-storage-to-linode-object-storage/index.md index 7b309a636e1..dbd812a4d9b 100644 --- a/docs/guides/platform/migrate-to-linode/migrate-from-azure-blob-storage-to-linode-object-storage/index.md +++ b/docs/guides/platform/migrate-to-linode/migrate-from-azure-blob-storage-to-linode-object-storage/index.md @@ -12,7 +12,7 @@ external_resources: - '[Linode Object Storage guides & tutorials](/docs/guides/platform/object-storage/)' --- -Linode Object Storage is an S3-compatible service used for storing large amounts of unstructured data. This guide includes steps on how to migrate up to 100TB of static content from Azure Blob Storage to Linode Object Storage using rclone, along with how to monitor your migration using rclone’s WebUI GUI. +Linode Object Storage is an Amazon S3-compatible service used for storing large amounts of unstructured data. This guide includes steps on how to migrate up to 100TB of static content from Azure Blob Storage to Linode Object Storage using rclone, along with how to monitor your migration using rclone’s WebUI GUI. ## Migration Considerations @@ -305,4 +305,4 @@ There are several next steps to consider after a successful object storage migra - **Confirm the changeover is functioning as expected.** Allow some time to make sure your updated workloads and jobs are interacting successfully with Linode Object Storage. Once you confirm everything is working as expected, you can safely delete the original source bucket and its contents. 
-- **Take any additional steps to update your system for S3 compatibility.** Since the Azure Blob Storage API is not S3 compatible, you may need to make internal configuration changes to ensure your system is set up to communicate using S3 protocol. This means your system should be updated to use an S3-compatible [SDK](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingAWSSDK.html) like [Boto3](https://aws.amazon.com/sdk-for-python/) or S3-compatible command line utility like [s3cmd](https://s3tools.org/s3cmd). The [AWS SDK](https://techdocs.akamai.com/cloud-computing/docs/using-the-aws-sdk-for-php-with-object-storage) can also be configured to function with Linode Object Storage. \ No newline at end of file +- **Take any additional steps to update your system for Amazon S3 compatibility.** Since the Azure Blob Storage API is not Amazon S3-compatible, you may need to make internal configuration changes to ensure your system is set up to communicate using S3 protocol. This means your system should be updated to use an Amazon S3-compatible [SDK](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingAWSSDK.html) like [Boto3](https://aws.amazon.com/sdk-for-python/) or Amazon S3-compatible command line utility like [s3cmd](https://s3tools.org/s3cmd). The [AWS SDK](https://techdocs.akamai.com/cloud-computing/docs/using-the-aws-sdk-for-php-with-object-storage) can also be configured to function with Linode Object Storage. \ No newline at end of file diff --git a/docs/guides/platform/migrate-to-linode/migrate-from-google-cloud-storage-to-linode-object-storage/index.md b/docs/guides/platform/migrate-to-linode/migrate-from-google-cloud-storage-to-linode-object-storage/index.md index 1a92c36512a..066334b976f 100644 --- a/docs/guides/platform/migrate-to-linode/migrate-from-google-cloud-storage-to-linode-object-storage/index.md +++ b/docs/guides/platform/migrate-to-linode/migrate-from-google-cloud-storage-to-linode-object-storage/index.md @@ -12,7 +12,7 @@ external_resources: - '[Linode Object Storage guides & tutorials](/docs/guides/platform/object-storage/)' --- -Linode Object Storage is an S3-compatible service used for storing large amounts of unstructured data. This guide includes steps on how to migrate up to 100TB of static content from Google Cloud Storage to Linode Object Storage using rclone, along with how to monitor your migration using rclone’s WebUI GUI. +Linode Object Storage is an Amazon S3-compatible service used for storing large amounts of unstructured data. This guide includes steps on how to migrate up to 100TB of static content from Google Cloud Storage to Linode Object Storage using rclone, along with how to monitor your migration using rclone’s WebUI GUI. ## Migration Considerations @@ -310,4 +310,4 @@ There are several next steps to consider after a successful object storage migra - **Confirm the changeover is functioning as expected.** Allow some time to make sure your updated workloads and jobs are interacting successfully with Linode Object Storage. Once you confirm everything is working as expected, you can safely delete the original source bucket and its contents. -- **Take any additional steps to update your system for S3 compatibility.** You may need to make additional internal configuration changes to ensure your system is set up to communicate using S3 protocol. See Google’s documentation for [interoperability with other storage providers](https://cloud.google.com/storage/docs/interoperability). 
\ No newline at end of file +- **Take any additional steps to update your system for Amazon S3 compatibility.** You may need to make additional internal configuration changes to ensure your system is set up to communicate using S3 protocol. See Google’s documentation for [interoperability with other storage providers](https://cloud.google.com/storage/docs/interoperability). \ No newline at end of file diff --git a/docs/guides/platform/object-storage/backing-up-compute-instance-to-object-storage/index.md b/docs/guides/platform/object-storage/backing-up-compute-instance-to-object-storage/index.md index 697740740d2..3b91c9e48b2 100644 --- a/docs/guides/platform/object-storage/backing-up-compute-instance-to-object-storage/index.md +++ b/docs/guides/platform/object-storage/backing-up-compute-instance-to-object-storage/index.md @@ -12,7 +12,7 @@ external_resources: - '[Ubuntu Forums: Heliode - Howto: Backup and Restore Your System!](https://ubuntuforums.org/showthread.php?t=35087)' --- -Linode's [Object Storage](https://www.linode.com/products/object-storage/) service is an S3-compatible cloud-based file storage solution that offers high availability. In addition to many other data-storage uses, Linode Object Storage can efficiently store backups of your Linode Compute Instances. +Linode's [Object Storage](https://www.linode.com/products/object-storage/) service is an Amazon S3-compatible cloud-based file storage solution that offers high availability. In addition to many other data-storage uses, Linode Object Storage can efficiently store backups of your Linode Compute Instances. In this tutorial, learn how to create full-system backups from the command line and store them on Linode Object Storage. Afterwards, find out how to automate and schedule the entire process. @@ -66,7 +66,7 @@ The tar command also supports incremental backups, using its `--listed-increment With your backup made and stored in a convenient `backup.tgz` file, you can start the process of storing it on an Object Storage bucket. -The [rclone](https://rclone.org/) utility handles that process efficiently, especially when you plan on automating backups (covered in the next section). You can learn more about rclone and its usage with S3 object storage in our guide [Use Rclone to Sync Files to Linode Object Storage](/docs/guides/rclone-object-storage-file-sync/). +The [rclone](https://rclone.org/) utility handles that process efficiently, especially when you plan on automating backups (covered in the next section). You can learn more about rclone and its usage with object storage in our guide [Use Rclone to Sync Files to Linode Object Storage](/docs/guides/rclone-object-storage-file-sync/). Follow along with the steps here to set up rclone and store your initial backup file to a Linode Object Storage instance. diff --git a/docs/guides/platform/object-storage/how-to-configure-nextcloud-to-use-linode-object-storage-as-an-external-storage-mount/index.md b/docs/guides/platform/object-storage/how-to-configure-nextcloud-to-use-linode-object-storage-as-an-external-storage-mount/index.md index 5c84e675e24..2210ebd5168 100644 --- a/docs/guides/platform/object-storage/how-to-configure-nextcloud-to-use-linode-object-storage-as-an-external-storage-mount/index.md +++ b/docs/guides/platform/object-storage/how-to-configure-nextcloud-to-use-linode-object-storage-as-an-external-storage-mount/index.md @@ -79,7 +79,7 @@ If you have not yet [created an Object Storage access key](/docs/products/storag 1. 
From the **External Storage** dropdown menu, select the **Amazon S3** option. {{< note respectIndent=false >}} -Linode Object Storage is *S3-compatible*. Nextcloud connects to Amazon's Object Storage service by default, however, in the next step you override the default behavior to use Linode Object Storage hosts instead. +Linode Object Storage is *Amazon S3-compatible*. Nextcloud connects to Amazon's Object Storage service by default, however, in the next step you override the default behavior to use Linode Object Storage hosts instead. {{< /note >}} 1. Select **Access Key** from the **Authentication** dropdown menu. diff --git a/docs/guides/platform/object-storage/how-to-move-objects-between-buckets/index.md b/docs/guides/platform/object-storage/how-to-move-objects-between-buckets/index.md index 8596d46a74a..36167019552 100644 --- a/docs/guides/platform/object-storage/how-to-move-objects-between-buckets/index.md +++ b/docs/guides/platform/object-storage/how-to-move-objects-between-buckets/index.md @@ -17,7 +17,7 @@ aliases: ['/platform/object-storage/how-to-move-objects-between-buckets/'] {{% content "object-storage-ga-shortguide" %}} -Linode’s Object Storage is a globally-available, S3-compatible method for storing and accessing data. With Object Storage more widely available, you may have buckets in multiple locations, this guide shows you how to move objects between buckets quickly and easily. +Linode’s Object Storage is a globally-available, Amazon S3-compatible method for storing and accessing data. With Object Storage more widely available, you may have buckets in multiple locations, this guide shows you how to move objects between buckets quickly and easily. In this guide you learn how to move objects between buckets using: diff --git a/docs/guides/platform/object-storage/working-with-cors-linode-object-storage/index.md b/docs/guides/platform/object-storage/working-with-cors-linode-object-storage/index.md index d06f8c72889..2b70ef9330c 100644 --- a/docs/guides/platform/object-storage/working-with-cors-linode-object-storage/index.md +++ b/docs/guides/platform/object-storage/working-with-cors-linode-object-storage/index.md @@ -13,33 +13,33 @@ external_resources: - '[DreamHost Knowledge Base: Configuring (CORS) on a DreamObjects Bucket](https://help.dreamhost.com/hc/en-us/articles/216201557-How-to-setup-Cross-Origin-Resource-Sharing-CORS-on-DreamObjects)' --- -[Linode Object Storage](/docs/products/storage/object-storage/) offers a globally-available, S3-compatible storage solution. Whether you are storing critical backup files or data for a static website, S3 object storage can efficiently answer the call. +[Linode Object Storage](/docs/products/storage/object-storage/) offers a globally-available, Amazon S3-compatible storage solution. Whether you are storing critical backup files or data for a static website, Amazon S3-compatible object storage can efficiently answer the call. To make the most of object storage, you may need to access the data from other domains. For instance, your dynamic applications may opt to use S3 for static file storage. This leaves you dealing with Cross-Origin Resource Sharing, or CORS. However, it's often not clear how to effectively navigate CORS policies or deal with issues as they come up. -This tutorial aims to clarify how to work with CORS and S3. It covers tools and approaches for effectively reviewing and managing CORS policies for Linode Object Storage or most other S3-compatible storage solutions. 
+This tutorial aims to clarify how to work with CORS and S3. It covers tools and approaches for effectively reviewing and managing CORS policies for Linode Object Storage or most other Amazon S3-compatible storage solutions. ## CORS and S3 Storage - What you Need to Know -Linode Object Storage is an S3, which stands for *simple storage service*. With S3, data gets stored as objects in "buckets." This gives S3s a flat approach to storage, in contrast to the hierarchical and logistically more complicated storage structures like traditional file systems. Objects stored in S3 can also be given rich metadata. +Linode Object Storage is an "S3", which stands for *simple storage service*. With S3, data gets stored as objects in "buckets." This gives S3s a flat approach to storage, in contrast to the hierarchical and logistically more complicated storage structures like traditional file systems. Objects stored in S3 can also be given rich metadata. CORS defines how clients and servers from different domains may share resources. Generally, CORS policies restrict access to resources to requests from the same domain. By managing your CORS policies, you can open up services to requests from specified origin domains, or from any domains whatsoever. -An S3 like Linode Object Storage can provide excellent storage for applications. However, you also want to keep your data as secure as possible while also allowing your applications the access they need. +An Amazon S3-compatible solution like Linode Object Storage can provide excellent storage for applications. However, you also want to keep your data as secure as possible while also allowing your applications the access they need. This is where managing CORS policies on your object storage service becomes imperative. Applications and other tools often need to access stored resources from particular domains. Implementing specific CORS policies controls what kinds of requests, and responses, each origin domain is allowed. ## Working with CORS Policies on Linode Object Storage -One of the best tools for managing policies on your S3, including Linode Object Storage, is `s3cmd`. Follow along with our guide [Using S3cmd with Object Storage](/docs/products/storage/object-storage/guides/s3cmd/) to: +One of the best tools for managing policies for your Amazon S3-compatible storage, including Linode Object Storage, is `s3cmd`. Follow along with our guide [Using S3cmd with Object Storage](/docs/products/storage/object-storage/guides/s3cmd/) to: -1. Install `s3cmd` on your system. The installation takes place on the system from which you intend to manage your S3 instance. +1. Install `s3cmd` on your system. The installation takes place on the system from which you intend to manage your S3 storage. -2. Configure `s3cmd` for your Linode Object Storage instance. This includes indicating the instance's access key, endpoint, etc. +2. Configure `s3cmd` for your Linode Object Storage. This includes indicating the access key, endpoint, etc. -You can verify the connection to your object storage instance with the command to list your buckets. This example lists the one bucket used for this tutorial, `example-cors-bucket`: +You can verify the connection to your object storage with the command to list your buckets. 
This example lists the one bucket used for this tutorial, `example-cors-bucket`: s3cmd ls @@ -47,11 +47,11 @@ You can verify the connection to your object storage instance with the command t 2022-09-24 16:13 s3://example-cors-bucket {{< /output >}} -Once you have `s3cmd` set up for your S3 instance, use it to follow along with the upcoming sections of this tutorial. These show you how to use the tool to review and deploy CORS policies. +Once you have `s3cmd` set up, use it to follow along with the upcoming sections of this tutorial. These show you how to use the tool to review and deploy CORS policies. ## Reviewing CORS Policies for Linode Object Storage -You can get the current CORS policies for your S3 bucket using the `info` flag for `s3cmd`. The command provides general information on the designated bucket, including its policies: +You can get the current CORS policies for your bucket using the `info` flag for `s3cmd`. The command provides general information on the designated bucket, including its policies: s3cmd info s3://example-cors-bucket @@ -79,7 +79,7 @@ These next sections break down the particular fields needed for CORS policies an ### Configuring Policies -The overall structure for CORS policies on S3 looks like the following. While policies on your object storage instance can generally be set with JSON or XML, CORS policies must use the XML format: +The overall structure for CORS policies on Amazon S3-compatible storage looks like the following. While policies on your object storage can generally be set with JSON or XML, CORS policies must use the XML format: {{< file "cors_policies.xml" xml >}} @@ -182,9 +182,9 @@ To give more concrete ideas of how you can work with CORS policies, the followin ### Deploying Policies -The next step is to actually deploy your CORS policies. Once you do, your S3 bucket starts following them to determine what origins to allow and what request and response information to permit. +The next step is to deploy your CORS policies. Once you do, your bucket starts following them to determine what origins to allow and what request and response information to permit. -Follow these steps to put your CORS policies into practice on your S3 instance. +Follow these steps to put your CORS policies into practice on your Amazon S3-compatible storage. 1. Save your CORS policy into a XML file. This example uses a file named `cors_policies.xml` which contains the second example policy XML above. @@ -208,7 +208,7 @@ s3://example-cors-bucket/ (bucket): ## Troubleshooting Common CORS Errors -Having CORS-related issues on your S3 instance? Take these steps to help narrow down the issue and figure out the kind of policy change needed to resolve it. +Having CORS-related issues with your Amazon S3-compatible storage? Take these steps to help narrow down the issue and figure out the kind of policy change needed to resolve it. 1. Review your instance's CORS policies using `s3cmd`: @@ -232,6 +232,6 @@ Having CORS-related issues on your S3 instance? Take these steps to help narrow ## Conclusion -This covers the tools and approaches you need to start managing CORS for your Linode Object Storage or other S3 instance. Once you have these, addressing CORS issues is a matter of reviewing and adjusting policies against desired origins and request types. +This covers the tools and approaches you need to start managing CORS for your Linode Object Storage or other Amazon S3-compatible storage. 
Once you have these, addressing CORS issues is a matter of reviewing and adjusting policies against desired origins and request types.

-Keep improving your resources for managing your S3 through our collection of [object storage guides](/docs/products/storage/object-storage/guides/). These cover a range of topics to help you with S3 generally, and Linode Object Storage in particular.
+Keep improving your resources for managing your object storage through our [object storage guides](/docs/products/storage/object-storage/guides/).
diff --git a/docs/guides/tools-reference/tools/rclone-object-storage-file-sync/index.md b/docs/guides/tools-reference/tools/rclone-object-storage-file-sync/index.md
index 4ceee52b426..94a71d227e3 100644
--- a/docs/guides/tools-reference/tools/rclone-object-storage-file-sync/index.md
+++ b/docs/guides/tools-reference/tools/rclone-object-storage-file-sync/index.md
@@ -73,7 +73,7 @@ Before you configure Rclone, [create a new Linode bucket](/docs/products/storage

1. Next, enter a name to use your new remote.

-1. When prompted for the type of storage, select the option that corresponds with **S3** (*"AWS S3 Compliant Storage Providers including..."*).
+1. When prompted for the type of storage, select the option that corresponds with **S3** (*"Amazon S3 Compliant Storage Providers including..."*).

    ```output
    / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi
diff --git a/docs/guides/uptime/monitoring/migrating-from-aws-cloudwatch-to-prometheus-and-grafana-on-akamai/index.md b/docs/guides/uptime/monitoring/migrating-from-aws-cloudwatch-to-prometheus-and-grafana-on-akamai/index.md
index 09ecd57a978..e111ed79fb2 100644
--- a/docs/guides/uptime/monitoring/migrating-from-aws-cloudwatch-to-prometheus-and-grafana-on-akamai/index.md
+++ b/docs/guides/uptime/monitoring/migrating-from-aws-cloudwatch-to-prometheus-and-grafana-on-akamai/index.md
@@ -509,7 +509,7 @@ CloudWatch also visualizes metrics in graphs. For instance, by querying the endp

### Export Existing CloudWatch Logs and Metrics

-AWS includes tools for exporting CloudWatch data for analysis or migration. For example, CloudWatch logs can be exported to an S3 bucket, making them accessible outside AWS and enabling them to be re-ingested into other tools.
+AWS includes tools for exporting CloudWatch data for analysis or migration. For example, CloudWatch logs can be exported to an S3 bucket. This makes them accessible outside AWS and enables them to be re-ingested into other Amazon S3-compatible tools.

To export CloudWatch Logs to S3, use the following [`create-export-task`](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/logs/create-export-task.html) command from the system where your AWS CLI is configured:

diff --git a/docs/marketplace-docs/guides/apache-spark-cluster/index.md b/docs/marketplace-docs/guides/apache-spark-cluster/index.md
index 00474ed8fda..78cc5093a15 100644
--- a/docs/marketplace-docs/guides/apache-spark-cluster/index.md
+++ b/docs/marketplace-docs/guides/apache-spark-cluster/index.md
@@ -61,7 +61,7 @@ The minimum RAM requirement for the worker nodes is 4GB RAM to ensure that jobs

Once the deployment is complete, visit the Spark UI at the URL provided at `/etc/motd`.
This is either the domain you entered when deploying the cluster or the reverse DNS value of the master node. -The Spark Cluster needs access to external storage such as S3, HDFS, Azure Blob Storage, Apache HBase, or your local filesystem. For more details on this, see [Integration With Cloud Infrastructures](https://spark.apache.org/docs/3.5.1/cloud-integration.html). +The Spark Cluster needs access to external storage such as Amazon S3 or S3-compatible storage, HDFS, Azure Blob Storage, Apache HBase, or your local filesystem. For more details on this, see [Integration With Cloud Infrastructures](https://spark.apache.org/docs/3.5.1/cloud-integration.html). ### Authentication diff --git a/docs/marketplace-docs/guides/jetbackup/index.md b/docs/marketplace-docs/guides/jetbackup/index.md index 184106321ab..0c8c2c79514 100644 --- a/docs/marketplace-docs/guides/jetbackup/index.md +++ b/docs/marketplace-docs/guides/jetbackup/index.md @@ -16,7 +16,7 @@ marketplace_app_id: 869623 marketplace_app_name: "JetBackup" --- -[JetBackup](https://www.jetbackup.com/) is a backup solution that can integrate with cPanel or be used as a standalone software within supported Linux distributions. It offers flexible backup management options, including the ability to perform off-site backups through S3-compatible storage (like Linode's [Object Storage](https://www.linode.com/products/object-storage/)). +[JetBackup](https://www.jetbackup.com/) is a backup solution that can integrate with cPanel or be used as a standalone software within supported Linux distributions. It offers flexible backup management options, including the ability to perform off-site backups through Amazon S3-compatible storage (like Linode's [Object Storage](https://www.linode.com/products/object-storage/)). {{< note >}} JetBackup requires a valid license to use the software beyond the available 10 day [free trial](https://cpanel.net/products/trial/) period. To purchase a license, visit [JetBackup's website](https://billing.jetapps.com/index.php?rp=/store/prorated-license) and select a plan that fits your needs. Licenses are not available directly through Linode. diff --git a/docs/marketplace-docs/guides/nirvashare/index.md b/docs/marketplace-docs/guides/nirvashare/index.md index 2a9f33ea7ca..befa52034ca 100644 --- a/docs/marketplace-docs/guides/nirvashare/index.md +++ b/docs/marketplace-docs/guides/nirvashare/index.md @@ -18,7 +18,7 @@ marketplace_app_name: "NirvaShare" NirvaShare has been removed from the App Marketplace and can no longer be deployed. This guide is retained for reference only. {{< /note >}} -NirvaShare is a simplified and secure enterprise file sharing solution built on top of your existing file storage. Use NirvaShare with SFTP, local storage, or even S3-compatible storage like Linode's [Object Storage](https://www.linode.com/products/object-storage/). Collaborate with your internal or external users such as customers, partners, and vendors. NirvaShare provides fine-tuned access control in a very simplified manner. NirvaShare integrates with multiple many external identity providers such as Active Directory, Google Workspace, AWS SSO, KeyClock, and others. +NirvaShare is a simplified and secure enterprise file sharing solution built on top of your existing file storage. Use NirvaShare with SFTP, local storage, or even Amazon S3-compatible storage like Linode's [Object Storage](https://www.linode.com/products/object-storage/). Collaborate with your internal or external users such as customers, partners, and vendors. 
NirvaShare provides fine-tuned access control in a very simplified manner. NirvaShare integrates with many external identity providers such as Active Directory, Google Workspace, AWS SSO, Keycloak, and others.

## Deploying a Marketplace App

diff --git a/docs/reference-architecture/cloud-based-document-management-system/_index.md b/docs/reference-architecture/cloud-based-document-management-system/_index.md
index be1bbd40a63..d129905e64f 100644
--- a/docs/reference-architecture/cloud-based-document-management-system/_index.md
+++ b/docs/reference-architecture/cloud-based-document-management-system/_index.md
@@ -16,7 +16,7 @@ This reference architecture provides guidance on IaaS primitives, open source so

This deployment is using the [Mayan Electronic Document Management System](https://mayan-edms.com/) (EDMS) – an open source web application for document collaboration, tamper proof signing, transformations, and more. Mayan EDMS also comes with a REST API for integrations with 3rd party software. For this example, we are using the recommended [Docker Compose installation](https://docs.mayan-edms.com/chapters/docker/install_docker_compose.html#docker-compose-install), which the Mayan EDMS project recommends for most cases, with two exceptions. This architecture decouples the [PostgreSQL](https://www.postgresql.org/) database layer to achieve separation of concerns and architect for high availability; and employs [NGINX](https://www.nginx.com/) as reverse proxies to the application, using [Certbot](https://certbot.eff.org/) with the [dns_linode plugin](https://certbot-dns-linode.readthedocs.io/en/stable/) for SSL/TLS certificate management. [Unison](https://www.cis.upenn.edu/~bcpierce/unison/) provides bi-directional synchronization of the Let’s Encrypt directories so that both application nodes contain the same certificate and private key. Unison also synchronizes the Docker volume directories between the two instances.

-A [NodeBalancer](/docs/products/networking/nodebalancers/) is configured with the TCP protocol to pass traffic through to the backend servers for SSL/TLS termination, and with Proxy Protocol V1 so that NGINX can log the originating client IP addresses. Linode S3-compatible [Object Storage](/docs/products/storage/object-storage/) is the storage backend for Mayan EDNS documents, as well as for routine database backups.
+A [NodeBalancer](/docs/products/networking/nodebalancers/) is configured with the TCP protocol to pass traffic through to the backend servers for SSL/TLS termination, and with Proxy Protocol V1 so that NGINX can log the originating client IP addresses. Linode [Object Storage](/docs/products/storage/object-storage/) is the storage backend for Mayan EDMS documents, as well as for routine database backups.

All nodes are secured with [Cloud Firewalls](/docs/products/networking/cloud-firewall/) for protection from the outside world, and communicate internally via private [VLAN](/docs/products/networking/vlans/). The application servers connect to the databases via a shared floating VLAN IP address, with [Keepalived](/docs/guides/ip-failover-legacy-keepalived/) to facilitate failover.
diff --git a/docs/reference-architecture/video-transcoding/_index.md b/docs/reference-architecture/video-transcoding/_index.md index 80c88bae3a5..e287669c9d9 100644 --- a/docs/reference-architecture/video-transcoding/_index.md +++ b/docs/reference-architecture/video-transcoding/_index.md @@ -29,7 +29,7 @@ The workflow in this document is implemented on the [Akamai Connected Cloud](htt |-----------------------------------|-------------| | [Linode Kubernetes Engine (LKE)](https://www.linode.com/products/kubernetes/) | A fully-managed K8s container orchestration engine for deploying and managing containerized applications and workloads | | [NodeBalancers](https://www.linode.com/products/nodebalancers/) | Managed cloud load balancers | - | [Object Storage](https://www.linode.com/products/object-storage/) | S3-compatible Object Storage, used to manage unstructured data like video files | + | [Object Storage](https://www.linode.com/products/object-storage/) | Amazon S3-compatible Object Storage, used to manage unstructured data like video files | | [Block Storage](https://www.linode.com/products/block-storage/) | Network-attached block file storage volumes | | [API](https://www.linode.com/products/linode-api/) | Programmatic access to Linode products and services | | [DNS Manager](https://www.linode.com/products/dns-manager/) | Domain management, free for Akamai Connected Cloud customers | diff --git a/docs/reference-architecture/video-transcoding/diagrams/index.md b/docs/reference-architecture/video-transcoding/diagrams/index.md index de8180fe372..e7ec0ead85b 100644 --- a/docs/reference-architecture/video-transcoding/diagrams/index.md +++ b/docs/reference-architecture/video-transcoding/diagrams/index.md @@ -114,7 +114,7 @@ Some key features of figure 3 are described as follows: 1. The workflow creates a persistent volume to be shared by all steps of the workflow. This is a common file-based workspace for all steps of the workflow to access. The source files and output files are stored on this volume. - 1. Argo has integrated capabilities to communicate with S3-compliant storage, which includes Linode Object Storage. The source file is transferred from Object Storage to the local persistent volume claim. + 1. Argo has integrated capabilities to communicate with Amazon S3-compliant storage, which includes Linode Object Storage. The source file is transferred from Object Storage to the local persistent volume claim. 1. MediaInfo and FFmpeg are two industry-standard open source tools for media processing workflows. These are incorporated into the reference architecture with community-supported containers from [DockerHub](https://hub.docker.com/). MediaInfo gathers information about the source file, and this metadata is passed to the transcoding process.