---
layout: post
title: AWS_PROFILE_ENDPOINT
permalink: /docs/cli/aws_profile_endpoint
redirect_from:
---
AIStore supports vendor-specific configuration on a per-bucket basis. For instance, any bucket backed by an AWS S3 bucket (**) can be configured to use alternative:

- named AWS profiles (with alternative credentials and/or AWS region)
- s3 endpoints

(**) Terminology-wise, when we say "s3 bucket" or "google cloud bucket" we in fact reference a bucket in an AIS cluster that is either: (A) denoted with the respective `s3:` or `gs:` protocol schema, or (B) a differently named AIS (that is, `ais://`) bucket that has its `backend_bck` property referencing the s3 (or google cloud) bucket in question.
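For case (B), a minimal sketch of connecting an existing AIS bucket to a remote S3 bucket might look as follows. The bucket names are hypothetical, and the `backend_bck=aws://NAME` value format is an assumption - check `ais bucket props set --help` for the exact syntax:

```console
# hedged sketch: make ais://nnn read from and write to a remote S3 bucket
# via the backend_bck property (names here are made up for illustration)
$ ais bucket props set ais://nnn backend_bck=aws://web-dataset

# subsequent operations on ais://nnn transparently go to the remote backend
$ ais ls ais://nnn
```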
For supported backends (that include, but are not limited to, AWS S3), see also:
- Viewing vendor-specific properties
- Environment variables
- Setting profile with alternative access/secret keys and/or region
- When bucket does not exist
- Configuring custom AWS S3 endpoint
## Viewing vendor-specific properties

While `ais show bucket` will show all properties (which is a lengthy list), the way to zoom in on the vendor-specific extension is to look for the section called "extra". For example:

```console
$ ais show bucket s3://abc | grep extra
    extra.aws.cloud_region   us-east-2
    extra.aws.endpoint
    extra.aws.profile
```

Notice that the bucket's region (`cloud_region` above) is automatically populated when AIS looks up the bucket in s3. The other two variables, however, are settable and can provide alternative credentials and/or an alternative access endpoint.
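The same section can also be inspected in structured form - a sketch, assuming the CLI's `--json` output flag and the `jq` utility are available on the client machine:

```console
# JSON field names are assumed to mirror the dotted property names shown above
$ ais show bucket s3://abc --json | jq '.extra.aws'
```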
## Environment variables

AIStore supports the well-known `S3_ENDPOINT` and `AWS_PROFILE` environment variables. While `S3_ENDPOINT` is often used to utilize an AIS cluster as an s3-providing service, the configurable `AWS_PROFILE` specifies what's called a *named configuration profile*.

The rule is simple:

- `S3_ENDPOINT` and `AWS_PROFILE` are loaded once, upon AIS node startup.
- Bucket configuration takes precedence over the environment and can be changed at any time.
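For instance, a minimal sketch of making both variables available to AIS nodes at startup - how exactly the nodes are (re)started depends on the specific deployment:

```console
# export the environment before (re)starting AIS nodes so that it is
# picked up once, at startup; per-bucket settings will still override it
$ export AWS_PROFILE=prod
$ export S3_ENDPOINT=https://s3.amazonaws.com
```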
## Setting profile with alternative access/secret keys and/or region

Assuming, on the one hand:

```console
$ cat ~/.aws/config
[default]
region = us-east-2

[profile prod]
region = us-west-1
```
and
```console
$ cat ~/.aws/credentials
[default]
aws_access_key_id = foo
aws_secret_access_key = bar

[prod]
aws_access_key_id = 123
aws_secret_access_key = 456
```
on the other, we can then go ahead and set the "prod" profile directly on the bucket:

```console
$ ais bucket props set s3://abc extra.aws.profile prod
"extra.aws.profile" set to: "prod" (was: "")
```

and show the resulting "extra.aws" configuration:

```console
$ ais show bucket s3://abc | grep extra
    extra.aws.cloud_region   us-west-1
    extra.aws.endpoint
    extra.aws.profile        prod
```
From this point on, all calls to read, write, or list `s3://abc`, and to get/set its properties, will use the AWS "prod" profile (see above).
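To quickly verify that the "prod" credentials are in effect, one could, for instance, write a small object and list the bucket (the object name here is arbitrary):

```console
# both operations now authenticate with the "prod" profile's keys
$ ais put README.md s3://abc/README.md
$ ais ls s3://abc
```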
## When bucket does not exist

But what if we need to set an alternative profile (with alternative access and secret keys) on a bucket that does not yet exist in the cluster? That is a fairly common situation, and the way to resolve it is to use the `--skip-lookup` option:
```console
$ ais create --help
...
OPTIONS:
   --props value   bucket properties, e.g. --props="mirror.enabled=true mirror.copies=4 checksum.type=md5"
   --skip-lookup   add Cloud bucket to aistore without checking the bucket's accessibility and getting its Cloud properties
                   (usage must be limited to setting up bucket's aistore properties with alternative profile and/or endpoint)
```

```console
$ ais create s3://abc --skip-lookup
"s3://abc" created
```
Once this is done (**), we simply go ahead and run `ais bucket props set s3://abc extra.aws.profile` (as shown above). Assuming the updated profile contains correct access keys, the bucket will then be fully available for reading, writing, listing, and all other operations.
(**) The `ais create` command results in adding the bucket to the aistore `BMD` - protected, versioned, and replicated bucket metadata that is further used to update the properties of any bucket in the cluster, including, of course, the one that we have just added.
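Putting it all together, the end-to-end sequence for a bucket that is not yet present in the cluster looks as follows (reusing the bucket and profile names from the examples above):

```console
# 1. add the bucket to the aistore BMD without contacting AWS
$ ais create s3://abc --skip-lookup

# 2. point the bucket at the "prod" named profile
$ ais bucket props set s3://abc extra.aws.profile prod

# 3. the bucket is now accessible with the "prod" credentials
$ ais ls s3://abc
```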
## Configuring custom AWS S3 endpoint

When a bucket is hosted by an S3-compliant backend (such as, e.g., MinIO), we may want to specify an alternative S3 endpoint, so that AIS nodes use it when reading, writing, listing, and, generally, performing all operations on the remote S3 bucket(s).

Globally, the S3 endpoint can be overridden for all S3 buckets via the `S3_ENDPOINT` environment variable. If you decide to make the change, you may need to restart the AIS cluster while making sure that `S3_ENDPOINT` is available to the AIS nodes when they are starting up.

But it can also be done - and will take precedence over the global setting - on a per-bucket basis.
Here are some examples:
```console
# Let's say, there exists a bucket called s3://abc:
$ ais ls s3://abc
NAME             SIZE
README.md        8.96KiB
```
First, we override the empty endpoint property in the bucket's configuration. To see that a non-empty value applies and works, we will use the default AWS S3 endpoint: `https://s3.amazonaws.com`:
```console
$ ais bucket props set s3://abc extra.aws.endpoint=s3.amazonaws.com
Bucket "aws://abc": property "extra.aws.endpoint=s3.amazonaws.com", nothing to do

$ ais ls s3://abc
NAME             SIZE
README.md        8.96KiB
```
Second, set the endpoint to `foo` (or any other invalid value), and observe that the bucket becomes unreachable:
```console
$ ais bucket props set s3://abc extra.aws.endpoint=foo
Bucket props successfully updated
"extra.aws.endpoint" set to: "foo" (was: "s3.amazonaws.com")

$ ais ls s3://abc
RequestError: send request failed: dial tcp: lookup abc.foo: no such host
```
Finally, revert the endpoint back to empty, and check that the bucket is visible again:
```console
$ ais bucket props set s3://abc extra.aws.endpoint=""
Bucket props successfully updated
"extra.aws.endpoint" set to: "" (was: "foo")

$ ais ls s3://abc
NAME             SIZE
README.md        8.96KiB
```
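Along the same lines, a per-bucket override for an S3-compliant service such as MinIO might look like the sketch below. The hostname and port are assumptions - port 9000 is merely MinIO's default; substitute your actual endpoint:

```console
# point this one bucket at a local MinIO endpoint; all other s3:// buckets
# keep using the global (or default) AWS endpoint
$ ais bucket props set s3://abc extra.aws.endpoint=http://localhost:9000
```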
Global `export S3_ENDPOINT=...` override is static and read-only. Use it with extreme caution as it applies to all buckets. On the other hand, for any given `s3://bucket`, its S3 endpoint can be set, unset, and otherwise changed at any time - at runtime - as shown above.