Commit 2d05ce7 (committed Mar 6, 2025)

Merge branch 'release-1.38.8'

* release-1.38.8:
  Bumping version to 1.38.8
  Update changelog based on model updates
  Replaced /mybucket/ with amzn-s3-demo-bucket

2 parents: f4bdf22 + 7246787
22 files changed: +334 −285 lines

.changes/1.38.8.json (+37)

@@ -0,0 +1,37 @@
+[
+  {
+    "category": "``bedrock``",
+    "description": "This releases adds support for Custom Prompt Router",
+    "type": "api-change"
+  },
+  {
+    "category": "``cloudtrail``",
+    "description": "Doc-only update for CloudTrail.",
+    "type": "api-change"
+  },
+  {
+    "category": "``ivs-realtime``",
+    "description": "IVS Real-Time now offers customers the ability to merge fragmented recordings in the event of a participant disconnect.",
+    "type": "api-change"
+  },
+  {
+    "category": "``networkflowmonitor``",
+    "description": "This release contains 2 changes. 1: DeleteScope/GetScope/UpdateScope operations now return 404 instead of 500 when the resource does not exist. 2: Expected string format for clientToken fields of CreateMonitorInput/CreateScopeInput/UpdateMonitorInput have been updated to be an UUID based string.",
+    "type": "api-change"
+  },
+  {
+    "category": "``redshift-data``",
+    "description": "This release adds support for ListStatements API to filter statements by ClusterIdentifier, WorkgroupName, and Database.",
+    "type": "api-change"
+  },
+  {
+    "category": "``wafv2``",
+    "description": "You can now perform an exact match or rate limit aggregation against the web request's JA4 fingerprint.",
+    "type": "api-change"
+  },
+  {
+    "category": "``workspaces``",
+    "description": "Added a new ModifyEndpointEncryptionMode API for managing endpoint encryption settings.",
+    "type": "api-change"
+  }
+]
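The ``networkflowmonitor`` entry says that the ``clientToken`` fields of ``CreateMonitorInput``/``CreateScopeInput``/``UpdateMonitorInput`` now expect a UUID-based string. A minimal sketch of generating such a token with Python's standard library (the helper name is illustrative; the actual service call is omitted):

```python
import uuid

def make_client_token() -> str:
    """Generate a UUID-based idempotency token, matching the updated
    expected format for the clientToken fields noted above."""
    return str(uuid.uuid4())

token = make_client_token()
print(token)  # a 36-character string such as '1b9d6bcd-bbfd-4b2d-9b5d-ab8dfbbd4bed'
```

Passing a fresh UUID per logical request keeps retries idempotent without colliding with earlier calls.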

CHANGELOG.rst (+12)

@@ -2,6 +2,18 @@
 CHANGELOG
 =========
 
+1.38.8
+======
+
+* api-change:``bedrock``: This releases adds support for Custom Prompt Router
+* api-change:``cloudtrail``: Doc-only update for CloudTrail.
+* api-change:``ivs-realtime``: IVS Real-Time now offers customers the ability to merge fragmented recordings in the event of a participant disconnect.
+* api-change:``networkflowmonitor``: This release contains 2 changes. 1: DeleteScope/GetScope/UpdateScope operations now return 404 instead of 500 when the resource does not exist. 2: Expected string format for clientToken fields of CreateMonitorInput/CreateScopeInput/UpdateMonitorInput have been updated to be an UUID based string.
+* api-change:``redshift-data``: This release adds support for ListStatements API to filter statements by ClusterIdentifier, WorkgroupName, and Database.
+* api-change:``wafv2``: You can now perform an exact match or rate limit aggregation against the web request's JA4 fingerprint.
+* api-change:``workspaces``: Added a new ModifyEndpointEncryptionMode API for managing endpoint encryption settings.
+
+
 1.38.7
 ======
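The ``wafv2`` entry mentions rate-limit aggregation on the web request's JA4 fingerprint. As a rough sketch only — the field names below are assumptions modeled on the existing WAFv2 rate-based-statement custom-key pattern (e.g. ``JA3Fingerprint``), not taken from this diff — a rule statement keyed on JA4 might look like:

```json
{
    "RateBasedStatement": {
        "Limit": 1000,
        "AggregateKeyType": "CUSTOM_KEYS",
        "CustomKeys": [
            { "JA4Fingerprint": { "FallbackBehavior": "NO_MATCH" } }
        ]
    }
}
```

Check the WAFv2 API reference for the authoritative schema before using this shape.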

awscli/__init__.py (+1 −1)

@@ -18,7 +18,7 @@
 
 import os
 
-__version__ = '1.38.7'
+__version__ = '1.38.8'
 
 #
 # Get our data path to be added to botocore's search path

awscli/examples/cloudformation/_package_description.rst (+1 −1)

@@ -40,7 +40,7 @@ For example, if your AWS Lambda function source code is in the
 ``/home/user/code/lambdafunction/`` folder, specify
 ``CodeUri: /home/user/code/lambdafunction`` for the
 ``AWS::Serverless::Function`` resource. The command returns a template and replaces
-the local path with the S3 location: ``CodeUri: s3://mybucket/lambdafunction.zip``.
+the local path with the S3 location: ``CodeUri: s3://amzn-s3-demo-bucket/lambdafunction.zip``.
 
 If you specify a file, the command directly uploads it to the S3 bucket. If you
 specify a folder, the command zips the folder and then uploads the .zip file.
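For context, the ``CodeUri`` rewriting described in that file happens when running ``aws cloudformation package``. A hedged sketch of the invocation (template and output file names are placeholders; the bucket follows the demo-bucket convention adopted in this commit):

```shell
aws cloudformation package \
    --template-file template.yaml \
    --s3-bucket amzn-s3-demo-bucket \
    --output-template-file packaged-template.yaml
```

The command uploads local artifacts to the bucket and writes a copy of the template with local paths replaced by their S3 locations.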

awscli/examples/emr/add-steps.rst (+8 −8)

@@ -2,7 +2,7 @@
 
 - Command::
 
-    aws emr add-steps --cluster-id j-XXXXXXXX --steps Type=CUSTOM_JAR,Name=CustomJAR,ActionOnFailure=CONTINUE,Jar=s3://mybucket/mytest.jar,Args=arg1,arg2,arg3 Type=CUSTOM_JAR,Name=CustomJAR,ActionOnFailure=CONTINUE,Jar=s3://mybucket/mytest.jar,MainClass=mymainclass,Args=arg1,arg2,arg3
+    aws emr add-steps --cluster-id j-XXXXXXXX --steps Type=CUSTOM_JAR,Name=CustomJAR,ActionOnFailure=CONTINUE,Jar=s3://amzn-s3-demo-bucket/mytest.jar,Args=arg1,arg2,arg3 Type=CUSTOM_JAR,Name=CustomJAR,ActionOnFailure=CONTINUE,Jar=s3://amzn-s3-demo-bucket/mytest.jar,MainClass=mymainclass,Args=arg1,arg2,arg3
 
 - Required parameters::
 
@@ -25,7 +25,7 @@
 
 - Command::
 
-    aws emr add-steps --cluster-id j-XXXXXXXX --steps Type=STREAMING,Name='Streaming Program',ActionOnFailure=CONTINUE,Args=[-files,s3://elasticmapreduce/samples/wordcount/wordSplitter.py,-mapper,wordSplitter.py,-reducer,aggregate,-input,s3://elasticmapreduce/samples/wordcount/input,-output,s3://mybucket/wordcount/output]
+    aws emr add-steps --cluster-id j-XXXXXXXX --steps Type=STREAMING,Name='Streaming Program',ActionOnFailure=CONTINUE,Args=[-files,s3://elasticmapreduce/samples/wordcount/wordSplitter.py,-mapper,wordSplitter.py,-reducer,aggregate,-input,s3://elasticmapreduce/samples/wordcount/input,-output,s3://amzn-s3-demo-bucket/wordcount/output]
 
 - Required parameters::
 
@@ -40,7 +40,7 @@
     [
       {
         "Name": "JSON Streaming Step",
-        "Args": ["-files","s3://elasticmapreduce/samples/wordcount/wordSplitter.py","-mapper","wordSplitter.py","-reducer","aggregate","-input","s3://elasticmapreduce/samples/wordcount/input","-output","s3://mybucket/wordcount/output"],
+        "Args": ["-files","s3://elasticmapreduce/samples/wordcount/wordSplitter.py","-mapper","wordSplitter.py","-reducer","aggregate","-input","s3://elasticmapreduce/samples/wordcount/input","-output","s3://amzn-s3-demo-bucket/wordcount/output"],
         "ActionOnFailure": "CONTINUE",
         "Type": "STREAMING"
       }
@@ -72,15 +72,15 @@ NOTE: JSON arguments must include options and values as their own items in the l
         "ActionOnFailure": "CONTINUE",
         "Args": [
           "-files",
-          "s3://mybucket/mapper.py,s3://mybucket/reducer.py",
+          "s3://amzn-s3-demo-bucket/mapper.py,s3://amzn-s3-demo-bucket/reducer.py",
           "-mapper",
           "mapper.py",
           "-reducer",
           "reducer.py",
           "-input",
-          "s3://mybucket/input",
+          "s3://amzn-s3-demo-bucket/input",
           "-output",
-          "s3://mybucket/output"]
+          "s3://amzn-s3-demo-bucket/output"]
       }
     ]
 
@@ -109,7 +109,7 @@ NOTE: JSON arguments must include options and values as their own items in the l
 
 - Command::
 
-    aws emr add-steps --cluster-id j-XXXXXXXX --steps Type=HIVE,Name='Hive program',ActionOnFailure=CONTINUE,Args=[-f,s3://mybucket/myhivescript.q,-d,INPUT=s3://mybucket/myhiveinput,-d,OUTPUT=s3://mybucket/myhiveoutput,arg1,arg2] Type=HIVE,Name='Hive steps',ActionOnFailure=TERMINATE_CLUSTER,Args=[-f,s3://elasticmapreduce/samples/hive-ads/libs/model-build.q,-d,INPUT=s3://elasticmapreduce/samples/hive-ads/tables,-d,OUTPUT=s3://mybucket/hive-ads/output/2014-04-18/11-07-32,-d,LIBS=s3://elasticmapreduce/samples/hive-ads/libs]
+    aws emr add-steps --cluster-id j-XXXXXXXX --steps Type=HIVE,Name='Hive program',ActionOnFailure=CONTINUE,Args=[-f,s3://amzn-s3-demo-bucket/myhivescript.q,-d,INPUT=s3://amzn-s3-demo-bucket/myhiveinput,-d,OUTPUT=s3://amzn-s3-demo-bucket/myhiveoutput,arg1,arg2] Type=HIVE,Name='Hive steps',ActionOnFailure=TERMINATE_CLUSTER,Args=[-f,s3://elasticmapreduce/samples/hive-ads/libs/model-build.q,-d,INPUT=s3://elasticmapreduce/samples/hive-ads/tables,-d,OUTPUT=s3://amzn-s3-demo-bucket/hive-ads/output/2014-04-18/11-07-32,-d,LIBS=s3://elasticmapreduce/samples/hive-ads/libs]
 
 
 - Required parameters::
@@ -134,7 +134,7 @@ NOTE: JSON arguments must include options and values as their own items in the l
 
 - Command::
 
-    aws emr add-steps --cluster-id j-XXXXXXXX --steps Type=PIG,Name='Pig program',ActionOnFailure=CONTINUE,Args=[-f,s3://mybucket/mypigscript.pig,-p,INPUT=s3://mybucket/mypiginput,-p,OUTPUT=s3://mybucket/mypigoutput,arg1,arg2] Type=PIG,Name='Pig program',Args=[-f,s3://elasticmapreduce/samples/pig-apache/do-reports2.pig,-p,INPUT=s3://elasticmapreduce/samples/pig-apache/input,-p,OUTPUT=s3://mybucket/pig-apache/output,arg1,arg2]
+    aws emr add-steps --cluster-id j-XXXXXXXX --steps Type=PIG,Name='Pig program',ActionOnFailure=CONTINUE,Args=[-f,s3://amzn-s3-demo-bucket/mypigscript.pig,-p,INPUT=s3://amzn-s3-demo-bucket/mypiginput,-p,OUTPUT=s3://amzn-s3-demo-bucket/mypigoutput,arg1,arg2] Type=PIG,Name='Pig program',Args=[-f,s3://elasticmapreduce/samples/pig-apache/do-reports2.pig,-p,INPUT=s3://elasticmapreduce/samples/pig-apache/input,-p,OUTPUT=s3://amzn-s3-demo-bucket/pig-apache/output,arg1,arg2]
 
 
 - Required parameters::

awscli/examples/emr/create-cluster-examples.rst (+5 −5)

@@ -369,7 +369,7 @@ The following ``create-cluster`` examples add a streaming step to a cluster that
 The following example specifies the step inline. ::
 
     aws emr create-cluster \
-        --steps Type=STREAMING,Name='Streaming Program',ActionOnFailure=CONTINUE,Args=[-files,s3://elasticmapreduce/samples/wordcount/wordSplitter.py,-mapper,wordSplitter.py,-reducer,aggregate,-input,s3://elasticmapreduce/samples/wordcount/input,-output,s3://mybucket/wordcount/output] \
+        --steps Type=STREAMING,Name='Streaming Program',ActionOnFailure=CONTINUE,Args=[-files,s3://elasticmapreduce/samples/wordcount/wordSplitter.py,-mapper,wordSplitter.py,-reducer,aggregate,-input,s3://elasticmapreduce/samples/wordcount/input,-output,s3://amzn-s3-demo-bucket/wordcount/output] \
         --release-label emr-5.3.1 \
         --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m4.large InstanceGroupType=CORE,InstanceCount=2,InstanceType=m4.large \
         --auto-terminate
@@ -397,7 +397,7 @@ Contents of ``multiplefiles.json``::
                 "-input",
                 "s3://elasticmapreduce/samples/wordcount/input",
                 "-output",
-                "s3://mybucket/wordcount/output"
+                "s3://amzn-s3-demo-bucket/wordcount/output"
             ],
             "ActionOnFailure": "CONTINUE",
             "Type": "STREAMING"
@@ -409,7 +409,7 @@ Contents of ``multiplefiles.json``::
 The following example add Hive steps when creating a cluster. Hive steps require parameters ``Type`` and ``Args``. Hive steps optional parameters are ``Name`` and ``ActionOnFailure``. ::
 
     aws emr create-cluster \
-        --steps Type=HIVE,Name='Hive program',ActionOnFailure=CONTINUE,ActionOnFailure=TERMINATE_CLUSTER,Args=[-f,s3://elasticmapreduce/samples/hive-ads/libs/model-build.q,-d,INPUT=s3://elasticmapreduce/samples/hive-ads/tables,-d,OUTPUT=s3://mybucket/hive-ads/output/2014-04-18/11-07-32,-d,LIBS=s3://elasticmapreduce/samples/hive-ads/libs] \
+        --steps Type=HIVE,Name='Hive program',ActionOnFailure=CONTINUE,ActionOnFailure=TERMINATE_CLUSTER,Args=[-f,s3://elasticmapreduce/samples/hive-ads/libs/model-build.q,-d,INPUT=s3://elasticmapreduce/samples/hive-ads/tables,-d,OUTPUT=s3://amzn-s3-demo-bucket/hive-ads/output/2014-04-18/11-07-32,-d,LIBS=s3://elasticmapreduce/samples/hive-ads/libs] \
         --applications Name=Hive \
         --release-label emr-5.3.1 \
         --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m4.large InstanceGroupType=CORE,InstanceCount=2,InstanceType=m4.large
@@ -419,7 +419,7 @@ The following example add Hive steps when creating a cluster. Hive steps require
 The following example adds Pig steps when creating a cluster. Pig steps required parameters are ``Type`` and ``Args``. Pig steps optional parameters are ``Name`` and ``ActionOnFailure``. ::
 
     aws emr create-cluster \
-        --steps Type=PIG,Name='Pig program',ActionOnFailure=CONTINUE,Args=[-f,s3://elasticmapreduce/samples/pig-apache/do-reports2.pig,-p,INPUT=s3://elasticmapreduce/samples/pig-apache/input,-p,OUTPUT=s3://mybucket/pig-apache/output] \
+        --steps Type=PIG,Name='Pig program',ActionOnFailure=CONTINUE,Args=[-f,s3://elasticmapreduce/samples/pig-apache/do-reports2.pig,-p,INPUT=s3://elasticmapreduce/samples/pig-apache/input,-p,OUTPUT=s3://amzn-s3-demo-bucket/pig-apache/output] \
         --applications Name=Pig \
         --release-label emr-5.3.1 \
         --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m4.large InstanceGroupType=CORE,InstanceCount=2,InstanceType=m4.large
@@ -429,7 +429,7 @@ The following example adds Pig steps when creating a cluster. Pig steps required
 The following ``create-cluster`` example runs two bootstrap actions defined as scripts that are stored in Amazon S3. ::
 
     aws emr create-cluster \
-        --bootstrap-actions Path=s3://mybucket/myscript1,Name=BootstrapAction1,Args=[arg1,arg2] Path=s3://mybucket/myscript2,Name=BootstrapAction2,Args=[arg1,arg2] \
+        --bootstrap-actions Path=s3://amzn-s3-demo-bucket/myscript1,Name=BootstrapAction1,Args=[arg1,arg2] Path=s3://amzn-s3-demo-bucket/myscript2,Name=BootstrapAction2,Args=[arg1,arg2] \
         --release-label emr-5.3.1 \
         --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m4.large InstanceGroupType=CORE,InstanceCount=2,InstanceType=m4.large \
         --auto-terminate
+22 −22

@@ -1,23 +1,23 @@
-**To cancel a snapshot export to Amazon S3**
-
-The following ``cancel-export-task`` example cancels an export task in progress that is exporting a snapshot to Amazon S3. ::
-
-    aws rds cancel-export-task \
-        --export-task-identifier my-s3-export-1
-
-Output::
-
-    {
-        "ExportTaskIdentifier": "my-s3-export-1",
-        "SourceArn": "arn:aws:rds:us-east-1:123456789012:snapshot:publisher-final-snapshot",
-        "SnapshotTime": "2019-03-24T20:01:09.815Z",
-        "S3Bucket": "mybucket",
-        "S3Prefix": "",
-        "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/export-snap-S3-role",
-        "KmsKeyId": "arn:aws:kms:us-east-1:123456789012:key/abcd0000-7bfd-4594-af38-aabbccddeeff",
-        "Status": "CANCELING",
-        "PercentProgress": 0,
-        "TotalExtractedDataInGB": 0
-    }
-
+**To cancel a snapshot export to Amazon S3**
+
+The following ``cancel-export-task`` example cancels an export task in progress that is exporting a snapshot to Amazon S3. ::
+
+    aws rds cancel-export-task \
+        --export-task-identifier my-s3-export-1
+
+Output::
+
+    {
+        "ExportTaskIdentifier": "my-s3-export-1",
+        "SourceArn": "arn:aws:rds:us-east-1:123456789012:snapshot:publisher-final-snapshot",
+        "SnapshotTime": "2019-03-24T20:01:09.815Z",
+        "S3Bucket": "amzn-s3-demo-bucket",
+        "S3Prefix": "",
+        "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/export-snap-S3-role",
+        "KmsKeyId": "arn:aws:kms:us-east-1:123456789012:key/abcd0000-7bfd-4594-af38-aabbccddeeff",
+        "Status": "CANCELING",
+        "PercentProgress": 0,
+        "TotalExtractedDataInGB": 0
+    }
+
 For more information, see `Canceling a snapshot export task <https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ExportSnapshot.html#USER_ExportSnapshot.Canceling>`__ in the *Amazon RDS User Guide* or `Canceling a snapshot export task <https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_ExportSnapshot.html#USER_ExportSnapshot.Canceling>`__ in the *Amazon Aurora User Guide*.
+40 −40

@@ -1,40 +1,40 @@
-**To describe snapshot export tasks**
-
-The following ``describe-export-tasks`` example returns information about snapshot exports to Amazon S3. ::
-
-    aws rds describe-export-tasks
-
-Output::
-
-    {
-        "ExportTasks": [
-            {
-                "ExportTaskIdentifier": "test-snapshot-export",
-                "SourceArn": "arn:aws:rds:us-west-2:123456789012:snapshot:test-snapshot",
-                "SnapshotTime": "2020-03-02T18:26:28.163Z",
-                "TaskStartTime": "2020-03-02T18:57:56.896Z",
-                "TaskEndTime": "2020-03-02T19:10:31.985Z",
-                "S3Bucket": "mybucket",
-                "S3Prefix": "",
-                "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/ExportRole",
-                "KmsKeyId": "arn:aws:kms:us-west-2:123456789012:key/abcd0000-7fca-4128-82f2-aabbccddeeff",
-                "Status": "COMPLETE",
-                "PercentProgress": 100,
-                "TotalExtractedDataInGB": 0
-            },
-            {
-                "ExportTaskIdentifier": "my-s3-export",
-                "SourceArn": "arn:aws:rds:us-west-2:123456789012:snapshot:db5-snapshot-test",
-                "SnapshotTime": "2020-03-27T20:48:42.023Z",
-                "S3Bucket": "mybucket",
-                "S3Prefix": "",
-                "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/ExportRole",
-                "KmsKeyId": "arn:aws:kms:us-west-2:123456789012:key/abcd0000-7fca-4128-82f2-aabbccddeeff",
-                "Status": "STARTING",
-                "PercentProgress": 0,
-                "TotalExtractedDataInGB": 0
-            }
-        ]
-    }
-
-For more information, see `Monitoring Snapshot Exports <https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ExportSnapshot.html#USER_ExportSnapshot.Monitoring>`__ in the *Amazon RDS User Guide*.
+**To describe snapshot export tasks**
+
+The following ``describe-export-tasks`` example returns information about snapshot exports to Amazon S3. ::
+
+    aws rds describe-export-tasks
+
+Output::
+
+    {
+        "ExportTasks": [
+            {
+                "ExportTaskIdentifier": "test-snapshot-export",
+                "SourceArn": "arn:aws:rds:us-west-2:123456789012:snapshot:test-snapshot",
+                "SnapshotTime": "2020-03-02T18:26:28.163Z",
+                "TaskStartTime": "2020-03-02T18:57:56.896Z",
+                "TaskEndTime": "2020-03-02T19:10:31.985Z",
+                "S3Bucket": "amzn-s3-demo-bucket",
+                "S3Prefix": "",
+                "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/ExportRole",
+                "KmsKeyId": "arn:aws:kms:us-west-2:123456789012:key/abcd0000-7fca-4128-82f2-aabbccddeeff",
+                "Status": "COMPLETE",
+                "PercentProgress": 100,
+                "TotalExtractedDataInGB": 0
+            },
+            {
+                "ExportTaskIdentifier": "my-s3-export",
+                "SourceArn": "arn:aws:rds:us-west-2:123456789012:snapshot:db5-snapshot-test",
+                "SnapshotTime": "2020-03-27T20:48:42.023Z",
+                "S3Bucket": "amzn-s3-demo-bucket",
+                "S3Prefix": "",
+                "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/ExportRole",
+                "KmsKeyId": "arn:aws:kms:us-west-2:123456789012:key/abcd0000-7fca-4128-82f2-aabbccddeeff",
+                "Status": "STARTING",
+                "PercentProgress": 0,
+                "TotalExtractedDataInGB": 0
+            }
+        ]
+    }
+
+For more information, see `Monitoring Snapshot Exports <https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ExportSnapshot.html#USER_ExportSnapshot.Monitoring>`__ in the *Amazon RDS User Guide*.
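The sample output above (one ``COMPLETE`` task, one ``STARTING``) is a convenient fixture for polling logic. A minimal, hypothetical sketch in plain Python — the ``response`` dict mimics the shape shown above; in practice it would come from boto3's ``describe_export_tasks``, and ``in_progress`` is an illustrative helper, not part of any SDK:

```python
# Response shaped like the describe-export-tasks output above (trimmed).
response = {
    "ExportTasks": [
        {"ExportTaskIdentifier": "test-snapshot-export",
         "Status": "COMPLETE", "PercentProgress": 100},
        {"ExportTaskIdentifier": "my-s3-export",
         "Status": "STARTING", "PercentProgress": 0},
    ]
}

def in_progress(resp):
    """Return identifiers of export tasks that have not yet completed."""
    return [t["ExportTaskIdentifier"]
            for t in resp["ExportTasks"]
            if t["Status"] != "COMPLETE"]

print(in_progress(response))  # ['my-s3-export']
```

A caller would re-invoke ``describe-export-tasks`` until this list is empty.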

0 commit comments
