From 1012048654667ac319a26b315a1d209d500e612c Mon Sep 17 00:00:00 2001 From: awstools Date: Mon, 7 Aug 2023 18:18:39 +0000 Subject: [PATCH] docs(client-rekognition): This release adds code snippets for Amazon Rekognition Custom Labels. --- .../src/commands/AssociateFacesCommand.ts | 5 +- .../src/commands/CreateDatasetCommand.ts | 4 +- .../CreateFaceLivenessSessionCommand.ts | 13 +- .../src/commands/CreateUserCommand.ts | 17 +- .../src/commands/DeleteUserCommand.ts | 5 +- .../src/commands/DetectLabelsCommand.ts | 14 +- .../src/commands/DisassociateFacesCommand.ts | 5 +- .../src/commands/GetFaceDetectionCommand.ts | 2 + .../GetFaceLivenessSessionResultsCommand.ts | 7 +- .../src/commands/GetTextDetectionCommand.ts | 2 +- .../src/commands/SearchUsersByImageCommand.ts | 6 +- .../src/endpoint/ruleset.ts | 2 +- .../client-rekognition/src/models/models_0.ts | 306 +++++++++--------- 13 files changed, 196 insertions(+), 192 deletions(-) diff --git a/clients/client-rekognition/src/commands/AssociateFacesCommand.ts b/clients/client-rekognition/src/commands/AssociateFacesCommand.ts index 7f44dcc34f04..31ea9691d2e6 100644 --- a/clients/client-rekognition/src/commands/AssociateFacesCommand.ts +++ b/clients/client-rekognition/src/commands/AssociateFacesCommand.ts @@ -115,9 +115,8 @@ export interface AssociateFacesCommandOutput extends AssociateFacesResponse, __M *

You are not authorized to perform the action.

* * @throws {@link ConflictException} (client fault) - *

- * A User with the same Id already exists within the collection, or the update or deletion of the User caused an inconsistent state. ** - *

+ *

A User with the same Id already exists within the collection, or the update or deletion + * of the User caused an inconsistent state.</p>

* * @throws {@link IdempotentParameterMismatchException} (client fault) *

A ClientRequestToken input parameter was reused with an operation, but at least one of the other input diff --git a/clients/client-rekognition/src/commands/CreateDatasetCommand.ts b/clients/client-rekognition/src/commands/CreateDatasetCommand.ts index 26ff81a4161d..8b26b3765326 100644 --- a/clients/client-rekognition/src/commands/CreateDatasetCommand.ts +++ b/clients/client-rekognition/src/commands/CreateDatasetCommand.ts @@ -38,9 +38,9 @@ export interface CreateDatasetCommandOutput extends CreateDatasetResponse, __Met * @public *

Creates a new Amazon Rekognition Custom Labels dataset. You can create a dataset by using * an Amazon SageMaker format manifest file or by copying an existing Amazon Rekognition Custom Labels dataset.</p>

- *

To create a training dataset for a project, specify train for the value of + *

To create a training dataset for a project, specify TRAIN for the value of * DatasetType. To create the test dataset for a project, - * specify test for the value of DatasetType. + * specify TEST for the value of DatasetType. *

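 * <p>For example, a minimal sketch of creating a training dataset from an assumed
 *    Amazon SageMaker Ground Truth manifest; the project ARN, bucket, and key below are
 *    placeholder values, not part of this SDK:</p>
 * ```javascript
 * import { RekognitionClient, CreateDatasetCommand } from "@aws-sdk/client-rekognition";
 * const client = new RekognitionClient({}); // region/credentials resolved from the environment
 * const { DatasetArn } = await client.send(new CreateDatasetCommand({
 *   ProjectArn: "arn:aws:rekognition:us-east-1:111122223333:project/my-project/1690000000000", // placeholder
 *   DatasetType: "TRAIN", // use "TEST" to create the test dataset
 *   DatasetSource: {
 *     GroundTruthManifest: {
 *       S3Object: { Bucket: "my-training-bucket", Name: "manifests/output.manifest" }, // placeholders
 *     },
 *   },
 * }));
 * // Creation is asynchronous; poll DescribeDataset with DatasetArn until it completes.
 * ```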
*

The response from CreateDataset is the Amazon Resource Name (ARN) for the dataset. * Creating a dataset takes a while to complete. Use DescribeDataset to check the diff --git a/clients/client-rekognition/src/commands/CreateFaceLivenessSessionCommand.ts b/clients/client-rekognition/src/commands/CreateFaceLivenessSessionCommand.ts index 8ca9b674c34b..40fd03b40cca 100644 --- a/clients/client-rekognition/src/commands/CreateFaceLivenessSessionCommand.ts +++ b/clients/client-rekognition/src/commands/CreateFaceLivenessSessionCommand.ts @@ -38,11 +38,14 @@ export interface CreateFaceLivenessSessionCommandOutput extends CreateFaceLivene * @public *

This API operation initiates a Face Liveness session. It returns a SessionId, * which you can use to start streaming Face Liveness video and get the results for a Face - * Liveness session. You can use the OutputConfig option in the Settings parameter - * to provide an Amazon S3 bucket location. The Amazon S3 bucket stores reference images and audit images. - * You can use AuditImagesLimit to limit the number of audit images returned. This - * number is between 0 and 4. By default, it is set to 0. The limit is best effort and based on - * the duration of the selfie-video.

+ * Liveness session.

+ *

You can use the OutputConfig option in the Settings parameter to provide an + * Amazon S3 bucket location. The Amazon S3 bucket stores reference images and audit images. If no Amazon S3 + * bucket is defined, raw bytes are sent instead.

+ *

You can use AuditImagesLimit to limit the number of audit images returned + * when GetFaceLivenessSessionResults is called. This number is between 0 and 4. By + * default, it is set to 0. The limit is best effort and based on the duration of the + * selfie-video.

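 * <p>As a minimal sketch, a session could be created with an assumed output bucket and a
 *    non-default audit-image limit; the bucket name and key prefix are placeholders:</p>
 * ```javascript
 * import { RekognitionClient, CreateFaceLivenessSessionCommand } from "@aws-sdk/client-rekognition";
 * const client = new RekognitionClient({});
 * const { SessionId } = await client.send(new CreateFaceLivenessSessionCommand({
 *   Settings: {
 *     OutputConfig: { S3Bucket: "my-liveness-bucket", S3KeyPrefix: "liveness/" }, // placeholders
 *     AuditImagesLimit: 2, // between 0 and 4; defaults to 0
 *   },
 * }));
 * // Pass SessionId to the client application that streams the selfie-video.
 * ```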
* @example * Use a bare-bones client and the command you need to make an API call. * ```javascript diff --git a/clients/client-rekognition/src/commands/CreateUserCommand.ts b/clients/client-rekognition/src/commands/CreateUserCommand.ts index 600f3d7a22be..0add087db634 100644 --- a/clients/client-rekognition/src/commands/CreateUserCommand.ts +++ b/clients/client-rekognition/src/commands/CreateUserCommand.ts @@ -37,13 +37,13 @@ export interface CreateUserCommandOutput extends CreateUserResponse, __MetadataB /** * @public *

Creates a new User within a collection specified by CollectionId. Takes - * UserId as a parameter, which is a user provided ID which should be unique - * within the collection. The provided UserId will alias the system generated - * UUID to make the UserId more user friendly.

+ * UserId as a parameter, which is a user-provided ID that should be unique + * within the collection. The provided UserId will alias the system-generated UUID + * to make the UserId more user-friendly.</p>

*

Uses a ClientToken, an idempotency token that ensures a call to - * CreateUser completes only once. If the value is not supplied, the AWS SDK - * generates an idempotency token for the requests. This prevents retries after a network - * error results from making multiple CreateUser calls.

+ * CreateUser completes only once. If the value is not supplied, the AWS SDK + * generates an idempotency token for the request. This prevents retries after a network error + * from resulting in multiple CreateUser calls.</p>

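 * <p>A minimal sketch, assuming a collection named "my-collection" already exists:</p>
 * ```javascript
 * import { RekognitionClient, CreateUserCommand } from "@aws-sdk/client-rekognition";
 * const client = new RekognitionClient({});
 * await client.send(new CreateUserCommand({
 *   CollectionId: "my-collection", // placeholder collection
 *   UserId: "user-alice",          // caller-chosen ID, unique within the collection
 *   // ClientRequestToken omitted: the SDK generates an idempotency token automatically
 * }));
 * ```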
* @example * Use a bare-bones client and the command you need to make an API call. * ```javascript @@ -71,9 +71,8 @@ export interface CreateUserCommandOutput extends CreateUserResponse, __MetadataB *

You are not authorized to perform the action.

* * @throws {@link ConflictException} (client fault) - *

- * A User with the same Id already exists within the collection, or the update or deletion of the User caused an inconsistent state. ** - *

+ *

A User with the same Id already exists within the collection, or the update or deletion + * of the User caused an inconsistent state.</p>

* * @throws {@link IdempotentParameterMismatchException} (client fault) *

A ClientRequestToken input parameter was reused with an operation, but at least one of the other input diff --git a/clients/client-rekognition/src/commands/DeleteUserCommand.ts b/clients/client-rekognition/src/commands/DeleteUserCommand.ts index d0a09a1da1d7..ef3de2c90fd3 100644 --- a/clients/client-rekognition/src/commands/DeleteUserCommand.ts +++ b/clients/client-rekognition/src/commands/DeleteUserCommand.ts @@ -68,9 +68,8 @@ export interface DeleteUserCommandOutput extends DeleteUserResponse, __MetadataB *

You are not authorized to perform the action.

* * @throws {@link ConflictException} (client fault) - *

- * A User with the same Id already exists within the collection, or the update or deletion of the User caused an inconsistent state. ** - *

+ *

A User with the same Id already exists within the collection, or the update or deletion + * of the User caused an inconsistent state.</p>

* * @throws {@link IdempotentParameterMismatchException} (client fault) *

A ClientRequestToken input parameter was reused with an operation, but at least one of the other input diff --git a/clients/client-rekognition/src/commands/DetectLabelsCommand.ts b/clients/client-rekognition/src/commands/DetectLabelsCommand.ts index 019bcb7da466..83e713cb224c 100644 --- a/clients/client-rekognition/src/commands/DetectLabelsCommand.ts +++ b/clients/client-rekognition/src/commands/DetectLabelsCommand.ts @@ -59,9 +59,11 @@ export interface DetectLabelsCommandOutput extends DetectLabelsResponse, __Metad * labels or with label categories. You can specify inclusive filters, exclusive filters, or a * combination of inclusive and exclusive filters. For more information on filtering see Detecting * Labels in an Image.

- *

You can specify MinConfidence to control the confidence threshold for the - * labels returned. The default is 55%. You can also add the MaxLabels parameter to - * limit the number of labels returned. The default and upper limit is 1000 labels.

+ *

When getting labels, you can specify MinConfidence to control the + * confidence threshold for the labels returned. The default is 55%. You can also add the + * MaxLabels parameter to limit the number of labels returned. The default and + * upper limit is 1000 labels. These arguments are only valid when supplying GENERAL_LABELS as a + * feature type.

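 * <p>For example, a minimal sketch that caps the response at 20 labels with at least 70%
 *    confidence; the bucket and key are placeholder values:</p>
 * ```javascript
 * import { RekognitionClient, DetectLabelsCommand } from "@aws-sdk/client-rekognition";
 * const client = new RekognitionClient({});
 * const { Labels } = await client.send(new DetectLabelsCommand({
 *   Image: { S3Object: { Bucket: "my-images-bucket", Name: "photos/street.jpg" } }, // placeholders
 *   Features: ["GENERAL_LABELS"], // MaxLabels and MinConfidence only apply to GENERAL_LABELS
 *   MaxLabels: 20,
 *   MinConfidence: 70,
 * }));
 * for (const label of Labels ?? []) console.log(label.Name, label.Confidence);
 * ```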
*

* Response Elements *

@@ -108,10 +110,12 @@ export interface DetectLabelsCommandOutput extends DetectLabelsResponse, __Metad *

Dominant Color - An array of the dominant colors in the image.

* *
  • - *

    Foreground - Information about the sharpness, brightness, and dominant colors of the input image’s foreground.

    + *

    Foreground - Information about the sharpness, brightness, and dominant colors of the + * input image’s foreground.

    *
  • *
  • - *

    Background - Information about the sharpness, brightness, and dominant colors of the input image’s background.

    + *

    Background - Information about the sharpness, brightness, and dominant colors of the + * input image’s background.

    *
  • * *

    The list of returned labels will include at least one label for every detected object, diff --git a/clients/client-rekognition/src/commands/DisassociateFacesCommand.ts b/clients/client-rekognition/src/commands/DisassociateFacesCommand.ts index 0dc104eaf03f..10b7b88833bb 100644 --- a/clients/client-rekognition/src/commands/DisassociateFacesCommand.ts +++ b/clients/client-rekognition/src/commands/DisassociateFacesCommand.ts @@ -90,9 +90,8 @@ export interface DisassociateFacesCommandOutput extends DisassociateFacesRespons *

    You are not authorized to perform the action.

    * * @throws {@link ConflictException} (client fault) - *

    - * A User with the same Id already exists within the collection, or the update or deletion of the User caused an inconsistent state. ** - *

    + *

A User with the same Id already exists within the collection, or the update or deletion + * of the User caused an inconsistent state.</p>

    * * @throws {@link IdempotentParameterMismatchException} (client fault) *

    A ClientRequestToken input parameter was reused with an operation, but at least one of the other input diff --git a/clients/client-rekognition/src/commands/GetFaceDetectionCommand.ts b/clients/client-rekognition/src/commands/GetFaceDetectionCommand.ts index 289e71c8b924..273eb5a45423 100644 --- a/clients/client-rekognition/src/commands/GetFaceDetectionCommand.ts +++ b/clients/client-rekognition/src/commands/GetFaceDetectionCommand.ts @@ -49,6 +49,8 @@ export interface GetFaceDetectionCommandOutput extends GetFaceDetectionResponse, * specified in MaxResults, the value of NextToken in the operation response contains a pagination token for getting the next set * of results. To get the next page of results, call GetFaceDetection and populate the NextToken request parameter with the token * value returned from the previous call to GetFaceDetection.

    + *

    Note that for the GetFaceDetection operation, the returned values for + * FaceOccluded and EyeDirection will always be "null".

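 * <p>A minimal sketch of paging through all results; the JobId is a placeholder returned
 *    by an earlier StartFaceDetection call:</p>
 * ```javascript
 * import { RekognitionClient, GetFaceDetectionCommand } from "@aws-sdk/client-rekognition";
 * const client = new RekognitionClient({});
 * let NextToken;
 * do {
 *   const page = await client.send(new GetFaceDetectionCommand({
 *     JobId: "0123456789abcdef", // placeholder
 *     MaxResults: 100,
 *     NextToken,
 *   }));
 *   for (const f of page.Faces ?? []) console.log(f.Timestamp, f.Face?.Confidence);
 *   NextToken = page.NextToken;
 * } while (NextToken);
 * ```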
    * @example * Use a bare-bones client and the command you need to make an API call. * ```javascript diff --git a/clients/client-rekognition/src/commands/GetFaceLivenessSessionResultsCommand.ts b/clients/client-rekognition/src/commands/GetFaceLivenessSessionResultsCommand.ts index 16402ff5b6d2..fceedc6ca1ad 100644 --- a/clients/client-rekognition/src/commands/GetFaceLivenessSessionResultsCommand.ts +++ b/clients/client-rekognition/src/commands/GetFaceLivenessSessionResultsCommand.ts @@ -49,8 +49,11 @@ export interface GetFaceLivenessSessionResultsCommandOutput * sessionId as input, which was created using * CreateFaceLivenessSession. Returns the corresponding Face Liveness confidence * score, a reference image that includes a face bounding box, and audit images that also contain - * face bounding boxes. The Face Liveness confidence score ranges from 0 to 100. The reference - * image can optionally be returned.

    + * face bounding boxes. The Face Liveness confidence score ranges from 0 to 100.

    + *

The number of audit images returned by GetFaceLivenessSessionResults is + * defined by the AuditImagesLimit parameter when calling + * CreateFaceLivenessSession. Reference images are always returned when + * possible.</p>

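 * <p>A minimal sketch of reading the score after the streaming session ends; the
 *    SessionId is a placeholder from CreateFaceLivenessSession:</p>
 * ```javascript
 * import { RekognitionClient, GetFaceLivenessSessionResultsCommand } from "@aws-sdk/client-rekognition";
 * const client = new RekognitionClient({});
 * const results = await client.send(new GetFaceLivenessSessionResultsCommand({
 *   SessionId: "11111111-2222-3333-4444-555555555555", // placeholder
 * }));
 * console.log(results.Status, results.Confidence);
 * console.log(results.AuditImages?.length); // bounded by AuditImagesLimit
 * ```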
    * @example * Use a bare-bones client and the command you need to make an API call. * ```javascript diff --git a/clients/client-rekognition/src/commands/GetTextDetectionCommand.ts b/clients/client-rekognition/src/commands/GetTextDetectionCommand.ts index 08afad9bb782..31872c82d112 100644 --- a/clients/client-rekognition/src/commands/GetTextDetectionCommand.ts +++ b/clients/client-rekognition/src/commands/GetTextDetectionCommand.ts @@ -46,7 +46,7 @@ export interface GetTextDetectionCommandOutput extends GetTextDetectionResponse, * of StartLabelDetection.

    *

    * GetTextDetection returns an array of detected text (TextDetections) sorted by - * the time the text was detected, up to 50 words per frame of video.

    + * the time the text was detected, up to 100 words per frame of video.

    *

Each element of the array includes the detected text, the percentage confidence in the accuracy * of the detected text, the time the text was detected, bounding box information for where the text * was located, and unique identifiers for words and their lines.</p>

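 * <p>A minimal sketch of reading detected words and lines; the JobId is a placeholder
 *    returned by an earlier StartTextDetection call:</p>
 * ```javascript
 * import { RekognitionClient, GetTextDetectionCommand } from "@aws-sdk/client-rekognition";
 * const client = new RekognitionClient({});
 * const { TextDetections } = await client.send(new GetTextDetectionCommand({
 *   JobId: "0123456789abcdef", // placeholder
 *   MaxResults: 1000,
 * }));
 * for (const d of TextDetections ?? []) {
 *   console.log(d.Timestamp, d.TextDetection?.Type, d.TextDetection?.DetectedText);
 * }
 * ```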
    diff --git a/clients/client-rekognition/src/commands/SearchUsersByImageCommand.ts b/clients/client-rekognition/src/commands/SearchUsersByImageCommand.ts index a89fffc1a923..828811bea236 100644 --- a/clients/client-rekognition/src/commands/SearchUsersByImageCommand.ts +++ b/clients/client-rekognition/src/commands/SearchUsersByImageCommand.ts @@ -42,9 +42,9 @@ export interface SearchUsersByImageCommandOutput extends SearchUsersByImageRespo * ordered by similarity score with the highest similarity first. It also returns a bounding box * for the face found in the input image.

    *

    Information about faces detected in the supplied image, but not used for the search, is - * returned in an array of UnsearchedFace objects. If no valid face is detected - * in the image, the response will contain an empty UserMatches list and no - * SearchedFace object.

    + * returned in an array of UnsearchedFace objects. If no valid face is detected in + * the image, the response will contain an empty UserMatches list and no + * SearchedFace object.

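 * <p>A minimal sketch that searches a collection using an image stored in Amazon S3; the
 *    collection ID, bucket, and key are placeholders:</p>
 * ```javascript
 * import { RekognitionClient, SearchUsersByImageCommand } from "@aws-sdk/client-rekognition";
 * const client = new RekognitionClient({});
 * const { UserMatches, UnsearchedFaces } = await client.send(new SearchUsersByImageCommand({
 *   CollectionId: "my-collection", // placeholder
 *   Image: { S3Object: { Bucket: "my-images-bucket", Name: "visitors/entry.jpg" } }, // placeholders
 *   UserMatchThreshold: 80,
 *   MaxUsers: 5,
 * }));
 * console.log(UserMatches?.[0]?.User?.UserId, UserMatches?.[0]?.Similarity);
 * console.log(UnsearchedFaces?.map((f) => f.Reasons));
 * ```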
    * @example * Use a bare-bones client and the command you need to make an API call. * ```javascript diff --git a/clients/client-rekognition/src/endpoint/ruleset.ts b/clients/client-rekognition/src/endpoint/ruleset.ts index b0bfcc5cf281..915c8a2b608e 100644 --- a/clients/client-rekognition/src/endpoint/ruleset.ts +++ b/clients/client-rekognition/src/endpoint/ruleset.ts @@ -26,5 +26,5 @@ m={[r]:"booleanEquals",[s]:[true,{[r]:"getAttr",[s]:[{[t]:e},"supportsDualStack" n=[i], o=[j], p=[{[t]:"Region"}]; -const _data={version:"1.0",parameters:{Region:f,UseDualStack:g,UseFIPS:g,Endpoint:f},rules:[{conditions:[{[r]:a,[s]:[h]}],type:b,rules:[{conditions:n,error:"Invalid Configuration: FIPS and custom endpoint are not supported",type:c},{type:b,rules:[{conditions:o,error:"Invalid Configuration: Dualstack and custom endpoint are not supported",type:c},{endpoint:{url:h,properties:k,headers:k},type:d}]}]},{type:b,rules:[{conditions:[{[r]:a,[s]:p}],type:b,rules:[{conditions:[{[r]:"aws.partition",[s]:p,assign:e}],type:b,rules:[{conditions:[i,j],type:b,rules:[{conditions:[l,m],type:b,rules:[{type:b,rules:[{endpoint:{url:"https://rekognition-fips.{Region}.{PartitionResult#dualStackDnsSuffix}",properties:k,headers:k},type:d}]}]},{error:"FIPS and DualStack are enabled, but this partition does not support one or both",type:c}]},{conditions:n,type:b,rules:[{conditions:[l],type:b,rules:[{type:b,rules:[{endpoint:{url:"https://rekognition-fips.{Region}.{PartitionResult#dnsSuffix}",properties:k,headers:k},type:d}]}]},{error:"FIPS is enabled but this partition does not support FIPS",type:c}]},{conditions:o,type:b,rules:[{conditions:[m],type:b,rules:[{type:b,rules:[{endpoint:{url:"https://rekognition.{Region}.{PartitionResult#dualStackDnsSuffix}",properties:k,headers:k},type:d}]}]},{error:"DualStack is enabled but this partition does not support DualStack",type:c}]},{type:b,rules:[{endpoint:{url:"https://rekognition.{Region}.{PartitionResult#dnsSuffix}",properties:k,headers:k},type:d}]}]}]},{error:"Invalid Configuration: Missing Region",type:c}]}]}; +const _data={version:"1.0",parameters:{Region:f,UseDualStack:g,UseFIPS:g,Endpoint:f},rules:[{conditions:[{[r]:a,[s]:[h]}],type:b,rules:[{conditions:n,error:"Invalid Configuration: FIPS and custom endpoint are not supported",type:c},{conditions:o,error:"Invalid Configuration: Dualstack and custom endpoint are not supported",type:c},{endpoint:{url:h,properties:k,headers:k},type:d}]},{conditions:[{[r]:a,[s]:p}],type:b,rules:[{conditions:[{[r]:"aws.partition",[s]:p,assign:e}],type:b,rules:[{conditions:[i,j],type:b,rules:[{conditions:[l,m],type:b,rules:[{endpoint:{url:"https://rekognition-fips.{Region}.{PartitionResult#dualStackDnsSuffix}",properties:k,headers:k},type:d}]},{error:"FIPS and DualStack are enabled, but this partition does not support one or both",type:c}]},{conditions:n,type:b,rules:[{conditions:[l],type:b,rules:[{endpoint:{url:"https://rekognition-fips.{Region}.{PartitionResult#dnsSuffix}",properties:k,headers:k},type:d}]},{error:"FIPS is enabled but this partition does not support FIPS",type:c}]},{conditions:o,type:b,rules:[{conditions:[m],type:b,rules:[{endpoint:{url:"https://rekognition.{Region}.{PartitionResult#dualStackDnsSuffix}",properties:k,headers:k},type:d}]},{error:"DualStack is enabled but this partition does not support DualStack",type:c}]},{endpoint:{url:"https://rekognition.{Region}.{PartitionResult#dnsSuffix}",properties:k,headers:k},type:d}]}]},{error:"Invalid Configuration: Missing Region",type:c}]}; export const ruleSet: RuleSetObject = 
_data; diff --git a/clients/client-rekognition/src/models/models_0.ts b/clients/client-rekognition/src/models/models_0.ts index 6cff034ab821..2c9af4d85778 100644 --- a/clients/client-rekognition/src/models/models_0.ts +++ b/clients/client-rekognition/src/models/models_0.ts @@ -164,8 +164,8 @@ export interface AssociateFacesRequest { /** * @public *

    Idempotent token used to identify the request to AssociateFaces. If you use - * the same token with multiple AssociateFaces requests, the same response is returned. - * Use ClientRequestToken to prevent the same request from being processed more than + * the same token with multiple AssociateFaces requests, the same response is + * returned. Use ClientRequestToken to prevent the same request from being processed more than * once.

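 * <p>For example, a sketch of pinning a fixed token so a retried call is treated as the
 *    same request; all IDs below are placeholders:</p>
 * ```javascript
 * import { RekognitionClient, AssociateFacesCommand } from "@aws-sdk/client-rekognition";
 * const client = new RekognitionClient({});
 * await client.send(new AssociateFacesCommand({
 *   CollectionId: "my-collection",                     // placeholder
 *   UserId: "user-alice",                              // placeholder
 *   FaceIds: ["f5817d37-94f6-4335-bfee-6cf79a3d806e"], // placeholder face ID
 *   ClientRequestToken: "associate-alice-0001",        // reuse the same token on retry
 * }));
 * ```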
    */ ClientRequestToken?: string; @@ -189,7 +189,8 @@ export type UnsuccessfulFaceAssociationReason = /** * @public - *

    Contains metadata like FaceId, UserID, and Reasons, for a face that was unsuccessfully associated.

    + *

    Contains metadata like FaceId, UserID, and Reasons, for a face that was unsuccessfully + * associated.

    */ export interface UnsuccessfulFaceAssociation { /** @@ -262,9 +263,8 @@ export interface AssociateFacesResponse { /** * @public - *

    - * A User with the same Id already exists within the collection, or the update or deletion of the User caused an inconsistent state. ** - *

    + *

A User with the same Id already exists within the collection, or the update or deletion + * of the User caused an inconsistent state.</p>

    */ export class ConflictException extends __BaseException { readonly name: "ConflictException" = "ConflictException"; @@ -613,12 +613,14 @@ export interface BoundingBox { /** * @public - *

    An image that is picked from the Face Liveness video and returned for audit trail purposes, returned as Base64-encoded bytes.

    + *

    An image that is picked from the Face Liveness video and returned for audit trail + * purposes, returned as Base64-encoded bytes.

    */ export interface AuditImage { /** * @public - *

    The Base64-encoded bytes representing an image selected from the Face Liveness video and returned for audit purposes.

    + *

    The Base64-encoded bytes representing an image selected from the Face Liveness video and + * returned for audit purposes.

    */ Bytes?: Uint8Array; @@ -1126,7 +1128,8 @@ export interface Celebrity { /** * @public - *

    Indicates the direction the eyes are gazing in (independent of the head pose) as determined by its pitch and yaw.

    + *

    Indicates the direction the eyes are gazing in (independent of the head pose) as + * determined by its pitch and yaw.

    */ export interface EyeDirection { /** @@ -1615,8 +1618,8 @@ export type QualityFilter = (typeof QualityFilter)[keyof typeof QualityFilter]; export interface Image { /** * @public - *

    Blob of image bytes up to 5 MBs. Note that the maximum image size you can pass to - * DetectCustomLabels is 4MB.

    + *

Blob of image bytes up to 5 MB. Note that the maximum image size you can + * pass to DetectCustomLabels is 4 MB.</p>

    */ Bytes?: Uint8Array; @@ -1994,22 +1997,22 @@ export interface ContentModerationDetection { /** * @public - *

    The time in milliseconds defining the start of the timeline - * segment containing a continuously detected moderation label.

    + *

    The time in milliseconds defining the start of the timeline segment containing a + * continuously detected moderation label.

    */ StartTimestampMillis?: number; /** * @public - *

    The time in milliseconds defining the end of the - * timeline segment containing a continuously detected moderation label.

    + *

    The time in milliseconds defining the end of the timeline segment containing a + * continuously detected moderation label.

    */ EndTimestampMillis?: number; /** * @public - *

    The time duration of a segment in milliseconds, - * I.e. time elapsed from StartTimestampMillis to EndTimestampMillis.

    + *

The time duration of a segment in milliseconds, i.e., the time elapsed from + * StartTimestampMillis to EndTimestampMillis.</p>

    */ DurationMillis?: number; } @@ -2321,7 +2324,7 @@ export interface CreateDatasetRequest { /** * @public *

    - * The type of the dataset. Specify train to create a training dataset. Specify test + * The type of the dataset. Specify TRAIN to create a training dataset. Specify TEST * to create a test dataset. *

    */ @@ -2351,9 +2354,9 @@ export interface CreateDatasetResponse { /** * @public - *

    Contains settings that specify the location of an Amazon S3 bucket used - * to store the output of a Face Liveness session. Note that the S3 bucket must be located - * in the caller's AWS account and in the same region as the Face Liveness end-point. Additionally, the Amazon S3 object keys are + *

Contains settings that specify the location of an Amazon S3 bucket used to store the output of + * a Face Liveness session. Note that the S3 bucket must be located in the caller's AWS account + * and in the same region as the Face Liveness endpoint. Additionally, the Amazon S3 object keys are * auto-generated by the Face Liveness system.</p>

    */ export interface LivenessOutputConfig { @@ -2372,8 +2375,8 @@ export interface LivenessOutputConfig { /** * @public - *

    A session settings object. It contains settings for the operation - * to be performed. It accepts arguments for OutputConfig and AuditImagesLimit.

    + *

    A session settings object. It contains settings for the operation to be performed. It + * accepts arguments for OutputConfig and AuditImagesLimit.

    */ export interface CreateFaceLivenessSessionRequestSettings { /** @@ -2402,8 +2405,8 @@ export interface CreateFaceLivenessSessionRequestSettings { export interface CreateFaceLivenessSessionRequest { /** * @public - *

    The identifier for your AWS Key Management Service key (AWS KMS key). - * Used to encrypt audit images and reference images.

    + *

    The identifier for your AWS Key Management Service key (AWS KMS key). Used to encrypt + * audit images and reference images.

    */ KmsKeyId?: string; @@ -2912,9 +2915,8 @@ export interface CreateUserRequest { /** * @public *

    Idempotent token used to identify the request to CreateUser. If you use the - * same token with multiple CreateUser requests, the same response is returned. - * Use ClientRequestToken to prevent the same request from being processed more than - * once.

    + * same token with multiple CreateUser requests, the same response is returned. Use + * ClientRequestToken to prevent the same request from being processed more than once.

    */ ClientRequestToken?: string; } @@ -3310,7 +3312,8 @@ export type UnsuccessfulFaceDeletionReason = /** * @public - *

    Contains metadata like FaceId, UserID, and Reasons, for a face that was unsuccessfully deleted.

    + *

    Contains metadata like FaceId, UserID, and Reasons, for a face that was unsuccessfully + * deleted.

    */ export interface UnsuccessfulFaceDeletion { /** @@ -3525,9 +3528,8 @@ export interface DeleteUserRequest { /** * @public *

    Idempotent token used to identify the request to DeleteUser. If you use the - * same token with multiple DeleteUser requests, the same response is returned. - * Use ClientRequestToken to prevent the same request from being processed more than - * once.

    + * same token with multiple DeleteUser requests, the same response is returned. Use + * ClientRequestToken to prevent the same request from being processed more than once.

    */ ClientRequestToken?: string; } @@ -4245,6 +4247,9 @@ export interface DetectFacesRequest { * response time.

    *

    If you provide both, ["ALL", "DEFAULT"], the service uses a logical "AND" * operator to determine which attributes to return (in this case, all attributes).

    + *

    Note that while the FaceOccluded and EyeDirection attributes are supported when using + * DetectFaces, they aren't supported when analyzing videos with + * StartFaceDetection and GetFaceDetection.

    */ Attributes?: (Attribute | string)[]; } @@ -4319,8 +4324,8 @@ export type DetectLabelsFeatureName = (typeof DetectLabelsFeatureName)[keyof typ /** * @public *

    Contains filters for the object labels returned by DetectLabels. Filters can be inclusive, - * exclusive, or a combination of both and can be applied to individual labels or entire label categories. - * To see a list of label categories, see Detecting Labels.

    + * exclusive, or a combination of both and can be applied to individual labels or entire label + * categories. To see a list of label categories, see Detecting Labels.

    */ export interface GeneralLabelsSettings { /** @@ -4355,17 +4360,18 @@ export interface GeneralLabelsSettings { export interface DetectLabelsImagePropertiesSettings { /** * @public - *

    The maximum number of dominant colors to return when detecting labels in an image. The default value is 10.

    + *

    The maximum number of dominant colors to return when detecting labels in an image. The + * default value is 10.

    */ MaxDominantColors?: number; } /** * @public - *

    Settings for the DetectLabels request. Settings can include - * filters for both GENERAL_LABELS and IMAGE_PROPERTIES. GENERAL_LABELS filters can be inclusive - * or exclusive and applied to individual labels or label categories. IMAGE_PROPERTIES filters - * allow specification of a maximum number of dominant colors.

    + *

    Settings for the DetectLabels request. Settings can include filters for both + * GENERAL_LABELS and IMAGE_PROPERTIES. GENERAL_LABELS filters can be inclusive or exclusive and + * applied to individual labels or label categories. IMAGE_PROPERTIES filters allow specification + * of a maximum number of dominant colors.

    */ export interface DetectLabelsSettings { /** @@ -4399,7 +4405,8 @@ export interface DetectLabelsRequest { /** * @public *

    Maximum number of labels you want the service to return in the response. The service - * returns the specified number of highest confidence labels.

+ * returns the specified number of highest confidence labels. Only valid when GENERAL_LABELS is + * specified as a feature type in the Features input parameter.</p>

    */ MaxLabels?: number; @@ -4408,24 +4415,25 @@ export interface DetectLabelsRequest { *

    Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn't * return any labels with confidence lower than this specified value.

    *

    If MinConfidence is not specified, the operation returns labels with a - * confidence values greater than or equal to 55 percent.

+ * confidence value greater than or equal to 55 percent. Only valid when GENERAL_LABELS is + * specified as a feature type in the Features input parameter.</p>

    */ MinConfidence?: number; /** * @public - *

    A list of the types of analysis to perform. Specifying GENERAL_LABELS uses the label detection - * feature, while specifying IMAGE_PROPERTIES returns information regarding image color and quality. - * If no option is specified GENERAL_LABELS is used by default.

    + *

    A list of the types of analysis to perform. Specifying GENERAL_LABELS uses the label + * detection feature, while specifying IMAGE_PROPERTIES returns information regarding image color + * and quality. If no option is specified GENERAL_LABELS is used by default.

    */ Features?: (DetectLabelsFeatureName | string)[]; /** * @public - *

    A list of the filters to be applied to returned detected labels and image properties. Specified - * filters can be inclusive, exclusive, or a combination of both. Filters can be used for individual - * labels or label categories. The exact label names or label categories must be supplied. For - * a full list of labels and label categories, see Detecting labels.

    + *

    A list of the filters to be applied to returned detected labels and image properties. + * Specified filters can be inclusive, exclusive, or a combination of both. Filters can be used + * for individual labels or label categories. The exact label names or label categories must be + * supplied. For a full list of labels and label categories, see Detecting labels.

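 * <p>For example, a sketch of a Settings object with one inclusive label filter and one
 *    exclusive category filter; the filter values are illustrative:</p>
 * ```javascript
 * import { RekognitionClient, DetectLabelsCommand } from "@aws-sdk/client-rekognition";
 * const client = new RekognitionClient({});
 * await client.send(new DetectLabelsCommand({
 *   Image: { S3Object: { Bucket: "my-images-bucket", Name: "photos/park.jpg" } }, // placeholders
 *   Features: ["GENERAL_LABELS"],
 *   Settings: {
 *     GeneralLabels: {
 *       LabelInclusionFilters: ["Dog", "Cat"],                      // keep only these labels
 *       LabelCategoryExclusionFilters: ["Vehicles and Automotive"], // drop a whole category
 *     },
 *   },
 * }));
 * ```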
    */ Settings?: DetectLabelsSettings; } @@ -4480,7 +4488,8 @@ export interface DominantColor { /** * @public - *

    The quality of an image provided for label detection, with regard to brightness, sharpness, and contrast.

    + *

    The quality of an image provided for label detection, with regard to brightness, + * sharpness, and contrast.

    */ export interface DetectLabelsImageQuality { /** @@ -4515,9 +4524,9 @@ export interface DetectLabelsImageBackground { /** * @public - *

    The dominant colors found in the background of an image, defined with RGB values, - * CSS color name, simplified color name, and PixelPercentage (the percentage of - * image pixels that have a particular color).

    + *

    The dominant colors found in the background of an image, defined with RGB values, CSS + * color name, simplified color name, and PixelPercentage (the percentage of image pixels that + * have a particular color).

    */ DominantColors?: DominantColor[]; } @@ -4535,47 +4544,46 @@ export interface DetectLabelsImageForeground { /** * @public - *

    The dominant colors found in the foreground of an image, defined with RGB values, - * CSS color name, simplified color name, and PixelPercentage (the percentage of image - * pixels that have a particular color).

    + *

    The dominant colors found in the foreground of an image, defined with RGB values, CSS + * color name, simplified color name, and PixelPercentage (the percentage of image pixels that + * have a particular color).

    */ DominantColors?: DominantColor[]; } /** * @public - *

    Information about the quality and dominant colors of an input image. - * Quality and color information is returned for the entire image, foreground, and background.

    + *

    Information about the quality and dominant colors of an input image. Quality and color + * information is returned for the entire image, foreground, and background.

    */ export interface DetectLabelsImageProperties { /** * @public - *

    Information about the quality of the image foreground as defined by brightness, - * sharpness, and contrast. The higher the value the greater the brightness, - * sharpness, and contrast respectively.

    + *

    Information about the quality of the image foreground as defined by brightness, sharpness, + * and contrast. The higher the value the greater the brightness, sharpness, and contrast + * respectively.

    */ Quality?: DetectLabelsImageQuality; /** * @public - *

    Information about the dominant colors found in an image, described with RGB values, - * CSS color name, simplified color name, and PixelPercentage (the percentage of image pixels - * that have a particular color).

    + *

    Information about the dominant colors found in an image, described with RGB values, CSS + * color name, simplified color name, and PixelPercentage (the percentage of image pixels that + * have a particular color).

    */ DominantColors?: DominantColor[]; /** * @public - *

    Information about the properties of an image’s foreground, including the - * foreground’s quality and dominant colors, including the quality and dominant colors of the image.

    + *

Information about the properties of an image’s foreground, including the foreground’s + * quality and dominant colors.</p>

    */ Foreground?: DetectLabelsImageForeground; /** * @public - *

    Information about the properties of an image’s background, including - * the background’s quality and dominant colors, including the quality - * and dominant colors of the image.

    + *

Information about the properties of an image’s background, including the background’s + * quality and dominant colors.</p>

    */ Background?: DetectLabelsImageBackground; } @@ -4643,10 +4651,9 @@ export interface Parent { /** * @public - *

    Structure containing details about the detected label, including the name, detected instances, parent labels, and level of - * confidence.

    - *

    - *

    + *

    Structure containing details about the detected label, including the name, detected + * instances, parent labels, and level of confidence.

    + *

    */ export interface Label { /** @@ -4663,8 +4670,9 @@ export interface Label { /** * @public - *

    If Label represents an object, Instances contains the bounding boxes for each instance of the detected object. - * Bounding boxes are returned for common object labels such as people, cars, furniture, apparel or pets.

    + *

    If Label represents an object, Instances contains the bounding + * boxes for each instance of the detected object. Bounding boxes are returned for common object + * labels such as people, cars, furniture, apparel or pets.

    */ Instances?: Instance[]; @@ -4719,7 +4727,8 @@ export interface DetectLabelsResponse { /** * @public - *

    Information about the properties of the input image, such as brightness, sharpness, contrast, and dominant colors.

    + *

    Information about the properties of the input image, such as brightness, sharpness, + * contrast, and dominant colors.

    */ ImageProperties?: DetectLabelsImageProperties; } @@ -5242,7 +5251,8 @@ export type UnsuccessfulFaceDisassociationReason = /** * @public - *

    Contains metadata like FaceId, UserID, and Reasons, for a face that was unsuccessfully disassociated.

    + *

    Contains metadata like FaceId, UserID, and Reasons, for a face that was unsuccessfully + * disassociated.

    */ export interface UnsuccessfulFaceDisassociation { /** @@ -5285,7 +5295,8 @@ export interface DisassociateFacesResponse { /** * @public - *

    The status of an update made to a User. Reflects if the User has been updated for every requested change.

    + *

    The status of an update made to a User. Reflects if the User has been updated for every + * requested change.

    */ UserStatus?: UserStatus | string; } @@ -5662,9 +5673,8 @@ export interface GetCelebrityRecognitionResponse { /** * @public - *

    Job identifier for the celebrity recognition operation for which you - * want to obtain results. The job identifer is returned by an initial call - * to StartCelebrityRecognition.

    + *

Job identifier for the celebrity recognition operation for which you want to obtain + * results. The job identifier is returned by an initial call to StartCelebrityRecognition.</p>

    */ JobId?: string; @@ -5677,9 +5687,8 @@ export interface GetCelebrityRecognitionResponse { /** * @public - *

    A job identifier specified in the call to StartCelebrityRecognition and - * returned in the job completion notification sent to your - * Amazon Simple Notification Service topic.

    + *

    A job identifier specified in the call to StartCelebrityRecognition and returned in the + * job completion notification sent to your Amazon Simple Notification Service topic.

    */ JobTag?: string; } @@ -5723,17 +5732,16 @@ export interface GetContentModerationRequest { /** * @public - *

    Defines how to aggregate results of the StartContentModeration request. - * Default aggregation option is TIMESTAMPS. - * SEGMENTS mode aggregates moderation labels over time.

    + *

    Defines how to aggregate results of the StartContentModeration request. Default + * aggregation option is TIMESTAMPS. SEGMENTS mode aggregates moderation labels over time.

    */ AggregateBy?: ContentModerationAggregateBy | string; } /** * @public - *

    Contains metadata about a content moderation request, - * including the SortBy and AggregateBy options.

    + *

    Contains metadata about a content moderation request, including the SortBy and AggregateBy + * options.

    */ export interface GetContentModerationRequestMetadata { /** @@ -5793,9 +5801,8 @@ export interface GetContentModerationResponse { /** * @public - *

    Job identifier for the content moderation operation for which you - * want to obtain results. The job identifer is returned by an initial call - * to StartContentModeration.

    + *

Job identifier for the content moderation operation for which you want to obtain results. + * The job identifier is returned by an initial call to StartContentModeration.</p>

    */ JobId?: string; @@ -5808,16 +5815,15 @@ export interface GetContentModerationResponse { /** * @public - *

    A job identifier specified in the call to StartContentModeration and - * returned in the job completion notification sent to your - * Amazon Simple Notification Service topic.

    + *

    A job identifier specified in the call to StartContentModeration and returned in the job + * completion notification sent to your Amazon Simple Notification Service topic.

    */ JobTag?: string; /** * @public - *

    Information about the paramters used when getting a response. Includes - * information on aggregation and sorting methods.

    + *

Information about the parameters used when getting a response. Includes information on + * aggregation and sorting methods.</p>

    */ GetRequestMetadata?: GetContentModerationRequestMetadata; } @@ -5886,9 +5892,8 @@ export interface GetFaceDetectionResponse { /** * @public - *

    Job identifier for the face detection operation for which you - * want to obtain results. The job identifer is returned by an initial call - * to StartFaceDetection.

    + *

Job identifier for the face detection operation for which you want to obtain results. The + * job identifier is returned by an initial call to StartFaceDetection.</p>

    */ JobId?: string; @@ -5901,9 +5906,8 @@ export interface GetFaceDetectionResponse { /** * @public - *

    A job identifier specified in the call to StartFaceDetection and - * returned in the job completion notification sent to your - * Amazon Simple Notification Service topic.

    + *

    A job identifier specified in the call to StartFaceDetection and returned in the job + * completion notification sent to your Amazon Simple Notification Service topic.

    */ JobTag?: string; } @@ -5956,8 +5960,8 @@ export interface GetFaceLivenessSessionResultsResponse { /** * @public - *

    Probabalistic confidence score for if the person in the given video was live, represented as a - * float value between 0 to 100.

    + *

Probabilistic confidence score for whether the person in the given video was live, represented + * as a float value between 0 and 100.</p>

    */ Confidence?: number; @@ -5976,7 +5980,8 @@ export interface GetFaceLivenessSessionResultsResponse { *

    A set of images from the Face Liveness video that can be used for audit purposes. It * includes a bounding box of the face and the Base64-encoded bytes that return an image. If the * CreateFaceLivenessSession request included an OutputConfig argument, the image will be - * uploaded to an S3Object specified in the output configuration.

    + * uploaded to an S3Object specified in the output configuration. If no Amazon S3 bucket is defined, + * raw bytes are sent instead.

    */ AuditImages?: AuditImage[]; } @@ -6140,9 +6145,8 @@ export interface GetFaceSearchResponse { /** * @public - *

    Job identifier for the face search operation for which you - * want to obtain results. The job identifer is returned by an initial call - * to StartFaceSearch.

    + *

Job identifier for the face search operation for which you want to obtain results. The job + * identifier is returned by an initial call to StartFaceSearch.</p>

    */ JobId?: string; @@ -6155,9 +6159,8 @@ export interface GetFaceSearchResponse { /** * @public - *

    A job identifier specified in the call to StartFaceSearch and - * returned in the job completion notification sent to your - * Amazon Simple Notification Service topic.

    + *

    A job identifier specified in the call to StartFaceSearch and returned in the job + * completion notification sent to your Amazon Simple Notification Service topic.

    */ JobTag?: string; } @@ -6235,8 +6238,8 @@ export interface GetLabelDetectionRequest { /** * @public - *

    Contains metadata about a label detection request, - * including the SortBy and AggregateBy options.

    + *

    Contains metadata about a label detection request, including the SortBy and AggregateBy + * options.

    */ export interface GetLabelDetectionRequestMetadata { /** @@ -6334,9 +6337,8 @@ export interface GetLabelDetectionResponse { /** * @public - *

    Job identifier for the label detection operation for which you - * want to obtain results. The job identifer is returned by an initial call - * to StartLabelDetection.

    + *

Job identifier for the label detection operation for which you want to obtain results. The + * job identifier is returned by an initial call to StartLabelDetection.</p>

    */ JobId?: string; @@ -6349,16 +6351,15 @@ export interface GetLabelDetectionResponse { /** * @public - *

    A job identifier specified in the call to StartLabelDetection and - * returned in the job completion notification sent to your - * Amazon Simple Notification Service topic.

    + *

    A job identifier specified in the call to StartLabelDetection and returned in the job + * completion notification sent to your Amazon Simple Notification Service topic.

    */ JobTag?: string; /** * @public - *

    Information about the paramters used when getting a response. Includes - * information on aggregation and sorting methods.

    + *

Information about the parameters used when getting a response. Includes information on + * aggregation and sorting methods.</p>

    */ GetRequestMetadata?: GetLabelDetectionRequestMetadata; } @@ -6473,9 +6474,8 @@ export interface GetPersonTrackingResponse { /** * @public - *

    Job identifier for the person tracking operation for which you - * want to obtain results. The job identifer is returned by an initial call - * to StartPersonTracking.

    + *

Job identifier for the person tracking operation for which you want to obtain results. The + * job identifier is returned by an initial call to StartPersonTracking.</p>

    */ JobId?: string; @@ -6488,9 +6488,8 @@ export interface GetPersonTrackingResponse { /** * @public - *

    A job identifier specified in the call to StartCelebrityRecognition and - * returned in the job completion notification sent to your - * Amazon Simple Notification Service topic.

    + *

A job identifier specified in the call to StartPersonTracking and returned in the + * job completion notification sent to your Amazon Simple Notification Service topic.</p>

    */ JobTag?: string; } @@ -6765,9 +6764,8 @@ export interface GetSegmentDetectionResponse { /** * @public - *

    Job identifier for the segment detection operation for which you - * want to obtain results. The job identifer is returned by an initial call - * to StartSegmentDetection.

    + *

Job identifier for the segment detection operation for which you want to obtain results. + * The job identifier is returned by an initial call to StartSegmentDetection.</p>

    */ JobId?: string; @@ -6780,9 +6778,8 @@ export interface GetSegmentDetectionResponse { /** * @public - *

    A job identifier specified in the call to StartSegmentDetection and - * returned in the job completion notification sent to your - * Amazon Simple Notification Service topic.

    + *

    A job identifier specified in the call to StartSegmentDetection and returned in the job + * completion notification sent to your Amazon Simple Notification Service topic.

    */ JobTag?: string; } @@ -6877,9 +6874,8 @@ export interface GetTextDetectionResponse { /** * @public - *

    Job identifier for the text detection operation for which you - * want to obtain results. The job identifer is returned by an initial call - * to StartTextDetection.

    + *

Job identifier for the text detection operation for which you want to obtain results. The + * job identifier is returned by an initial call to StartTextDetection.</p>

    */ JobId?: string; @@ -6892,9 +6888,8 @@ export interface GetTextDetectionResponse { /** * @public - *

    A job identifier specified in the call to StartTextDetection and - * returned in the job completion notification sent to your - * Amazon Simple Notification Service topic.

    + *

    A job identifier specified in the call to StartTextDetection and returned in the job + * completion notification sent to your Amazon Simple Notification Service topic.

    */ JobTag?: string; } @@ -7138,8 +7133,8 @@ export interface LabelDetectionSettings { /** * @public *

    Contains filters for the object labels returned by DetectLabels. Filters can be inclusive, - * exclusive, or a combination of both and can be applied to individual labels or entire label categories. - * To see a list of label categories, see Detecting Labels.

    + * exclusive, or a combination of both and can be applied to individual labels or entire label + * categories. To see a list of label categories, see Detecting Labels.

    */ GeneralLabels?: GeneralLabelsSettings; } @@ -7347,13 +7342,13 @@ export interface ListFacesRequest { /** * @public - *

    An array of user IDs to match when listing faces in a collection.

    + *

The ID of the user to filter results with when listing faces in a collection.</p>

    */ UserId?: string; /** * @public - *

    An array of face IDs to match when listing faces in a collection.

    + *

    An array of face IDs to filter results with when listing faces in a collection.

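 * <p>For example, a sketch of restricting the listing to specific faces; the collection
 *    and face IDs are placeholders (a UserId filter works the same way):</p>
 * ```javascript
 * import { RekognitionClient, ListFacesCommand } from "@aws-sdk/client-rekognition";
 * const client = new RekognitionClient({});
 * const { Faces } = await client.send(new ListFacesCommand({
 *   CollectionId: "my-collection", // placeholder
 *   FaceIds: ["f5817d37-94f6-4335-bfee-6cf79a3d806e"], // placeholder filter
 * }));
 * console.log(Faces?.map((f) => f.FaceId));
 * ```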
    */ FaceIds?: string[]; } @@ -7605,7 +7600,8 @@ export interface ListUsersResponse { /** * @public - *

    A pagination token to be used with the subsequent request if the response is truncated.

    + *

    A pagination token to be used with the subsequent request if the response is + * truncated.

    */ NextToken?: string; } @@ -7965,7 +7961,8 @@ export interface SearchUsersRequest { /** * @public - *

    Provides face metadata such as FaceId, BoundingBox, Confidence of the input face used for search.

    + *

    Provides face metadata such as FaceId, BoundingBox, Confidence of the input face used for + * search.

    */ export interface SearchedFace { /** @@ -8012,15 +8009,14 @@ export interface UserMatch { export interface SearchUsersResponse { /** * @public - *

    An array of UserMatch objects that matched the input face along with the confidence in - * the match. Array will be empty if there are no matches.

    + *

    An array of UserMatch objects that matched the input face along with the confidence in the + * match. Array will be empty if there are no matches.

    */ UserMatches?: UserMatch[]; /** * @public - *

    Version number of the face detection model associated with the input - * CollectionId.

    + *

    Version number of the face detection model associated with the input CollectionId.

    */ FaceModelVersion?: string; @@ -8214,17 +8210,17 @@ export interface SearchUsersByImageResponse { /** * @public - *

    A list of FaceDetail objects containing the BoundingBox for the largest face in image, - * as well as the confidence in the bounding box, that was searched for matches. If no valid - * face is detected in the image the response will contain no SearchedFace object.

    + *

A list of FaceDetail objects containing the BoundingBox for the largest face in the image, as + * well as the confidence in the bounding box, that was searched for matches. If no valid face is + * detected in the image, the response will contain no SearchedFace object.</p>

    */ SearchedFace?: SearchedFaceDetails; /** * @public - *

    List of UnsearchedFace objects. Contains the face details infered from the specified - * image but not used for search. Contains reasons that describe why a face wasn't used for - * Search.

    + *

List of UnsearchedFace objects. Contains the face details inferred from the specified image + * but not used for search. Contains reasons that describe why a face wasn't used for search. + * </p>

    */ UnsearchedFaces?: UnsearchedFace[]; }