@datafire/amazonaws_rekognition

Client library for Amazon Rekognition

Installation and Usage

npm install --save @datafire/amazonaws_rekognition
let amazonaws_rekognition = require('@datafire/amazonaws_rekognition').create({
  accessKeyId: "",
  secretAccessKey: "",
  region: ""
});

amazonaws_rekognition.ListCollections({}).then(data => {
  console.log(data);
});

Description

This is the Amazon Rekognition API reference.

Actions

CompareFaces

amazonaws_rekognition.CompareFaces({
  "SourceImage": null,
  "TargetImage": null
}, context)

Input

  • input object
    • QualityFilter
    • SimilarityThreshold
    • SourceImage required
      • Bytes
      • S3Object
        • Bucket
        • Name
        • Version
    • TargetImage required
      • Bytes
      • S3Object
        • Bucket
        • Name
        • Version

Output
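
Example usage, as a sketch (the bucket and object names are placeholders, and amazonaws_rekognition is the client created in Installation and Usage; FaceMatches is documented in CompareFacesResponse):

amazonaws_rekognition.CompareFaces({
  "SourceImage": { "S3Object": { "Bucket": "my-bucket", "Name": "source.jpg" } }, // placeholder bucket/key
  "TargetImage": { "S3Object": { "Bucket": "my-bucket", "Name": "target.jpg" } }, // placeholder bucket/key
  "SimilarityThreshold": 80 // only return target faces at >= 80% similarity
}).then(data => {
  // One entry per target face that matched the largest face in the source image
  data.FaceMatches.forEach(m => console.log(m.Similarity, m.Face.BoundingBox));
});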

CreateCollection

amazonaws_rekognition.CreateCollection({
  "CollectionId": null
}, context)

Input

  • input object
    • CollectionId required

Output

CreateProject

amazonaws_rekognition.CreateProject({
  "ProjectName": null
}, context)

Input

  • input object
    • ProjectName required

Output

CreateProjectVersion

amazonaws_rekognition.CreateProjectVersion({
  "ProjectArn": null,
  "VersionName": null,
  "OutputConfig": null,
  "TrainingData": null,
  "TestingData": null
}, context)

Input

  • input object
    • OutputConfig required
      • S3Bucket
      • S3KeyPrefix
    • ProjectArn required
    • TestingData required
      • Assets
      • AutoCreate
    • TrainingData required
    • VersionName required

Output

CreateStreamProcessor

amazonaws_rekognition.CreateStreamProcessor({
  "Input": null,
  "Output": null,
  "Name": null,
  "Settings": null,
  "RoleArn": null
}, context)

Input

  • input object
    • Input required
      • KinesisVideoStream
        • Arn
    • Name required
    • Output required
      • KinesisDataStream
        • Arn
    • RoleArn required
    • Settings required
      • FaceSearch
        • CollectionId
        • FaceMatchThreshold

Output
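
A hedged sketch of creating a processor that searches a collection for faces in a Kinesis video stream; every ARN, name, and ID below is a placeholder:

amazonaws_rekognition.CreateStreamProcessor({
  "Name": "my-processor", // placeholder name
  "Input": { "KinesisVideoStream": { "Arn": "arn:aws:kinesisvideo:us-east-1:123456789012:stream/my-stream/0" } }, // placeholder ARN
  "Output": { "KinesisDataStream": { "Arn": "arn:aws:kinesis:us-east-1:123456789012:stream/my-results" } }, // placeholder ARN
  "RoleArn": "arn:aws:iam::123456789012:role/RekognitionStreamRole", // placeholder role
  "Settings": {
    "FaceSearch": { "CollectionId": "my-collection", "FaceMatchThreshold": 85 }
  }
}).then(data => console.log(data.StreamProcessorArn));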

DeleteCollection

amazonaws_rekognition.DeleteCollection({
  "CollectionId": null
}, context)

Input

  • input object
    • CollectionId required

Output

DeleteFaces

amazonaws_rekognition.DeleteFaces({
  "CollectionId": null,
  "FaceIds": null
}, context)

Input

  • input object
    • CollectionId required
    • FaceIds required

Output

DeleteProject

amazonaws_rekognition.DeleteProject({
  "ProjectArn": null
}, context)

Input

  • input object
    • ProjectArn required

Output

DeleteProjectVersion

amazonaws_rekognition.DeleteProjectVersion({
  "ProjectVersionArn": null
}, context)

Input

  • input object
    • ProjectVersionArn required

Output

DeleteStreamProcessor

amazonaws_rekognition.DeleteStreamProcessor({
  "Name": null
}, context)

Input

  • input object
    • Name required

Output

DescribeCollection

amazonaws_rekognition.DescribeCollection({
  "CollectionId": null
}, context)

Input

  • input object
    • CollectionId required

Output

DescribeProjectVersions

amazonaws_rekognition.DescribeProjectVersions({
  "ProjectArn": null
}, context)

Input

  • input object
    • MaxResults string
    • NextToken string
    • MaxResults
    • NextToken
    • ProjectArn required
    • VersionNames

Output

DescribeProjects

amazonaws_rekognition.DescribeProjects({}, context)

Input

  • input object
    • MaxResults string
    • NextToken string
    • MaxResults
    • NextToken

Output

DescribeStreamProcessor

amazonaws_rekognition.DescribeStreamProcessor({
  "Name": null
}, context)

Input

  • input object
    • Name required

Output

DetectCustomLabels

amazonaws_rekognition.DetectCustomLabels({
  "ProjectVersionArn": null,
  "Image": {}
}, context)

Input

  • input object
    • Image required Image
    • MaxResults
    • MinConfidence
    • ProjectVersionArn required

Output
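
A sketch of a call against a running model version; the ProjectVersionArn and S3 names are placeholders, and the CustomLabels response field is assumed from the CustomLabel definition below:

amazonaws_rekognition.DetectCustomLabels({
  "ProjectVersionArn": "arn:aws:rekognition:us-east-1:123456789012:project/my-project/version/v1/0", // placeholder
  "Image": { "S3Object": { "Bucket": "my-bucket", "Name": "photo.jpg" } }, // placeholder
  "MinConfidence": 70 // drop detections below 70% confidence
}).then(data => {
  data.CustomLabels.forEach(l => console.log(l.Name, l.Confidence));
});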

DetectFaces

amazonaws_rekognition.DetectFaces({
  "Image": null
}, context)

Input

  • input object
    • Attributes
    • Image required
      • Bytes
      • S3Object
        • Bucket
        • Name
        • Version

Output

DetectLabels

amazonaws_rekognition.DetectLabels({
  "Image": null
}, context)

Input

  • input object
    • Image required
      • Bytes
      • S3Object
        • Bucket
        • Name
        • Version
    • MaxLabels
    • MinConfidence

Output
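
For example (placeholder bucket and key; Labels is documented in DetectLabelsResponse):

amazonaws_rekognition.DetectLabels({
  "Image": { "S3Object": { "Bucket": "my-bucket", "Name": "photo.jpg" } }, // placeholder
  "MaxLabels": 10,    // return at most 10 labels
  "MinConfidence": 75 // drop labels below 75% confidence
}).then(data => {
  data.Labels.forEach(label => console.log(label.Name, label.Confidence));
});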

DetectModerationLabels

amazonaws_rekognition.DetectModerationLabels({
  "Image": null
}, context)

Input

  • input object
    • HumanLoopConfig
      • DataAttributes
      • FlowDefinitionArn required
      • HumanLoopName required
    • Image required
      • Bytes
      • S3Object
        • Bucket
        • Name
        • Version
    • MinConfidence

Output

DetectProtectiveEquipment

amazonaws_rekognition.DetectProtectiveEquipment({
  "Image": null
}, context)

Input

  • input object
    • Image required
      • Bytes
      • S3Object
        • Bucket
        • Name
        • Version
    • SummarizationAttributes

Output

DetectText

amazonaws_rekognition.DetectText({
  "Image": null
}, context)

Input

  • input object
    • Filters
      • RegionsOfInterest
      • WordFilter
        • MinBoundingBoxHeight
        • MinBoundingBoxWidth
        • MinConfidence
    • Image required
      • Bytes
      • S3Object
        • Bucket
        • Name
        • Version

Output

GetCelebrityInfo

amazonaws_rekognition.GetCelebrityInfo({
  "Id": null
}, context)

Input

  • input object
    • Id required

Output

GetCelebrityRecognition

amazonaws_rekognition.GetCelebrityRecognition({
  "JobId": null
}, context)

Input

  • input object
    • MaxResults string
    • NextToken string
    • JobId required
    • MaxResults
    • NextToken
    • SortBy

Output

GetContentModeration

amazonaws_rekognition.GetContentModeration({
  "JobId": null
}, context)

Input

  • input object
    • MaxResults string
    • NextToken string
    • JobId required
    • MaxResults
    • NextToken
    • SortBy

Output

GetFaceDetection

amazonaws_rekognition.GetFaceDetection({
  "JobId": null
}, context)

Input

  • input object
    • MaxResults string
    • NextToken string
    • JobId required
    • MaxResults
    • NextToken

Output

GetFaceSearch

amazonaws_rekognition.GetFaceSearch({
  "JobId": null
}, context)

Input

  • input object
    • MaxResults string
    • NextToken string
    • JobId required
    • MaxResults
    • NextToken
    • SortBy

Output

GetLabelDetection

amazonaws_rekognition.GetLabelDetection({
  "JobId": null
}, context)

Input

  • input object
    • MaxResults string
    • NextToken string
    • JobId required
    • MaxResults
    • NextToken
    • SortBy

Output

GetPersonTracking

amazonaws_rekognition.GetPersonTracking({
  "JobId": null
}, context)

Input

  • input object
    • MaxResults string
    • NextToken string
    • JobId required
    • MaxResults
    • NextToken
    • SortBy

Output

GetSegmentDetection

amazonaws_rekognition.GetSegmentDetection({
  "JobId": null
}, context)

Input

  • input object
    • MaxResults string
    • NextToken string
    • JobId required
    • MaxResults
    • NextToken

Output

GetTextDetection

amazonaws_rekognition.GetTextDetection({
  "JobId": null
}, context)

Input

  • input object
    • MaxResults string
    • NextToken string
    • JobId required
    • MaxResults
    • NextToken

Output

IndexFaces

amazonaws_rekognition.IndexFaces({
  "CollectionId": null,
  "Image": null
}, context)

Input

  • input object
    • CollectionId required
    • DetectionAttributes
    • ExternalImageId
    • Image required
      • Bytes
      • S3Object
        • Bucket
        • Name
        • Version
    • MaxFaces
    • QualityFilter

Output
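
A sketch of indexing the largest face in an image into a collection; names are placeholders, and the FaceRecords response field is assumed from the FaceRecord definition below:

amazonaws_rekognition.IndexFaces({
  "CollectionId": "my-collection", // placeholder collection
  "Image": { "S3Object": { "Bucket": "my-bucket", "Name": "face.jpg" } }, // placeholder
  "ExternalImageId": "employee-42", // your own identifier for this image
  "MaxFaces": 1,                    // index only the largest detected face
  "QualityFilter": "AUTO"
}).then(data => {
  // Each FaceRecord pairs the stored Face (FaceId, ImageId) with its FaceDetail
  data.FaceRecords.forEach(r => console.log(r.Face.FaceId));
});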

ListCollections

amazonaws_rekognition.ListCollections({}, context)

Input

  • input object
    • MaxResults string
    • NextToken string
    • MaxResults
    • NextToken

Output

ListFaces

amazonaws_rekognition.ListFaces({
  "CollectionId": null
}, context)

Input

  • input object
    • MaxResults string
    • NextToken string
    • CollectionId required
    • MaxResults
    • NextToken

Output

ListStreamProcessors

amazonaws_rekognition.ListStreamProcessors({}, context)

Input

  • input object
    • MaxResults string
    • NextToken string
    • MaxResults
    • NextToken

Output

RecognizeCelebrities

amazonaws_rekognition.RecognizeCelebrities({
  "Image": null
}, context)

Input

  • input object
    • Image required
      • Bytes
      • S3Object
        • Bucket
        • Name
        • Version

Output

SearchFaces

amazonaws_rekognition.SearchFaces({
  "CollectionId": null,
  "FaceId": null
}, context)

Input

  • input object
    • CollectionId required
    • FaceId required
    • FaceMatchThreshold
    • MaxFaces

Output

SearchFacesByImage

amazonaws_rekognition.SearchFacesByImage({
  "CollectionId": null,
  "Image": null
}, context)

Input

  • input object
    • CollectionId required
    • FaceMatchThreshold
    • Image required
      • Bytes
      • S3Object
        • Bucket
        • Name
        • Version
    • MaxFaces
    • QualityFilter

Output
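
For example, to look up the largest face in a photo against an indexed collection (placeholders as before; FaceMatches is documented in SearchFacesByImageResponse):

amazonaws_rekognition.SearchFacesByImage({
  "CollectionId": "my-collection", // placeholder
  "Image": { "S3Object": { "Bucket": "my-bucket", "Name": "visitor.jpg" } }, // placeholder
  "FaceMatchThreshold": 90,
  "MaxFaces": 5
}).then(data => {
  data.FaceMatches.forEach(m => console.log(m.Face.FaceId, m.Similarity));
});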

StartCelebrityRecognition

amazonaws_rekognition.StartCelebrityRecognition({
  "Video": null
}, context)

Input

  • input object
    • ClientRequestToken
    • JobTag
    • NotificationChannel
      • RoleArn required
      • SNSTopicArn required
    • Video required
      • S3Object
        • Bucket
        • Name
        • Version

Output
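
The video Start* operations are asynchronous: they return a JobId, and results are fetched with the matching Get* operation once the job finishes. A sketch with a naive polling loop (placeholder bucket; in production, prefer the SNS NotificationChannel, and note the IN_PROGRESS status value is an assumption about the job-status enum):

function poll(jobId) {
  amazonaws_rekognition.GetCelebrityRecognition({ "JobId": jobId }).then(data => {
    if (data.JobStatus === 'IN_PROGRESS') {
      setTimeout(() => poll(jobId), 5000); // try again in 5 seconds
    } else {
      data.Celebrities.forEach(c => console.log(c.Timestamp, c.Celebrity.Name));
    }
  });
}

amazonaws_rekognition.StartCelebrityRecognition({
  "Video": { "S3Object": { "Bucket": "my-bucket", "Name": "clip.mp4" } } // placeholder
}).then(data => poll(data.JobId));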

StartContentModeration

amazonaws_rekognition.StartContentModeration({
  "Video": null
}, context)

Input

  • input object
    • ClientRequestToken
    • JobTag
    • MinConfidence
    • NotificationChannel
      • RoleArn required
      • SNSTopicArn required
    • Video required
      • S3Object
        • Bucket
        • Name
        • Version

Output

StartFaceDetection

amazonaws_rekognition.StartFaceDetection({
  "Video": null
}, context)

Input

  • input object
    • ClientRequestToken
    • FaceAttributes
    • JobTag
    • NotificationChannel
      • RoleArn required
      • SNSTopicArn required
    • Video required
      • S3Object
        • Bucket
        • Name
        • Version

Output

StartFaceSearch

amazonaws_rekognition.StartFaceSearch({
  "Video": null,
  "CollectionId": null
}, context)

Input

  • input object
    • ClientRequestToken
    • CollectionId required
    • FaceMatchThreshold
    • JobTag
    • NotificationChannel
      • RoleArn required
      • SNSTopicArn required
    • Video required
      • S3Object
        • Bucket
        • Name
        • Version

Output

StartLabelDetection

amazonaws_rekognition.StartLabelDetection({
  "Video": null
}, context)

Input

  • input object
    • ClientRequestToken
    • JobTag
    • MinConfidence
    • NotificationChannel
      • RoleArn required
      • SNSTopicArn required
    • Video required
      • S3Object
        • Bucket
        • Name
        • Version

Output

StartPersonTracking

amazonaws_rekognition.StartPersonTracking({
  "Video": null
}, context)

Input

  • input object
    • ClientRequestToken
    • JobTag
    • NotificationChannel
      • RoleArn required
      • SNSTopicArn required
    • Video required
      • S3Object
        • Bucket
        • Name
        • Version

Output

StartProjectVersion

amazonaws_rekognition.StartProjectVersion({
  "ProjectVersionArn": null,
  "MinInferenceUnits": null
}, context)

Input

  • input object
    • MinInferenceUnits required
    • ProjectVersionArn required

Output

StartSegmentDetection

amazonaws_rekognition.StartSegmentDetection({
  "Video": {},
  "SegmentTypes": null
}, context)

Input

  • input object
    • ClientRequestToken
    • Filters
      • ShotFilter
        • MinSegmentConfidence
      • TechnicalCueFilter
        • MinSegmentConfidence
    • JobTag
    • NotificationChannel
      • RoleArn required
      • SNSTopicArn required
    • SegmentTypes required
    • Video required Video

Output

StartStreamProcessor

amazonaws_rekognition.StartStreamProcessor({
  "Name": null
}, context)

Input

  • input object
    • Name required

Output

StartTextDetection

amazonaws_rekognition.StartTextDetection({
  "Video": {}
}, context)

Input

  • input object
    • ClientRequestToken
    • Filters
      • RegionsOfInterest
      • WordFilter
        • MinBoundingBoxHeight
        • MinBoundingBoxWidth
        • MinConfidence
    • JobTag
    • NotificationChannel NotificationChannel
    • Video required Video

Output
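
A sketch that restricts detection to large, confident words in the bottom quarter of the frame (placeholder bucket; the bounding-box values are ratios of frame size, per BoundingBox below):

amazonaws_rekognition.StartTextDetection({
  "Video": { "S3Object": { "Bucket": "my-bucket", "Name": "clip.mp4" } }, // placeholder
  "Filters": {
    "WordFilter": {
      "MinConfidence": 80,
      "MinBoundingBoxHeight": 0.05, // ratio of frame height, not pixels
      "MinBoundingBoxWidth": 0.02
    },
    "RegionsOfInterest": [
      { "BoundingBox": { "Left": 0, "Top": 0.75, "Width": 1, "Height": 0.25 } }
    ]
  }
}).then(data => console.log(data.JobId)); // poll GetTextDetection with this JobId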

StopProjectVersion

amazonaws_rekognition.StopProjectVersion({
  "ProjectVersionArn": null
}, context)

Input

  • input object
    • ProjectVersionArn required

Output

StopStreamProcessor

amazonaws_rekognition.StopStreamProcessor({
  "Name": null
}, context)

Input

  • input object
    • Name required

Output

Definitions

AccessDeniedException

AgeRange

  • AgeRange object:

    Structure containing the estimated age range, in years, for a face.

    Amazon Rekognition estimates an age range for faces detected in the input image. Estimated age ranges can overlap. A face of a 5-year-old might have an estimated range of 4-6, while the face of a 6-year-old might have an estimated range of 4-8.

    • High
    • Low

Asset

  • Asset object: Assets are the images that you use to train and evaluate a model version. Assets can also contain validation information that you use to debug a failed model training.

Assets

  • Assets array

Attribute

  • Attribute string (values: DEFAULT, ALL)

Attributes

AudioMetadata

  • AudioMetadata object: Metadata information about an audio stream. An array of AudioMetadata objects for the audio streams found in a stored video is returned by GetSegmentDetection.
    • Codec
    • DurationMillis
    • NumberOfChannels
    • SampleRate

AudioMetadataList

Beard

  • Beard object: Indicates whether or not the face has a beard, and the confidence level in the determination.
    • Confidence
    • Value

BodyPart

  • BodyPart string (values: FACE, HEAD, LEFT_HAND, RIGHT_HAND)

BodyParts

Boolean

  • Boolean boolean

BoundingBox

  • BoundingBox object:

    Identifies the bounding box around the label, face, text or personal protective equipment. The left (x-coordinate) and top (y-coordinate) are coordinates representing the top and left sides of the bounding box. Note that the upper-left corner of the image is the origin (0,0).

    The top and left values returned are ratios of the overall image size. For example, if the input image is 700x200 pixels, and the top-left coordinate of the bounding box is 350x50 pixels, the API returns a left value of 0.5 (350/700) and a top value of 0.25 (50/200).

    The width and height values represent the dimensions of the bounding box as a ratio of the overall image dimension. For example, if the input image is 700x200 pixels, and the bounding box width is 70 pixels, the width returned is 0.1.

    The bounding box coordinates can have negative values. For example, if Amazon Rekognition is able to detect a face that is at the image edge and is only partially visible, the service can return coordinates that are outside the image bounds and, depending on the image edge, you might get negative values or values greater than 1 for the left or top values.

    • Height
    • Left
    • Top
    • Width
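
A small helper (a sketch, not part of this client) that applies the ratio convention above to recover pixel coordinates:

// Convert a ratio-based BoundingBox into pixel coordinates for a known image size.
function toPixels(box, imageWidth, imageHeight) {
  return {
    left:   Math.round(box.Left * imageWidth),
    top:    Math.round(box.Top * imageHeight),
    width:  Math.round(box.Width * imageWidth),
    height: Math.round(box.Height * imageHeight)
  };
}

// The 700x200 example above: Left 0.5, Top 0.25 -> left 350px, top 50px.
console.log(toPixels({ Left: 0.5, Top: 0.25, Width: 0.1, Height: 0.25 }, 700, 200));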

BoundingBoxHeight

  • BoundingBoxHeight number

BoundingBoxWidth

  • BoundingBoxWidth number

Celebrity

  • Celebrity object: Provides information about a celebrity recognized by the RecognizeCelebrities operation.
    • Face
      • BoundingBox
        • Height
        • Left
        • Top
        • Width
      • Confidence
      • Landmarks
      • Pose
        • Pitch
        • Roll
        • Yaw
      • Quality
        • Brightness
        • Sharpness
    • Id
    • MatchConfidence
    • Name
    • Urls

CelebrityDetail

  • CelebrityDetail object: Information about a recognized celebrity.
    • BoundingBox
      • Height
      • Left
      • Top
      • Width
    • Confidence
    • Face
      • AgeRange
        • High
        • Low
      • Beard
        • Confidence
        • Value
      • BoundingBox
        • Height
        • Left
        • Top
        • Width
      • Confidence
      • Emotions
      • Eyeglasses
        • Confidence
        • Value
      • EyesOpen
        • Confidence
        • Value
      • Gender
        • Confidence
        • Value
      • Landmarks
      • MouthOpen
        • Confidence
        • Value
      • Mustache
        • Confidence
        • Value
      • Pose
        • Pitch
        • Roll
        • Yaw
      • Quality
        • Brightness
        • Sharpness
      • Smile
        • Confidence
        • Value
      • Sunglasses
        • Confidence
        • Value
    • Id
    • Name
    • Urls

CelebrityList

CelebrityRecognition

  • CelebrityRecognition object: Information about a detected celebrity and the time the celebrity was detected in a stored video. For more information, see GetCelebrityRecognition in the Amazon Rekognition Developer Guide.
    • Celebrity
      • BoundingBox
        • Height
        • Left
        • Top
        • Width
      • Confidence
      • Face
        • AgeRange
          • High
          • Low
        • Beard
          • Confidence
          • Value
        • BoundingBox
          • Height
          • Left
          • Top
          • Width
        • Confidence
        • Emotions
        • Eyeglasses
          • Confidence
          • Value
        • EyesOpen
          • Confidence
          • Value
        • Gender
          • Confidence
          • Value
        • Landmarks
        • MouthOpen
          • Confidence
          • Value
        • Mustache
          • Confidence
          • Value
        • Pose
          • Pitch
          • Roll
          • Yaw
        • Quality
          • Brightness
          • Sharpness
        • Smile
          • Confidence
          • Value
        • Sunglasses
          • Confidence
          • Value
      • Id
      • Name
      • Urls
    • Timestamp

CelebrityRecognitionSortBy

  • CelebrityRecognitionSortBy string (values: ID, TIMESTAMP)

CelebrityRecognitions

ClientRequestToken

  • ClientRequestToken string

CollectionId

  • CollectionId string

CollectionIdList

CompareFacesMatch

  • CompareFacesMatch object: Provides information about a face in a target image that matches the source image face analyzed by CompareFaces. The Face property contains the bounding box of the face in the target image. The Similarity property is the confidence that the source image face matches the face in the bounding box.
    • Face
      • BoundingBox
        • Height
        • Left
        • Top
        • Width
      • Confidence
      • Landmarks
      • Pose
        • Pitch
        • Roll
        • Yaw
      • Quality
        • Brightness
        • Sharpness
    • Similarity

CompareFacesMatchList

CompareFacesRequest

  • CompareFacesRequest object
    • QualityFilter
    • SimilarityThreshold
    • SourceImage required
      • Bytes
      • S3Object
        • Bucket
        • Name
        • Version
    • TargetImage required
      • Bytes
      • S3Object
        • Bucket
        • Name
        • Version

CompareFacesResponse

  • CompareFacesResponse object
    • FaceMatches
    • SourceImageFace
      • BoundingBox
        • Height
        • Left
        • Top
        • Width
      • Confidence
    • SourceImageOrientationCorrection
    • TargetImageOrientationCorrection
    • UnmatchedFaces

CompareFacesUnmatchList

ComparedFace

  • ComparedFace object: Provides face metadata for target image faces that are analyzed by CompareFaces and RecognizeCelebrities.
    • BoundingBox
      • Height
      • Left
      • Top
      • Width
    • Confidence
    • Landmarks
    • Pose
      • Pitch
      • Roll
      • Yaw
    • Quality
      • Brightness
      • Sharpness

ComparedFaceList

ComparedSourceImageFace

  • ComparedSourceImageFace object: Type that describes the face Amazon Rekognition chose to compare with the faces in the target. This contains a bounding box for the selected face and confidence level that the bounding box contains a face. Note that Amazon Rekognition selects the largest face in the source image for this comparison.
    • BoundingBox
      • Height
      • Left
      • Top
      • Width
    • Confidence

ContentClassifier

  • ContentClassifier string (values: FreeOfPersonallyIdentifiableInformation, FreeOfAdultContent)

ContentClassifiers

ContentModerationDetection

  • ContentModerationDetection object: Information about an unsafe content label detection in a stored video.
    • ModerationLabel
      • Confidence
      • Name
      • ParentName
    • Timestamp

ContentModerationDetections

ContentModerationSortBy

  • ContentModerationSortBy string (values: NAME, TIMESTAMP)

CoversBodyPart

  • CoversBodyPart object: Information about an item of Personal Protective Equipment covering a corresponding body part. For more information, see DetectProtectiveEquipment.
    • Confidence
    • Value

CreateCollectionRequest

  • CreateCollectionRequest object
    • CollectionId required

CreateCollectionResponse

  • CreateCollectionResponse object
    • CollectionArn
    • FaceModelVersion
    • StatusCode

CreateProjectRequest

  • CreateProjectRequest object
    • ProjectName required

CreateProjectResponse

  • CreateProjectResponse object
    • ProjectArn

CreateProjectVersionRequest

  • CreateProjectVersionRequest object
    • OutputConfig required
      • S3Bucket
      • S3KeyPrefix
    • ProjectArn required
    • TestingData required
      • Assets
      • AutoCreate
    • TrainingData required
    • VersionName required

CreateProjectVersionResponse

  • CreateProjectVersionResponse object
    • ProjectVersionArn

CreateStreamProcessorRequest

  • CreateStreamProcessorRequest object
    • Input required
      • KinesisVideoStream
        • Arn
    • Name required
    • Output required
      • KinesisDataStream
        • Arn
    • RoleArn required
    • Settings required
      • FaceSearch
        • CollectionId
        • FaceMatchThreshold

CreateStreamProcessorResponse

  • CreateStreamProcessorResponse object
    • StreamProcessorArn

CustomLabel

  • CustomLabel object: A custom label detected in an image by a call to DetectCustomLabels.
    • Confidence
    • Geometry
      • BoundingBox
        • Height
        • Left
        • Top
        • Width
      • Polygon
    • Name

CustomLabels

DateTime

  • DateTime string

Degree

  • Degree number

DeleteCollectionRequest

  • DeleteCollectionRequest object
    • CollectionId required

DeleteCollectionResponse

  • DeleteCollectionResponse object
    • StatusCode

DeleteFacesRequest

  • DeleteFacesRequest object
    • CollectionId required
    • FaceIds required

DeleteFacesResponse

  • DeleteFacesResponse object

DeleteProjectRequest

  • DeleteProjectRequest object
    • ProjectArn required

DeleteProjectResponse

  • DeleteProjectResponse object
    • Status

DeleteProjectVersionRequest

  • DeleteProjectVersionRequest object
    • ProjectVersionArn required

DeleteProjectVersionResponse

  • DeleteProjectVersionResponse object
    • Status

DeleteStreamProcessorRequest

  • DeleteStreamProcessorRequest object
    • Name required

DeleteStreamProcessorResponse

  • DeleteStreamProcessorResponse object

DescribeCollectionRequest

  • DescribeCollectionRequest object
    • CollectionId required

DescribeCollectionResponse

  • DescribeCollectionResponse object
    • CollectionARN
    • CreationTimestamp
    • FaceCount
    • FaceModelVersion

DescribeProjectVersionsRequest

  • DescribeProjectVersionsRequest object
    • MaxResults
    • NextToken
    • ProjectArn required
    • VersionNames

DescribeProjectVersionsResponse

DescribeProjectsRequest

  • DescribeProjectsRequest object
    • MaxResults
    • NextToken

DescribeProjectsResponse

DescribeStreamProcessorRequest

  • DescribeStreamProcessorRequest object
    • Name required

DescribeStreamProcessorResponse

  • DescribeStreamProcessorResponse object
    • CreationTimestamp
    • Input
      • KinesisVideoStream
        • Arn
    • LastUpdateTimestamp
    • Name
    • Output
      • KinesisDataStream
        • Arn
    • RoleArn
    • Settings
      • FaceSearch
        • CollectionId
        • FaceMatchThreshold
    • Status
    • StatusMessage
    • StreamProcessorArn

DetectCustomLabelsRequest

  • DetectCustomLabelsRequest object
    • Image required Image
    • MaxResults
    • MinConfidence
    • ProjectVersionArn required

DetectCustomLabelsResponse

  • DetectCustomLabelsResponse object

DetectFacesRequest

  • DetectFacesRequest object
    • Attributes
    • Image required
      • Bytes
      • S3Object
        • Bucket
        • Name
        • Version

DetectFacesResponse

  • DetectFacesResponse object
    • FaceDetails
    • OrientationCorrection

DetectLabelsRequest

  • DetectLabelsRequest object
    • Image required
      • Bytes
      • S3Object
        • Bucket
        • Name
        • Version
    • MaxLabels
    • MinConfidence

DetectLabelsResponse

  • DetectLabelsResponse object
    • LabelModelVersion
    • Labels
    • OrientationCorrection

DetectModerationLabelsRequest

  • DetectModerationLabelsRequest object
    • HumanLoopConfig
      • DataAttributes
      • FlowDefinitionArn required
      • HumanLoopName required
    • Image required
      • Bytes
      • S3Object
        • Bucket
        • Name
        • Version
    • MinConfidence

DetectModerationLabelsResponse

  • DetectModerationLabelsResponse object
    • HumanLoopActivationOutput
    • ModerationLabels
    • ModerationModelVersion

DetectProtectiveEquipmentRequest

  • DetectProtectiveEquipmentRequest object
    • Image required
      • Bytes
      • S3Object
        • Bucket
        • Name
        • Version
    • SummarizationAttributes

DetectProtectiveEquipmentResponse

  • DetectProtectiveEquipmentResponse object

DetectTextFilters

  • DetectTextFilters object: A set of optional parameters that you can use to set the criteria that the text must meet to be included in your response. WordFilter looks at a word’s height, width, and minimum confidence. RegionOfInterest lets you set a specific region of the image to look for text in.

DetectTextRequest

  • DetectTextRequest object
    • Filters
      • RegionsOfInterest
      • WordFilter
        • MinBoundingBoxHeight
        • MinBoundingBoxWidth
        • MinConfidence
    • Image required
      • Bytes
      • S3Object
        • Bucket
        • Name
        • Version

DetectTextResponse

  • DetectTextResponse object

DetectionFilter

  • DetectionFilter object: A set of parameters that allow you to filter out certain results from your returned results.
    • MinBoundingBoxHeight
    • MinBoundingBoxWidth
    • MinConfidence

Emotion

  • Emotion object: The emotions that appear to be expressed on the face, and the confidence level in the determination. The API is only making a determination of the physical appearance of a person's face. It is not a determination of the person’s internal emotional state and should not be used in such a way. For example, a person pretending to have a sad face might not be sad emotionally.
    • Confidence
    • Type

EmotionName

  • EmotionName string (values: HAPPY, SAD, ANGRY, CONFUSED, DISGUSTED, SURPRISED, CALM, UNKNOWN, FEAR)

Emotions

EquipmentDetection

  • EquipmentDetection object: Information about an item of Personal Protective Equipment (PPE) detected by DetectProtectiveEquipment. For more information, see DetectProtectiveEquipment.
    • BoundingBox
      • Height
      • Left
      • Top
      • Width
    • Confidence
    • CoversBodyPart
      • Confidence
      • Value
    • Type

EquipmentDetections

EvaluationResult

  • EvaluationResult object: The evaluation results for the training of a model.

ExtendedPaginationToken

  • ExtendedPaginationToken string

ExternalImageId

  • ExternalImageId string

EyeOpen

  • EyeOpen object: Indicates whether or not the eyes on the face are open, and the confidence level in the determination.
    • Confidence
    • Value

Eyeglasses

  • Eyeglasses object: Indicates whether or not the face is wearing eye glasses, and the confidence level in the determination.
    • Confidence
    • Value

Face

  • Face object: Describes the face properties such as the bounding box, face ID, image ID of the input image, and external image ID that you assigned.
    • BoundingBox
      • Height
      • Left
      • Top
      • Width
    • Confidence
    • ExternalImageId
    • FaceId
    • ImageId

FaceAttributes

  • FaceAttributes string (values: DEFAULT, ALL)

FaceDetail

  • FaceDetail object:

    Structure containing attributes of the face that the algorithm detected.

    A FaceDetail object contains either the default facial attributes or all facial attributes. The default attributes are BoundingBox, Confidence, Landmarks, Pose, and Quality.

    GetFaceDetection is the only Amazon Rekognition Video stored video operation that can return a FaceDetail object with all attributes. To specify which attributes to return, use the FaceAttributes input parameter for StartFaceDetection. The following Amazon Rekognition Video operations return only the default attributes. The corresponding Start operations don't have a FaceAttributes input parameter.

    • GetCelebrityRecognition

    • GetPersonTracking

    • GetFaceSearch

    The Amazon Rekognition Image DetectFaces and IndexFaces operations can return all facial attributes. To specify which attributes to return, use the Attributes input parameter for DetectFaces. For IndexFaces, use the DetectAttributes input parameter.

    • AgeRange
      • High
      • Low
    • Beard
      • Confidence
      • Value
    • BoundingBox
      • Height
      • Left
      • Top
      • Width
    • Confidence
    • Emotions
    • Eyeglasses
      • Confidence
      • Value
    • EyesOpen
      • Confidence
      • Value
    • Gender
      • Confidence
      • Value
    • Landmarks
    • MouthOpen
      • Confidence
      • Value
    • Mustache
      • Confidence
      • Value
    • Pose
      • Pitch
      • Roll
      • Yaw
    • Quality
      • Brightness
      • Sharpness
    • Smile
      • Confidence
      • Value
    • Sunglasses
      • Confidence
      • Value

FaceDetailList

FaceDetection

  • FaceDetection object: Information about a face detected in a video analysis request and the time the face was detected in the video.
    • Face
      • AgeRange
        • High
        • Low
      • Beard
        • Confidence
        • Value
      • BoundingBox
        • Height
        • Left
        • Top
        • Width
      • Confidence
      • Emotions
      • Eyeglasses
        • Confidence
        • Value
      • EyesOpen
        • Confidence
        • Value
      • Gender
        • Confidence
        • Value
      • Landmarks
      • MouthOpen
        • Confidence
        • Value
      • Mustache
        • Confidence
        • Value
      • Pose
        • Pitch
        • Roll
        • Yaw
      • Quality
        • Brightness
        • Sharpness
      • Smile
        • Confidence
        • Value
      • Sunglasses
        • Confidence
        • Value
    • Timestamp

FaceDetections

FaceId

  • FaceId string

FaceIdList

  • FaceIdList array

FaceList

  • FaceList array

FaceMatch

  • FaceMatch object: Provides face metadata. In addition, it also provides the confidence in the match of this face with the input face.
    • Face
      • BoundingBox
        • Height
        • Left
        • Top
        • Width
      • Confidence
      • ExternalImageId
      • FaceId
      • ImageId
    • Similarity

FaceMatchList

FaceModelVersionList

  • FaceModelVersionList array

FaceRecord

  • FaceRecord object: Object containing both the face metadata (stored in the backend database), and facial attributes that are detected but aren't stored in the database.
    • Face
      • BoundingBox
        • Height
        • Left
        • Top
        • Width
      • Confidence
      • ExternalImageId
      • FaceId
      • ImageId
    • FaceDetail
      • AgeRange
        • High
        • Low
      • Beard
        • Confidence
        • Value
      • BoundingBox
        • Height
        • Left
        • Top
        • Width
      • Confidence
      • Emotions
      • Eyeglasses
        • Confidence
        • Value
      • EyesOpen
        • Confidence
        • Value
      • Gender
        • Confidence
        • Value
      • Landmarks
      • MouthOpen
        • Confidence
        • Value
      • Mustache
        • Confidence
        • Value
      • Pose
        • Pitch
        • Roll
        • Yaw
      • Quality
        • Brightness
        • Sharpness
      • Smile
        • Confidence
        • Value
      • Sunglasses
        • Confidence
        • Value

FaceRecordList

FaceSearchSettings

  • FaceSearchSettings object: Input face recognition parameters for an Amazon Rekognition stream processor. FaceSearchSettings is a request parameter for CreateStreamProcessor.
    • CollectionId
    • FaceMatchThreshold

FaceSearchSortBy

  • FaceSearchSortBy string (values: INDEX, TIMESTAMP)

Float

  • Float number

FlowDefinitionArn

  • FlowDefinitionArn string

Gender

  • Gender object:

    The predicted gender of a detected face.

    Amazon Rekognition makes gender binary (male/female) predictions based on the physical appearance of a face in a particular image. This kind of prediction is not designed to categorize a person’s gender identity, and you shouldn't use Amazon Rekognition to make such a determination. For example, a male actor wearing a long-haired wig and earrings for a role might be predicted as female.

    Using Amazon Rekognition to make gender binary predictions is best suited for use cases where aggregate gender distribution statistics need to be analyzed without identifying specific users. For example, the percentage of female users compared to male users on a social media platform.

    We don't recommend using gender binary predictions to make decisions that impact an individual's rights, privacy, or access to services.

    • Confidence
    • Value

GenderType

  • GenderType string (values: Male, Female)

Geometry

  • Geometry object: Information about where an object (DetectCustomLabels) or text (DetectText) is located on an image.
    • BoundingBox
      • Height
      • Left
      • Top
      • Width
    • Polygon

GetCelebrityInfoRequest

  • GetCelebrityInfoRequest object
    • Id required

GetCelebrityInfoResponse

  • GetCelebrityInfoResponse object
    • Name
    • Urls

GetCelebrityRecognitionRequest

  • GetCelebrityRecognitionRequest object
    • JobId required
    • MaxResults
    • NextToken
    • SortBy

GetCelebrityRecognitionResponse

  • GetCelebrityRecognitionResponse object
    • Celebrities
    • JobStatus
    • NextToken
    • StatusMessage
    • VideoMetadata
      • Codec
      • DurationMillis
      • Format
      • FrameHeight
      • FrameRate
      • FrameWidth

GetContentModerationRequest

  • GetContentModerationRequest object
    • JobId required
    • MaxResults
    • NextToken
    • SortBy

GetContentModerationResponse

  • GetContentModerationResponse object
    • JobStatus
    • ModerationLabels
    • ModerationModelVersion
    • NextToken
    • StatusMessage
    • VideoMetadata
      • Codec
      • DurationMillis
      • Format
      • FrameHeight
      • FrameRate
      • FrameWidth

GetFaceDetectionRequest

  • GetFaceDetectionRequest object
    • JobId required
    • MaxResults
    • NextToken

GetFaceDetectionResponse

  • GetFaceDetectionResponse object
    • Faces
    • JobStatus
    • NextToken
    • StatusMessage
    • VideoMetadata
      • Codec
      • DurationMillis
      • Format
      • FrameHeight
      • FrameRate
      • FrameWidth

GetFaceSearchRequest

  • GetFaceSearchRequest object
    • JobId required
    • MaxResults
    • NextToken
    • SortBy

GetFaceSearchResponse

  • GetFaceSearchResponse object
    • JobStatus
    • NextToken
    • Persons
    • StatusMessage
    • VideoMetadata
      • Codec
      • DurationMillis
      • Format
      • FrameHeight
      • FrameRate
      • FrameWidth

GetLabelDetectionRequest

  • GetLabelDetectionRequest object
    • JobId required
    • MaxResults
    • NextToken
    • SortBy

GetLabelDetectionResponse

  • GetLabelDetectionResponse object
    • JobStatus
    • LabelModelVersion
    • Labels
    • NextToken
    • StatusMessage
    • VideoMetadata
      • Codec
      • DurationMillis
      • Format
      • FrameHeight
      • FrameRate
      • FrameWidth

GetPersonTrackingRequest

  • GetPersonTrackingRequest object
    • JobId required
    • MaxResults
    • NextToken
    • SortBy

GetPersonTrackingResponse

  • GetPersonTrackingResponse object
    • JobStatus
    • NextToken
    • Persons
    • StatusMessage
    • VideoMetadata
      • Codec
      • DurationMillis
      • Format
      • FrameHeight
      • FrameRate
      • FrameWidth

GetSegmentDetectionRequest

  • GetSegmentDetectionRequest object
    • JobId required
    • MaxResults
    • NextToken

GetSegmentDetectionResponse

GetTextDetectionRequest

  • GetTextDetectionRequest object
    • JobId required
    • MaxResults
    • NextToken

GetTextDetectionResponse

GroundTruthManifest

  • GroundTruthManifest object: The S3 bucket that contains an Amazon Sagemaker Ground Truth format manifest file.

HumanLoopActivationConditionsEvaluationResults

  • HumanLoopActivationConditionsEvaluationResults string

HumanLoopActivationOutput

  • HumanLoopActivationOutput object: Shows the results of the human in the loop evaluation. If there is no HumanLoopArn, the input did not trigger human review.

HumanLoopActivationReason

  • HumanLoopActivationReason string

HumanLoopActivationReasons

HumanLoopArn

  • HumanLoopArn string

HumanLoopConfig

  • HumanLoopConfig object: Sets up the flow definition the image will be sent to if one of the conditions is met. You can also set certain attributes of the image before review.
    • DataAttributes
    • FlowDefinitionArn required
    • HumanLoopName required

HumanLoopDataAttributes

  • HumanLoopDataAttributes object: Allows you to set attributes of the image. Currently, you can declare an image as free of personally identifiable information.

HumanLoopName

  • HumanLoopName string

HumanLoopQuotaExceededException

IdempotentParameterMismatchException

Image

  • Image object:

    Provides the input image either as bytes or an S3 object.

    You pass image bytes to an Amazon Rekognition API operation by using the Bytes property. For example, you would use the Bytes property to pass an image loaded from a local file system. Image bytes passed by using the Bytes property must be base64-encoded. Your code may not need to encode image bytes if you are using an AWS SDK to call Amazon Rekognition API operations.

    For more information, see Analyzing an Image Loaded from a Local File System in the Amazon Rekognition Developer Guide.

    You pass images stored in an S3 bucket to an Amazon Rekognition API operation by using the S3Object property. Images stored in an S3 bucket do not need to be base64-encoded.

    The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.

    If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the Bytes property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the S3Object property.

    For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see Resource Based Policies in the Amazon Rekognition Developer Guide.

    • Bytes
    • S3Object
      • Bucket
      • Name
      • Version
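
The two input styles side by side, as a sketch (placeholder file and bucket names; the explicit base64 step assumes the client does not encode bytes for you):

const fs = require('fs');

// Pass base64-encoded bytes, e.g. for an image on the local file system...
let byBytes = { "Bytes": fs.readFileSync('photo.jpg').toString('base64') }; // placeholder path

// ...or reference an object in S3 (no base64 encoding needed).
let byS3 = { "S3Object": { "Bucket": "my-bucket", "Name": "photo.jpg" } }; // placeholder bucket/key

amazonaws_rekognition.DetectFaces({ "Image": byS3 }).then(data => {
  console.log(data.FaceDetails.length + " face(s) detected");
});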

ImageBlob

  • ImageBlob string

ImageId

  • ImageId string

ImageQuality

  • ImageQuality object: Identifies face image brightness and sharpness.
    • Brightness
    • Sharpness

ImageTooLargeException

IndexFacesRequest

  • IndexFacesRequest object
    • CollectionId required
    • DetectionAttributes
    • ExternalImageId
    • Image required
      • Bytes
      • S3Object
        • Bucket
        • Name
        • Version
    • MaxFaces
    • QualityFilter

IndexFacesResponse

  • IndexFacesResponse object

InferenceUnits

  • InferenceUnits integer

Instance

  • Instance object: An instance of a label returned by Amazon Rekognition Image (DetectLabels) or by Amazon Rekognition Video (GetLabelDetection).
    • BoundingBox
      • Height
      • Left
      • Top
      • Width
    • Confidence

Instances

InternalServerError

InvalidImageFormatException

InvalidPaginationTokenException

InvalidParameterException

InvalidS3ObjectException

JobId

  • JobId string

JobTag

  • JobTag string

KinesisDataArn

  • KinesisDataArn string

KinesisDataStream

  • KinesisDataStream object: The Kinesis data stream to which Amazon Rekognition streams the analysis results of an Amazon Rekognition stream processor. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
    • Arn

KinesisVideoArn

  • KinesisVideoArn string

KinesisVideoStream

  • KinesisVideoStream object: Kinesis video stream that provides the source streaming video for an Amazon Rekognition Video stream processor. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
    • Arn

Label

  • Label object:

    Structure containing details about the detected label, including the name, detected instances, parent labels, and level of confidence.

LabelDetection

  • LabelDetection object: Information about a label detected in a video analysis request and the time the label was detected in the video.
    • Label
    • Timestamp

LabelDetectionSortBy

  • LabelDetectionSortBy string (values: NAME, TIMESTAMP)

LabelDetections

Labels

  • Labels array

Landmark

  • Landmark object: Indicates the location of the landmark on the face.
    • Type
    • X
    • Y

LandmarkType

  • LandmarkType string (values: eyeLeft, eyeRight, nose, mouthLeft, mouthRight, leftEyeBrowLeft, leftEyeBrowRight, leftEyeBrowUp, rightEyeBrowLeft, rightEyeBrowRight, rightEyeBrowUp, leftEyeLeft, leftEyeRight, leftEyeUp, leftEyeDown, rightEyeLeft, rightEyeRight, rightEyeUp, rightEyeDown, noseLeft, noseRight, mouthUp, mouthDown, leftPupil, rightPupil, upperJawlineLeft, midJawlineLeft, chinBottom, midJawlineRight, upperJawlineRight)

Landmarks

LimitExceededException

ListCollectionsRequest

  • ListCollectionsRequest object
    • MaxResults
    • NextToken

ListCollectionsResponse

  • ListCollectionsResponse object

ListFacesRequest

  • ListFacesRequest object
    • CollectionId required
    • MaxResults
    • NextToken

ListFacesResponse

  • ListFacesResponse object
    • FaceModelVersion
    • Faces
    • NextToken

ListStreamProcessorsRequest

  • ListStreamProcessorsRequest object
    • MaxResults
    • NextToken

ListStreamProcessorsResponse

  • ListStreamProcessorsResponse object

MaxFaces

  • MaxFaces integer

MaxFacesToIndex

  • MaxFacesToIndex integer

MaxResults

  • MaxResults integer

ModerationLabel

  • ModerationLabel object: Provides information about a single type of unsafe content found in an image or video. Each type of moderated content has a label within a hierarchical taxonomy. For more information, see Detecting Unsafe Content in the Amazon Rekognition Developer Guide.
    • Confidence
    • Name
    • ParentName

ModerationLabels

MouthOpen

  • MouthOpen object: Indicates whether or not the mouth on the face is open, and the confidence level in the determination.
    • Confidence
    • Value

Mustache

  • Mustache object: Indicates whether or not the face has a mustache, and the confidence level in the determination.
    • Confidence
    • Value

NotificationChannel

  • NotificationChannel object: The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the completion status of a video analysis operation. For more information, see api-video.
    • RoleArn required
    • SNSTopicArn required

OrientationCorrection

  • OrientationCorrection string (values: ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270)

OutputConfig

  • OutputConfig object: The S3 bucket and folder location where training output is placed.
    • S3Bucket
    • S3KeyPrefix

PageSize

  • PageSize integer

PaginationToken

  • PaginationToken string

Parent

  • Parent object: A parent label for a label. A label can have 0, 1, or more parents.
    • Name

Parents

Percent

  • Percent number

PersonDetail

  • PersonDetail object: Details about a person detected in a video analysis request.
    • BoundingBox
      • Height
      • Left
      • Top
      • Width
    • Face
      • AgeRange
        • High
        • Low
      • Beard
        • Confidence
        • Value
      • BoundingBox
        • Height
        • Left
        • Top
        • Width
      • Confidence
      • Emotions
      • Eyeglasses
        • Confidence
        • Value
      • EyesOpen
        • Confidence
        • Value
      • Gender
        • Confidence
        • Value
      • Landmarks
      • MouthOpen
        • Confidence
        • Value
      • Mustache
        • Confidence
        • Value
      • Pose
        • Pitch
        • Roll
        • Yaw
      • Quality
        • Brightness
        • Sharpness
      • Smile
        • Confidence
        • Value
      • Sunglasses
        • Confidence
        • Value
    • Index

PersonDetection

  • PersonDetection object:

    Details and path tracking information for a single time a person's path is tracked in a video. Amazon Rekognition operations that track people's paths return an array of PersonDetection objects with elements for each time a person's path is tracked in a video.

    For more information, see GetPersonTracking in the Amazon Rekognition Developer Guide.

    • Person
      • BoundingBox
        • Height
        • Left
        • Top
        • Width
      • Face
        • AgeRange
          • High
          • Low
        • Beard
          • Confidence
          • Value
        • BoundingBox
          • Height
          • Left
          • Top
          • Width
        • Confidence
        • Emotions
        • Eyeglasses
          • Confidence
          • Value
        • EyesOpen
          • Confidence
          • Value
        • Gender
          • Confidence
          • Value
        • Landmarks
        • MouthOpen
          • Confidence
          • Value
        • Mustache
          • Confidence
          • Value
        • Pose
          • Pitch
          • Roll
          • Yaw
        • Quality
          • Brightness
          • Sharpness
        • Smile
          • Confidence
          • Value
        • Sunglasses
          • Confidence
          • Value
      • Index
    • Timestamp

PersonDetections

PersonIndex

  • PersonIndex integer

PersonMatch

  • PersonMatch object: Information about a person whose face matches one or more faces in an Amazon Rekognition collection. Includes information about the faces in the Amazon Rekognition collection (FaceMatch), information about the person (PersonDetail), and the time stamp for when the person was detected in a video. An array of PersonMatch objects is returned by GetFaceSearch.
    • FaceMatches
    • Person
      • BoundingBox
        • Height
        • Left
        • Top
        • Width
      • Face
        • AgeRange
          • High
          • Low
        • Beard
          • Confidence
          • Value
        • BoundingBox
          • Height
          • Left
          • Top
          • Width
        • Confidence
        • Emotions
        • Eyeglasses
          • Confidence
          • Value
        • EyesOpen
          • Confidence
          • Value
        • Gender
          • Confidence
          • Value
        • Landmarks
        • MouthOpen
          • Confidence
          • Value
        • Mustache
          • Confidence
          • Value
        • Pose
          • Pitch
          • Roll
          • Yaw
        • Quality
          • Brightness
          • Sharpness
        • Smile
          • Confidence
          • Value
        • Sunglasses
          • Confidence
          • Value
      • Index
    • Timestamp

PersonMatches

PersonTrackingSortBy

  • PersonTrackingSortBy string (values: INDEX, TIMESTAMP)

Point

  • Point object:

    The X and Y coordinates of a point on an image. The X and Y values returned are ratios of the overall image size. For example, if the input image is 700x200 and the operation returns X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the image.

    An array of Point objects, Polygon, is returned by DetectText and by DetectCustomLabels. Polygon represents a fine-grained polygon around a detected item. For more information, see Geometry in the Amazon Rekognition Developer Guide.

    • X
    • Y

Polygon

  • Polygon array

Pose

  • Pose object: Indicates the pose of the face as determined by its pitch, roll, and yaw.
    • Pitch
    • Roll
    • Yaw

ProjectArn

  • ProjectArn string

ProjectDescription

  • ProjectDescription object: A description of an Amazon Rekognition Custom Labels project.
    • CreationTimestamp
    • ProjectArn
    • Status

ProjectDescriptions

ProjectName

  • ProjectName string

ProjectStatus

  • ProjectStatus string (values: CREATING, CREATED, DELETING)

ProjectVersionArn

  • ProjectVersionArn string

ProjectVersionDescription

  • ProjectVersionDescription object: The description of a version of a model.
    • BillableTrainingTimeInSeconds
    • CreationTimestamp
    • EvaluationResult
    • ManifestSummary
    • MinInferenceUnits
    • OutputConfig
      • S3Bucket
      • S3KeyPrefix
    • ProjectVersionArn
    • Status
    • StatusMessage
    • TestingDataResult
      • Input
        • Assets
        • AutoCreate
      • Output
        • Assets
        • AutoCreate
      • Validation
    • TrainingDataResult
      • Input
      • Output
      • Validation
    • TrainingEndTimestamp

ProjectVersionDescriptions

ProjectVersionStatus

  • ProjectVersionStatus string (values: TRAINING_IN_PROGRESS, TRAINING_COMPLETED, TRAINING_FAILED, STARTING, RUNNING, FAILED, STOPPING, STOPPED, DELETING)

ProjectVersionsPageSize

  • ProjectVersionsPageSize integer

ProjectsPageSize

  • ProjectsPageSize integer

ProtectiveEquipmentBodyPart

  • ProtectiveEquipmentBodyPart object: Information about a body part detected by DetectProtectiveEquipment that contains PPE. An array of ProtectiveEquipmentBodyPart objects is returned for each person detected by DetectProtectiveEquipment.

ProtectiveEquipmentPerson

  • ProtectiveEquipmentPerson object: A person detected by a call to DetectProtectiveEquipment. The API returns all persons detected in the input image in an array of ProtectiveEquipmentPerson objects.

ProtectiveEquipmentPersonIds

  • ProtectiveEquipmentPersonIds array

ProtectiveEquipmentPersons

ProtectiveEquipmentSummarizationAttributes

  • ProtectiveEquipmentSummarizationAttributes object: Specifies summary attributes to return from a call to DetectProtectiveEquipment. You can specify which types of PPE to summarize. You can also specify a minimum confidence value for detections. Summary information is returned in the Summary (ProtectiveEquipmentSummary) field of the response from DetectProtectiveEquipment. The summary includes which persons in an image were detected wearing the requested types of personal protective equipment (PPE), which persons were detected as not wearing PPE, and the persons for whom a determination could not be made. For more information, see ProtectiveEquipmentSummary.

ProtectiveEquipmentSummary

  • ProtectiveEquipmentSummary object:

    Summary information for required items of personal protective equipment (PPE) detected on persons by a call to DetectProtectiveEquipment. You specify the required type of PPE in the SummarizationAttributes (ProtectiveEquipmentSummarizationAttributes) input parameter. The summary includes which persons were detected wearing the required personal protective equipment (PersonsWithRequiredEquipment), which persons were detected as not wearing the required PPE (PersonsWithoutRequiredEquipment), and the persons for whom a determination could not be made (PersonsIndeterminate).

    To get a total for each category, use the size of the field array. For example, to find out how many people were detected as wearing the specified PPE, use the size of the PersonsWithRequiredEquipment array. If you want to find out more about a person, such as the location (BoundingBox) of the person on the image, use the person ID in each array element. Each person ID matches the ID field of a ProtectiveEquipmentPerson object returned in the Persons array by DetectProtectiveEquipment.

    • PersonsIndeterminate
    • PersonsWithRequiredEquipment
    • PersonsWithoutRequiredEquipment
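
Per the paragraph above, the category totals are just the array sizes. A sketch (placeholder names; the RequiredEquipmentTypes field name in SummarizationAttributes is an assumption, since its fields aren't listed here):

amazonaws_rekognition.DetectProtectiveEquipment({
  "Image": { "S3Object": { "Bucket": "my-bucket", "Name": "site.jpg" } }, // placeholder
  "SummarizationAttributes": {
    "MinConfidence": 80,
    "RequiredEquipmentTypes": ["FACE_COVER", "HEAD_COVER"] // assumed field name
  }
}).then(data => {
  let s = data.Summary; // ProtectiveEquipmentSummary, per the description above
  console.log("with required PPE:   ", s.PersonsWithRequiredEquipment.length);
  console.log("without required PPE:", s.PersonsWithoutRequiredEquipment.length);
  console.log("indeterminate:       ", s.PersonsIndeterminate.length);
});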

ProtectiveEquipmentType

  • ProtectiveEquipmentType string (values: FACE_COVER, HAND_COVER, HEAD_COVER)

ProtectiveEquipmentTypes

ProvisionedThroughputExceededException

QualityFilter

  • QualityFilter string (values: NONE, AUTO, LOW, MEDIUM, HIGH)

Reason

  • Reason string (values: EXCEEDS_MAX_FACES, EXTREME_POSE, LOW_BRIGHTNESS, LOW_SHARPNESS, LOW_CONFIDENCE, SMALL_BOUNDING_BOX, LOW_FACE_QUALITY)

Reasons

RecognizeCelebritiesRequest

  • RecognizeCelebritiesRequest object
    • Image required
      • Bytes
      • S3Object
        • Bucket
        • Name
        • Version

RecognizeCelebritiesResponse

  • RecognizeCelebritiesResponse object

RegionOfInterest

  • RegionOfInterest object:

    Specifies a location within the frame that Rekognition checks for text. Uses a BoundingBox object to set a region of the screen.

    A word is included in the region if the word is more than half in that region. If there is more than one region, the word will be compared with all regions of the screen. Any word more than half in a region is kept in the results.

    • BoundingBox
      • Height
      • Left
      • Top
      • Width

RegionsOfInterest

RekognitionUniqueId

  • RekognitionUniqueId string

ResourceAlreadyExistsException

ResourceInUseException

ResourceNotFoundException

ResourceNotReadyException

RoleArn

  • RoleArn string

S3Bucket

  • S3Bucket string

S3KeyPrefix

  • S3KeyPrefix string

S3Object

  • S3Object object:

    Provides the S3 bucket name and object name.

    The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.

    For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see Resource-Based Policies in the Amazon Rekognition Developer Guide.

    • Bucket
    • Name
    • Version

S3ObjectName

  • S3ObjectName string

S3ObjectVersion

  • S3ObjectVersion string

SNSTopicArn

  • SNSTopicArn string

SearchFacesByImageRequest

  • SearchFacesByImageRequest object
    • CollectionId required
    • FaceMatchThreshold
    • Image required
      • Bytes
      • S3Object
        • Bucket
        • Name
        • Version
    • MaxFaces
    • QualityFilter

SearchFacesByImageResponse

  • SearchFacesByImageResponse object
    • FaceMatches
    • FaceModelVersion
    • SearchedFaceBoundingBox
      • Height
      • Left
      • Top
      • Width
    • SearchedFaceConfidence

SearchFacesRequest

  • SearchFacesRequest object
    • CollectionId required
    • FaceId required
    • FaceMatchThreshold
    • MaxFaces

SearchFacesResponse

  • SearchFacesResponse object
    • FaceMatches
    • FaceModelVersion
    • SearchedFaceId

SegmentConfidence

  • SegmentConfidence number

SegmentDetection

  • SegmentDetection object: A technical cue or shot detection segment detected in a video. An array of SegmentDetection objects containing all segments detected in a stored video is returned by GetSegmentDetection.
    • DurationMillis
    • DurationSMPTE
    • EndTimecodeSMPTE
    • EndTimestampMillis
    • ShotSegment
      • Confidence
      • Index
    • StartTimecodeSMPTE
    • StartTimestampMillis
    • TechnicalCueSegment
      • Confidence
      • Type
    • Type

SegmentDetections

SegmentType

  • SegmentType string (values: TECHNICAL_CUE, SHOT)

SegmentTypeInfo

  • SegmentTypeInfo object: Information about the type of a segment requested in a call to StartSegmentDetection. An array of SegmentTypeInfo objects is returned by the response from GetSegmentDetection.
    • ModelVersion
    • Type

SegmentTypes

SegmentTypesInfo

ServiceQuotaExceededException

ShotSegment

  • ShotSegment object: Information about a shot detection segment detected in a video. For more information, see SegmentDetection.
    • Confidence
    • Index

Smile

  • Smile object: Indicates whether or not the face is smiling, and the confidence level in the determination.
    • Confidence
    • Value

StartCelebrityRecognitionRequest

  • StartCelebrityRecognitionRequest object
    • ClientRequestToken
    • JobTag
    • NotificationChannel
      • RoleArn required
      • SNSTopicArn required
    • Video required
      • S3Object
        • Bucket
        • Name
        • Version

StartCelebrityRecognitionResponse

  • StartCelebrityRecognitionResponse object
    • JobId

StartContentModerationRequest

  • StartContentModerationRequest object
    • ClientRequestToken
    • JobTag
    • MinConfidence
    • NotificationChannel
      • RoleArn required
      • SNSTopicArn required
    • Video required
      • S3Object
        • Bucket
        • Name
        • Version

StartContentModerationResponse

  • StartContentModerationResponse object
    • JobId

StartFaceDetectionRequest

  • StartFaceDetectionRequest object
    • ClientRequestToken
    • FaceAttributes
    • JobTag
    • NotificationChannel
      • RoleArn required
      • SNSTopicArn required
    • Video required
      • S3Object
        • Bucket
        • Name
        • Version

StartFaceDetectionResponse

  • StartFaceDetectionResponse object
    • JobId

StartFaceSearchRequest

  • StartFaceSearchRequest object
    • ClientRequestToken
    • CollectionId required
    • FaceMatchThreshold
    • JobTag
    • NotificationChannel
      • RoleArn required
      • SNSTopicArn required
    • Video required
      • S3Object
        • Bucket
        • Name
        • Version

StartFaceSearchResponse

  • StartFaceSearchResponse object
    • JobId

StartLabelDetectionRequest

  • StartLabelDetectionRequest object
    • ClientRequestToken
    • JobTag
    • MinConfidence
    • NotificationChannel
      • RoleArn required
      • SNSTopicArn required
    • Video required
      • S3Object
        • Bucket
        • Name
        • Version

StartLabelDetectionResponse

  • StartLabelDetectionResponse object
    • JobId

StartPersonTrackingRequest

  • StartPersonTrackingRequest object
    • ClientRequestToken
    • JobTag
    • NotificationChannel
      • RoleArn required
      • SNSTopicArn required
    • Video required
      • S3Object
        • Bucket
        • Name
        • Version

StartPersonTrackingResponse

  • StartPersonTrackingResponse object
    • JobId

StartProjectVersionRequest

  • StartProjectVersionRequest object
    • MinInferenceUnits required
    • ProjectVersionArn required

StartProjectVersionResponse

  • StartProjectVersionResponse object
    • Status

StartSegmentDetectionFilters

  • StartSegmentDetectionFilters object: Filters applied to the technical cue or shot detection segments. For more information, see StartSegmentDetection.
    • ShotFilter
      • MinSegmentConfidence
    • TechnicalCueFilter
      • MinSegmentConfidence

StartSegmentDetectionRequest

  • StartSegmentDetectionRequest object
    • ClientRequestToken
    • Filters
      • ShotFilter
        • MinSegmentConfidence
      • TechnicalCueFilter
        • MinSegmentConfidence
    • JobTag
    • NotificationChannel
      • RoleArn required
      • SNSTopicArn required
    • SegmentTypes required
    • Video required Video
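
A sketch of a complete request, combining the filters above with a stored video (the bucket, key, and ARNs are placeholders):

amazonaws_rekognition.StartSegmentDetection({
  Video: { S3Object: { Bucket: "my-bucket", Name: "videos/episode1.mp4" } },
  SegmentTypes: ["TECHNICAL_CUE", "SHOT"],
  Filters: {
    ShotFilter: { MinSegmentConfidence: 80 },
    TechnicalCueFilter: { MinSegmentConfidence: 80 }
  },
  NotificationChannel: {
    SNSTopicArn: "arn:aws:sns:us-east-1:123456789012:rekognition-topic", // placeholder
    RoleArn: "arn:aws:iam::123456789012:role/rekognition-role"           // placeholder
  }
}, context)
.then(data => console.log(data.JobId));  // poll GetSegmentDetection with this JobId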

StartSegmentDetectionResponse

  • StartSegmentDetectionResponse object
    • JobId

StartShotDetectionFilter

  • StartShotDetectionFilter object: Filters for the shot detection segments returned by GetSegmentDetection. For more information, see StartSegmentDetectionFilters.
    • MinSegmentConfidence

StartStreamProcessorRequest

  • StartStreamProcessorRequest object
    • Name required

StartStreamProcessorResponse

  • StartStreamProcessorResponse object

StartTechnicalCueDetectionFilter

  • StartTechnicalCueDetectionFilter object: Filters for the technical cue segments returned by GetSegmentDetection. For more information, see StartSegmentDetectionFilters.
    • MinSegmentConfidence

StartTextDetectionFilters

  • StartTextDetectionFilters object: A set of optional parameters specifying the criteria that text must meet to be included in your response. WordFilter filters words by minimum bounding-box height, minimum bounding-box width, and minimum confidence. RegionOfInterest restricts detection to a specific region of the screen (see the sketch after StartTextDetectionRequest below).
    • RegionsOfInterest
    • WordFilter
      • MinBoundingBoxHeight
      • MinBoundingBoxWidth
      • MinConfidence

StartTextDetectionRequest

  • StartTextDetectionRequest object
    • ClientRequestToken
    • Filters
      • RegionsOfInterest
      • WordFilter
        • MinBoundingBoxHeight
        • MinBoundingBoxWidth
        • MinConfidence
    • JobTag
    • NotificationChannel NotificationChannel
    • Video required Video
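
Putting the pieces together, a sketch that restricts text detection to the lower third of the frame and drops small or low-confidence words (the bucket and key are placeholders):

amazonaws_rekognition.StartTextDetection({
  Video: { S3Object: { Bucket: "my-bucket", Name: "videos/ad.mp4" } },
  Filters: {
    WordFilter: {
      MinConfidence: 80,
      MinBoundingBoxHeight: 0.05,  // ratios of frame height/width
      MinBoundingBoxWidth: 0.02
    },
    RegionsOfInterest: [
      { BoundingBox: { Top: 0.7, Left: 0.0, Width: 1.0, Height: 0.3 } }
    ]
  }
}, context)
.then(data => console.log(data.JobId));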

StartTextDetectionResponse

  • StartTextDetectionResponse object
    • JobId

StatusMessage

  • StatusMessage string

StopProjectVersionRequest

  • StopProjectVersionRequest object
    • ProjectVersionArn required

StopProjectVersionResponse

  • StopProjectVersionResponse object
    • Status

StopStreamProcessorRequest

  • StopStreamProcessorRequest object
    • Name required

StopStreamProcessorResponse

  • StopStreamProcessorResponse object

StreamProcessor

  • StreamProcessor object: An object that recognizes faces in a streaming video. An Amazon Rekognition stream processor is created by a call to CreateStreamProcessor. The request parameters for CreateStreamProcessor describe the Kinesis video stream source for the streaming video, face recognition parameters, and where to stream the analysis results.
    • Name
    • Status
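
A sketch of that flow, creating and then starting a processor (the stream ARNs, role ARN, and collection id are placeholders; the input, output, and settings shapes are defined below):

amazonaws_rekognition.CreateStreamProcessor({
  Name: "my-processor",
  Input: { KinesisVideoStream: { Arn: "arn:aws:kinesisvideo:..." } },  // placeholder ARN
  Output: { KinesisDataStream: { Arn: "arn:aws:kinesis:..." } },       // placeholder ARN
  Settings: { FaceSearch: { CollectionId: "my-collection", FaceMatchThreshold: 85 } },
  RoleArn: "arn:aws:iam::123456789012:role/rekognition-stream-role"    // placeholder
}, context)
.then(() => amazonaws_rekognition.StartStreamProcessor({ Name: "my-processor" }, context));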

StreamProcessorArn

  • StreamProcessorArn string

StreamProcessorInput

  • StreamProcessorInput object: Information about the source streaming video.
    • KinesisVideoStream
      • Arn

StreamProcessorList

StreamProcessorName

  • StreamProcessorName string

StreamProcessorOutput

  • StreamProcessorOutput object: Information about the Amazon Kinesis Data Streams stream to which an Amazon Rekognition Video stream processor streams the results of a video analysis. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
    • KinesisDataStream
      • Arn

StreamProcessorSettings

  • StreamProcessorSettings object: Input parameters used to recognize faces in a streaming video analyzed by an Amazon Rekognition stream processor.
    • FaceSearch
      • CollectionId
      • FaceMatchThreshold

StreamProcessorStatus

  • StreamProcessorStatus string (values: STOPPED, STARTING, RUNNING, FAILED, STOPPING)

String

  • String string

Summary

  • Summary object:

    The S3 bucket that contains the training summary. The training summary includes aggregated evaluation metrics for the entire testing dataset and metrics for each individual label.

    You get the training summary S3 bucket location by calling DescribeProjectVersions.

Sunglasses

  • Sunglasses object: Indicates whether or not the face is wearing sunglasses, and the confidence level in the determination.
    • Confidence
    • Value

TechnicalCueSegment

  • TechnicalCueSegment object: Information about a technical cue segment. For more information, see SegmentDetection.
    • Confidence
    • Type

TechnicalCueType

  • TechnicalCueType string (values: ColorBars, EndCredits, BlackFrames)

TestingData

  • TestingData object: The dataset used for testing. Optionally, if AutoCreate is set, Amazon Rekognition Custom Labels creates a testing dataset using an 80/20 split of the training dataset (see the sketch after this entry).
    • Assets
    • AutoCreate
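
For instance, either supplying your own test manifest or letting the service create one (the asset shape with GroundTruthManifest follows the ValidationData notes later in this section; the bucket and key are placeholders):

// Option A: bring your own test manifest.
let testingData = {
  Assets: [{
    GroundTruthManifest: {
      S3Object: { Bucket: "my-bucket", Name: "manifests/test.manifest" }
    }
  }]
};

// Option B: let Custom Labels split the training data 80/20.
let autoTestingData = { AutoCreate: true };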

TestingDataResult

  • TestingDataResult object: SageMaker Ground Truth format manifest files for the input, output, and validation datasets that are used and created during testing.
    • Input
      • Assets
      • AutoCreate
    • Output
      • Assets
      • AutoCreate
    • Validation

TextDetection

  • TextDetection object:

    Information about a word or line of text detected by DetectText.

    The DetectedText field contains the text that Amazon Rekognition detected in the image.

    Every word and line has an identifier (Id). Each word belongs to a line and has a parent identifier (ParentId) that identifies the line of text in which the word appears. The word Id is also an index for the word within a line of words.

    For more information, see Detecting Text in the Amazon Rekognition Developer Guide.

    • Confidence
    • DetectedText
    • Geometry
      • BoundingBox
        • Height
        • Left
        • Top
        • Width
      • Polygon
    • Id
    • ParentId
    • Type
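
For instance, grouping a DetectText response into lines and their constituent words via Type and ParentId (the bucket and key are placeholders; the TextDetections field name is an assumption about the DetectText response shape):

amazonaws_rekognition.DetectText({
  Image: { S3Object: { Bucket: "my-bucket", Name: "sign.jpg" } }
}, context)
.then(data => {
  let detections = data.TextDetections || [];
  // Lines have Type "LINE"; each word points back at its line via ParentId.
  for (let line of detections.filter(d => d.Type === "LINE")) {
    let words = detections.filter(d => d.Type === "WORD" && d.ParentId === line.Id);
    console.log(line.DetectedText, "->", words.map(w => w.DetectedText).join(" "));
  }
});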

TextDetectionList

TextDetectionResult

  • TextDetectionResult object: Information about text detected in a video. Includes the detected text, the time in milliseconds from the start of the video at which the text was detected, and where it was detected on the screen.
    • TextDetection
      • Confidence
      • DetectedText
      • Geometry
        • BoundingBox
          • Height
          • Left
          • Top
          • Width
        • Polygon
      • Id
      • ParentId
      • Type
    • Timestamp

TextDetectionResults

TextTypes

  • TextTypes string (values: LINE, WORD)

ThrottlingException

Timecode

  • Timecode string

Timestamp

  • Timestamp integer

TrainingData

  • TrainingData object: The dataset used for training.

TrainingDataResult

  • TrainingDataResult object: SageMaker Ground Truth format manifest files for the input, output, and validation datasets that are used and created during training.
    • Input
    • Output
    • Validation

UInteger

  • UInteger integer

ULong

  • ULong integer

UnindexedFace

  • UnindexedFace object: A face that IndexFaces detected, but didn't index. Use the Reasons response attribute to determine why a face wasn't indexed.
    • FaceDetail
      • AgeRange
        • High
        • Low
      • Beard
        • Confidence
        • Value
      • BoundingBox
        • Height
        • Left
        • Top
        • Width
      • Confidence
      • Emotions
      • Eyeglasses
        • Confidence
        • Value
      • EyesOpen
        • Confidence
        • Value
      • Gender
        • Confidence
        • Value
      • Landmarks
      • MouthOpen
        • Confidence
        • Value
      • Mustache
        • Confidence
        • Value
      • Pose
        • Pitch
        • Roll
        • Yaw
      • Quality
        • Brightness
        • Sharpness
      • Smile
        • Confidence
        • Value
      • Sunglasses
        • Confidence
        • Value
    • Reasons
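
For example, surfacing why faces were skipped after an IndexFaces call (the collection, bucket, and key are placeholders):

amazonaws_rekognition.IndexFaces({
  CollectionId: "my-collection",
  Image: { S3Object: { Bucket: "my-bucket", Name: "crowd.jpg" } }
}, context)
.then(data => {
  for (let face of data.UnindexedFaces || []) {
    // Reasons holds codes such as LOW_CONFIDENCE or EXCEEDS_MAX_FACES.
    console.log("skipped face:", face.Reasons);
  }
});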

UnindexedFaces

Url

  • Url string

Urls

  • Urls array

ValidationData

  • ValidationData object:

    Contains the Amazon S3 bucket location of the validation data for a model training job.

    The validation data includes error information for individual JSON lines in the dataset. For more information, see Debugging a Failed Model Training in the Amazon Rekognition Custom Labels Developer Guide.

    You get the ValidationData object for the training dataset (TrainingDataResult) and the test dataset (TestingDataResult) by calling DescribeProjectVersions.

    The assets array contains a single Asset object. The GroundTruthManifest field of the Asset object contains the S3 bucket location of the validation data.
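
A sketch of pulling that location out of DescribeProjectVersions (the project ARN is a placeholder; the traversal follows the shapes described above, and the ProjectVersionDescriptions field name is an assumption about the response shape):

amazonaws_rekognition.DescribeProjectVersions({
  ProjectArn: "arn:aws:rekognition:us-east-1:123456789012:project/my-project/1" // placeholder
}, context)
.then(data => {
  for (let version of data.ProjectVersionDescriptions || []) {
    let validation = version.TestingDataResult && version.TestingDataResult.Validation;
    let asset = validation && validation.Assets && validation.Assets[0];
    if (asset) {
      // GroundTruthManifest wraps the S3 location of the validation data.
      console.log(asset.GroundTruthManifest.S3Object.Bucket,
                  asset.GroundTruthManifest.S3Object.Name);
    }
  }
});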

VersionName

  • VersionName string

VersionNames

Video

  • Video object: A video file stored in an Amazon S3 bucket. Amazon Rekognition Video start operations such as StartLabelDetection use Video to specify a video for analysis. The supported file formats are .mp4, .mov, and .avi.
    • S3Object
      • Bucket
      • Name
      • Version

VideoJobStatus

  • VideoJobStatus string (values: IN_PROGRESS, SUCCEEDED, FAILED)

VideoMetadata

  • VideoMetadata object: Information about a video that Amazon Rekognition analyzed. VideoMetadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.
    • Codec
    • DurationMillis
    • Format
    • FrameHeight
    • FrameRate
    • FrameWidth

VideoMetadataList

VideoTooLargeException