
Commit

Merge branch 'release-1.10.80'
* release-1.10.80:
  Bumping version to 1.10.80
  Update to latest models
awstools committed Aug 17, 2018
2 parents 863f91e + 25a2343 commit 4a7b184
Showing 7 changed files with 63 additions and 28 deletions.
17 changes: 17 additions & 0 deletions .changes/1.10.80.json
@@ -0,0 +1,17 @@
[
{
"category": "``dax``",
"description": "Update dax client to latest version",
"type": "api-change"
},
{
"category": "``secretsmanager``",
"description": "Update secretsmanager client to latest version",
"type": "api-change"
},
{
"category": "``sagemaker``",
"description": "Update sagemaker client to latest version",
"type": "api-change"
}
]
8 changes: 8 additions & 0 deletions CHANGELOG.rst
@@ -2,6 +2,14 @@
CHANGELOG
=========

1.10.80
=======

* api-change:``dax``: Update dax client to latest version
* api-change:``secretsmanager``: Update secretsmanager client to latest version
* api-change:``sagemaker``: Update sagemaker client to latest version


1.10.79
=======

2 changes: 1 addition & 1 deletion botocore/__init__.py
@@ -16,7 +16,7 @@
import re
import logging

-__version__ = '1.10.79'
+__version__ = '1.10.80'


class NullHandler(logging.Handler):
3 changes: 2 additions & 1 deletion botocore/data/dax/2017-04-19/service-2.json
@@ -503,7 +503,8 @@
"required":[
"ClusterName",
"NodeType",
-"ReplicationFactor"
+"ReplicationFactor",
+"IamRoleArn"
],
"members":{
"ClusterName":{
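The dax hunk above makes `IamRoleArn` a required member of the CreateCluster request. As a hedged sketch (not from this commit — the cluster name, node type, and ARN below are invented placeholders), the kwargs a boto3 caller would now have to assemble look like this:

```python
# Required members of dax:CreateCluster after this model update.
REQUIRED = ("ClusterName", "NodeType", "ReplicationFactor", "IamRoleArn")

def build_create_cluster_params(cluster_name, node_type, replication_factor, iam_role_arn):
    """Build kwargs for dax_client.create_cluster(**params)."""
    params = {
        "ClusterName": cluster_name,
        "NodeType": node_type,
        "ReplicationFactor": replication_factor,
        # Newly required: the IAM role DAX assumes to access DynamoDB on your behalf.
        "IamRoleArn": iam_role_arn,
    }
    missing = [k for k in REQUIRED if k not in params]
    if missing:
        raise ValueError(f"missing required members: {missing}")
    return params

params = build_create_cluster_params(
    "demo-cluster", "dax.r3.large", 3,
    "arn:aws:iam::123456789012:role/DAXServiceRole",  # placeholder ARN
)
print(sorted(params))  # → ['ClusterName', 'IamRoleArn', 'NodeType', 'ReplicationFactor']
```

Omitting `IamRoleArn` after this update would fail client-side parameter validation rather than at the service.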
27 changes: 18 additions & 9 deletions botocore/data/sagemaker/2017-07-24/service-2.json
@@ -75,7 +75,7 @@
"errors":[
{"shape":"ResourceLimitExceeded"}
],
-"documentation":"<p>Creates a model in Amazon SageMaker. In the request, you name the model and describe one or more containers. For each container, you specify the docker image containing inference code, artifacts (from prior training), and custom environment map that the inference code uses when you deploy the model into production. </p> <p>Use this API to create a model only if you want to use Amazon SageMaker hosting services. To host your model, you create an endpoint configuration with the <code>CreateEndpointConfig</code> API, and then create an endpoint with the <code>CreateEndpoint</code> API. </p> <p>Amazon SageMaker then deploys all of the containers that you defined for the model in the hosting environment. </p> <p>In the <code>CreateModel</code> request, you must define a container with the <code>PrimaryContainer</code> parameter. </p> <p>In the request, you also provide an IAM role that Amazon SageMaker can assume to access model artifacts and docker image for deployment on ML compute hosting instances. In addition, you also use the IAM role to manage permissions the inference code needs. For example, if the inference code access any other AWS resources, you grant necessary permissions via this role.</p>"
+"documentation":"<p>Creates a model in Amazon SageMaker. In the request, you name the model and describe a primary container. For the primary container, you specify the docker image containing inference code, artifacts (from prior training), and custom environment map that the inference code uses when you deploy the model for predictions.</p> <p>Use this API to create a model if you want to use Amazon SageMaker hosting services or run a batch transform job.</p> <p>To host your model, you create an endpoint configuration with the <code>CreateEndpointConfig</code> API, and then create an endpoint with the <code>CreateEndpoint</code> API. Amazon SageMaker then deploys all of the containers that you defined for the model in the hosting environment. </p> <p>To run a batch transform using your model, you start a job with the <code>CreateTransformJob</code> API. Amazon SageMaker uses your model and your dataset to get inferences which are then saved to a specified S3 location.</p> <p>In the <code>CreateModel</code> request, you must define a container with the <code>PrimaryContainer</code> parameter.</p> <p>In the request, you also provide an IAM role that Amazon SageMaker can assume to access model artifacts and docker image for deployment on ML compute hosting instances or for batch transform jobs. In addition, you also use the IAM role to manage permissions the inference code needs. For example, if the inference code accesses any other AWS resources, you grant necessary permissions via this role.</p>"
},
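The updated CreateModel documentation above centers on a primary container plus an execution role. A hedged sketch of that request shape for boto3's `sagemaker_client.create_model` (the image URI, S3 path, and role ARN are invented placeholders, not from this commit):

```python
def build_create_model_params(model_name, image, model_data_url, execution_role_arn):
    """Build kwargs for sagemaker_client.create_model(**params)."""
    return {
        "ModelName": model_name,
        "PrimaryContainer": {                 # required container definition
            "Image": image,                   # docker image containing inference code
            "ModelDataUrl": model_data_url,   # artifacts from prior training, in S3
        },
        # Role SageMaker assumes for hosting or batch transform; the caller
        # must hold iam:PassRole to hand this role to the service.
        "ExecutionRoleArn": execution_role_arn,
    }

params = build_create_model_params(
    "demo-model",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo:latest",  # placeholder
    "s3://demo-bucket/model.tar.gz",                             # placeholder
    "arn:aws:iam::123456789012:role/SageMakerRole",              # placeholder
)
print("PrimaryContainer" in params)  # → True
```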
"CreateNotebookInstance":{
"name":"CreateNotebookInstance",
@@ -139,7 +139,7 @@
{"shape":"ResourceInUse"},
{"shape":"ResourceLimitExceeded"}
],
-"documentation":"<p>Starts a transform job. A transform job uses a trained model to get inferences on a dataset and saves these results to an Amazon S3 location that you specify.</p> <p>To perform batch transformations, you create a transform job and use the data that you have readily available.</p> <p>In the request body, you provide the following:</p> <ul> <li> <p> <code>TransformJobName</code> - Identifies the transform job. The name must be unique within an AWS Region in an AWS account.</p> </li> <li> <p> <code>ModelName</code> - Identifies the model to use. <code>ModelName</code> must be the name of an existing Amazon SageMaker model within an AWS Region in an AWS account.</p> </li> <li> <p> <code>TransformInput</code> - Describes the dataset to be transformed and the Amazon S3 location where it is stored.</p> </li> <li> <p> <code>TransformOutput</code> - Identifies the Amazon S3 location where you want Amazon SageMaker to save the results from the transform job.</p> </li> <li> <p> <code>TransformResources</code> - Identifies the ML compute instances for the transform job.</p> </li> </ul> <p> For more information about how batch transformation works Amazon SageMaker, see <a href=\"http://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html\">How It Works</a>. </p>"
+"documentation":"<p>Starts a transform job. A transform job uses a trained model to get inferences on a dataset and saves these results to an Amazon S3 location that you specify.</p> <p>To perform batch transformations, you create a transform job and use the data that you have readily available.</p> <p>In the request body, you provide the following:</p> <ul> <li> <p> <code>TransformJobName</code> - Identifies the transform job. The name must be unique within an AWS Region in an AWS account.</p> </li> <li> <p> <code>ModelName</code> - Identifies the model to use. <code>ModelName</code> must be the name of an existing Amazon SageMaker model in the same AWS Region and AWS account. For information on creating a model, see <a>CreateModel</a>.</p> </li> <li> <p> <code>TransformInput</code> - Describes the dataset to be transformed and the Amazon S3 location where it is stored.</p> </li> <li> <p> <code>TransformOutput</code> - Identifies the Amazon S3 location where you want Amazon SageMaker to save the results from the transform job.</p> </li> <li> <p> <code>TransformResources</code> - Identifies the ML compute instances for the transform job.</p> </li> </ul> <p> For more information about how batch transformation works in Amazon SageMaker, see <a href=\"http://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html\">How It Works</a>. </p>"
},
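The five request-body elements enumerated above map directly onto `sagemaker_client.create_transform_job` kwargs. A minimal sketch, assuming S3 URIs and an instance type that are purely illustrative:

```python
def build_create_transform_job_params(job_name, model_name, s3_input_uri, s3_output_path):
    """Build kwargs for sagemaker_client.create_transform_job(**params)."""
    return {
        "TransformJobName": job_name,   # must be unique per region and account
        "ModelName": model_name,        # existing model in the same region/account
        "TransformInput": {             # dataset to transform and where it lives
            "DataSource": {
                "S3DataSource": {"S3DataType": "S3Prefix", "S3Uri": s3_input_uri}
            }
        },
        "TransformOutput": {"S3OutputPath": s3_output_path},  # where results land
        "TransformResources": {         # ML compute for the job (placeholder type)
            "InstanceType": "ml.m4.xlarge",
            "InstanceCount": 1,
        },
    }

params = build_create_transform_job_params(
    "demo-transform", "demo-model",
    "s3://demo-bucket/input/", "s3://demo-bucket/output/",
)
print(params["TransformJobName"])  # → demo-transform
```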
"DeleteEndpoint":{
"name":"DeleteEndpoint",
@@ -809,19 +809,19 @@
},
"PrimaryContainer":{
"shape":"ContainerDefinition",
-"documentation":"<p>The location of the primary docker image containing inference code, associated artifacts, and custom environment map that the inference code uses when the model is deployed into production. </p>"
+"documentation":"<p>The location of the primary docker image containing inference code, associated artifacts, and custom environment map that the inference code uses when the model is deployed for predictions. </p>"
},
"ExecutionRoleArn":{
"shape":"RoleArn",
-"documentation":"<p>The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker can assume to access model artifacts and docker image for deployment on ML compute instances. Deploying on ML compute instances is part of model hosting. For more information, see <a href=\"http://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html\">Amazon SageMaker Roles</a>. </p> <note> <p>To be able to pass this role to Amazon SageMaker, the caller of this API must have the <code>iam:PassRole</code> permission.</p> </note>"
+"documentation":"<p>The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker can assume to access model artifacts and docker image for deployment on ML compute instances or for batch transform jobs. Deploying on ML compute instances is part of model hosting. For more information, see <a href=\"http://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html\">Amazon SageMaker Roles</a>. </p> <note> <p>To be able to pass this role to Amazon SageMaker, the caller of this API must have the <code>iam:PassRole</code> permission.</p> </note>"
},
"Tags":{
"shape":"TagList",
"documentation":"<p>An array of key-value pairs. For more information, see <a href=\"http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html#allocation-what\">Using Cost Allocation Tags</a> in the <i>AWS Billing and Cost Management User Guide</i>. </p>"
},
"VpcConfig":{
"shape":"VpcConfig",
-"documentation":"<p>A <a>VpcConfig</a> object that specifies the VPC that you want your model to connect to. Control access to and from your model container by configuring the VPC. For more information, see <a>host-vpc</a>.</p>"
+"documentation":"<p>A <a>VpcConfig</a> object that specifies the VPC that you want your model to connect to. Control access to and from your model container by configuring the VPC. <code>VpcConfig</code> is currently used in hosting services but not in batch transform. For more information, see <a>host-vpc</a>.</p>"
}
}
},
@@ -1032,7 +1032,7 @@
},
"BatchStrategy":{
"shape":"BatchStrategy",
-"documentation":"<p>Determines the number of records included in a single mini-batch. <code>SingleRecord</code> means only one record is used per mini-batch. <code>MultiRecord</code> means a mini-batch is set to contain as many records that can fit within the <code>MaxPayloadInMB</code> limit.</p>"
+"documentation":"<p>Determines the number of records included in a single mini-batch. <code>SingleRecord</code> means only one record is used per mini-batch. <code>MultiRecord</code> means a mini-batch is set to contain as many records as can fit within the <code>MaxPayloadInMB</code> limit.</p> <p>Batch transform will automatically split your input data into whatever payload size is specified if you set <code>SplitType</code> to <code>Line</code> and <code>BatchStrategy</code> to <code>MultiRecord</code>. There's no need to split the dataset into smaller files or to use larger payload sizes unless the records in your dataset are very large.</p>"
},
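The `BatchStrategy` semantics documented above can be illustrated with a toy packer (this is not SageMaker's actual splitting code, just a model of the documented behavior): `SingleRecord` yields one record per mini-batch, while `MultiRecord` packs records until the payload limit would be exceeded.

```python
def pack_mini_batches(records, max_payload_bytes, strategy="MultiRecord"):
    """Toy model of BatchStrategy: SingleRecord puts one record per mini-batch;
    MultiRecord packs as many records as fit under max_payload_bytes."""
    if strategy == "SingleRecord":
        return [[r] for r in records]
    batches, current, size = [], [], 0
    for r in records:
        # Start a new mini-batch when adding this record would exceed the limit.
        if current and size + len(r) > max_payload_bytes:
            batches.append(current)
            current, size = [], 0
        current.append(r)
        size += len(r)
    if current:
        batches.append(current)
    return batches

records = [b"a" * 40, b"b" * 40, b"c" * 40]
print(len(pack_mini_batches(records, 100)))                   # → 2
print(len(pack_mini_batches(records, 100, "SingleRecord")))   # → 3
```

With three 40-byte records and a 100-byte limit, `MultiRecord` fits two records in the first batch and one in the second, mirroring the "as many records as can fit" wording.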
"Environment":{
"shape":"TransformEnvironmentMap",
@@ -1548,7 +1548,7 @@
},
"SecondaryStatus":{
"shape":"SecondaryStatus",
-"documentation":"<p> Provides granular information about the system state. For more information, see <code>TrainingJobStatus</code>. </p> <ul> <li> <p> <code>Starting</code> - starting the training job.</p> </li> <li> <p> <code>LaunchingMLInstances</code> - launching ML instances for the training job.</p> </li> <li> <p> <code>PreparingTrainingStack</code> - preparing the ML instances for the training job.</p> </li> <li> <p> <code>Downloading</code> - downloading the input data.</p> </li> <li> <p> <code>DownloadingTrainingImage</code> - downloading the training algorithm image.</p> </li> <li> <p> <code>Training</code> - model training is in progress.</p> </li> <li> <p> <code>Uploading</code> - uploading the trained model.</p> </li> <li> <p> <code>Stopping</code> - stopping the training job.</p> </li> <li> <p> <code>Stopped</code> - the training job has stopped.</p> </li> <li> <p> <code>MaxRuntimeExceeded</code> - the training exceed the specified the max run time, which means the training job is stopping.</p> </li> <li> <p> <code>Completed</code> - the training job has completed.</p> </li> <li> <p> <code>Failed</code> - the training job has failed. The failure reason is provided in the <code>StatusMessage</code>.</p> </li> </ul> <important> <p>The valid values for <code>SecondaryStatus</code> are subject to change. They primary provide information on the progress of the training job.</p> </important>"
+"documentation":"<p> Provides granular information about the system state. For more information, see <code>TrainingJobStatus</code>. </p> <ul> <li> <p> <code>Starting</code> - starting the training job.</p> </li> <li> <p> <code>Downloading</code> - downloading the input data.</p> </li> <li> <p> <code>Training</code> - model training is in progress.</p> </li> <li> <p> <code>Uploading</code> - uploading the trained model.</p> </li> <li> <p> <code>Stopping</code> - stopping the training job.</p> </li> <li> <p> <code>Stopped</code> - the training job has stopped.</p> </li> <li> <p> <code>MaxRuntimeExceeded</code> - the training job exceeded the specified max run time and has been stopped.</p> </li> <li> <p> <code>Completed</code> - the training job has completed.</p> </li> <li> <p> <code>Failed</code> - the training job has failed. The failure reason is stored in the <code>FailureReason</code> field of <code>DescribeTrainingJobResponse</code>.</p> </li> </ul> <important> <p>The valid values for <code>SecondaryStatus</code> are subject to change. They primarily provide information on the progress of the training job.</p> </important>"
},
"FailureReason":{
"shape":"FailureReason",
@@ -1604,7 +1604,7 @@
},
"SecondaryStatusTransitions":{
"shape":"SecondaryStatusTransitions",
-"documentation":"<p>A log of time-ordered secondary statuses that a training job has transitioned.</p>"
+"documentation":"<p>To give an overview of the training job lifecycle, <code>SecondaryStatusTransitions</code> is a log of time-ordered secondary statuses that a training job has transitioned.</p>"
}
}
},
@@ -1723,6 +1723,7 @@
"Disabled"
]
},
"DisassociateNotebookInstanceLifecycleConfig":{"type":"boolean"},
"EndpointArn":{
"type":"string",
"max":2048,
@@ -3407,7 +3408,7 @@
},
"EndTime":{
"shape":"Timestamp",
-"documentation":"<p>A timestamp that shows when the secondary status has ended and the job has transitioned into another secondary status. </p>"
+"documentation":"<p>A timestamp that shows when the secondary status has ended and the job has transitioned into another secondary status. The <code>EndTime</code> timestamp is also set after the training job has ended.</p>"
},
"StatusMessage":{
"shape":"StatusMessage",
@@ -3994,6 +3995,14 @@
"RoleArn":{
"shape":"RoleArn",
"documentation":"<p>The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker can assume to access the notebook instance. For more information, see <a href=\"http://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html\">Amazon SageMaker Roles</a>. </p> <note> <p>To be able to pass this role to Amazon SageMaker, the caller of this API must have the <code>iam:PassRole</code> permission.</p> </note>"
},
"LifecycleConfigName":{
"shape":"NotebookInstanceLifecycleConfigName",
"documentation":"<p>The name of a lifecycle configuration to associate with the notebook instance. For information about lifecycle configurations, see <a>notebook-lifecycle-config</a>.</p>"
},
"DisassociateLifecycleConfig":{
"shape":"DisassociateNotebookInstanceLifecycleConfig",
"documentation":"<p>Set to <code>true</code> to remove the notebook instance lifecycle configuration currently associated with the notebook instance.</p>"
}
}
},
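The two new UpdateNotebookInstance members above are complementary: `LifecycleConfigName` attaches a lifecycle configuration, while the boolean `DisassociateLifecycleConfig` detaches the one currently associated. A hedged sketch of building that request (the instance and config names are placeholders):

```python
def build_update_notebook_params(name, lifecycle_config_name=None, disassociate=False):
    """Build kwargs for sagemaker_client.update_notebook_instance(**params)."""
    params = {"NotebookInstanceName": name}
    if lifecycle_config_name is not None:
        # New member: associate this lifecycle configuration with the instance.
        params["LifecycleConfigName"] = lifecycle_config_name
    if disassociate:
        # New boolean member: remove the currently associated lifecycle config.
        params["DisassociateLifecycleConfig"] = True
    return params

print(build_update_notebook_params("demo-nb", lifecycle_config_name="demo-config"))
```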