doc/amazon_sagemaker_processing.rst (2 additions & 2 deletions)
@@ -10,14 +10,14 @@ Amazon SageMaker Processing allows you to run steps for data pre- or post-processing

 Background
 ==========

-Amazon SageMaker lets developers and data scientists train and deploy machine learning models. With Amazon SageMaker Processing, you can run processing jobs on for data processing steps in your machine learning pipeline, which accept data from Amazon S3 as input, and put data into Amazon S3 as output.
+Amazon SageMaker lets developers and data scientists train and deploy machine learning models. With Amazon SageMaker Processing, you can run processing jobs for data processing steps in your machine learning pipeline. Processing jobs accept data from Amazon S3 as input and store data into Amazon S3 as output.

-The fastest way to run get started with Amazon SageMaker Processing is by running a Jupyter notebook. You can follow the `Getting Started with Amazon SageMaker`_ guide to start running notebooks on Amazon SageMaker.
+The fastest way to get started with Amazon SageMaker Processing is by running a Jupyter notebook. You can follow the `Getting Started with Amazon SageMaker`_ guide to start running notebooks on Amazon SageMaker.

 .. _Getting Started with Amazon SageMaker: https://docs.aws.amazon.com/sagemaker/latest/dg/gs.html
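
To make the S3 input/output flow above concrete, here is a minimal sketch of a processing job using the SageMaker Python SDK's ``SKLearnProcessor``. The role ARN, script name, S3 paths, and instance settings are illustrative assumptions, not values from the docs above.

.. code:: python

    from sagemaker.processing import ProcessingInput, ProcessingOutput
    from sagemaker.sklearn.processing import SKLearnProcessor

    # Hypothetical role ARN and instance settings; substitute your own.
    processor = SKLearnProcessor(
        framework_version="0.20.0",
        role="arn:aws:iam::111122223333:role/SageMakerRole",
        instance_type="ml.m5.xlarge",
        instance_count=1,
    )

    # The job reads its input from Amazon S3 and writes its output back to Amazon S3.
    processor.run(
        code="preprocessing.py",  # hypothetical processing script
        inputs=[
            ProcessingInput(
                source="s3://my-bucket/input-data",  # hypothetical input path
                destination="/opt/ml/processing/input",
            )
        ],
        outputs=[
            ProcessingOutput(
                source="/opt/ml/processing/output",
                destination="s3://my-bucket/output-data",  # hypothetical output path
            )
        ],
    )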
doc/frameworks/mxnet/using_mxnet.rst (4 additions & 3 deletions)
@@ -159,13 +159,14 @@ If there are other packages you want to use with your script, you can include a

 Both ``requirements.txt`` and your training script should be put in the same folder.
 You must specify this folder in the ``source_dir`` argument when creating an MXNet estimator.

-The function of installing packages using ``requirements.txt`` is supported for all MXNet versions during training.
+The function of installing packages using ``requirements.txt`` is supported for MXNet versions 1.3.0 and higher during training.
+
 When serving an MXNet model, support for this function varies with MXNet versions.
 For MXNet 1.6.0 or newer, ``requirements.txt`` must be under folder ``code``.
 The SageMaker MXNet Estimator automatically saves ``code`` in ``model.tar.gz`` after training (assuming you set up your script and ``requirements.txt`` correctly as stipulated in the previous paragraph).
 In the case of bringing your own trained model for deployment, you must save ``requirements.txt`` under folder ``code`` in ``model.tar.gz`` yourself or specify it through ``dependencies``.
-For MXNet 1.4.1, ``requirements.txt`` is not supported for inference.
-For MXNet 0.12.1-1.3.0, ``requirements.txt`` must be in ``source_dir``.
+For MXNet 0.12.1-1.2.1 and 1.4.0-1.4.1, ``requirements.txt`` is not supported for inference.
+For MXNet 1.3.0, ``requirements.txt`` must be in ``source_dir``.

 A ``requirements.txt`` file is a text file that contains a list of items that are installed by using ``pip install``.
 You can also specify the version of an item to install.
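
As a sketch of the training-time setup described above, the following assumes a folder named ``my_code`` containing both the entry point and ``requirements.txt``, and uses SDK v1-style argument names consistent with the rest of these docs; the role ARN, versions, S3 path, and instance settings are illustrative.

.. code:: python

    from sagemaker.mxnet import MXNet

    # Assumed layout (illustrative):
    #   my_code/
    #       train.py          <- training entry point
    #       requirements.txt  <- e.g. "numpy" and "scipy==1.2.1", one item per line
    estimator = MXNet(
        entry_point="train.py",
        source_dir="my_code",  # requirements.txt here is pip-installed before training
        role="arn:aws:iam::111122223333:role/SageMakerRole",  # hypothetical role ARN
        framework_version="1.6.0",  # requirements.txt works for 1.3.0 and higher during training
        py_version="py3",
        train_instance_count=1,
        train_instance_type="ml.m5.xlarge",
    )

    estimator.fit("s3://my-bucket/training-data")  # hypothetical S3 path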
doc/frameworks/tensorflow/using_tf.rst (22 additions & 0 deletions)
@@ -178,6 +178,28 @@ To use Python 3.7, please specify both of the args:

 Where the S3 url is a path to your training data within Amazon S3.
 The constructor keyword arguments define how SageMaker runs your training script.

+Specify a Docker image using an Estimator
+-----------------------------------------
+
+There are use cases, such as extending an existing pre-built Amazon SageMaker image, that require specifying a Docker image when creating an Estimator by directly providing the ECR URI instead of the Python and framework version. For a full list of available container URIs, see `Available Deep Learning Containers Images <https://github.com/aws/deep-learning-containers/blob/master/available_images.md>`__. For more information on using Docker containers, see `Use Your Own Algorithms or Models with Amazon SageMaker <https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms.html>`__.
+
+When specifying the image, you must use the ``image_name=''`` arg to replace the following arg:
+
+- ``py_version=''``
+
+You should still specify the ``framework_version=''`` arg, because the SageMaker Python SDK accommodates differences in the images based on the version.
+
+The following example uses the ``image_name=''`` arg to specify the container image, Python version, and framework version.
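
A minimal sketch of such an example follows; the ECR URI shown is an illustrative Deep Learning Containers image (pick a real one from the list linked above), and the role ARN, entry point, S3 path, and instance settings are likewise assumptions.

.. code:: python

    from sagemaker.tensorflow import TensorFlow

    tf_estimator = TensorFlow(
        entry_point="train.py",  # hypothetical training script
        role="arn:aws:iam::111122223333:role/SageMakerRole",  # hypothetical role ARN
        # image_name replaces py_version; the URI below is illustrative.
        image_name="763104351884.dkr.ecr.us-east-1.amazonaws.com/tensorflow-training:2.1.0-cpu-py36-ubuntu18.04",
        framework_version="2.1.0",  # still set so the SDK can account for version differences
        script_mode=True,
        train_instance_count=1,
        train_instance_type="ml.m5.xlarge",
    )

    tf_estimator.fit("s3://my-bucket/training-data")  # hypothetical S3 path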
The Processing component enables you to submit processing jobs to Amazon SageMaker directly from a Kubeflow Pipelines workflow. For more information, see `SageMaker Processing Kubeflow Pipeline component <https://github.com/kubeflow/pipelines/tree/master/components/aws/sagemaker/process>`__.