SageMaker uploading model artifacts without compressing and in a different directory #2392
Replies: 4 comments
-
Hi @inderpartap, it looks like you are saving the model under the estimator-supplied `model_dir` (an S3 location in TensorFlow script mode). In your training script, you need to save your model in the `/opt/ml/model` directory (exposed as the `SM_MODEL_DIR` environment variable) so that SageMaker compresses it into `model.tar.gz` and uploads it to the job's `output/` location.
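A minimal sketch of what that fix looks like in a training script, assuming TensorFlow 2.x and the standard `SM_MODEL_DIR` convention (the model itself is a placeholder, not the poster's):

```python
import argparse
import os

import tensorflow as tf

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    # SageMaker sets SM_MODEL_DIR=/opt/ml/model inside the training container.
    parser.add_argument(
        "--sm-model-dir",
        type=str,
        default=os.environ.get("SM_MODEL_DIR", "/opt/ml/model"),
    )
    args, _ = parser.parse_known_args()

    # Placeholder model; substitute the real Keras model and training loop.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="adam", loss="mse")

    # Save under a numeric version directory so TensorFlow Serving can find it.
    # Everything under /opt/ml/model is tarred into output/model.tar.gz.
    model.save(os.path.join(args.sm_model_dir, "1"))
```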
-
Also, I read somewhere that you need to have your models saved as `<version>/saved_model.pb` (a numeric version directory) for TensorFlow Serving to find them. Is that right?
-
The format of the model output is determined by your training script's model save function; SageMaker uploads whatever it finds in `/opt/ml/model`. And yes, tensorflow-model-server will look for a numeric version number under the model-name directory (e.g. `<model_name>/1/saved_model.pb`).
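For anyone whose artifacts already landed in S3 uncompressed, here is a hedged workaround sketch (not from this thread): download the loose SavedModel files, repackage them into the `model.tar.gz` layout with a numeric version directory at the root, and upload the archive where deployment expects it. All bucket and key names are placeholders.

```python
import os
import tarfile

import boto3

bucket = "my-default-bucket"                       # placeholder
src_prefix = "my-training-job/model/model/1/"      # where the loose artifacts ended up
dst_key = "my-training-job/output/model.tar.gz"    # where deployment will look

s3 = boto3.client("s3")
for obj in s3.list_objects_v2(Bucket=bucket, Prefix=src_prefix).get("Contents", []):
    rel = os.path.relpath(obj["Key"], src_prefix)
    dest = os.path.join("export", "1", rel)
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    s3.download_file(bucket, obj["Key"], dest)

# The archive should contain <version>/saved_model.pb at its root,
# the numeric version directory tensorflow-model-server scans for.
with tarfile.open("model.tar.gz", "w:gz") as tar:
    tar.add("export/1", arcname="1")

s3.upload_file("model.tar.gz", bucket, dst_key)
```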
-
I am facing the same issue. Has anyone resolved it?
-
Describe the bug
I have a Keras model that is trained using an entry_point script, and I use the following code in that script to store the model artifacts.
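The original snippet is not shown; a hypothetical reconstruction that would produce the S3 layout reported below (saving straight to the estimator-supplied `model_dir`, which is an S3 URI in TensorFlow script mode) might look like this:

```python
import argparse
import os

import tensorflow as tf

parser = argparse.ArgumentParser()
# In TensorFlow script mode, SageMaker passes --model_dir as an S3 URI,
# e.g. s3://<default_bucket>/<training_name>/model
parser.add_argument("--model_dir", type=str)
args, _ = parser.parse_known_args()

# Placeholder model standing in for the real Keras model and training code.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

# Writing here sends the SavedModel directly to S3, uncompressed, at
# .../model/model/1/saved_model.pb -- bypassing the model.tar.gz packaging.
model.save(os.path.join(args.model_dir, "model", "1"))
```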
Ideally, the model_dir should be `/opt/ml/model`, and SageMaker should automatically move the contents of this folder to S3 as `s3://<default_bucket>/<training_name>/output/model.tar.gz`.
When I run `estimator.fit({'training': training_input_path})`, the training is successful, but the CloudWatch logs show the following:

Even then, SageMaker does store my model artifacts, with the only difference being that instead of storing them in `s3://<default_bucket>/<training_name>/output/model.tar.gz`, they are now stored unzipped as `s3://<default_bucket>/<training_name>/model/model/1/saved_model.pb` along with the `variables` and `assets` folders. Because of this, the `estimator.deploy()` call fails, as it is unable to find the artifacts in the `output/` directory in S3.

To reproduce
Estimator code:
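The original estimator code is not shown; a representative setup consistent with the description (entry point name, framework version, instance type, and paths are assumptions, not the poster's actual values) would be:

```python
import sagemaker
from sagemaker.tensorflow import TensorFlow

training_input_path = "s3://<default_bucket>/<training_name>/data"  # placeholder

estimator = TensorFlow(
    entry_point="train.py",            # the entry_point script described above
    role=sagemaker.get_execution_role(),
    instance_count=1,
    instance_type="ml.m5.xlarge",      # assumed instance type
    framework_version="2.4.1",         # assumed TF version
    py_version="py37",
)

estimator.fit({"training": training_input_path})
```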
System information
A description of your system. Please provide:
Additional context
Add any other context about the problem here.