Provide SageMaker compatible docker container #147
Comments
One thing I’m unsure about is how we dispatch between different models. Also, do we want to support training of more than one model per run?
Not sure why we would want to do that: the way I see it, each job would be one training run, and should produce one model artifact that should be servable.
Although there is a point to be made that it could be simpler to use, SageMaker’s model is that one training configuration is one job.
The approach used in #151 is to dispatch the
Which is not available during inference, except if we store it as part of the model. And that would imply that all models have to go through training. Not sure I like that.
Another thought: since we have models which don't require offline training, we can make the shell accept `Forecaster = Union[Predictor, Estimator]`. That way we would skip training if a
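The `Forecaster = Union[Predictor, Estimator]` idea could be sketched roughly as below; `Predictor`, `Estimator`, and the `run` entry point are illustrative stand-ins for this comment's suggestion, not the actual gluonts classes:

```python
from typing import Union

class Predictor:
    """A model that is ready to serve without offline training (illustrative)."""
    def predict(self, dataset):
        # Placeholder forecast: one value per series.
        return [0.0 for _ in dataset]

class Estimator:
    """A model that must be trained to produce a Predictor (illustrative)."""
    def train(self, dataset) -> Predictor:
        # Real training would fit parameters here.
        return Predictor()

Forecaster = Union[Predictor, Estimator]

def run(forecaster: Forecaster, train_data) -> Predictor:
    # Skip the training phase entirely when handed a ready Predictor.
    if isinstance(forecaster, Estimator):
        return forecaster.train(train_data)
    return forecaster
```

The shell then only ever hands a `Predictor` to the serving side, regardless of whether training happened.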
@jaheba at inference time there should be no
That was exactly my point. During training we can pass a hyper-parameter to select the right
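The hyper-parameter dispatch discussed above could look something like the following sketch; the `forecaster_name` hyper-parameter and the dotted-class-path scheme are assumptions for illustration, not necessarily the scheme adopted in #151:

```python
import importlib

def forecaster_from_hyperparameters(hyperparameters: dict):
    """Resolve a forecaster class from a dotted path in the hyper-parameters.

    Hypothetical example value:
        {"forecaster_name": "gluonts.model.deepar.DeepAREstimator"}
    """
    dotted = hyperparameters["forecaster_name"]
    module_name, _, class_name = dotted.rpartition(".")
    module = importlib.import_module(module_name)
    return getattr(module, class_name)
```

As noted in the thread, this only works during training, where hyper-parameters are available; at inference time the choice would have to be stored with the model artifact instead.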
Description
We should think about creating a GluonTS container which can be used in Amazon SageMaker. It could behave similarly to SageMaker DeepAR w.r.t. data loading (`train` and `test` channels) and evaluation of models.

Having these containers could make it a lot easier for customers to try out GluonTS models, or even use them in a production setting.
This would require two things:
References