Endpoint inference with trained HuggingFaceEstimator fails #13
@vdantu @ahsan-z-khan I thought when running:

```
  File "/opt/conda/lib/python3.6/site-packages/sagemaker_huggingface_inference_toolkit/handler_service.py", line 207, in handle
    self.initialize(context)
  File "/opt/conda/lib/python3.6/site-packages/sagemaker_huggingface_inference_toolkit/handler_service.py", line 75, in initialize
    self.validate_and_initialize_user_module()
  File "/opt/conda/lib/python3.6/site-packages/sagemaker_huggingface_inference_toolkit/handler_service.py", line 239, in validate_and_initialize_user_module
    user_module = importlib.import_module(user_module_name)
  File "/opt/conda/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/.sagemaker/mms/models/model/code/train.py", line 2, in <module>
    from sklearn.metrics import accuracy_score, precision_recall_fscore_support
ModuleNotFoundError: No module named 'sklearn'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/mms/service.py", line 108, in predict
    ret = self._entry_point(input_batch, self.context)
  File "/opt/conda/lib/python3.6/site-packages/sagemaker_huggingface_inference_toolkit/handler_service.py", line 231, in handle
    raise PredictionException(str(e), 400)
No module named 'sklearn' : 400
```

It is even more interesting since the
@la-cruche: Can this issue be closed?
Hi,
I `.deploy()` the `model.tar.gz` created by two sample notebooks, and both deployments fail. It seems that the dependencies are not the same between training and inference. Could this be automated, or at least documented? I used to think the `config.json` would be enough for inference; I don't understand why SM Hosting wants to use the training script (in theory it doesn't need to).
- `.deploy()` works correctly on both CPU and GPU, but GPU inference fails with `No module named sklearn`
- `.deploy()` works correctly on both CPU and GPU, but GPU inference fails with `No module named datasets`
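As a workaround sketch (not an official fix from the toolkit): the SageMaker Hugging Face inference container installs packages listed in a `code/requirements.txt` inside `model.tar.gz` when the endpoint starts, so the missing `sklearn`/`datasets` imports can be satisfied by repackaging the archive. The helper below is illustrative; the function name and paths are my own assumptions.

```python
# Sketch: repackage an extracted model directory into a new model.tar.gz
# that carries a code/requirements.txt, so the inference container can
# pip-install the extra training-time dependencies at endpoint startup.
# Function name and layout are illustrative assumptions, not toolkit API.
import os
import tarfile


def add_inference_requirements(model_dir: str, requirements: list, out_path: str) -> str:
    """Write code/requirements.txt into model_dir, then tar everything up."""
    code_dir = os.path.join(model_dir, "code")
    os.makedirs(code_dir, exist_ok=True)
    with open(os.path.join(code_dir, "requirements.txt"), "w") as f:
        f.write("\n".join(requirements) + "\n")
    with tarfile.open(out_path, "w:gz") as tar:
        # Add each top-level entry with a relative arcname so the archive
        # root contains config.json, code/, etc. (directories are added
        # recursively by tarfile).
        for name in os.listdir(model_dir):
            tar.add(os.path.join(model_dir, name), arcname=name)
    return out_path
```

The rebuilt archive can then be uploaded to S3 and passed as `model_data` when creating the model, instead of the original `model.tar.gz`.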