How to Deploy our custom models on seldon-core #104

Closed
irrationaldomain opened this issue Mar 1, 2018 · 5 comments
@irrationaldomain

Hi,
I was able to get Seldon Core up and running on a Kubernetes cluster and successfully deployed your example models. My aim now is to deploy our own custom model on seldon-core. What is the procedure, and what requirements or conventions do we have to follow? FYI, I have a simple model that uses TensorFlow and Flask, so what do I need to consider before wrapping it?
One more thing I want to know: does the model we deploy have to be a saved model only?
Thanks in advance.

@ukclivecox
Contributor

Wrapping your TensorFlow Python model is very easy.

There are detailed instructions here.
In our kubeflow end-to-end example you can see an example TensorFlow runtime component and the script to wrap it using our Docker image for Python.

To answer your last question, if I understand it correctly: you can load your model parameters and weights from any file system that can be attached to Kubernetes. Again, see this deployment file, which uses a volume containing the saved model weights that the runtime loads on startup.
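For illustration, a minimal sketch of the model class the Python wrapper looks for. The file/class name MyModel and the checkpoint path are placeholders, and the loading code depends on your own TensorFlow graph:

# MyModel.py -- the file name must match the class name
import tensorflow as tf

class MyModel(object):
    def __init__(self):
        # Restore saved weights at startup, e.g. from a volume
        # mounted into the container (path is illustrative)
        self.session = tf.Session()
        saver = tf.train.import_meta_graph("/mnt/model/model.meta")
        saver.restore(self.session, "/mnt/model/model")

    def predict(self, X, features_names):
        # X arrives as a numpy array built from the request payload;
        # return an array-like of predictions
        return self.session.run("output:0", feed_dict={"input:0": X})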

@irrationaldomain
Copy link
Author

After Python wrapping I got an ImportError pointing to microservice.py, as follows:

Traceback (most recent call last):
  File "microservice.py", line 154, in <module>
    interface_file = importlib.import_module(args.interface_name)
  File "/usr/local/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
ImportError: No module named demo-model

Here demo-model is my Docker image name.

@ukclivecox
Contributor

This looks like an issue in the wrapping process. In the example in the docs we have:

docker run -v /path/to/model/dir:/my_model seldonio/core-python-wrapper:0.7 /my_model MnistClassifier 0.1 seldonio

which refers to a class:

class MnistClassifier(object): # The file is called MnistClassifier.py

Note that the class name and the file name match. Is this the case in your test?
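For concreteness, the model directory passed to the wrapper would look roughly like this (the requirements.txt contents are illustrative):

/path/to/model/dir/
    MnistClassifier.py    # defines class MnistClassifier(object)
    requirements.txt      # pip dependencies for the model, e.g. tensorflow

Note also that the wrapper arguments are the model folder, model name, version, and Docker repo, so "MnistClassifier" above is the class/file name, not the name of the Docker image that gets built.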

@irrationaldomain
Author

I have deployed a custom model that takes a simple request (e.g. [2000]). It runs fine locally, but I am not able to get a response when I run it on Seldon. Is there a specific format for sending requests?
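For reference, a minimal sketch of the SeldonMessage format the Seldon prediction API expects. The gateway host and endpoint path are assumptions that depend on how the API is exposed in your cluster, and the API server may also require an auth token:

import requests

# SeldonMessage payload: one row with a single feature value
payload = {"data": {"ndarray": [[2000]]}}

# Hypothetical gateway address -- substitute your own
r = requests.post(
    "http://<seldon-api-gateway>/api/v0.1/predictions",
    json=payload,
)
print(r.json())  # the response is also a SeldonMessage with a "data" field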

@ukclivecox
Contributor

Please reopen if this is still an issue.
