Save models with SavedModel #329
@upalchowdhury Ludwig at the moment saves models using `tf.train.Saver`.
@w4nderlust mlflow, for example, uses SavedModel as the default format for saving/serving (https://mlflow.org/docs/latest/python_api/mlflow.tensorflow.html#mlflow.tensorflow.save_model), and it's kind of a standard now.
@ifokeev thanks for the additional info. I was already pretty convinced; now I'm even more convinced.
@w4nderlust I'm working now on converting Saver to SavedModel; I could put the code in this thread if it can help. But sorry, I don't have time for a full PR.
@ifokeev that would be appreciated. As long as you can run the tests and confirm that it works, you can still do a PR even if it's not 100% complete. Either way, any help is welcome.
@w4nderlust I'm using this code to build a SavedModel, but getting an empty result;
it feels like I get the input/output tensors wrong =\
added … to …, but getting … =\
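For context on getting the input/output tensors wrong: TF1 graph tensors are addressed as `op_name:index`, and a bare operation name refers to the operation, not the tensor, which is a common source of empty or mismatched signatures. A minimal name-mapping sketch (plain Python, with hypothetical feature names; real Ludwig tensor names differ, as noted later in the thread):

```python
def tensor_name(op_name, output_index=0):
    """Build a TF1-style tensor name from an operation name.

    graph.get_tensor_by_name expects 'op_name:index'; 'op_name' alone
    refers to the operation, not one of its output tensors.
    """
    return "{}:{}".format(op_name, output_index)

# Hypothetical feature names, standing in for placeholders in the graph.
input_features = ["text", "category"]
input_tensor_names = {f: tensor_name(f) for f in input_features}

print(input_tensor_names)  # {'text': 'text:0', 'category': 'category:0'}
```

These string names are what would then be resolved to actual tensors before building the SavedModel signature.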
Are you able to load the model you saved with …?
@w4nderlust … but why?
Not 100% sure; it probably has to do with how the builder internally uses the provided session. I guess the problem is that by doing this you close the session at the end of the `with` block (as my understanding is that …)
Looks like it's OK to leave the session open; I will check with forced …
Mmm, I thought the context manager would close it... anyway, better like this I guess :)
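The session-lifetime point discussed here can be illustrated without TensorFlow: a `with` block closes the resource on exit, so anything that needs the live session (like `builder.save()` in TF1) must run inside the block. A toy sketch with a stand-in class (not real TensorFlow code):

```python
class FakeSession:
    """Stand-in for a TF1 Session: usable only while open."""
    def __init__(self):
        self.closed = False

    def run(self):
        if self.closed:
            raise RuntimeError("Attempted to use a closed Session.")
        return "ok"

    def close(self):
        self.closed = True

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()  # the context manager closes the session on exit

with FakeSession() as sess:
    inside = sess.run()   # fine: session still open

try:
    sess.run()            # fails: the with-block already closed it
    after = "no error"
except RuntimeError:
    after = "error"

print(inside, after)  # ok error
```

This mirrors why calling the builder's save step after the `with` block fails: the session it captured is already closed.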
@ifokeev Is this going to be part of a new PR? I would love to test it with TF Serving or AI Platform serving.
@gogasca I'm going to return to this task in 2-3 days. The code from #329 (comment) works; I will add the full script later.
Hey @ifokeev …
@jadjoubran my full script looks like this:
…
Thank you so much! It's working ✅ |
@ifokeev thanks for the code. Just to be sure: with …?
@w4nderlust MLFlow accepts only …
@ifokeev and @upalchowdhury I would be glad if you could check out this branch and tell me if it works for you: https://github.com/uber/ludwig/tree/save_for_serving
…odel.py and exposes it in the API with a save_for_serving() function (#425)
Dear @w4nderlust and @ifokeev, I am trying to convert the saved MNIST model to ONNX format. I have tried the new implementation and it is still not working properly. One problem seems to be the way the session is used. The current implementation:
leads to the error "The Session graph is empty." when converting the model with tf2onnx.
When changing the structure to the following (like in the example of @ifokeev):
a graph is exported. However, the inputs are not found in the graph, as the following error suggests:
BTW, tf2onnx.convert works reliably with the other saved_models I tried.
@w4nderlust your code works correctly, but only with the session being closed:
Without it you'll get … Also, I'm getting the same error with the output placeholder (why?)
I understand this as you have …, but I don't understand how to create a SavedModel with the inputs and outputs separated. Some info: https://stackoverflow.com/a/50899659
Quick note: after a full day of research I found the solution. My previous function for collecting tensors was wrong, and @w4nderlust copied it as is. A version that could work:
The output tensors have different names in Ludwig (not …).
There's an error in that code:
This adds the outputs to the inputs dict. As other people are referring to this code, it's important to correct it.
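The mix-up being pointed out here can be shown with plain dicts: calling `update()` with the output tensors merges them into the inputs mapping, whereas a serving signature needs two separate mappings. A minimal sketch with hypothetical feature and tensor names (not Ludwig's actual ones):

```python
# Hypothetical tensor names, standing in for what Ludwig exposes in the graph.
input_tensors = {"text": "text:0"}
output_tensors = {"class": "class/predictions:0"}

# Buggy: outputs folded into the inputs dict via update().
buggy_inputs = dict(input_tensors)
buggy_inputs.update(output_tensors)   # 'class' now wrongly appears as an input

# Correct: keep the two mappings separate, as a signature definition expects
# (e.g. the inputs= and outputs= arguments of TF1's build_signature_def).
signature = {
    "inputs": dict(input_tensors),
    "outputs": dict(output_tensors),
}

print("class" in buggy_inputs)          # True  (the bug)
print("class" in signature["inputs"])   # False (the fix)
```

Keeping the dicts separate is what lets the serving runtime distinguish what must be fed from what is fetched.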
@w4nderlust I use it this way because I want to input all fields (inputs and outputs). The output field could be empty. FYI: if you don't need it, just remove …
@ifokeev I believe there's a misunderstanding about what an input feature and an output feature are. Output features cannot be inputs; if they were, they would be called inputs instead. And if for some reason you want to provide the same values as both input and output (seems weird to me, as the best model will be the one that ignores everything else and only uses that input for predicting the output, making it kind of useless, but maybe you have other reasons for doing it), then you need two identical columns in your data with different names, one to use as input and one to use as output (like in an autoencoder, for instance). I'm considering introducing an optional "id" field to allow loading two features from the same column while giving them different identifiers, which would solve the issue, but I haven't implemented it yet, nor have I decided on the best way to implement it.
The latest merged PR solves the issues with SavedModel. There's also a test now that one can use as reference example code: https://github.com/uber/ludwig/blob/master/tests/integration_tests/test_savedmodel.py
Hello,
I know there is issue #55 for exposing the model, and I have read it. It was suggested that a Ludwig-developed model can be used with TensorFlow Serving, and I have been trying to convert my model into a format I can use there.
For example, I followed the readallcat.csv example and developed a model in Ludwig, which is saved in this format:

```
checkpoint
model_hyperparameters.json
model_weights.data-00000-of-00001
model_weights.index
model_weights.meta
train_set_metadata.json
```
Now, for TensorFlow Serving, the model needs to be in the format below, as described in https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md:

```
assets/
variables/
    variables.data-?????-of-?????
    variables.index
saved_model.pb
```

I have been trying to use the TensorFlow SavedModel API, but no luck with the conversion so far. If you could provide some guidance, it would be a big help. Thank you.
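A quick pre-flight check for the on-disk layout described above can be done in plain Python, without TensorFlow installed (this only checks file names, not that the protobuf is valid):

```python
import os
import tempfile

def looks_like_saved_model(export_dir):
    """Heuristic check that a directory has the SavedModel on-disk layout."""
    return (os.path.isfile(os.path.join(export_dir, "saved_model.pb"))
            and os.path.isdir(os.path.join(export_dir, "variables")))

# Demo: build a dummy directory with the expected layout.
d = tempfile.mkdtemp()
os.mkdir(os.path.join(d, "variables"))
open(os.path.join(d, "saved_model.pb"), "w").close()

print(looks_like_saved_model(d))                   # True
print(looks_like_saved_model(tempfile.mkdtemp()))  # False: empty dir
```

A checkpoint directory like the one above (`checkpoint`, `model_weights.*`) would fail this check, which is exactly why TensorFlow Serving rejects it until it is re-exported as a SavedModel.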