Add support for custom predict function in non-colab environment #94
Conversation
@lanpa thanks for your PR. Can you please create an issue describing your use case and link this PR? |
Thanks @lanpa, it is really cool to see WIT custom prediction functions working in TensorBoard! I'm checking with the TensorBoard team about this change, as it's possible they might want to avoid having the default build of TensorBoard (which WIT is a part of) load and execute arbitrary Python functions not included in their standard build process. Will update this thread when I have more info. @stephanwlee FYI |
Hi, @dhanainme. A working environment requires two components: the pip package and the custom_wit_predict_fn. The pip package can be built from this branch; note that installing it overwrites the old TensorBoard in the env :) And here is a minimal custom_wit_predict_fn.py:

```python
import random

NUM_POSSIBLE_CLASSES = 3

def custom_predict_fn(examples, serving_bundle):
    number_of_examples = len(examples)
    results = []
    for _ in range(number_of_examples):
        scores = []
        for clsid in range(NUM_POSSIBLE_CLASSES):
            scores.append(random.random())
        results.append(scores)  # one list of class scores per example -> classification
        # results.append(scores[0])  # a single value per example would make this a regression result
    return results
```

Finally, in the TensorBoard front end, fill in the address (localhost:8080), model name (iris_v100), and path to examples (iris.csv), and you should be able to see the random prediction visualization. |
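For reference, here is a rough sketch (not code from this PR) of how such a custom_predict_fn might forward examples to a model server instead of returning random scores. The endpoint path follows TorchServe's /predictions/&lt;model_name&gt; convention, and the request/response format is assumed, so adapt it to whatever your server actually expects:

```python
import json
import urllib.request

def custom_predict_fn(examples, serving_bundle):
    # serving_bundle.inference_address and serving_bundle.model_name come from the
    # "Set up your data and model" dialog (e.g. localhost:8080 and iris_v100).
    url = "http://{}/predictions/{}".format(
        serving_bundle.inference_address, serving_bundle.model_name)
    results = []
    for example in examples:
        # How each example is serialized is an assumption; match your server's API.
        payload = json.dumps({"data": str(example)}).encode("utf-8")
        request = urllib.request.Request(
            url, data=payload, headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(request) as response:
            scores = json.loads(response.read().decode("utf-8"))
        results.append(scores)  # one list of class scores per example
    return results
```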
tensorboard_plugin_wit/wit_plugin.py (Outdated)

```diff
@@ -318,7 +329,8 @@ def _infer(self, request):
         model_signatures[model_num],
         request.args.get('use_predict') == 'true',
         request.args.get('predict_input_tensor'),
-        request.args.get('predict_output_tensor'))
+        request.args.get('predict_output_tensor'),
+        custom_predict_fn=custom_predict_fn)
```
If you set up WIT to have two models, then this would use the same predict fn for both models. Instead, you need a way to set/store one custom predict fn for the primary model, and separately set a second one if comparing two models. This could just be a function named custom_wit_predict_fn.custom_compare_predict_fn, or similarly named.
Oops, I didn't know that WIT can compare the results of different models concurrently. Did you mean "ANOTHER MODEL FOR COMPARISON"? How about adding another textbox to the "Set up your data and model" page and letting the user fill in the function names?
Yes, if the user adds another model through that button, the inference address and model name will be set for that second model in this serving_bundle when calling into the second model.
So your example code that uses the serving bundle info to make the pyserve call should actually work correctly for model comparison as written now.
You can verify that by trying with two separate pyserve'd models with different results and verifying it works in WIT as expected.
As an FYI, with this code, since a custom predict function is used, the setting of inference address, model name (and optional model version and model signature) in the UI are all ignored by the tool, although the UI still requires them to be filled out. |
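To illustrate the point above, here is a small sketch (placeholder scores only, not code from the PR): the same custom_predict_fn can serve both models because WIT invokes it once per model with that model's details in serving_bundle.

```python
def custom_predict_fn(examples, serving_bundle):
    # WIT calls this separately for the primary model and for the comparison model;
    # serving_bundle.inference_address / serving_bundle.model_name hold whichever
    # model is being queried on that call, so routing happens automatically.
    target = "{} / {}".format(serving_bundle.inference_address,
                              serving_bundle.model_name)
    print("Scoring {} examples against {}".format(len(examples), target))
    # ...build and send the request for `target` here, as in the forwarding sketch above...
    return [[0.0, 0.0, 0.0] for _ in examples]  # placeholder class scores
```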
@jameswex Thanks for your review! I will play with the "comparing two models" feature later and see what can be improved. |
@lanpa I've been discussing this with the TensorBoard team (thanks @wchargin), and we're still working on the best approach for pointing to the custom predict fn, as opposed to the current PR's attempted import from a hardcoded path. One possible path might be through a runtime argument passed to TensorBoard on startup, which is an approach that other plugins have used for setting dynamic parameters that they consume. |
Feedback from TB folks: They would be okay with the ability to add a custom Python predict fn, if the path to the Python file were given as an explicit command line flag that clearly disclaims the arbitrary code execution (ACE) risk. You can define flags on the plugin's loader; the profile plugin (loaded dynamically) is an example of this. The What-If Tool already defines a loader, so you'll want to expand it.
|
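A minimal sketch of what this could look like, assuming TensorBoard's base_plugin.TBLoader API; the flag name, class name, and warning text below are illustrative, not necessarily what the PR settled on:

```python
from tensorboard.plugins import base_plugin


class WhatIfToolPluginLoader(base_plugin.TBLoader):
    """Loader that exposes a startup flag pointing at a custom predict fn file."""

    def define_flags(self, parser):
        group = parser.add_argument_group("what_if_tool")
        group.add_argument(
            "--whatif_custom_predict_fn_path",  # illustrative flag name
            type=str,
            default="",
            help="Path to a Python file defining custom_predict_fn. WARNING: the "
                 "file is imported and executed, so only point this at trusted code.")

    def load(self, context):
        # context.flags carries the parsed value, so the plugin can import the
        # user-supplied file at load time.
        ...
```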
@jameswex Your advice is really helpful. I have changed the code so that the custom function is now specified by passing an additional command-line argument to TensorBoard. |
Thanks! I just tested this and have the two comments in this review. Outside of that, will have the TensorBoard folks take a look and we'll get it into the next WIT release.
Code is almost ready to go once the comments above are taken care of and the following:
|
@jameswex Thanks for the review again. Besides the docstring, I have also updated the README. |
Code looks good to me! Will have one other WIT dev review it. I might add/adjust some of the documentation in a follow-up PR before we push a new version to pip (planning to publish a new version before end of June).
@tolga-b please take a look. |
Thank you @lanpa I added some comments. I think it looks good otherwise!
@tolga-b Thanks for the review. I think all the requested changes are fixed. |
Thank you, looks good to me.
Thanks @lanpa for this great new functionality and for all the work in the review process. This new functionality will be included in the next pip release, later this month. |
This PR enables the What-If Tool to load a custom_predict_fn when running in a local TensorBoard server. The modification tries to load a Python function custom_predict_fn from a file named custom_wit_predict_fn.py in TensorBoard's launching folder. If the function exists, inference requests are redirected to that function. Tested locally with a compiled Python wheel file along with the demo code in pytorch/serve#418.
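A hedged sketch (not the PR's actual implementation) of how such a function could be picked up from a custom_wit_predict_fn.py file in the directory TensorBoard was launched from:

```python
import importlib.util
import os

def load_custom_predict_fn():
    path = os.path.join(os.getcwd(), "custom_wit_predict_fn.py")
    if not os.path.exists(path):
        return None  # no custom function; fall back to the normal serving path
    spec = importlib.util.spec_from_file_location("custom_wit_predict_fn", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # executes the user-supplied file
    return getattr(module, "custom_predict_fn", None)
```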