Add support for custom predict function in non-colab environment #94

Merged: 11 commits into PAIR-code:master, Jun 18, 2020

Conversation

@lanpa (Contributor) commented Jun 1, 2020

This PR enables the What-If Tool to load a custom_predict_fn when running in a local TensorBoard server. The modification tries to load a Python function named custom_predict_fn from a file named custom_wit_predict_fn.py in the directory TensorBoard was launched from. If the function exists, inference requests are redirected to that function.
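
A minimal sketch of that loading behavior, assuming importlib is used to pull the module from the current working directory (the helper name below is illustrative; the PR's actual implementation may differ, and the approach is revised later in this review):

import importlib.util
import os

def load_custom_predict_fn():
  """Returns custom_predict_fn from custom_wit_predict_fn.py if it exists."""
  path = os.path.join(os.getcwd(), 'custom_wit_predict_fn.py')
  if not os.path.exists(path):
    return None  # fall back to the normal serving-based inference path
  spec = importlib.util.spec_from_file_location('custom_wit_predict_fn', path)
  module = importlib.util.module_from_spec(spec)
  spec.loader.exec_module(module)
  return getattr(module, 'custom_predict_fn', None)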

Tested locally with a compiled python wheel file along with the demo code in
pytorch/serve#418

@dhanainme

@lanpa thanks for your PR.

Can you please create an issue describing your use case and link it to this PR?
Also, can you provide detailed steps for testing this PR in a minimal environment?

@jameswex (Collaborator) commented Jun 2, 2020

Thanks @lanpa, it is really cool to see WIT custom prediction functions working in TensorBoard! I'm checking with the TensorBoard team about this change, as it's possible they might want to avoid having the default build of TensorBoard (which WIT is a part of) load and execute arbitrary Python functions not included in their standard build process.

Will update this thread when I have more info.

@stephanwlee FYI

@lanpa (Contributor, Author) commented Jun 2, 2020

Hi, @dhanainme

A working environment requires two components: the pip package and the custom_wit_predict_fn.

The pip package can be built with

  bazelisk run tensorboard_plugin_wit/pip_package:build_pip_package

followed by something like

  pip install --force /tmp/wit-pip/release/dist/tensorboard_plugin_wit-1.6.0-py3-none-any.whl

Note that this overwrites the old TensorBoard in the env :)

And here is a minimal custom_wit_predict_fn.py that returns a random classification score for each example. (Just a mock, so no TorchServe is required.) Put this file in the directory where you launch the TensorBoard server.

import random

NUM_POSSIBLE_CLASSES = 3

def custom_predict_fn(examples, serving_bundle):
  number_of_examples = len(examples)
  results = []
  for _ in range(number_of_examples):
    scores = []
    for clsid in range(NUM_POSSIBLE_CLASSES):
      scores.append(random.random())
    results.append(scores)  # classification: one score per class
    # results.append(scores[0])  # appending a single float instead would make this a regression result
  return results

Finally, in the TensorBoard front end, fill in the inference address (localhost:8080), model name (iris_v100), and path to examples (iris.csv), and you should be able to see the random prediction visualization.
(https://gist.githubusercontent.com/curran/a08a1080b88344b0c8a7/raw/639388c2cbc2120a14dcf466e85730eb8be498bb/iris.csv)
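
For the real pytorch/serve#418 setup (instead of the random mock above), a custom_predict_fn could forward each example to the TorchServe endpoint configured in the WIT setup dialog. This is only a sketch: the /predictions/<model_name> route matches TorchServe's default inference API, but the payload format and the shape of the handler's response are assumptions here.

import requests  # assumed to be available in the TensorBoard environment

def custom_predict_fn(examples, serving_bundle):
  # serving_bundle.inference_address and serving_bundle.model_name carry the
  # values typed into the WIT setup dialog (e.g. localhost:8080, iris_v100).
  url = 'http://{}/predictions/{}'.format(
      serving_bundle.inference_address, serving_bundle.model_name)
  results = []
  for example in examples:  # tf.train.Example protos
    # Flatten the example's features into a plain dict; the payload format
    # your TorchServe handler actually expects may differ.
    payload = {}
    for name, feature in example.features.feature.items():
      kind = feature.WhichOneof('kind')
      values = getattr(feature, kind).value
      payload[name] = values[0] if len(values) == 1 else list(values)
    response = requests.post(url, json=payload)
    # Convert the handler's response into the per-class score list that WIT
    # expects (same shape as the random mock above).
    results.append(response.json())
  return results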

@@ -318,7 +329,8 @@ def _infer(self, request):
         model_signatures[model_num],
         request.args.get('use_predict') == 'true',
         request.args.get('predict_input_tensor'),
-        request.args.get('predict_output_tensor'))
+        request.args.get('predict_output_tensor'),
+        custom_predict_fn=custom_predict_fn)
Collaborator commented on the diff:

If you set up WIT to compare two models, then this would use the same predict fn for both models. Instead you need a way to set/store one custom predict fn for the primary model, and to separately set a second one when comparing two models. This could be just a function named custom_wit_predict_fn.custom_compare_predict_fn or similar.

Contributor (Author) replied:

Oops, I didn't know that WIT can compare the results of different models concurrently. Did you mean the "ANOTHER MODEL FOR COMPARISON" button? How about adding another textbox to the "Set up your data and model" page and letting the user fill in the function names?

Collaborator replied:

Yes, if the user adds another model through that button, the inference address and model name for that second model will be set in the serving_bundle when calling into the second model.

So your example code that uses the serving bundle info to make the pyserve call should actually work correctly for model comparison as written now.

You can verify that by trying with two separate pyserve'd models with different results and checking that it works in WIT as expected.

Review comments on utils/inference_utils.py (outdated, resolved)
@jameswex (Collaborator) commented Jun 2, 2020

As an FYI, with this code, since a custom predict function is used, the inference address, model name (and optional model version and model signature) set in the UI are all ignored by the tool, although the UI still requires them to be filled out.

@lanpa (Contributor, Author) commented Jun 2, 2020

@jameswex Thanks for your review! I will play with the "comparing two models" feature later and see what can be improved.

@jameswex (Collaborator) commented Jun 3, 2020

@lanpa I've been discussing this with the TensorBoard team (thanks @wchargin), and we're still working on the best approach for pointing to the custom predict fn, as opposed to the current PR's attempted import from a hardcoded path. One possible path might be through a runtime argument passed to TensorBoard on startup, which is an approach that other plugins have used for setting dynamic parameters that they consume.

@jameswex (Collaborator) commented Jun 4, 2020

Feedback from TB folks:

They would be okay with the ability to add a custom Python predict fn, if the path to the Python file were given as an explicit command-line flag that clearly disclaims the arbitrary code execution (ACE); maybe --whatif-execute-unsafe-custom-prediction=path/to/foo.py. In particular it seems prudent to take a file path (to be read and compiled/exec'd) rather than a module path (to be loaded with import_module), because the latter could be disguised as something like --execute_prediction_code=false with an evil false.py or false/__init__.py.
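
A rough sketch of that file-path approach (read the file, then compile and exec it); the helper name is illustrative and not from the PR:

def load_predict_fn_from_path(path):
  """Executes the Python file at `path` and returns its custom_predict_fn.

  WARNING: this runs arbitrary code from the given file, which is why the
  flag name should spell out that it is unsafe.
  """
  with open(path, 'r') as f:
    source = f.read()
  namespace = {}
  exec(compile(source, path, 'exec'), namespace)  # arbitrary code execution
  return namespace.get('custom_predict_fn')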

You can define flags on TBLoader by defining the define_flags method, and then the plugin can pull those from the TBContext by referring to context.flags.

Docs:
https://github.com/tensorflow/tensorboard/blob/f8543cfdbc243bc24474bf3e4ad49dc94072f1fa/tensorboard/plugins/base_plugin.py#L316-L328

Example, in the profile plugin (loaded dynamically):
https://github.com/tensorflow/profiler/blob/2e0c082ddc69e87995e92620126113f2eaeb6cfb/plugin/tensorboard_plugin_profile/profile_plugin_loader.py#L26-L33
https://github.com/tensorflow/profiler/blob/2e0c082ddc69e87995e92620126113f2eaeb6cfb/plugin/tensorboard_plugin_profile/profile_plugin.py#L238

The What-If Tool already defines a loader, so you’ll want to expand it:

class WhatIfToolPluginLoader(base_plugin.TBLoader):
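  # A hedged sketch of how the loader could be expanded, following the profile
  # plugin example above. The flag name is the one suggested in this thread;
  # the argument-group label and help text are assumptions, not the PR's code.

  def define_flags(self, parser):
    group = parser.add_argument_group('whatif plugin')
    group.add_argument(
        '--whatif-execute-unsafe-custom-prediction',
        metavar='path/to/foo.py',
        type=str,
        default='',
        help='Path to a Python file defining custom_predict_fn(examples, '
             'serving_bundle). WARNING: that file is read and executed as '
             'arbitrary code; only point it at code you trust.')

  def load(self, context):
    # The existing load() implementation stays as-is; the plugin can read the
    # flag value from the TBContext, e.g.
    # context.flags.whatif_execute_unsafe_custom_prediction, and exec the
    # referenced file to obtain custom_predict_fn (see the earlier sketch).
    ...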

@lanpa (Contributor, Author) commented Jun 15, 2020

@jameswex Your advice was really helpful. I have changed the code so that the custom function is now specified by passing the additional argument --wit-use-unsafe-custom-prediction YOUR_CUSTOM_PREDICT_FUNCTION.py, following the docs and examples. Thanks.
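
For example, launching a local TensorBoard with the flag might look like this (the logdir path is just a placeholder, and note that the flag prefix later changes from wit to whatif per the feedback below):

  tensorboard --logdir ./logs --wit-use-unsafe-custom-prediction ./custom_wit_predict_fn.py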

@jameswex (Collaborator) left a review:

Thanks! I just tested this and have the two comments in this review. Outside of that, will have the TensorBoard folks take a look and we'll get it into the next WIT release.

Review comments on tensorboard_plugin_wit/wit_plugin.py (resolved)
Review comments on utils/inference_utils.py (outdated, resolved)
@jameswex (Collaborator) commented

Code is almost ready to go once the comments above are taken care of, along with the following additional feedback from the TB folks:

  • the flag prefix should be exactly the string used as the plugin name in TensorBoard, which is "whatif", not "wit"
  • the docstrings need to make it clearer (beyond the name of the flag itself) that the flag executes arbitrary code, or warn why it should be used with caution. More documentation is also needed in general: there are no examples of the required function signature, it isn't clear that the function must be named exactly "custom_predict_fn", there is no usage example, etc.

@lanpa (Contributor, Author) commented Jun 16, 2020

@jameswex Thanks for the review again. Besides the docstrings, I have also updated the README.

@lanpa lanpa requested a review from jameswex June 17, 2020 13:17
@jameswex (Collaborator) left a review:

Code looks good to me! Will have one other WIT dev review it. I might add/adjust some of the documentation in a follow-up PR before we push a new version to pip (planning to publish a new version before end of June).

@jameswex (Collaborator) commented

@tolga-b please check this out

@tolga-b (Collaborator) left a review:

Thank you @lanpa, I added some comments. I think it looks good otherwise!

Review comments on tensorboard_plugin_wit/wit_plugin.py and utils/inference_utils.py (resolved)
@lanpa (Contributor, Author) commented Jun 18, 2020

@tolga-b Thanks for the review. I think all the requested changes have been addressed.

@tolga-b (Collaborator) left a review:

Thank you, looks good to me.

@jameswex jameswex merged commit cfee99f into PAIR-code:master Jun 18, 2020
@jameswex (Collaborator) commented

Thanks @lanpa for this great new functionality and for all the work in the review process. This new functionality will be included in the next pip release, later this month.
