Commit 1b65af1

shadowdragon89 authored and tensorflow-copybara committed
Add a section in the documentation for testing custom op manually.
PiperOrigin-RevId: 271370683
1 parent d16ceaf commit 1b65af1

File tree

1 file changed: +15 additions, −1 deletion


tensorflow_serving/g3doc/custom_op.md

Lines changed: 15 additions & 1 deletion
````diff
@@ -91,10 +91,24 @@ tools/run_in_docker.sh bazel build tensorflow_serving/model_servers:tensorflow_m
 
 ## Serve a model containing your custom op
 
-You can now run the ModelServer binary and start serving your model:
+You can now run the ModelServer binary and start serving a model that contains
+this custom op:
 
 ```bash
 tools/run_in_docker.sh -o "-p 8501:8501" \
   bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server \
   --rest_api_port=8501 --model_name=<model_name> --model_base_path=<model_base_path>
 ```
+
+## Send an inference request to test op manually
+
+You can now send an inference request to the model server to test your custom
+op:
+
+```bash
+curl http://localhost:8501/v1/models/<model_name>:predict -X POST \
+  -d '{"inputs": [[1,2], [3,4]]}'
+```
+
+[This page](https://www.tensorflow.org/tfx/serving/api_rest#top_of_page)
+contains a more complete API for sending REST requests to the model server.
````
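For readers who prefer scripting the test request rather than using `curl`, here is a minimal Python sketch of the same REST call. The helper name and the model name `my_model` are placeholders for illustration; actually sending the request (e.g. with `requests.post(url, data=body)`) would require a running model server from the previous step.

```python
import json

def build_predict_request(model_name, inputs, host="localhost", port=8501):
    """Build the TensorFlow Serving REST predict URL and JSON body,
    mirroring the curl command in the diff above."""
    url = f"http://{host}:{port}/v1/models/{model_name}:predict"
    body = json.dumps({"inputs": inputs})
    return url, body

# Same payload as the curl example: -d '{"inputs": [[1,2], [3,4]]}'
url, body = build_predict_request("my_model", [[1, 2], [3, 4]])
print(url)   # http://localhost:8501/v1/models/my_model:predict
print(body)  # {"inputs": [[1, 2], [3, 4]]}
```

The response from a live server would be a JSON object whose `outputs` field holds the result of running the model (and hence the custom op) on the given inputs.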
