Merge pull request #45061 from dev0x13:patch-1
PiperOrigin-RevId: 348731453
Change-Id: I55e068a529e27040e1c5872ec90d3639ef0d33fb
tensorflower-gardener committed Dec 23, 2020
2 parents d16486b + ae33193 commit d0a597a
Showing 1 changed file with 21 additions and 0 deletions.
21 changes: 21 additions & 0 deletions tensorflow/lite/delegates/xnnpack/README.md
@@ -63,6 +63,27 @@ bazel build -c opt --fat_apk_cpu=x86,x86_64,arm64-v8a,armeabi-v7a \
//tensorflow/lite/java:tensorflow-lite
```
Note that in this case an `Interpreter::SetNumThreads` invocation does not
affect the number of threads used by the XNNPACK engine. To specify the number
of threads available to the XNNPACK engine, you must pass the value manually
when constructing the interpreter. The snippet below illustrates this, assuming
you are using `InterpreterBuilder` to construct the interpreter:
```c++
// Load model
tflite::Model* model;
...
// Construct the interpreter, passing the number of threads
// available to the XNNPACK engine
tflite::ops::builtin::BuiltinOpResolver resolver;
std::unique_ptr<tflite::Interpreter> interpreter;
TfLiteStatus res =
    tflite::InterpreterBuilder(model, resolver)(&interpreter, num_threads);
```

**The XNNPACK engine used by the TensorFlow Lite interpreter uses a single
thread for inference by default.**
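
If you instead enable XNNPACK by applying the delegate explicitly at runtime,
the thread count is supplied through the delegate options rather than at
interpreter construction. A minimal sketch of that flow, assuming `interpreter`
and `num_threads` are defined as in the snippet above:

```c++
#include "tensorflow/lite/delegates/xnnpack/xnnpack_delegate.h"

// Create the XNNPACK delegate with an explicit thread count.
TfLiteXNNPackDelegateOptions xnnpack_options =
    TfLiteXNNPackDelegateOptionsDefault();
xnnpack_options.num_threads = num_threads;
TfLiteDelegate* xnnpack_delegate =
    TfLiteXNNPackDelegateCreate(&xnnpack_options);

// Delegate the graph to XNNPACK before running inference.
if (interpreter->ModifyGraphWithDelegate(xnnpack_delegate) != kTfLiteOk) {
  // Handle the error: the graph stays on the default CPU path.
}

// ... run inference ...

// Destroy the delegate only after the interpreter that uses it is destroyed.
interpreter.reset();
TfLiteXNNPackDelegateDelete(xnnpack_delegate);
```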

### Enable XNNPACK via additional dependency

Another way to enable XNNPACK is to build and link the
