
Project import generated by Copybara.

GitOrigin-RevId: 50714fe28298d7b707eff7304547d89d6ec34a54
MediaPipe Team authored and mgyong committed Nov 21, 2019
1 parent 9437483 commit 48bcbb115fb448bf0b5759f4a6a4bcca4bf23213
Showing with 1,662 additions and 111 deletions.
  1. +4 −1 README.md
  2. +7 −3 WORKSPACE
  3. +2 −1 mediapipe/calculators/image/scale_image_calculator.proto
  4. +2 −2 mediapipe/calculators/tensorflow/tfrecord_reader_calculator.cc
  5. +40 −8 mediapipe/calculators/tflite/tflite_inference_calculator.cc
  6. +1 −0 mediapipe/calculators/util/labels_to_render_data_calculator.cc
  7. +1 −1 mediapipe/docs/android_archive_library.md
  8. +17 −0 mediapipe/docs/examples.md
  9. +1 −1 mediapipe/docs/hello_world_android.md
  10. BIN mediapipe/docs/images/mobile/multi_hand_tracking_android_gpu_small.gif
  11. +1 −1 mediapipe/docs/multi_hand_tracking_desktop.md
  12. +1 −1 mediapipe/docs/multi_hand_tracking_mobile_gpu.md
  13. +1 −1 mediapipe/examples/android/src/java/com/google/mediapipe/apps/edgedetectiongpu/MainActivity.java
  14. +1 −1 mediapipe/examples/android/src/java/com/google/mediapipe/apps/facedetectioncpu/MainActivity.java
  15. +1 −1 mediapipe/examples/android/src/java/com/google/mediapipe/apps/facedetectiongpu/MainActivity.java
  16. +1 −1 mediapipe/examples/android/src/java/com/google/mediapipe/apps/hairsegmentationgpu/MainActivity.java
  17. +1 −1 mediapipe/examples/android/src/java/com/google/mediapipe/apps/handdetectiongpu/MainActivity.java
  18. +1 −1 mediapipe/examples/android/src/java/com/google/mediapipe/apps/handtrackinggpu/MainActivity.java
  19. +1 −1 mediapipe/examples/android/src/java/com/google/mediapipe/apps/multihandtrackinggpu/MainActivity.java
  20. +1 −1 mediapipe/examples/android/src/java/com/google/mediapipe/apps/objectdetectioncpu/MainActivity.java
  21. +1 −1 mediapipe/examples/android/src/java/com/google/mediapipe/apps/objectdetectiongpu/MainActivity.java
  22. +56 −0 mediapipe/examples/coral/BUILD
  23. +87 −0 mediapipe/examples/coral/Dockerfile
  24. +137 −0 mediapipe/examples/coral/README.md
  25. +313 −0 mediapipe/examples/coral/WORKSPACE
  26. +151 −0 mediapipe/examples/coral/demo_run_graph_main.cc
  27. +189 −0 mediapipe/examples/coral/graphs/face_detection_desktop_live.pbtxt
  28. +179 −0 mediapipe/examples/coral/graphs/object_detection_desktop_live.pbtxt
  29. BIN mediapipe/examples/coral/models/face-detector-quantized_edgetpu.tflite
  30. BIN mediapipe/examples/coral/models/object-detector-quantized_edgetpu.tflite
  31. +90 −0 mediapipe/examples/coral/models/object_detection_labelmap.txt
  32. +21 −0 mediapipe/examples/coral/setup.sh
  33. +11 −0 mediapipe/examples/coral/update_sources.sh
  34. +69 −4 mediapipe/examples/desktop/README.md
  35. +3 −3 mediapipe/examples/desktop/media_sequence/kinetics_dataset.py
  36. +2 −0 mediapipe/framework/BUILD
  37. +142 −2 mediapipe/framework/calculator_graph_bounds_test.cc
  38. +16 −24 mediapipe/framework/formats/BUILD
  39. +4 −3 mediapipe/framework/formats/annotation/BUILD
  40. +5 −0 mediapipe/framework/formats/landmark.proto
  41. +3 −2 mediapipe/framework/formats/motion/BUILD
  42. +2 −2 mediapipe/framework/formats/object_detection/BUILD
  43. +8 −1 mediapipe/framework/input_stream_manager.cc
  44. +3 −0 mediapipe/framework/input_stream_manager.h
  45. +4 −3 mediapipe/framework/stream_handler/immediate_input_stream_handler.cc
  46. +24 −0 mediapipe/framework/stream_handler/sync_set_input_stream_handler.cc
  47. +15 −5 mediapipe/framework/timestamp.cc
  48. +7 −4 mediapipe/framework/timestamp.h
  49. +1 −1 mediapipe/gpu/metal.bzl
  50. +1 −1 mediapipe/graphs/hand_tracking/multi_hand_tracking_desktop.pbtxt
  51. +1 −1 mediapipe/graphs/hand_tracking/multi_hand_tracking_desktop_live.pbtxt
  52. +1 −1 mediapipe/graphs/hand_tracking/multi_hand_tracking_mobile.pbtxt
  53. +1 −2 mediapipe/graphs/hand_tracking/subgraphs/multi_hand_detection_cpu.pbtxt
  54. +1 −3 mediapipe/util/audio_decoder.cc
  55. +5 −4 mediapipe/util/sequence/media_sequence.cc
  56. +15 −11 mediapipe/util/sequence/media_sequence_test.cc
  57. +8 −5 third_party/BUILD
  58. BIN third_party/camera-camera2-1.0.0-alpha01.aar
  59. BIN third_party/camera-core-1.0.0-alpha01.aar
  60. +1 −1 third_party/opencv_android.BUILD
@@ -10,11 +10,13 @@
## ML Solutions in MediaPipe

* [Hand Tracking](mediapipe/docs/hand_tracking_mobile_gpu.md)
* [Multi-hand Tracking](mediapipe/docs/multi_hand_tracking_mobile_gpu.md)
* [Face Detection](mediapipe/docs/face_detection_mobile_gpu.md)
* [Hair Segmentation](mediapipe/docs/hair_segmentation_mobile_gpu.md)
* [Object Detection](mediapipe/docs/object_detection_mobile_gpu.md)

![hand_tracking](mediapipe/docs/images/mobile/hand_tracking_3d_android_gpu_small.gif)
![multi-hand_tracking](mediapipe/docs/images/mobile/multi_hand_tracking_android_gpu_small.gif)
![face_detection](mediapipe/docs/images/mobile/face_detection_android_gpu_small.gif)
![hair_segmentation](mediapipe/docs/images/mobile/hair_segmentation_android_gpu_small.gif)
![object_detection](mediapipe/docs/images/mobile/object_detection_android_gpu_small.gif)
@@ -23,7 +25,7 @@
Follow these [instructions](mediapipe/docs/install.md).

## Getting started
See mobile and desktop [examples](mediapipe/docs/examples.md).
See mobile, desktop and Google Coral [examples](mediapipe/docs/examples.md).

## Documentation
[MediaPipe Read-the-Docs](https://mediapipe.readthedocs.io/) or [docs.mediapipe.dev](https://docs.mediapipe.dev)
@@ -41,6 +43,7 @@ A web-based visualizer is hosted on [viz.mediapipe.dev](https://viz.mediapipe.de
* [MediaPipe: A Framework for Building Perception Pipelines](https://arxiv.org/abs/1906.08172)

## Events
* [AI Nextcon 2020, 12-16 Feb 2020, Seattle](http://aisea20.xnextcon.com/)
* [MediaPipe Madrid Meetup, 16 Dec 2019](https://www.meetup.com/Madrid-AI-Developers-Group/events/266329088/)
* [MediaPipe London Meetup, Google 123 Building, 12 Dec 2019](https://www.meetup.com/London-AI-Tech-Talk/events/266329038)
* [ML Conference, Berlin, 11 Dec 2019](https://mlconference.ai/machine-learning-advanced-development/mediapipe-building-real-time-cross-platform-mobile-web-edge-desktop-video-audio-ml-pipelines/)
@@ -149,11 +149,10 @@ new_local_repository(

http_archive(
name = "android_opencv",
sha256 = "056b849842e4fa8751d09edbb64530cfa7a63c84ccd232d0ace330e27ba55d0b",
build_file = "@//third_party:opencv_android.BUILD",
strip_prefix = "OpenCV-android-sdk",
type = "zip",
url = "https://github.com/opencv/opencv/releases/download/4.1.0/opencv-4.1.0-android-sdk.zip",
url = "https://github.com/opencv/opencv/releases/download/3.4.3/opencv-3.4.3-android-sdk.zip",
)

# After OpenCV 3.2.0, the pre-compiled opencv2.framework has google protobuf symbols, which will
@@ -184,13 +183,18 @@ maven_install(
artifacts = [
"androidx.annotation:annotation:aar:1.1.0",
"androidx.appcompat:appcompat:aar:1.1.0-rc01",
"androidx.camera:camera-core:aar:1.0.0-alpha06",
"androidx.camera:camera-camera2:aar:1.0.0-alpha06",
"androidx.constraintlayout:constraintlayout:aar:1.1.3",
"androidx.core:core:aar:1.1.0-rc03",
"androidx.legacy:legacy-support-v4:aar:1.0.0",
"androidx.recyclerview:recyclerview:aar:1.1.0-beta02",
"com.google.android.material:material:aar:1.0.0-rc01",
],
repositories = ["https://dl.google.com/dl/android/maven2"],
repositories = [
"https://dl.google.com/dl/android/maven2",
"https://repo1.maven.org/maven2",
],
)

maven_server(
@@ -36,7 +36,8 @@ message ScaleImageCalculatorOptions {

// If ratio is positive, crop the image to this minimum and maximum
// aspect ratio (preserving the center of the frame). This is done
// before scaling.
// before scaling. The string must contain "/", so to disable cropping,
// set both to "0/1".
// For example, for a min_aspect_ratio of "9/16" and max of "16/9" the
// following cropping will occur:
// 1920x1080 (which is 16:9) is not cropped
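
For illustration only, a minimal sketch (not part of this commit, and not the calculator's actual parsing code) of how a "num/den" ratio string such as "9/16", or the disabling value "0/1", could be interpreted:

```cpp
// Hypothetical helper, for illustration: parses a "num/den" aspect-ratio
// string. "0/1" evaluates to 0.0, which a caller can treat as "no bound".
#include <cstdio>

double ParseAspectRatio(const char* ratio) {
  int num = 0, den = 0;
  // The string must contain '/', so require both integers to be present.
  if (std::sscanf(ratio, "%d/%d", &num, &den) != 2 || den == 0) return 0.0;
  return static_cast<double>(num) / den;
}
```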
@@ -85,7 +85,7 @@ ::mediapipe::Status TFRecordReaderCalculator::Open(CalculatorContext* cc) {
tensorflow::io::RecordReader reader(file.get(),
tensorflow::io::RecordReaderOptions());
tensorflow::uint64 offset = 0;
std::string example_str;
tensorflow::tstring example_str;
const int target_idx =
cc->InputSidePackets().HasTag(kRecordIndex)
? cc->InputSidePackets().Tag(kRecordIndex).Get<int>()
@@ -98,7 +98,7 @@ ::mediapipe::Status TFRecordReaderCalculator::Open(CalculatorContext* cc) {
if (current_idx == target_idx) {
if (cc->OutputSidePackets().HasTag(kExampleTag)) {
tensorflow::Example tf_example;
tf_example.ParseFromString(example_str);
tf_example.ParseFromArray(example_str.data(), example_str.size());
cc->OutputSidePackets()
.Tag(kExampleTag)
.Set(MakePacket<tensorflow::Example>(std::move(tf_example)));
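
The two changes above track TensorFlow's switch from `std::string` to `tensorflow::tstring` in `RecordReader`: `tstring` need not be a `std::string`, so the proto is parsed through `ParseFromArray` rather than `ParseFromString`. A minimal sketch of the same pattern, assuming TensorFlow is available (the helper name is illustrative):

```cpp
#include "tensorflow/core/example/example.pb.h"
#include "tensorflow/core/platform/tstring.h"

// Illustrative helper: parses one serialized record into a tf.Example.
// ParseFromArray only needs a pointer and a byte count, so it accepts any
// contiguous buffer, including tensorflow::tstring.
bool ParseRecord(const tensorflow::tstring& record,
                 tensorflow::Example* example) {
  return example->ParseFromArray(record.data(), record.size());
}
```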
@@ -64,6 +64,28 @@ typedef id<MTLBuffer> GpuTensor;
size_t RoundUp(size_t n, size_t m) { return ((n + m - 1) / m) * m; } // NOLINT
} // namespace

#if defined(MEDIAPIPE_EDGE_TPU)
#include "edgetpu.h"

// Creates and returns an Edge TPU interpreter to run the given edgetpu model.
std::unique_ptr<tflite::Interpreter> BuildEdgeTpuInterpreter(
const tflite::FlatBufferModel& model,
tflite::ops::builtin::BuiltinOpResolver* resolver,
edgetpu::EdgeTpuContext* edgetpu_context) {
resolver->AddCustom(edgetpu::kCustomOp, edgetpu::RegisterCustomOp());
std::unique_ptr<tflite::Interpreter> interpreter;
if (tflite::InterpreterBuilder(model, *resolver)(&interpreter) != kTfLiteOk) {
std::cerr << "Failed to build edge TPU interpreter." << std::endl;
}
interpreter->SetExternalContext(kTfLiteEdgeTpuContext, edgetpu_context);
interpreter->SetNumThreads(1);
if (interpreter->AllocateTensors() != kTfLiteOk) {
std::cerr << "Failed to allocate edge TPU tensors." << std::endl;
}
return interpreter;
}
#endif // MEDIAPIPE_EDGE_TPU

// TfLiteInferenceCalculator File Layout:
// * Header
// * Core
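
A rough sketch of how a caller might drive this helper, assuming the Edge TPU runtime and TFLite headers are available; the model path below is a placeholder, not a file from this commit:

```cpp
// Hypothetical usage of BuildEdgeTpuInterpreter, for illustration only.
#include <memory>
#include "edgetpu.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int RunModelOnce() {
  // Open the first available Edge TPU device.
  std::shared_ptr<edgetpu::EdgeTpuContext> context =
      edgetpu::EdgeTpuManager::GetSingleton()->OpenDevice();
  // Placeholder path; any *_edgetpu.tflite compiled model would do.
  std::unique_ptr<tflite::FlatBufferModel> model =
      tflite::FlatBufferModel::BuildFromFile("model_edgetpu.tflite");
  if (!context || !model) return 1;
  tflite::ops::builtin::BuiltinOpResolver resolver;
  // Registers the edgetpu custom op and binds the device context.
  std::unique_ptr<tflite::Interpreter> interpreter =
      BuildEdgeTpuInterpreter(*model, &resolver, context.get());
  // Fill interpreter->typed_input_tensor<uint8_t>(0) here, then run:
  return interpreter->Invoke() == kTfLiteOk ? 0 : 1;
}
```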
@@ -162,6 +184,11 @@ class TfLiteInferenceCalculator : public CalculatorBase {
TFLBufferConvert* converter_from_BPHWC4_ = nil;
#endif

#if defined(MEDIAPIPE_EDGE_TPU)
std::shared_ptr<edgetpu::EdgeTpuContext> edgetpu_context_ =
edgetpu::EdgeTpuManager::GetSingleton()->OpenDevice();
#endif

std::string model_path_ = "";
bool gpu_inference_ = false;
bool gpu_input_ = false;
@@ -425,6 +452,9 @@ ::mediapipe::Status TfLiteInferenceCalculator::Close(CalculatorContext* cc) {
#endif
delegate_ = nullptr;
}
#if defined(MEDIAPIPE_EDGE_TPU)
edgetpu_context_.reset();
#endif
return ::mediapipe::OkStatus();
}

@@ -458,16 +488,18 @@ ::mediapipe::Status TfLiteInferenceCalculator::LoadModel(
model_ = tflite::FlatBufferModel::BuildFromFile(model_path_.c_str());
RET_CHECK(model_);

tflite::ops::builtin::BuiltinOpResolver op_resolver;
if (cc->InputSidePackets().HasTag("CUSTOM_OP_RESOLVER")) {
const auto& op_resolver =
cc->InputSidePackets()
.Tag("CUSTOM_OP_RESOLVER")
.Get<tflite::ops::builtin::BuiltinOpResolver>();
tflite::InterpreterBuilder(*model_, op_resolver)(&interpreter_);
} else {
const tflite::ops::builtin::BuiltinOpResolver op_resolver;
tflite::InterpreterBuilder(*model_, op_resolver)(&interpreter_);
op_resolver = cc->InputSidePackets()
.Tag("CUSTOM_OP_RESOLVER")
.Get<tflite::ops::builtin::BuiltinOpResolver>();
}
#if defined(MEDIAPIPE_EDGE_TPU)
interpreter_ =
BuildEdgeTpuInterpreter(*model_, &op_resolver, edgetpu_context_.get());
#else
tflite::InterpreterBuilder(*model_, op_resolver)(&interpreter_);
#endif // MEDIAPIPE_EDGE_TPU

RET_CHECK(interpreter_);

@@ -93,6 +93,7 @@ ::mediapipe::Status LabelsToRenderDataCalculator::GetContract(
}

::mediapipe::Status LabelsToRenderDataCalculator::Open(CalculatorContext* cc) {
cc->SetOffset(TimestampDiff(0));
options_ = cc->Options<LabelsToRenderDataCalculatorOptions>();
num_colors_ = options_.color_size();
label_height_px_ = std::ceil(options_.font_height_px() * kFontHeightScale);
@@ -92,7 +92,7 @@ project.
MediaPipe depends on OpenCV; you will need to copy the precompiled OpenCV .so
files into app/src/main/jniLibs. You can download the official OpenCV
Android SDK from
[here](https://github.com/opencv/opencv/releases/download/4.1.0/opencv-4.1.0-android-sdk.zip)
[here](https://github.com/opencv/opencv/releases/download/3.4.3/opencv-3.4.3-android-sdk.zip)
and run:

```bash
@@ -157,3 +157,20 @@ how to use MediaPipe with a TFLite model for hair segmentation on desktop using
GPU with live video from a webcam.

* [Desktop GPU](./hair_segmentation_desktop.md)

## Google Coral (machine learning acceleration with Google EdgeTPU)

Below are code samples showing how to run MediaPipe on the Google Coral Dev Board.

### Object Detection on Coral

[Object Detection on Coral with Webcam](https://github.com/google/mediapipe/tree/master/mediapipe/examples/coral/README.md)
shows how to run a quantized object detection TFLite model accelerated with
EdgeTPU on
[Google Coral Dev Board](https://coral.withgoogle.com/products/dev-board).

### Face Detection on Coral

[Face Detection on Coral with Webcam](https://github.com/google/mediapipe/tree/master/mediapipe/examples/coral/README.md)
shows how to use a quantized face detection TFLite model accelerated with EdgeTPU
on [Google Coral Dev Board](https://coral.withgoogle.com/products/dev-board).
@@ -629,7 +629,7 @@ to load both dependencies:
static {
// Load all native libraries needed by the app.
System.loadLibrary("mediapipe_jni");
System.loadLibrary("opencv_java4");
System.loadLibrary("opencv_java3");
}
```

Binary file not shown.
@@ -156,7 +156,7 @@ node {
output_stream: "multi_hand_rects"
node_options: {
[type.googleapis.com/mediapipe.AssociationCalculatorOptions] {
min_similarity_threshold: 0.1
min_similarity_threshold: 0.5
}
}
}
@@ -219,7 +219,7 @@ node {
output_stream: "multi_hand_rects"
node_options: {
[type.googleapis.com/mediapipe.AssociationCalculatorOptions] {
min_similarity_threshold: 0.1
min_similarity_threshold: 0.5
}
}
}
@@ -47,7 +47,7 @@
static {
// Load all native libraries needed by the app.
System.loadLibrary("mediapipe_jni");
System.loadLibrary("opencv_java4");
System.loadLibrary("opencv_java3");
}

// {@link SurfaceTexture} where the camera-preview frames can be accessed.
@@ -48,7 +48,7 @@
static {
// Load all native libraries needed by the app.
System.loadLibrary("mediapipe_jni");
System.loadLibrary("opencv_java4");
System.loadLibrary("opencv_java3");
}

// {@link SurfaceTexture} where the camera-preview frames can be accessed.
@@ -48,7 +48,7 @@
static {
// Load all native libraries needed by the app.
System.loadLibrary("mediapipe_jni");
System.loadLibrary("opencv_java4");
System.loadLibrary("opencv_java3");
}

// {@link SurfaceTexture} where the camera-preview frames can be accessed.
@@ -48,7 +48,7 @@
static {
// Load all native libraries needed by the app.
System.loadLibrary("mediapipe_jni");
System.loadLibrary("opencv_java4");
System.loadLibrary("opencv_java3");
}

// {@link SurfaceTexture} where the camera-preview frames can be accessed.
@@ -48,7 +48,7 @@
static {
// Load all native libraries needed by the app.
System.loadLibrary("mediapipe_jni");
System.loadLibrary("opencv_java4");
System.loadLibrary("opencv_java3");
}

// {@link SurfaceTexture} where the camera-preview frames can be accessed.
@@ -48,7 +48,7 @@
static {
// Load all native libraries needed by the app.
System.loadLibrary("mediapipe_jni");
System.loadLibrary("opencv_java4");
System.loadLibrary("opencv_java3");
}

// {@link SurfaceTexture} where the camera-preview frames can be accessed.
@@ -48,7 +48,7 @@
static {
// Load all native libraries needed by the app.
System.loadLibrary("mediapipe_jni");
System.loadLibrary("opencv_java4");
System.loadLibrary("opencv_java3");
}

// {@link SurfaceTexture} where the camera-preview frames can be accessed.
@@ -48,7 +48,7 @@
static {
// Load all native libraries needed by the app.
System.loadLibrary("mediapipe_jni");
System.loadLibrary("opencv_java4");
System.loadLibrary("opencv_java3");
}

// {@link SurfaceTexture} where the camera-preview frames can be accessed.
@@ -48,7 +48,7 @@
static {
// Load all native libraries needed by the app.
System.loadLibrary("mediapipe_jni");
System.loadLibrary("opencv_java4");
System.loadLibrary("opencv_java3");
}

// {@link SurfaceTexture} where the camera-preview frames can be accessed.
@@ -0,0 +1,56 @@
# Copyright 2019 The MediaPipe Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

licenses(["notice"]) # Apache 2.0

package(default_visibility = [
"//visibility:public",
])

# Graph Runner

cc_library(
name = "demo_run_graph_main",
srcs = ["demo_run_graph_main.cc"],
deps = [
"//mediapipe/framework:calculator_framework",
"//mediapipe/framework/formats:image_frame",
"//mediapipe/framework/formats:image_frame_opencv",
"//mediapipe/framework/port:commandlineflags",
"//mediapipe/framework/port:file_helpers",
"//mediapipe/framework/port:opencv_highgui",
"//mediapipe/framework/port:opencv_imgproc",
"//mediapipe/framework/port:opencv_video",
"//mediapipe/framework/port:parse_text_proto",
"//mediapipe/framework/port:status",
],
)

# Demos

cc_binary(
name = "object_detection_cpu",
deps = [
"//mediapipe/examples/coral:demo_run_graph_main",
"//mediapipe/graphs/object_detection:desktop_tflite_calculators",
],
)

cc_binary(
name = "face_detection_cpu",
deps = [
"//mediapipe/examples/coral:demo_run_graph_main",
"//mediapipe/graphs/face_detection:desktop_tflite_calculators",
],
)
