
# MediaPipe

MediaPipe is a framework for building multimodal (e.g., video, audio, or any time-series data) applied ML pipelines. With MediaPipe, a perception pipeline can be built as a graph of modular components, including, for instance, inference models (e.g., TensorFlow, TFLite) and media processing functions.
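To make the graph idea concrete, MediaPipe graphs are declared in protobuf text format: each `node` is a calculator, and named streams wire the nodes together. The sketch below is modeled on MediaPipe's minimal "hello world" example, which chains two `PassThroughCalculator` nodes; stream names are illustrative.

```
# Input packets enter the graph on stream "in";
# results leave on stream "out".
input_stream: "in"
output_stream: "out"

# First node: copies packets from "in" to "out1".
node {
  calculator: "PassThroughCalculator"
  input_stream: "in"
  output_stream: "out1"
}

# Second node: copies packets from "out1" to "out".
node {
  calculator: "PassThroughCalculator"
  input_stream: "out1"
  output_stream: "out"
}
```

A real pipeline would replace the pass-through nodes with media-processing and inference calculators (e.g., a TFLite inference calculator), but the graph wiring works the same way.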

*Real-time face detection (demo animation)*

> "MediaPipe has made it extremely easy to build our 3D person pose reconstruction demo app, facilitating accelerated neural network inference on device and synchronization of our result visualization with the video capture stream. Highly recommended!" — George Papandreou, CTO, Ariel AI

## ML Solutions in MediaPipe

  • hand_tracking
  • multi-hand_tracking
  • face_detection
  • hair_segmentation
  • object_detection

## Installation

Follow these instructions.

## Getting started

See the mobile, desktop, and Google Coral examples.

## Documentation

MediaPipe Read-the-Docs or docs.mediapipe.dev

Check out the Examples page for tutorials on how to use MediaPipe, and the Concepts page for basic definitions.

## Visualizing MediaPipe graphs

A web-based visualizer is hosted on viz.mediapipe.dev. Please also see instructions here.

## Community forum

  • Discuss — general community discussion around MediaPipe

## Publications

## Events

## Alpha Disclaimer

MediaPipe is currently in alpha at v0.6. We are still making breaking API changes and expect to reach a stable API by v1.0.

## Contributing

We welcome contributions. Please follow these guidelines.

We use GitHub issues for tracking feature requests and bugs. Please post questions to Stack Overflow with the 'mediapipe' tag.