MLKitPoseDetection

This is a snippet of a Flutter iOS Method Channel for Pose Detection using MLKit.

Demo video: RPReplay_Final1687757878.mov

Introduction

The snippet in this repository lets Flutter apps leverage Google's ML Kit Pose Detection API on iOS. The package supports pose detection in "real time", although this implementation strategy introduces noticeable latency, hence the quotation marks.
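
To give a sense of how the pieces fit together, here is a minimal, hypothetical sketch of the Dart side of such a method channel. The channel name, method name, and payload keys are illustrative assumptions, not the names used in this repository; match them to whatever the Swift handler in your codebase registers.

import 'dart:typed_data';
import 'dart:ui' show Offset;

import 'package:camera/camera.dart';
import 'package:flutter/services.dart';

class InferenceService {
  // Assumed channel and method names, for illustration only.
  static const _channel = MethodChannel('visualpt/pose_detection');

  Future<List<Offset>> getPoseDetection(CameraImage image) async {
    // Flatten the camera planes so the frame can cross the platform boundary.
    final planes = <Uint8List>[for (final p in image.planes) p.bytes];

    final result = await _channel.invokeMethod<List<dynamic>>('detectPose', {
      'width': image.width,
      'height': image.height,
      'planes': planes,
    });

    // Assume the native side returns a flat [x0, y0, x1, y1, ...] list
    // covering the 33 landmarks.
    if (result == null) return const [];
    return [
      for (var i = 0; i + 1 < result.length; i += 2)
        Offset((result[i] as num).toDouble(), (result[i + 1] as num).toDouble()),
    ];
  }
}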

Features

  • Real-time pose detection
  • Extract 33 skeletal landmark points (see the indexing sketch after this list)
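
ML Kit's 33-point output follows the standard BlazePose topology (nose, eyes, shoulders, hips, and so on). As a hypothetical illustration, assuming the native handler preserves that ordering when it flattens the landmarks into Offsets, individual joints can be picked out by index; verify the ordering against the Swift side of your method channel.

import 'dart:ui' show Offset;

// Indices into the 33-point list, following the standard BlazePose topology.
// Assumption: the native handler keeps this ordering when returning Offsets.
const int leftShoulder = 11;
const int rightShoulder = 12;
const int leftHip = 23;
const int rightHip = 24;

/// Example: a rough torso midpoint computed from four landmarks.
Offset torsoCenter(List<Offset> landmarks) {
  final points = [
    landmarks[leftShoulder],
    landmarks[rightShoulder],
    landmarks[leftHip],
    landmarks[rightHip],
  ];
  final sum = points.reduce((a, b) => a + b);
  return sum / points.length.toDouble();
}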

Installation

To use this package, add MLKitPoseDetection as a dependency in your Podfile.

Assemble these files in your codebase as you see fit, then, in the State of a StatefulWidget, include this snippet:

...
// Most recent pose landmarks returned by the inference service.
late List<Offset> results = [];

@override
void initState() {
  super.initState();

  try {
    // Stream camera frames; each frame is forwarded to the inference
    // service, which runs ML Kit pose detection through the method channel.
    widget.cameraController!.startVideoRecording(onAvailable: (image) async {
      final data = await widget.inferenceService.getPoseDetection(image);
      if (mounted && data is List<Offset> && data.isNotEmpty) {
        setState(
          () => results = data,
        );
      }
    });
  } catch (e) {
    log("An error occured in the Inference Preview $e");
  }
}
...

Finally, use the widget.cameraController in a CameraPreview widget so the camera feed is shown while detection runs; one way to overlay the results is sketched below.
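
The overlay below is a hypothetical sketch, not code from the repository: it stacks a CustomPaint on top of the CameraPreview and draws one dot per landmark, assuming the Offsets in results are already scaled to the preview's coordinate space.

import 'dart:ui' show PointMode;

import 'package:camera/camera.dart';
import 'package:flutter/material.dart';

/// Minimal sketch: the camera feed with the detected landmarks drawn on top.
Widget buildPreview(CameraController controller, List<Offset> results) {
  return Stack(
    fit: StackFit.expand,
    children: [
      CameraPreview(controller),
      CustomPaint(painter: _LandmarkPainter(results)),
    ],
  );
}

class _LandmarkPainter extends CustomPainter {
  _LandmarkPainter(this.landmarks);

  final List<Offset> landmarks;

  @override
  void paint(Canvas canvas, Size size) {
    final paint = Paint()
      ..color = Colors.greenAccent
      ..strokeWidth = 6
      ..strokeCap = StrokeCap.round;
    // One round dot per skeletal landmark.
    canvas.drawPoints(PointMode.points, landmarks, paint);
  }

  @override
  bool shouldRepaint(_LandmarkPainter oldDelegate) =>
      oldDelegate.landmarks != landmarks;
}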

License

This project is licensed under the MIT License.