Object Detector Not Working #3

Open
cubukcum opened this issue Mar 26, 2024 · 7 comments

@cubukcum

cubukcum commented Mar 26, 2024

Below is my complete code. It is intended to run my object detection model on a live camera feed, but it is not functioning as expected and does not produce any errors. The camera opens with the necessary permissions and displays a live view, but no detections are drawn on the screen. I would appreciate it if you could review it and let me know what might be causing this issue.

import 'package:flutter/material.dart';
import 'package:ultralytics_yolo/ultralytics_yolo.dart';
import 'package:ultralytics_yolo/yolo_model.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: MyHomePage(),
    );
  }
}

class MyHomePage extends StatefulWidget {
  @override
  _MyHomePageState createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  late final LocalYoloModel model;
  late final ObjectDetector objectDetector;
  final String id = "your_model_id";
  final String modelPath = "assets/yolov8n_int8.tflite";
  final String metadataPath = "assets/metadata.yaml";

  @override
  void initState() {
    super.initState();
    model = LocalYoloModel(
      id: id,
      task: Task.detect,
      format: Format.tflite,
      modelPath: modelPath,
      metadataPath: metadataPath,
    );
    objectDetector = ObjectDetector(model: model);
    loadModel();
  }

  Future<void> loadModel() async {
    await objectDetector.loadModel();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text("YOLO Model Example"),
      ),
      body: UltralyticsYoloCameraPreview(
        predictor: objectDetector,
        controller: UltralyticsYoloCameraController(),
        onCameraCreated: () {}, // Add an empty callback here
        loadingPlaceholder: Center(
          child: CircularProgressIndicator(),
        ),
      ),
    );
  }
}
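
One thing worth double-checking in the snippet above (a sketch under an assumption, not a confirmed cause): loadModel() is asynchronous and is fired from initState() without being awaited, so the preview may be built before the model has finished loading. The sketch below gates the preview on the load completing; it reuses only the classes already shown above, and whether the plugin actually requires this is an assumption.

// Hedged sketch: drop-in replacements for loadModel() and build() inside the
// _MyHomePageState class above. Rebuild once loadModel() completes so
// UltralyticsYoloCameraPreview is only created with a fully loaded model.
// Requiring this is an assumption, not something confirmed by the plugin docs.
bool _modelLoaded = false;

Future<void> loadModel() async {
  await objectDetector.loadModel();
  if (mounted) {
    setState(() {
      _modelLoaded = true;
    });
  }
}

@override
Widget build(BuildContext context) {
  if (!_modelLoaded) {
    // Simple placeholder while the TFLite model is still loading.
    return Scaffold(body: const Center(child: CircularProgressIndicator()));
  }
  return Scaffold(
    appBar: AppBar(title: const Text("YOLO Model Example")),
    body: UltralyticsYoloCameraPreview(
      predictor: objectDetector,
      controller: UltralyticsYoloCameraController(),
      onCameraCreated: () {},
      loadingPlaceholder: const Center(child: CircularProgressIndicator()),
    ),
  );
}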
@cubukcum
Author

cubukcum commented Mar 26, 2024

I also cloned your example project. I was looking for an example that detects objects and draws them on a live camera feed. However, it seems that the project only displays the camera feed, FPS, and milliseconds on the camera tab. On the gallery tab, it displays the selected image on the screen but does not perform any other actions. The application runs without any errors, though, and here is some of the output I get in the console:

E/ImageTextureRegistryEntry(32001): Dropping PlatformView Frame
D/YuvToJpegEncoder(32001): onFlyCompress
D/YuvToJpegEncoder(32001): onFlyCompress
E/ImageTextureRegistryEntry(32001): Dropping PlatformView Frame
D/YuvToJpegEncoder(32001): onFlyCompress
D/YuvToJpegEncoder(32001): onFlyCompress
D/YuvToJpegEncoder(32001): onFlyCompress
D/YuvToJpegEncoder(32001): onFlyCompress
D/YuvToJpegEncoder(32001): onFlyCompress
D/YuvToJpegEncoder(32001): onFlyCompress
E/ImageTextureRegistryEntry(32001): Dropping PlatformView Frame
D/YuvToJpegEncoder(32001): onFlyCompress
D/YuvToJpegEncoder(32001): onFlyCompress
D/YuvToJpegEncoder(32001): onFlyCompress
D/YuvToJpegEncoder(32001): onFlyCompress
D/SurfaceViewImpl(32001): Surface destroyed.
D/SurfaceViewImpl(32001): Surface invalidated androidx.camera.core.SurfaceRequest@2db69c7
D/DeferrableSurface(32001): surface closed,  useCount=1 closed=true androidx.camera.core.SurfaceRequest$2@890dbf4
W/WindowOnBackDispatcher(32001): sendCancelIfRunning: isInProgress=falsecallback=android.view.ViewRootImpl$$ExternalSyntheticLambda17@3e42dcd
D/YuvToJpegEncoder(32001): onFlyCompress
E/BufferQueueProducer(32001): [SurfaceView[]#18(BLAST Consumer)18](id:7d0100000031,api:4,p:1117,c:32001) queueBuffer: BufferQueue has been abandoned
E/BufferQueueProducer(32001): [SurfaceView[]#18(BLAST Consumer)18](id:7d0100000031,api:4,p:1117,c:32001) dequeueBuffer: BufferQueue has been abandoned
E/ImageTextureRegistryEntry(32001): Dropping PlatformView Frame
D/YuvToJpegEncoder(32001): onFlyCompress
E/BufferQueueProducer(32001): [SurfaceView[]#18(BLAST Consumer)18](id:7d0100000031,api:4,p:1117,c:32001) queueBuffer: BufferQueue has been abandoned
E/BufferQueueProducer(32001): [SurfaceView[]#18(BLAST Consumer)18](id:7d0100000031,api:4,p:1117,c:32001) queueBuffer: BufferQueue has been abandoned
D/YuvToJpegEncoder(32001): onFlyCompress
E/BufferQueueProducer(32001): [SurfaceView[]#18(BLAST Consumer)18](id:7d0100000031,api:4,p:1117,c:32001) queueBuffer: BufferQueue has been abandoned
E/BufferQueueProducer(32001): [SurfaceView[]#18(BLAST Consumer)18](id:7d0100000031,api:4,p:1117,c:32001) queueBuffer: BufferQueue has been abandoned
D/YuvToJpegEncoder(32001): onFlyCompress
E/BufferQueueProducer(32001): [SurfaceView[]#18(BLAST Consumer)18](id:7d0100000031,api:4,p:1117,c:32001) queueBuffer: BufferQueue has been abandoned
D/YuvToJpegEncoder(32001): onFlyCompress

@fstizza

fstizza commented Mar 28, 2024

The predictor is working fine.

I've been analyzing UltralyticsYoloCameraPreview and found that the following switch statement is the reason the bounding boxes are not shown:

switch (widget.predictor.runtimeType) {
  case ObjectDetector _:
    return StreamBuilder(
      stream: (widget.predictor! as ObjectDetector).detectionResultStream,
      builder: (
        BuildContext context,
        AsyncSnapshot<List<DetectedObject?>?> snapshot,
      ) {
        if (snapshot.data == null) return Container();

        return CustomPaint(
          painter: ObjectDetectorPainter(
            snapshot.data! as List<DetectedObject>,
            widget.boundingBoxesColorList,
            widget.controller.value.strokeWidth,
          ),
        );
      },
    );
  case ImageClassifier _:
    return widget.classificationOverlay ??
        StreamBuilder(
          stream: (widget.predictor! as ImageClassifier).classificationResultStream,
          builder: (context, snapshot) {
            final classificationResults = snapshot.data;

            if (classificationResults == null ||
                classificationResults.isEmpty) {
              return Container();
            }

            return ClassificationResultOverlay(
              classificationResults: classificationResults,
            );
          },
        );
  default:
    return Container();
}

Since that pattern matching is not working as expected (see this issue), I've changed that code in a local copy of the plugin to this simple if-else statement:

if (widget.predictor is ObjectDetector) {
  return StreamBuilder(
    stream: (widget.predictor! as ObjectDetector).detectionResultStream,
    builder: (
      BuildContext context,
      AsyncSnapshot<List<DetectedObject?>?> snapshot,
    ) {
      if (snapshot.data == null) return Container();

      return CustomPaint(
        painter: ObjectDetectorPainter(
          snapshot.data! as List<DetectedObject>,
          widget.boundingBoxesColorList,
          widget.controller.value.strokeWidth,
        ),
      );
    },
  );
} else if (widget.predictor is ImageClassifier) {
  return widget.classificationOverlay ??
      StreamBuilder(
        stream: (widget.predictor! as ImageClassifier).classificationResultStream,
        builder: (context, snapshot) {
          final classificationResults = snapshot.data;

          if (classificationResults == null ||
              classificationResults.isEmpty) {
            return Container();
          }

          return ClassificationResultOverlay(
            classificationResults: classificationResults,
          );
        },
      );
} else {
  return Container();
}

And it started working OK. I hope they can make this change soon so my app can go back to using the latest version of this package.
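
For readers who want to see the underlying Dart behavior in isolation, here is a minimal, self-contained sketch (the Predictor, Detector, and Classifier names are invented for illustration; they are not the plugin's classes). Switching on runtimeType hands the case patterns a Type object, so a typed pattern such as case Detector _: can never match, whereas testing the instance itself with is (or switching directly on the instance) does.

// Minimal sketch of the pitfall described above. The class names here are
// made up for illustration; they are not part of ultralytics_yolo.

abstract class Predictor {}

class Detector extends Predictor {}

class Classifier extends Predictor {}

String describeBroken(Predictor p) {
  // `p.runtimeType` evaluates to a `Type` object, so the typed patterns below
  // are matched against that `Type`, never against the instance itself --
  // every call falls through to `default`.
  switch (p.runtimeType) {
    case Detector _:
      return 'detector (never reached)';
    case Classifier _:
      return 'classifier (never reached)';
    default:
      return 'unknown';
  }
}

String describeFixed(Predictor p) {
  // Testing the instance directly works, which is what the if-else rewrite
  // above does.
  if (p is Detector) return 'detector';
  if (p is Classifier) return 'classifier';
  return 'unknown';
}

void main() {
  print(describeBroken(Detector())); // prints: unknown
  print(describeFixed(Detector())); // prints: detector
}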

@pderrenger
Member

@fstizza hello 👋,

Thank you very much for this detailed analysis and for proposing a solution! It's great to see active community involvement helping to refine and improve the tool.

You're right, the issue you've encountered with the pattern matching in Dart and the switch statement not working as expected for type checks is acknowledged. Your proposed use of if-else statements is a good workaround and aligns well with recommended solutions for this Dart language issue.

We'll review the changes you've suggested and consider integrating them into the next update of the plugin. This kind of feedback is invaluable and helps us ensure that the library remains robust and user-friendly. Please keep an eye on the upcoming versions for improvements and fixes!

If you have other suggestions or need further assistance, feel free to share. Happy coding! 😄

@fstizza

fstizza commented Apr 9, 2024

I'm really glad I could help!

@pderrenger
Member

@fstizza We're equally glad for your contribution, and it's fantastic to have such supportive community members! 😊 Your insights are greatly appreciated, and they play a crucial role in refining and improving our library. Keep the suggestions coming and happy coding! 🚀

@cubukcum
Author

@fstizza would you check my code and tell me what's wrong with it? I am trying to run a simple model. I have changed the switch statement as you suggested, but I am still getting no boxes.

github-actions bot commented

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see the links below:

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐

@github-actions github-actions bot added Stale and removed Stale labels May 13, 2024