Camera Preview Screenshot is not getting #132

Open · shahryar-cmyk opened this issue Dec 8, 2020 · 0 comments

Labels: question (Further information is requested)

Comments


shahryar-cmyk commented Dec 8, 2020

I want to take a screenshot of the camera preview without saving it to the gallery; I only want to keep it in a temporary folder. The scenario in my app is:

1. The Flutter ML Vision camera package detects a human face in the preview.
2. Because ML Vision does not detect the face reliably on its own, my workaround is to send a screenshot of the preview to a verification API; if the API response is good, the app moves to the next screen.

This flow works, but the screenshot ends up saved in the gallery. The usual alternative would be RepaintBoundary, but with this camera widget the preview does not appear in the captured image (there is already a GitHub issue about this). Is there a workaround to capture the preview without saving it to the gallery, or another way to complete this flow?

import 'dart:convert';
import 'dart:io';
import 'dart:typed_data';

import 'package:auto_route/auto_route.dart';
import 'package:esol/screens/home/home_page.dart';
import 'package:flutter/material.dart';

//Flutter Packages
import 'package:camera/camera.dart';
import 'package:firebase_ml_vision/firebase_ml_vision.dart';
import 'package:flutter_camera_ml_vision/flutter_camera_ml_vision.dart';

import 'package:esol/screens/routes.gr.dart';
import 'package:http/http.dart' as http;
import 'package:native_screenshot/native_screenshot.dart';
import 'package:shared_preferences/shared_preferences.dart';

class VeifyFaceDetect extends StatefulWidget {
  @override
  _VeifyFaceDetectState createState() => _VeifyFaceDetectState();
}

class _VeifyFaceDetectState extends State<VeifyFaceDetect> {
  // File _imageFile;

  List<Face> faces = [];
  final _scanKey = GlobalKey<CameraMlVisionState>();
  CameraLensDirection cameraLensDirection = CameraLensDirection.front;
  FaceDetector detector =
      FirebaseVision.instance.faceDetector(FaceDetectorOptions(
    enableClassification: true,
    enableTracking: true,
    enableLandmarks: true,
    mode: FaceDetectorMode.accurate,
  ));
  String detectString = 'No Face Found';

  Future<void> uploadData(
      {String base64Image,
      bool template,
      bool cropImage,
      bool faceAttributes,
      bool facialFeatures,
      bool icaoAttributes}) async {
    setState(() {
      detectString = 'Verifying Face';
    });
    final url = 'https://dot.innovatrics.com/core/api/v6/face/detect';
    final response = await http.post(
      url,
      headers: {'Content-Type': 'application/json'},
      body: json.encode(
        {
          'image': {
            "data": base64Image,
            "faceSizeRatio": {
              "min": 0.01,
              "max": 0.9,
            }
          },
          'template': template,
          'cropImage': cropImage,
          'faceAttributes': faceAttributes,
          'facialFeatures': facialFeatures,
          'icaoAttributes': icaoAttributes,
        },
      ),
    );

    if (response.statusCode == 200) {
      // setState(() {
      print("Here is the code");
      // });
      if (response.body.toString().contains('NO_FACE_DETECTED')) {
        setState(() {
          detectString = 'Move Face Closer';
        });
        print('This is the response ${response.body}');
      }
      if (!response.body.toString().contains('NO_FACE_DETECTED')) {
        print('This is the another response ${response.body}');
        // Navigator.of(context).pushNamed(Routes.homePage);
        SharedPreferences prefs = await SharedPreferences.getInstance();
        prefs.setString('data', "ok");
        Future.delayed(Duration(seconds: 2), () {
          Navigator.of(context).pop();
        });
        // Navigator.of(context).pop();
      }
    }
  }

  // NativeScreenshot writes the capture to the device gallery, which is the
  // behaviour described in the report above.
  _takeScreenShot() async {
    final dataValue = await NativeScreenshot.takeScreenshot();
    if (dataValue != null) {
      var imageFile = File(dataValue);

      Uint8List byteFile = imageFile.readAsBytesSync();
      String base64Image = base64Encode(byteFile);
      uploadData(
          base64Image: base64Image,
          cropImage: true,
          faceAttributes: true,
          facialFeatures: true,
          icaoAttributes: true,
          template: true);
    }
  }

  @override
  Widget build(BuildContext context) {
    String layoutHeight = MediaQuery.of(context).size.height.toString();
    String layoutWidth = MediaQuery.of(context).size.width.toString();

    print('This is data String $detectString');
    print('This is the height of the page : $layoutHeight');
    return Scaffold(
      backgroundColor: Color.fromRGBO(0, 85, 255, 1),
      appBar: AppBar(
        elevation: 0.0,
        leading: IconButton(
          icon: Icon(Icons.arrow_back_ios),
          onPressed: () => Navigator.of(context).pop(),
        ),
        actions: [
          IconButton(
            icon: Icon(Icons.close),
            onPressed: () => Navigator.of(context).pop(),
          ),
        ],
      ),
      body: Stack(children: [
        Positioned(
          // bottom: 100,
          // right: 80,
          // top: 100,
          child: Align(
            alignment: Alignment.topCenter,
            child: Text(
              detectString,
              style: TextStyle(color: Colors.white, fontSize: 30),
            ),
          ),
        ),
        Center(
          child: Container(
            // height: MediaQuery.of(context).size.height / 3.2,
            // width: MediaQuery.of(context).size.width * 0.7,
            height: 300,
            width: 300,
            child: ClipOval(
              child: CameraMlVision<List<Face>>(
                key: _scanKey,
                cameraLensDirection: cameraLensDirection,
                detector: (FirebaseVisionImage image) {
                  return detector.processImage(image);
                },
                overlayBuilder: (c) {
                  return Text('');
                },
                onResult: (detectedFaces) {
                  if (detectedFaces == null || detectedFaces.isEmpty || !mounted) {
                    return;
                  }

                  setState(() {
                    // Copy the results into the state field; the callback
                    // parameter previously shadowed `faces`, so the field
                    // was never actually updated.
                    faces = List<Face>.from(detectedFaces);
                    final face = faces[0];
                    // Both eyes clearly open and a non-empty bounding box:
                    // treat it as a usable frame and trigger the capture.
                    if (face.rightEyeOpenProbability >= 0.9 &&
                        face.leftEyeOpenProbability >= 0.9 &&
                        !face.boundingBox.isEmpty) {
                      detectString = 'Face detected';
                      _takeScreenShot();
                    }
                    if (face.rightEyeOpenProbability <= 0.5 &&
                        face.leftEyeOpenProbability <= 0.5) {
                      detectString = 'Open your Eyes';
                    }
                  });
                },
                onDispose: () {
                  detector.close();
                },
              ),
            ),
          ),
        ),
      ]),
    );
  }
}
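
One possible direction (a sketch only, not from the original report): instead of NativeScreenshot, capture a frame straight from the camera package's CameraController and keep it in the app's temporary directory, so nothing is written to the gallery. This assumes a controller instance is accessible from the preview widget, a camera package version (>= 0.7.0) whose takePicture() returns an XFile, and the path_provider package for the temp path.

import 'dart:convert';
import 'dart:io';

import 'package:camera/camera.dart';
import 'package:path_provider/path_provider.dart';

/// Captures one frame into the app's temporary directory (never the gallery),
/// returns it base64-encoded for the verification request, and deletes the
/// temporary copy afterwards.
Future<String> captureToTempAsBase64(CameraController controller) async {
  // takePicture() writes to an app-private location and returns the file;
  // older camera versions take an explicit file path instead.
  final shot = await controller.takePicture();

  // Copy into the temp directory; the OS may purge this folder at any time.
  final Directory tempDir = await getTemporaryDirectory();
  final String tempPath =
      '${tempDir.path}/capture_${DateTime.now().millisecondsSinceEpoch}.jpg';
  final File tempFile = await File(shot.path).copy(tempPath);

  // Encode for the upload payload.
  final bytes = await tempFile.readAsBytes();
  final String base64Image = base64Encode(bytes);

  // Clean up so no image is left on disk once the payload is built.
  await tempFile.delete();
  return base64Image;
}

The returned string could then be passed to uploadData(base64Image: ...) in place of the NativeScreenshot result. Whether the controller behind CameraMlVision can be reached this way depends on the flutter_camera_ml_vision version, so this is only an outline of the approach.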
Kleak added the question label on Dec 9, 2020