
Get camera preview output #24

Closed
borgesgabriel opened this issue Sep 15, 2015 · 3 comments

@borgesgabriel

Hi,

I'm having issues obtaining the CameraSource frame passed to the face detector.

I have tried getting the drawing cache from both the GraphicOverlay object on CameraSourcePreview (which contains only the drawn overlay, as expected), and its SurfaceView object (bitmap empty). The drawing cache from the CameraSourcePreview object itself doesn't help, either.

Another path I tried was to create a custom detector myself and expose a getFrame() method that returned the last frame passed to the detector. This felt like a bit of a workaround, and I believe I was getting frames somewhat out of sync, so I pursued it no further.

Finally, I resorted to the following logic: whenever I find a face I deem acceptable for my use case, I submit an async request to take a picture using CameraSource.takePicture() (see the sketch below). This, however, introduces a bit of lag on the preview while the picture is being taken; and since there's a delay between finding the face I want and actually taking its picture, the obtained bitmap may no longer be "acceptable". On top of that, in order to track the same face I wanted in the first place (I'm using LargestFaceFocusingProcessor here), I need to pass the detector used in my activity to the object that actually takes the picture and does the further processing. Overall, I'm not sure this is the way to go.
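
For concreteness, here is a rough sketch of that takePicture() flow, assuming a Tracker<Face> that holds a reference to the CameraSource; isAcceptable() is a placeholder for whatever criterion the use case requires, not part of the API:

```java
import android.util.Log;

import com.google.android.gms.vision.CameraSource;
import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.Tracker;
import com.google.android.gms.vision.face.Face;

// Fires an asynchronous capture as soon as a face meets some
// application-specific "acceptable" criterion.
public class CaptureOnFaceTracker extends Tracker<Face> {

    private final CameraSource cameraSource;
    private boolean captureRequested = false;

    public CaptureOnFaceTracker(CameraSource cameraSource) {
        this.cameraSource = cameraSource;
    }

    @Override
    public void onUpdate(Detector.Detections<Face> detections, Face face) {
        if (!captureRequested && isAcceptable(face)) {
            captureRequested = true;
            // Asynchronous: the JPEG arrives some frames later, so the
            // captured image may no longer match the detection that
            // triggered it -- the lag described above.
            cameraSource.takePicture(null, new CameraSource.PictureCallback() {
                @Override
                public void onPictureTaken(byte[] jpegData) {
                    Log.d("CaptureOnFaceTracker", "Got " + jpegData.length + " bytes");
                    // Decode/process the JPEG here.
                }
            });
        }
    }

    private boolean isAcceptable(Face face) {
        // Placeholder criterion, e.g. a minimum face width in pixels.
        return face.getWidth() > 200;
    }
}
```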

I'd appreciate any tips on how to proceed with this issue. Thanks.

@pm0733464
Contributor

I think that the custom detector approach would be the way to go. See this question on Stack Overflow for an overview:

http://stackoverflow.com/questions/32299947/mobile-vision-api-concatenate-new-detector-object-to-continue-frame-processing/32314136
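
For reference, the pattern described there is essentially a detector that wraps the stock FaceDetector and delegates everything to it, which gives you a hook on every Frame; a minimal sketch (class and method bodies are mine):

```java
import android.util.SparseArray;

import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;

// Wraps the stock FaceDetector so that every Frame passes through
// detect(), where it can be inspected or copied before delegation.
public class FrameInterceptingDetector extends Detector<Face> {

    private final Detector<Face> delegate;

    public FrameInterceptingDetector(Detector<Face> delegate) {
        this.delegate = delegate;
    }

    @Override
    public SparseArray<Face> detect(Frame frame) {
        // Hook point: the frame handed to the underlying detector is
        // visible here (see the buffer-copy note below).
        return delegate.detect(frame);
    }

    @Override
    public boolean isOperational() {
        return delegate.isOperational();
    }

    @Override
    public boolean setFocus(int id) {
        return delegate.setFocus(id);
    }
}
```

The wrapper is then the detector you hand to both CameraSource.Builder and LargestFaceFocusingProcessor, with the real FaceDetector passed in as the delegate.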

I'd guess that the "getting frames somewhat out of sync" issue is due to overwrites of the Frame instance's underlying byte buffer. The camera source cycles through a few buffers for frame data, constantly recycling them (this greatly reduces GC churn). My guess is that you are grabbing a Frame instance at one point in time, it is overwritten a little later by the camera source, and then you are seeing a later frame.

The solution would be to copy the buffer that you get from Frame.getGrayscaleImageData() from within your Tracker. That would snapshot the image data at that point in time.
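
Extending the wrapper sketched above, the copy can live inside detect(), so the snapshot is taken before the camera source reuses the buffer (the field and getter names here are mine):

```java
// Additions to FrameInterceptingDetector from the sketch above.

// Holds one frame's pixels together with its metadata so the two
// can never get out of step.
public static class FrameSnapshot {
    public final byte[] nv21Data;
    public final int width;
    public final int height;
    public final int rotation;

    FrameSnapshot(byte[] nv21Data, int width, int height, int rotation) {
        this.nv21Data = nv21Data;
        this.width = width;
        this.height = height;
        this.rotation = rotation;
    }
}

private volatile FrameSnapshot latestSnapshot;

@Override
public SparseArray<Face> detect(Frame frame) {
    // Copy out of the recycled buffer before the camera source overwrites it.
    // Work on a duplicate so the delegate still sees an untouched buffer position.
    java.nio.ByteBuffer buffer = frame.getGrayscaleImageData().duplicate();
    buffer.rewind();
    byte[] copy = new byte[buffer.remaining()];
    buffer.get(copy);

    Frame.Metadata meta = frame.getMetadata();
    latestSnapshot = new FrameSnapshot(copy, meta.getWidth(), meta.getHeight(), meta.getRotation());

    return delegate.detect(frame);
}

// Called from the Tracker (or the activity) once a suitable face shows up.
public FrameSnapshot getLatestSnapshot() {
    return latestSnapshot;
}
```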

@borgesgabriel
Author

Really sorry for taking that long to close this.

The approach you suggested was the one I ended up following -- I outlined the last step I needed to take in this answer.
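
For anyone who can't reach the link: that last step presumably amounts to turning the copied NV21 bytes back into a Bitmap, which on Android typically looks like this (a sketch, not necessarily the exact code from the linked answer):

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;

import java.io.ByteArrayOutputStream;

public final class Nv21Util {

    private Nv21Util() {}

    // nv21: the bytes copied in detect(); width/height: from Frame.Metadata.
    public static Bitmap nv21ToBitmap(byte[] nv21, int width, int height) {
        YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // Compress to JPEG first because Android has no direct NV21 -> Bitmap decode.
        yuv.compressToJpeg(new Rect(0, 0, width, height), 90, out);
        byte[] jpeg = out.toByteArray();
        // Note: the result is in sensor orientation and may still need to be
        // rotated (e.g. with a Matrix) to match what the preview shows.
        return BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
    }
}
```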

Thanks ever so much for your help!

@jibinjoseph

jibinjoseph commented Apr 12, 2017

@borgesgabriel: Hi, I am facing the same issue. I have tried your solution, but I am still getting a bitmap without the mask (GraphicOverlay) drawn on it, and the bitmap also comes out rotated incorrectly. Could you take a look at this Stack Overflow question I have opened:
http://stackoverflow.com/questions/43361706/can-not-get-camera-output-face-detection-android. Thank you.
