Get camera preview output #24
I think that the custom detector approach would be the way to go; see this question on Stack Overflow for an overview. I'd guess that the "getting frames somewhat out of sync" issue is due to overwrites of the `Frame` instance's underlying byte buffer. The camera source cycles through a few buffers for frame data, constantly recycling them (this greatly reduces GC churn). My guess is that you are grabbing a `Frame` instance at one point in time, it is overwritten a little later by the camera source, and then you are seeing a later frame. The solution would be to copy the buffer that you get from `Frame.getGrayscaleImageData()` from within your `Tracker`. That would snapshot the image data at that point in time.
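To illustrate the buffer-recycling point, here's a minimal plain-Java sketch (no Android dependencies; the `Frame` and camera-source behavior are simulated) showing why a defensive copy is needed when the same underlying buffer is reused for later frames:

```java
import java.nio.ByteBuffer;

public class SnapshotDemo {
    // Copy the buffer's contents so that later overwrites by the
    // camera pipeline don't change the data we already captured.
    static byte[] snapshot(ByteBuffer buf) {
        ByteBuffer dup = buf.duplicate(); // independent position/limit, shared content
        dup.rewind();
        byte[] copy = new byte[dup.remaining()];
        dup.get(copy);
        return copy;
    }

    public static void main(String[] args) {
        // Pretend this is the recycled buffer behind a Frame.
        ByteBuffer frameData = ByteBuffer.wrap(new byte[]{1, 2, 3, 4});

        byte[] safeCopy = snapshot(frameData);       // defensive copy
        ByteBuffer staleRef = frameData.duplicate(); // just another view, no copy

        frameData.put(0, (byte) 99); // camera source reuses the buffer for a new frame

        System.out.println(staleRef.get(0)); // sees the *new* frame's data
        System.out.println(safeCopy[0]);     // snapshot is intact
    }
}
```

Holding a reference (the `staleRef` here) observes the overwrite; only the copied array preserves the original frame.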
Really sorry for taking so long to close this. The approach you suggested was the one I ended up following -- I outlined the last step I needed to take in this answer. Thanks ever so much for your help!
@borgesgabriel: Hi, I'm facing the same issue. I tried your solution, but I'm still getting a bitmap without the `GraphicOverlay` mask applied, and it's also rotated inversely. Could you check the Stack Overflow question I've opened?
Hi,

I'm having issues obtaining the `CameraSource` frame passed to the face detector. I have tried getting the drawing cache from both the `GraphicOverlay` object on `CameraSourcePreview` (which contains only the drawn overlay, as expected) and its `SurfaceView` object (bitmap empty). The drawing cache from the `CameraSourcePreview` object itself doesn't help, either.

Another path I tried was to create a custom detector myself and expose a `getFrame()` method that returned the last frame passed to the detector. This solution seemed a bit of a workaround, and I believe I was getting frames somewhat out of sync, so I pursued it no further.

Finally, I resorted to the following logic: whenever I find a face I deem acceptable for my use case, I submit an async request to take a picture using `CameraSource.takePicture()`. With this, however, comes a bit of lag on the preview while the picture is being taken; also, since there's a delay between finding a face I want and taking its picture, the obtained bitmap may no longer be "acceptable". Finally, in order to track the same face I wanted in the first place (I'm using `LargestFaceFocusingProcessor` here), I need to pass the detector I'm using in my activity to the object that actually takes the picture and does further processing with it. Overall, I'm not sure that's the way to go here.

I'd appreciate any tips on how to proceed with this issue. Thanks.
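The custom-detector-with-`getFrame()` idea can be made safe by copying the buffer before delegating. Below is a hedged, plain-Java sketch of that pattern: the `Detector` interface here is a simplified stand-in for `com.google.android.gms.vision.Detector` (which needs the Android SDK), and `getLastFrame()` is a hypothetical accessor, not a real API:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class FrameCapturingDetector {
    // Simplified stand-in for com.google.android.gms.vision.Detector<T>;
    // the real class takes a Frame and returns a SparseArray of results.
    interface Detector {
        int[] detect(ByteBuffer grayscaleData);
    }

    private final Detector delegate;
    private volatile byte[] lastFrame; // defensive copy, safe to hand out

    FrameCapturingDetector(Detector delegate) {
        this.delegate = delegate;
    }

    // Mirrors overriding detect(Frame): snapshot the buffer *before*
    // delegating, because the camera source recycles it afterwards.
    int[] detect(ByteBuffer grayscaleData) {
        ByteBuffer dup = grayscaleData.duplicate();
        dup.rewind();
        byte[] copy = new byte[dup.remaining()];
        dup.get(copy);
        lastFrame = copy;
        return delegate.detect(grayscaleData);
    }

    byte[] getLastFrame() {
        return lastFrame;
    }

    public static void main(String[] args) {
        FrameCapturingDetector d =
                new FrameCapturingDetector(buf -> new int[]{buf.remaining()});
        ByteBuffer frame = ByteBuffer.wrap(new byte[]{10, 20, 30});
        d.detect(frame);
        frame.put(0, (byte) 0); // simulate the camera recycling the buffer
        System.out.println(Arrays.toString(d.getLastFrame()));
    }
}
```

Because the copy happens inside `detect()` before the camera source can recycle the buffer, `getLastFrame()` stays consistent with whatever the delegate actually saw, which should avoid the out-of-sync frames described above.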