@sfoster @punamdahiya feel free to add ideas and capabilities you think would be useful to have.
I'll see if we can rely on something that tells us we have an object in focus rather than blurred. Otherwise I'll check whether we can use OpenCV for that (e.g. detect contours, extract keypoints and see if we have enough of them).
Well, the summary is pretty short: there is nothing useful for us at this stage in the iOS frameworks, so we'll have to rely on something based on OpenCV.
The AVFoundation framework can only give us additional metadata about faces or machine-readable codes it sees in the photo (location, bounding rect, roll/yaw angles, and the type for the codes). We may need that later, but it's too early to think about it now.