
Investigate available camera capabilities we can leverage #47

Open
azasypkin opened this issue Jan 23, 2017 · 5 comments

Comments

@azasypkin
Member

@sfoster @punamdahiya feel free to add ideas and capabilities you think would be useful to have.

I'll see whether we can rely on something that tells us we have an object in focus rather than a blurred frame. Otherwise I'll check whether we can use OpenCV for that (e.g. detect contours, extract keypoints, and see whether we have enough of them).
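One common "is this frame in focus?" heuristic is the variance of the Laplacian: sharp frames have high-frequency detail, so the Laplacian response varies a lot, while blurred frames flatten it out. With OpenCV this is essentially one call (`cv2.Laplacian(gray, cv2.CV_64F).var()`); the pure-Python sketch below just illustrates the math, and the threshold value is a made-up example that would need per-camera tuning:

```python
def laplacian_variance(image):
    """image: 2D list of grayscale values (0-255). Returns the variance
    of the 4-neighbour Laplacian response over the interior pixels."""
    h, w = len(image), len(image[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour Laplacian kernel: 4*center - N - S - E - W
            lap = (4 * image[y][x]
                   - image[y - 1][x] - image[y + 1][x]
                   - image[y][x - 1] - image[y][x + 1])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def looks_sharp(image, threshold=100.0):
    # threshold is an illustrative value, not a recommendation
    return laplacian_variance(image) > threshold
```

A perfectly uniform frame scores 0 (maximally "blurred"); a high-contrast pattern scores very high.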

@azasypkin
Member Author

https://developer.apple.com/library/content/documentation/DeviceInformation/Reference/iOSDeviceCompatibility/Cameras/Cameras.html

Strangely (or obviously, since we're talking about Apple), the iPhone 5(s) cameras aren't mentioned.

@azasypkin
Member Author

Camera properties we can control directly from C++ (VideoCapture): https://github.com/opencv/opencv/blob/master/modules/videoio/include/opencv2/videoio.hpp#L486-L491
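Those header lines define integer property IDs that `VideoCapture::get`/`set` accept. As a hedged illustration (hard-coding a few long-stable `CAP_PROP_*` values rather than reproducing the linked line range), a thin wrapper over the Python bindings might look like this; `capture` is anything that behaves like `cv2.VideoCapture`:

```python
# Illustrative subset of OpenCV's CAP_PROP_* camera-control IDs.
# The numeric values are the long-stable videoio constants; treat this
# as a sketch, not a copy of the linked header lines.
CAP_PROPS = {
    "CAP_PROP_BRIGHTNESS": 10,
    "CAP_PROP_CONTRAST": 11,
    "CAP_PROP_SATURATION": 12,
    "CAP_PROP_HUE": 13,
    "CAP_PROP_GAIN": 14,
    "CAP_PROP_EXPOSURE": 15,
}

def set_property(capture, name, value):
    """capture must expose a .set(prop_id, value) -> bool method, like
    cv2.VideoCapture. Returns False for unknown property names; note
    that backends may also return False for unsupported properties."""
    prop_id = CAP_PROPS.get(name)
    if prop_id is None:
        return False
    return capture.set(prop_id, value)
```

With real OpenCV this would be e.g. `set_property(cv2.VideoCapture(0), "CAP_PROP_EXPOSURE", -4)`; whether a given property actually takes effect depends on the capture backend and the camera driver.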

@azasypkin
Member Author

Okay, here is the list of all frame metadata types that iOS can currently give us. Basically faces, none, and a bunch of machine-readable codes (which we can use later on):

  • AztecCode
  • Code128Code
  • Code39Code
  • Code39Mod43Code
  • Code93Code
  • DataMatrixCode
  • EAN13Code
  • EAN8Code
  • Face
  • Interleaved2of5Code
  • ITF14Code
  • None
  • PDF417Code
  • QRCode
  • UPCECode
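The list above boils down to three buckets, which a consumer could dispatch on like this. The short names below mirror this list, not AVFoundation's full reverse-DNS identifiers (e.g. `AVMetadataObjectTypeQRCode` resolves to `org.iso.QRCode`), so treat the strings as simplified:

```python
# Machine-readable code types from the list above (simplified names).
MACHINE_READABLE_CODES = {
    "AztecCode", "Code128Code", "Code39Code", "Code39Mod43Code",
    "Code93Code", "DataMatrixCode", "EAN13Code", "EAN8Code",
    "Interleaved2of5Code", "ITF14Code", "PDF417Code", "QRCode",
    "UPCECode",
}

def classify_metadata_type(type_name):
    """Map a detected metadata type to one of: 'face', 'code', 'none'."""
    if type_name == "Face":
        return "face"
    if type_name in MACHINE_READABLE_CODES:
        return "code"
    return "none"
```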

@sfoster
Collaborator

sfoster commented Jan 31, 2017

That list means nothing to me. Can you summarize the significance of these metadata properties, @azasypkin?

@Yoric Yoric modified the milestones: Sprint 3, Sprint 2 Feb 1, 2017
@azasypkin
Member Author

> That list means nothing to me.

Well, the summary is pretty short: there is nothing useful for us at this stage in the iOS frameworks, so we'll have to rely on something based on OpenCV.

The AVFoundation framework can only give us additional metadata about faces or machine-readable codes it sees in the photo (location, bounding rect, roll/yaw angle, and the code type). We may need that later, but it's too early to think about it.
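The OpenCV fallback mentioned earlier ("enough keypoints/contours means we have a real object") could be approximated with a gradient-density check: count how many pixels have a strong local gradient. A pure-Python sketch, where both thresholds are invented tuning parameters:

```python
def strong_edge_fraction(image, threshold=64):
    """Fraction of interior pixels whose gradient magnitude (central
    differences) exceeds threshold. image: 2D list of grayscale values."""
    h, w = len(image), len(image[0])
    strong = total = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = image[y][x + 1] - image[y][x - 1]
            gy = image[y + 1][x] - image[y - 1][x]
            total += 1
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                strong += 1
    return strong / total

def has_enough_detail(image, min_fraction=0.05):
    # min_fraction is a made-up tuning parameter
    return strong_edge_fraction(image) >= min_fraction
```

A real implementation would instead use an OpenCV feature detector (e.g. ORB or FAST) and threshold on the keypoint count, but the idea is the same: featureless frames score near zero.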

@azasypkin azasypkin removed their assignment Jul 16, 2018