# Vision - Do you see what I see?

This is a demo for testing the Vision and Core ML frameworks introduced by Apple in iOS 11. I wanted to check this out because now your device can tell you what it is seeing 😎 awesome, right?

I can think of many people with sight difficulties who could use this framework; it would be a great help. Thanks, Apple 🙌🏻

Keep reading, I hope I can illustrate some more 🤓

A question confronting neuroscientists and computer vision researchers alike is how objects can be identified simply by "looking". We know that the human brain solves this problem very well: we only have to look at something to know what it is. But teaching a computer to "know" what it is looking at is far harder!

## 🙌 So, what do we want to do?

When we want to analyze an image, there are three major tasks that we actually want to perform.

### 1 - The Asks:

That is, deciding what is in the image and what I want to know about it. In Vision terminology these are requests. An image request will hold on to the image and ...

### 2 - The Machinery:

... it's going to do all the work for you. That's the machinery; somebody has got to do the work. Vision's machinery is its request handler.

### 3 - The Results:

Last, we get some results out of the request, at least we hope that's what's going to happen 🤞🏼 As a result, Vision gives you back what it calls observations: what Vision observed in the image. And these observations depend on what you asked Vision to do.
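To make these three pieces concrete, here is a minimal sketch using one of Vision's built-in requests, face detection, which needs no Core ML model at all. The `detectFaces` function and its `cgImage` parameter are just placeholders for whatever image you want to analyze.

```swift
import Vision
import CoreGraphics

func detectFaces(in cgImage: CGImage) {
    // 1 - The Ask: a request describing what we want to know about the image.
    let request = VNDetectFaceRectanglesRequest { request, error in
        // 3 - The Results: Vision hands back an array of observations.
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // Bounding boxes come back in normalized coordinates (0...1).
            print("Found a face at \(face.boundingBox)")
        }
    }

    // 2 - The Machinery: the request handler does the actual work.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    do {
        try handler.perform([request])
    } catch {
        print("Vision failed: \(error)")
    }
}
```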

So how do we do this? How do we make the device see for ourselves? Here is where the coding comes in 👏🏻
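For the "tell me what the device is seeing" part, Vision can drive a Core ML image classifier. Here is a sketch of that flow, assuming a bundled model such as MobileNet (the model actually used in this project may differ; Xcode generates a Swift class with the same name for any compiled .mlmodel you add).

```swift
import Vision
import CoreML
import CoreGraphics

// A sketch of image classification with Vision + Core ML. `MobileNet` is an
// assumed model name standing in for whatever .mlmodel ships with the project.
func classify(_ cgImage: CGImage) throws {
    // Wrap the Core ML model so Vision can drive it.
    let model = try VNCoreMLModel(for: MobileNet().model)

    // The ask: "tell me what is in this image".
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let best = results.first else { return }
        // Each observation pairs a label with a confidence value.
        print("I see: \(best.identifier) (\(Int(best.confidence * 100))%)")
    }
    request.imageCropAndScaleOption = .centerCrop

    // The machinery: perform the request on the image.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
}
```

In a real app you would call `perform(_:)` off the main thread, since running the model can take a noticeable amount of time.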

Also check out the WWDC 2017 session "Vision Framework: Building on Core ML": https://developer.apple.com/videos/play/wwdc2017/506/