Teddy is an app that helps people with touch accessibility issues still capture those meaningful moments with their iPhone or iPad.
My grandfather, Larry, was a force of nature. He was kind, loving, and a mentor to many, but especially his grandchildren. He was also a fantastic photographer. Growing up, he shared his passion for photography with me and inspired my own. Whenever we traveled, he would bring along his Canon EOS 6D. He taught me how to use it, how to frame a photo, and what makes an interesting photograph. When Shot on iPhone became a feasible thing for professionals, he continuously tried to take photos on his phone. Unfortunately, poor circulation in his fingers made the touchscreen difficult to use, and he lost the moments he wanted to capture.
On a larger scale, there is a strong overlap between people who have touch issues and those who have difficulty learning accessibility-focused features such as VoiceOver [1][2]. Teddy addresses this by using Apple's Foundation Models and SpeechAnalyzer APIs to take action on the user's behalf through natural language processing and tool calling.
[1] https://journals.sagepub.com/doi/10.1177/21695067231193656
[2] https://pmc.ncbi.nlm.nih.gov/articles/PMC7924826/
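As a rough sketch, tool calling with the FoundationModels framework looks something like this. The tool below is purely illustrative (Teddy's actual tools are not shown), but the `Tool` protocol, `@Generable` arguments, and `LanguageModelSession` are the framework's pattern for letting the on-device model act for the user:

```swift
import FoundationModels

// Hypothetical example tool -- not Teddy's real implementation.
struct CapturePhotoTool: Tool {
    let name = "capturePhoto"
    let description = "Captures a photo with the current camera."

    @Generable
    struct Arguments {
        @Guide(description: "Seconds to wait before capturing.")
        let delay: Int
    }

    func call(arguments: Arguments) async throws -> String {
        // In a real app this would trigger the capture pipeline.
        "Captured a photo after \(arguments.delay) seconds."
    }
}

// The session hands the user's spoken request to the on-device model,
// which can decide to invoke the tool on the user's behalf.
let session = LanguageModelSession(tools: [CapturePhotoTool()])
let response = try await session.respond(to: "Take a picture in three seconds")
print(response.content)
```

This is why speech plus tool calling matters for touch accessibility: the user never has to hit a small on-screen button at the right moment.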
Get the TestFlight here
Oh my gosh, there were so many fun things used! Here's a brief list of frameworks:
- UIKit
- SwiftUI
- AVKit
- AVFoundation
- CoreImage
- FoundationModels
- Speech
- TipKit
Yes, Teddy uses private APIs for certain aspects of the app, namely the glass backgrounds. To learn more about _UIViewGlass, click here.
I used AI in this project, but very minimally.
AVFoundation can be very difficult to work with and understand, since many of the errors it communicates are simply bad memory access crashes. To achieve the blurred bounds around the camera preview, where the normal preview's aspect ratio doesn't cover the screen, I had ChatGPT generate some of the structures to convert each AVCaptureVideoPreviewLayer frame into a UIImage on frame update. I then did all of the manual work of representing and displaying it.
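In broad strokes, that conversion looks something like the sketch below: a video-data delegate pulls each frame's pixel buffer, blurs it with CoreImage, and hands back a UIImage to draw behind the preview. Class and callback names here are illustrative, not Teddy's actual code:

```swift
import AVFoundation
import CoreImage
import UIKit

// Illustrative sketch: receives frames from an AVCaptureVideoDataOutput
// attached to the same session as the AVCaptureVideoPreviewLayer.
final class FrameSnapshotter: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let ciContext = CIContext()
    var onFrame: ((UIImage) -> Void)?

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let base = CIImage(cvPixelBuffer: pixelBuffer)
        // Blur the frame so it can fill the bounds the preview's
        // aspect ratio leaves uncovered.
        let blurred = base
            .applyingFilter("CIGaussianBlur", parameters: [kCIInputRadiusKey: 30])
            .cropped(to: base.extent)
        guard let cgImage = ciContext.createCGImage(blurred, from: blurred.extent) else { return }
        DispatchQueue.main.async { self.onFrame?(UIImage(cgImage: cgImage)) }
    }
}
```

Rendering through a shared `CIContext` rather than creating one per frame is what keeps this cheap enough to run on every frame update.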
Also, shout out to Ethan Lipnik for alerting me to the existence of the new SpeechAnalyzer API. To transition from simply using the old SFSpeechRecognizer, I used AI to do some of the heavy lifting of implementing SpeechAnalyzer and refactoring the audio input code to make everything consistent.
While AI assisted me in understanding and using these APIs, I did all of the architecture work and integrated the pieces to make these features come together. AI just helped to create the first draft.