A SwiftUI app that uses Core ML and the Vision framework to extract text from images and answer questions about the extracted context. Under the hood it uses Google's BERT model, converted to Core ML format.
Updated Jan 14, 2024 - Swift
Official repository of my Master's Thesis project: "Developing an AI-Powered Voice Assistant for an iOS Payment App"
Utilizing AI and machine learning, the project extracts text from images via Apple's Vision framework and offers instant answers to questions about documents through the BERT model.
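As a rough illustration of the question-answering step these projects describe: BERT-style QA models take the question and the extracted document text as a single token sequence of the form `[CLS] question [SEP] context [SEP]`. The sketch below, which is an assumption about these repos rather than their actual code, shows that input layout with a stand-in whitespace tokenizer; a real app would use the Core ML model's own WordPiece vocabulary and token IDs.

```swift
import Foundation

// Hypothetical sketch: build the combined input sequence a BERT QA model
// expects. The whitespace tokenizer is a placeholder for illustration only;
// a production app would use the model's WordPiece vocabulary.
func buildBERTInput(question: String, context: String) -> [String] {
    let tokenize = { (s: String) -> [String] in
        s.lowercased().split(separator: " ").map(String.init)
    }
    // [CLS] marks the sequence start; [SEP] separates question from context.
    return ["[CLS]"] + tokenize(question) + ["[SEP]"]
         + tokenize(context) + ["[SEP]"]
}

let tokens = buildBERTInput(question: "Who published BERT?",
                            context: "BERT was published by Google")
print(tokens.joined(separator: " "))
```

The model then predicts start and end positions within the context segment, and the answer is the span of original text between them.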