Airput is a project conceived during a high-energy hackathon with the mission of breaking down barriers for individuals with limited mobility. By harnessing computer vision, we've crafted a solution that empowers differently-abled users to interact with a personal computer using nothing but their facial and eye movements.
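The core idea can be sketched in a few lines: a face-landmark detector (e.g. MediaPipe, which is an assumption here, not a confirmed part of Airput) yields a normalized eye/face position each frame, which is mapped to screen coordinates and smoothed so small head tremors don't jitter the cursor. The class and parameter names below are illustrative, not Airput's actual code.

```python
# Hypothetical sketch: map a normalized face/eye position (0..1 from a
# landmark detector) to screen cursor coordinates, with exponential
# moving-average smoothing. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CursorMapper:
    screen_w: int
    screen_h: int
    alpha: float = 0.3   # EMA smoothing factor (0 = frozen, 1 = raw input)
    _x: float = 0.0      # smoothed cursor state
    _y: float = 0.0

    def update(self, nx: float, ny: float) -> tuple[int, int]:
        """nx, ny: normalized landmark coordinates in [0, 1]."""
        # Clamp in case the detector overshoots the frame.
        nx = min(max(nx, 0.0), 1.0)
        ny = min(max(ny, 0.0), 1.0)
        # EMA keeps the cursor steady against per-frame jitter.
        self._x += self.alpha * (nx * self.screen_w - self._x)
        self._y += self.alpha * (ny * self.screen_h - self._y)
        return round(self._x), round(self._y)

mapper = CursorMapper(screen_w=1920, screen_h=1080)
for _ in range(50):            # repeated frames converge on the target
    x, y = mapper.update(0.5, 0.5)
print(x, y)                    # converges toward the screen centre
```

In a real pipeline the `(x, y)` output would be handed to an OS-level cursor API; the smoothing factor trades responsiveness against stability and would be tuned per user.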
Due to technical issues with Git LFS during the hackathon, some essential files couldn't be uploaded to GitHub. Not to worry, though: we've made them available in the cloud. Access them on Google Drive.
Our presentation slides are on Canva.
Want to know how it works? Watch the demo on YouTube.
Looking ahead, we're excited about Airput's potential. We're dedicated to transforming it into a full-fledged open-source project and eager to welcome contributions from the tech community. If you're passionate about accessibility and inclusion, stay tuned: we'll be rolling out the welcome mat for contributors very soon.
This project is licensed under the MIT License.
Note: The name 'Airput' is a playful fusion of AI (Artificial Intelligence) and AR (Augmented Reality), reflecting our focus on harnessing computer vision for revolutionary input methods. Crafted amidst the intense creativity of a hackathon, Airput is a testament to the power of innovation under pressure.