forked from esylab/myeye

This is a project that aims to act as a pair of eyes for the blind.


AnthonyByansi/myeye

 
 


myeye

This is a project aimed at helping the blind by providing audio descriptions of images, converting text to speech, and offering navigation assistance.

Features of the app

  • Audio guidance for navigating unfamiliar environments
  • Text-to-speech functionality for reading web content aloud
  • Controls for adjusting audio volume and speed
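
As an illustrative sketch (not code from this repository), the volume and speed controls could be wired to a TTS engine such as the pyttsx3 library listed under Ingredients. The clamping bounds below are assumptions, not limits imposed by pyttsx3:

```python
def clamp_speech_settings(volume, rate):
    """Clamp user input to sane TTS ranges.
    pyttsx3 expects volume in 0.0-1.0; rate is in words per minute
    (the 50-400 bounds here are an assumption, not a pyttsx3 limit)."""
    volume = max(0.0, min(1.0, float(volume)))
    rate = max(50, min(400, int(rate)))
    return volume, rate

def speak(text, volume=1.0, rate=200):
    """Speak `text` aloud at the requested volume and speed."""
    import pyttsx3  # imported lazily; requires a local speech engine
    volume, rate = clamp_speech_settings(volume, rate)
    engine = pyttsx3.init()
    engine.setProperty("volume", volume)
    engine.setProperty("rate", rate)
    engine.say(text)
    engine.runAndWait()
```

Keeping the clamping logic separate from the engine calls makes the user-facing controls easy to unit-test without audio hardware.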

Built With

  • React - A JavaScript library for building user interfaces
  • Node.js - A JavaScript runtime built on Chrome's V8 JavaScript engine
  • Express - A fast, minimalist web framework for Node.js
  • MongoDB - A document-oriented database

Requirements

  • Node.js
  • npm (comes with Node.js)

Installation

  • Clone the repository: git clone https://github.com/
  • Install the dependencies: npm install

Running the app

  • Start the server: npm run start:server
  • In a separate terminal window, start the client: npm run start:client
  • Open your web browser and navigate to http://localhost:3000

Building the app

  • To build the app for production, run npm run build. The production-ready files will be located in the build folder.

Contributing

If you would like to contribute to the project, please follow these guidelines:

  • Fork the repository
  • Create a new branch for your feature
  • Make your changes and commit them to your branch
  • Submit a pull request for review

License

This project is licensed under the MIT License.

More Features of myeye

  • First and foremost, it is intended to include features that help the user navigate and orient themselves in their environment. This could include GPS and maps to help the user know where they are, as well as tools to help them identify nearby objects and obstacles.
  • The app can also convert visual information into audio or tactile feedback: it uses optical character recognition (OCR) to read text out loud and object recognition to identify and describe objects in the environment.
  • The app also has features to help the user communicate and interact with others, such as text-to-speech and speech-to-text tools, as well as tools for sending and receiving messages.
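
The OCR-to-speech path above could be sketched as follows. pyttsx3 is listed under Ingredients; pytesseract is an assumed OCR backend (this README names OCR generally but not a specific library), and it requires the Tesseract binary to be installed:

```python
import re

def clean_ocr_text(raw):
    """Collapse OCR line breaks and stray whitespace into a speakable string."""
    return re.sub(r"\s+", " ", raw).strip()

def read_image_aloud(image_path):
    """OCR an image and read the recognized text out loud.
    pytesseract is an assumed OCR backend (needs the Tesseract binary);
    pyttsx3 is the TTS library listed under Ingredients."""
    import pytesseract
    import pyttsx3
    from PIL import Image

    text = clean_ocr_text(pytesseract.image_to_string(Image.open(image_path)))
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()
    return text
```

Normalizing the OCR output before speaking it matters because raw OCR text is full of line breaks that would otherwise be read as awkward pauses.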

Ingredients

  • Microsoft Azure Computer Vision API (automatically generates image descriptions)
  • OpenCV library for Python (analyzes images and extracts information from them)
  • pyttsx3 library to convert text to speech
  • Google Maps API to provide navigation assistance
  • Braille output for text-based information
  • Firebase (for the AI-based dictionary)
  • OpenCV (face recognition)
  • Google Bangla Speech-to-Text API
  • Google Bangla Text-to-Speech API
  • Google Location API
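
For the Azure Computer Vision ingredient, a request to the v3.2 `describe` operation (which generates an image caption) might be assembled as below; the endpoint and key values are placeholders you would obtain from your own Azure resource:

```python
def build_describe_request(endpoint, subscription_key, max_candidates=1):
    """Return (url, headers, params) for Azure Computer Vision's
    'describe' operation, which generates a caption for an image."""
    url = endpoint.rstrip("/") + "/vision/v3.2/describe"
    headers = {
        "Ocp-Apim-Subscription-Key": subscription_key,
        "Content-Type": "application/octet-stream",  # raw image bytes
    }
    params = {"maxCandidates": str(max_candidates), "language": "en"}
    return url, headers, params
```

An image would then be sent with e.g. `requests.post(url, headers=headers, params=params, data=image_bytes)`; the generated captions appear under `description.captions` in the JSON response.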
