Training Artificial Intelligence to support people with additional needs into education and/or into the workforce, no matter their spoken language or dialect.
Accessio will develop a Jamaican Patois NLP dataset that will become the basis of the language and knowledge required to understand what it means to have a disability in the Jamaican context. For the first time, people with disabilities who speak Jamaican Patois will be able to access speech-based assistive technologies in their native language.
In 2022, AI’s biggest names (DeepMind, OpenAI, Hugging Face) will further improve the use and development of NLP technology.
However, current algorithms are trained on traditional written sources such as newspapers, books, and articles, which use standard, formal structures of language. A language is rarely spoken in exactly the same way by everyone; people have their own accents, dialects, and slang.
NLP has yet to be developed for Jamaican Patois and other Caribbean languages and dialects, although there is a community of NLP developers building models for other languages, such as Chinese, Korean, and numerous African languages.
There are currently 3 major Caribbean languages with no existing NLP datasets:
- Haitian Creole (9.6 million speakers)
- Jamaican Patois (3.2 million speakers)
- Trinidadian Creole (1 million speakers)
Accessio is an open-source web app that can be used on smartphones or web browsers.
- Accessio listens to the user.
- User's speech is converted to text.
- Text is converted to NLP tokens and processed through transformers specifically trained to understand Jamaican Patois.
- Accessio provides an appropriate response.
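The four steps above can be sketched as a simple request pipeline. This is a minimal illustration only: the function bodies are hypothetical stubs standing in for a real speech-to-text engine, a trained Patois tokenizer, and a fine-tuned transformer, none of which exist yet.

```python
# Sketch of the Accessio request flow: listen -> speech-to-text ->
# tokenize -> model response. All implementations are placeholder stubs.

def speech_to_text(audio: bytes) -> str:
    """Placeholder ASR step: convert captured audio to a Patois transcript."""
    return "mi waan fi apply fi di grant"  # stubbed transcript

def tokenize(text: str) -> list:
    """Whitespace splitting stands in for a trained Patois tokenizer."""
    return text.split()

def respond(tokens: list) -> str:
    """Stub for the transformer step: map a recognised intent to a reply."""
    if "grant" in tokens:
        return "Here are the funding opportunities you may be eligible for."
    return "Sorry, mi never catch dat. Can you repeat it?"

def handle_request(audio: bytes) -> str:
    """End-to-end pipeline: audio in, response text out."""
    text = speech_to_text(audio)
    return respond(tokenize(text))
```

In a production build, each stub would be replaced by a model trained on the Patois dataset; the pipeline shape stays the same.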
Source: Google Cloud tutorial
Step 1: Source & prepare data
- Primary research: Facilitate and capture honest, open conversations about living with a full range of disabilities in Jamaica.
- Secondary research: Collect relevant disability information to train Accessio on facts, figures and processes related to specific disabilities.
Step 2: Code your model
Step 3: Train, evaluate and tune your model
Step 4: Deploy your trained model
Step 5: Get predictions from your model
Step 6: Monitor the ongoing predictions
Step 7: Manage your models and versions
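Steps 2–5 can be illustrated with a toy bag-of-words intent classifier in pure Python. The training examples, labels and intents below are invented for illustration and are not real Accessio data; a real model would be a transformer trained on the collected Patois corpus.

```python
# Toy walk-through of "code your model", "train", and "get predictions"
# using a word-count intent classifier. Corpus and labels are invented.
from collections import Counter

# Output of Step 1: a tiny labelled corpus (invented Patois-flavoured examples)
train = [
    ("mi waan apply fi funding", "funding"),
    ("how mi get di grant money", "funding"),
    ("weh di school deh", "education"),
    ("mi waan go a school", "education"),
]

def train_model(samples):
    """Steps 2-3: 'train' by counting which words appear under each label."""
    counts = {}
    for text, label in samples:
        counts.setdefault(label, Counter()).update(text.split())
    return counts

def predict(model, text):
    """Step 5: score the input against each label's vocabulary."""
    words = text.split()
    return max(model, key=lambda lbl: sum(model[lbl][w] for w in words))

model = train_model(train)
print(predict(model, "mi waan di grant"))  # -> "funding"
```

Evaluation, tuning, deployment and monitoring (Steps 3–7) would wrap this same train/predict loop with held-out test data and a serving layer.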
Basic
As a user,
I want to see a filtered list of relevant assistive technology funding opportunities,
So I can narrow my applications to only the opportunities for which I am eligible.
As a user,
I want to compare funding opportunities in a table format,
So I can compare all relevant funding opportunities on one screen.
As a user,
I want to have a step-by-step list of the entire application process,
So I have no surprises or requests for random pieces of information or evidence.
Dyslexic User
As a dyslexic user,
I want to have a step-by-step customer journey map of the entire application process,
So I can visualise the whole process in one image.
Keyboard-Only User
As a keyboard-only user,
I want to be able to reach the main navigation links with a keyboard,
so that I can determine the different areas of the site.
As a keyboard-only user,
I want the ability to reach all links (text or image), form controls and page functions,
so that I can perform an action or navigate to the place I choose.
As a keyboard-only user,
I want the ability to use the enter key to open the selected link,
so that every link on a page is accessible using a keyboard as it would be with a left mouse click.
As a keyboard-only user,
I want to know where I am on the screen at all times,
so that I know what I can do and how to do it.
Screen Reader User
As a screen reader user,
I want to hear the text equivalent for each image conveying information,
so that I don’t miss any information on the page.
As a screen reader user,
I want to hear the text equivalent for each image button,
so that I will know what function it performs.
As a screen reader user,
I want to know what each form label is for each form field,
so that I can effectively enter the correct information in the form.
As a screen reader user,
I want to know the column and row headers for each table cell,
so that I can understand the meaning of the data.
User with Low-Vision
As a user who has trouble reading due to low vision,
I want to be able to make the text larger on the screen,
so that I can read it.
User with Color-blindness
As a user who is color blind,
I want to have access to information conveyed in color,
so that I do not miss anything and I understand the content.
As a user who is color blind,
I want links to be distinguishable on the page,
so that I can find the links and navigate the site.
As a user who is color blind,
I want to know what fields are required,
so that I can fill out the form.
Hearing Impaired User
As a user who is hearing-impaired,
I want a transcript of the spoken audio,
so that I can have access to all information provided in audio clips.
As a user who is hearing-impaired,
I want to turn on video captions,
so that I can understand what is being said in videos.
LinkedIn Learning, Udemy & Makers Academy
- NLP with Python for Machine Learning Essential Training
- Unit Testing & Test Driven Development in Python
- Azure Machine Learning Development: 1 Basic Concepts
- Advanced NLP with Python for Machine Learning
- Design Thinking: Data Intelligence
- Deep Learning Foundations: Natural Language Processing with TensorFlow
- Build a Backend REST API with Python & Django - Advanced
- Makers Algorithm course - #Algorithm channel on Slack