This repository contains the research and prototype for Zammo.ai in the context of its AI for Accessibility grant: Employment Accessibility Advancements through Voice AI.
The overarching goal of the project and prototype is to use Artificial Intelligence to help job seekers with disabilities enhance their existing interface options, improving both the experience and the success of finding the right job.
A Zammo chatbot hosted in a Microsoft Edge extension was chosen because it provides a continuous interface across domains. This allows a job seeker to continue their conversation with the chatbot throughout a job search across multiple unrelated job postings, job providers, or web domains.
The Zammo chatbot provides an interface for end users to interact with content conversationally, leveraging artificial intelligence to surface relevant information quickly through a voice and/or chat interface, as the end user requires.
The overall project has been a collaborative effort between Zammo.ai, Microsoft, and Open Inclusion.
Zammo.ai is an omni-channel, no-code conversational AI software platform. Open Inclusion is a research, design, and innovation consultancy making environments that work better for all.
The following is a demo of the end-state prototype and solution that a job seeker can use leveraging a Zammo chatbot and the edge extension code included in this repo.
Employment Accessibility Advancements through Voice AI Demonstration
People with disabilities face numerous challenges along the job-seeking journey.
Highlights of the issues signaled by job seekers include:
- Finding relevant information about job openings for them
- Understanding how to apply
- Filling out and submitting the job application with accurate, complete, and correctly formatted details
The following includes key research findings from the project as discovered from user surveys, user interviews, focus groups, and usability testing.
Employment Accessibility Advancements through Voice AI Research Study Findings
Following the research, we moved into a prototyping phase.
We wanted to explore a few ideas we uncovered during the research phase.
The sections below describe the prototype and a guide to help any readers deploy and use it.
The core idea of the prototype was to create a companion that could support the user throughout their job search journey.
A chatbot assistant that could both control the browser and read, translate, or transcribe the information on the page appeared to be a possible solution for our needs.
We designed the following interaction:
```mermaid
sequenceDiagram
    participant user as User
    participant browser as Edge Browser
    participant chatbot as Chatbot in Edge extension
    participant bot as Zammo Bot Engine
    participant openai as Generative LLM
    user->>browser: browses to a job advertisement page
    user->>browser: opens the extension
    user->>chatbot: asks the bot to read the page they are currently reading
    chatbot->>chatbot: retrieve the page content
    chatbot->>bot: sends the page content
    bot->>openai: contextual query with user request and ingested data
    openai->>bot: generated response
    bot->>bot: parse and structure response
    bot->>chatbot: response with requested info
    chatbot->>user: response
```
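The "retrieve the page content" and "sends the page content" steps above can be sketched as a small payload the extension assembles before handing off to the bot engine. This is an illustrative sketch only: the `PageReadRequest` shape and field names are assumptions, not the repo's actual types.

```typescript
// Illustrative payload the extension could send to the bot engine
// after extracting the visible text of the current job posting.
interface PageReadRequest {
  userUtterance: string; // what the user asked the chatbot
  pageUrl: string;       // the job advertisement the user is viewing
  pageText: string;      // visible text extracted from the page
}

// Build the request; truncating the page text keeps the downstream
// LLM prompt within the model's context limits.
function buildPageReadRequest(
  userUtterance: string,
  pageUrl: string,
  pageText: string,
  maxChars = 8000
): PageReadRequest {
  return {
    userUtterance,
    pageUrl,
    pageText: pageText.slice(0, maxChars),
  };
}
```

In a real extension the page text would come from a content script; the truncation limit here is an arbitrary placeholder.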
The setup consists of four key components and requirements:
- A generative large language model
- Creation of a Zammo bot
- Installation of the Microsoft Edge browser
- Creation of a Microsoft Edge Extension
Large language models (LLMs) are machine learning models that have been trained on large datasets.
They can be used for many different use cases, one of the most popular being text generation.
Given a prompt, the model can generate text (up to a chosen length) that complements the prompt.
In our prototype, we use the job content and the user questions as a prompt and provide the model output as a response to the user.
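How the job content and user question are combined into a prompt can be sketched as follows. The template wording here is an assumption for illustration, not the exact prompt the bot uses:

```typescript
// Combine the ingested job posting with the user's question into a
// single completion prompt. The template text is illustrative only.
function buildPrompt(jobContent: string, userQuestion: string): string {
  return [
    "You are an assistant helping a job seeker understand a job posting.",
    "Job posting:",
    jobContent.trim(),
    "",
    `Question: ${userQuestion.trim()}`,
    "Answer:",
  ].join("\n");
}
```

The model's completion of this prompt is then returned to the user as the chatbot's response.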
The chatbot needs access to a large language model. OpenAI is a company that offers large language models through an API, accessible in one of two ways:
- If using Azure OpenAI:
  - Head to the Azure OpenAI Service.
  - Provision a dedicated service and instance.
  - Take note of the API key, found under Keys in the resource blade in the Azure portal.
  - Go to Azure OpenAI Studio. Select Experiment with prompt completions. Then, under Deployments, select Create a new deployment, choose the model named text-davinci-001, and pick any deployment name you want.
  - Once the deployment completes, go to the Playground. Ensure that your newly created deployment is selected, then click Code view.
  - Take note of the endpoint URL near the programming-language dropdown.

  You should now have an endpoint URL and an API key.
- If using OpenAI:
  - Go to beta.openai.com.
  - Create or log in with the account that will provide the credentials for the bot.
  - Once signed in, under the profile menu (top right), select View API keys.
  - Generate an API key and take note of it.
The extension leverages a Zammo-built bot to power the experience.
You can create an account for free.
You can create your own Zammo bot by following the steps below or following along with the Zammo Bot Creation Walkthrough:
- Go to app.zammo.ai and create a business
- Navigate to UI Builder for your business
- In UI Builder, select the three dots (more options) in the left-hand menu next to your business's bot name
- Select Import a .zip for this bot
- Import the following file: Employment-Accessibility-Advancements-through-Voice-AI.zip
- In the top right, Save your bot
- In the configuration section:
  - If you are using Azure OpenAI, add the config for `config.azureOpenAiHostUrl` and `config.azureOpenAiApiKey`
  - If you are using OpenAI, add the config for `config.openAiApiKey`
  - Based on your OpenAI integration, set `config.openAiMode` to `azureopenai` or `openai`
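The effect of `config.openAiMode` can be sketched as a choice between two endpoints and auth schemes. This is a hedged sketch: the Azure URL shape assumes the `deployments/{name}/completions` REST layout, and the deployment name and `api-version` value are illustrative assumptions, not values taken from the bot's actual implementation.

```typescript
interface OpenAiConfig {
  openAiMode: "azureopenai" | "openai";
  azureOpenAiHostUrl?: string; // e.g. https://<resource>.openai.azure.com
  azureOpenAiApiKey?: string;
  openAiApiKey?: string;
}

// Return the completion endpoint and auth headers for the chosen mode.
// Azure OpenAI authenticates with an `api-key` header; OpenAI uses a
// Bearer token. Deployment name and api-version are placeholders.
function completionRequestFor(
  config: OpenAiConfig,
  deployment = "text-davinci-001"
): { url: string; headers: Record<string, string> } {
  if (config.openAiMode === "azureopenai") {
    return {
      url: `${config.azureOpenAiHostUrl}/openai/deployments/${deployment}/completions?api-version=2022-12-01`,
      headers: {
        "api-key": config.azureOpenAiApiKey ?? "",
        "Content-Type": "application/json",
      },
    };
  }
  return {
    url: "https://api.openai.com/v1/completions",
    headers: {
      Authorization: `Bearer ${config.openAiApiKey ?? ""}`,
      "Content-Type": "application/json",
    },
  };
}
```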
- From the top menu, open the Simulator
- Select "Your Website" and copy the code snippet
- From the code snippet, take note of your simulator token
In order to use the extension, you will need Microsoft Edge installed on your computer.
This repo contains the source code for the Microsoft Edge Extension hosting the Zammo Bot.
Follow the guide on the page for information on the setup.
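For orientation, a popup-based Edge (Chromium) extension is described by a Manifest V3 file along these lines. This is a hypothetical minimal sketch, not the repo's actual manifest; the name, version, and permissions shown are assumptions.

```json
{
  "manifest_version": 3,
  "name": "Zammo Job Search Companion",
  "version": "1.0.0",
  "action": { "default_popup": "popup.html" },
  "permissions": ["activeTab", "storage", "scripting"]
}
```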
Make sure to pin the extension to the navigation bar so it's accessible without having to hover over the extension's icon all the time.
The first time you click on the extension, it will ask for a webchat ID. The ID corresponds to the ID of the webchat the system will interact with.
- If you have access to a Zammo organization with a plan, you can use the webchat ID. You can find it on the Channels page by selecting Webchat and copying the webchat code for the bot. In that webchat code, you will find an ID that you can copy and use.
- Otherwise, on the Conversation Design Q&A, or inside UI Builder, click on the Simulator button. In the subsequent modal, select Webchat. Ensure that the token for the webchat is valid, otherwise, make sure to refresh it. Copy the code for the webchat. In that webchat code, find the simulator token and copy it.
In the first view of the extension, paste the simulator token or the webchat ID.
It should initiate the bot and allow you to interact with it.
With UI Kit (the chatbot interface of Zammo), there are a few currently known limitations:
- When using a screen reader there is no dedicated auditory cue when new message(s) are received
- When using a screen reader, if multiple messages are sent back from the bot at once, the sequence is not automatically queued and read to the user; the user must tab between the individual messages
Zammo continues to do usability and accessibility testing of its chatbot interface and regularly releases updates to its UI Kit that include both fixes and features. If you identify any other issues, please let us know!
To report a vulnerability, please e-mail support@zammo.ai with the subject line of "Employment Accessibility Advancements through Voice AI - Bot Issue", a description of the issue, the steps you took to create the issue, affected versions, and if known, mitigations for the issue.
For more information about Zammo.ai and the project, you can keep in touch with us here:
A quick note from Zammo.ai:
Many thanks to the Open Inclusion and Microsoft teams for providing valuable resources and support!
Thanks also to the Azure product suite and OpenAI for the powerful technologies used in this project.
Thank you to all upcoming contributors who will carry this project, and its mission to do good, even further!