
[a11y attend meeting] Chatbot MHV Prescription Skill: Chatbot Team and CAIA a11ys 10/24/2023 #68359

Closed
Tracked by #68026
sara-amanda opened this issue Oct 25, 2023 · 1 comment
Labels
sitewide accessibility sitewide CAIA Virtual-Agent Team working on creating the VA Virtual Agent

Comments

@sara-amanda
Contributor

sara-amanda commented Oct 25, 2023

CAIA & Chatbot Team Meeting

  • Meeting Date: 10/24/2023
  • Meeting Notes: Google Doc (CAIA Internal WIP Doc)
  • Attendees:
    • Joy Elizabeth
    • Nathalie Rayter - product owner
    • Zinal Patel
    • Anita DeWitt (A11y Champ)
    • Swapna
    • Kathy Cui
    • Karan Krishani
    • Luciana Morais - product owner (OCTO)
    • Sara Smith
    • Sarah Koomson
    • Eli Mellen

Meeting Notes 10/24/2023

Goal of Meeting

  • Move toward alignment on the definition of done.
  • Clarify questions about the findings.
  • Identify best methods to follow up.

Meeting Takeaways

Meeting Action Items Have Been Aggregated & Included in Task Lists in Ticket #68026

  • Determine the definition of done:
    • Approach the definition of done from the perspective of someone using assistive technology: a completion path, given some assumptions.
  • Joy is using a workbook that combined the governance and CAIA tickets.
  • The Chatbot team would like the names of the semantic testing tools CAIA is using, for comparison. CAIA is providing a tools-to-use list (WIP). Mix manual and automated testing tools.
  • Determine the highest level of priority.
  • Align priority levels for launch-blocking items.
  • Microsoft progress (TBD)
  • More time on the calendar as needed: Office hours, schedule meetings, async, accessibility community of practice, are all available options.
  • Revised Launch Goal per Luciana:
    • Definition of Done for Releasing in 2 Months.
    • Anything else to be worked on in 2024.
  • Other Chatbots: Check to see if there are any other POs on chatbots to make connections.
  • Consult with Angela Fowler during her Office Hours

Full Notes

Access the full notes from the 10/24/2023 meeting

Meeting Action Items Have Been Aggregated & Included in Task Lists in Ticket #68026

Joy

  • Was able to create an alignment workbook combining the governance and CAIA tickets.
  • There are also recommendations in the workbook CAIA shared that are not part of any GH ticket. To avoid losing track of them, those were added to a separate page so that every item needing to be addressed can be checked off.
  • What is the highest level of priority? We want to align with CAIA's priority levels. Does our ordering make sense? Based on your feedback, what is completely blocking us and what isn't, so we can get our Rx skill to launch?

Anita

  • Prioritize launch-blocking features
  • In convos with Microsoft to get some of these issues solved.
  • After we go through the discovery, research and solutions, what is our definition of done, so that we can align on those?

Definition of Done

Eli

We can define that and have a conversation; it is something we as CAIA have been pushing VA to provide. There is no OCTO documentation as far as a definition of done, particularly for a chatbot that is new to the ecosystem. I think we have the opportunity, now or on future calls, to decide what we want that definition of done to be, so that we can be aligned and bring it to governance.

Approach the definition of done from the perspective of someone using assistive technology: a completion path, given some assumptions.

  • Definition of Done for Releasing in 2 Months (Luciana)
  • Anything else 2024 (Luciana)

Launch-Blocking Feedback

  • Variable: There could be other edge cases, but these are the two areas we test the most (Eli).

Anita

  • Would like to have more of an understanding on the launch-blocking feedback.
  • There are two big buckets:
      1. Screen Readers (working with Microsoft on that piece)
      2. Keyboard Navigation
  • Example #25164
    • In the chatbot, pressing the up and down arrow keys, even inside a scrollable container, pops you to the next message instead of scrolling.
      • As a keyboard user, that breaks your expectation.
    • You expect to scroll with your scroll wheel, but focus pops between previously defined semantic elements, caged inside the widget.
    • Jumping vs. scrolling
    • Does it interfere with the screen reader? It does not interfere with the SR technology itself, but it breaks the expectations of someone relying on the SR to provide feedback.
    • Tab isn’t supported in the chatbot; instead, you use the up and down arrows.
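The expected arrow-key behavior described above can be sketched as a small decision helper. This is a hypothetical illustration, not the chatbot's actual code; the function name and options are invented for the example.

```javascript
// Hypothetical sketch (not the chatbot's actual code): decide how a keydown
// inside a scrollable chat transcript should be handled, so plain arrow keys
// scroll the container as keyboard users expect, while a modified arrow
// (e.g. Ctrl+ArrowDown) keeps the widget's message-to-message navigation.
function resolveArrowKey(key, { containerScrollable, modifier }) {
  if (modifier) return 'move-focus'; // preserve widget navigation
  if ((key === 'ArrowUp' || key === 'ArrowDown') && containerScrollable) {
    return 'scroll'; // native-feeling scrolling, no focus jump
  }
  return 'default'; // let the browser handle everything else
}
```

A real handler would call `event.preventDefault()` only for the `'move-focus'` case and otherwise let the browser scroll the container natively.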

This is untrodden territory: our VA guidance suggests we should treat it like an interactive web form, but it is a web application within another web application. That is why we are using various playbooks to help guide this process. (Eli)

  • The browser guidance and the chatbot guidance conflict.
    • The keyboard-controls modal also blocks you: close it from the keyboard, then Shift+Tab, and it opens again.
    • There isn’t really a predictable way to dismiss it.
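The predictable modal behavior the notes ask for can be sketched as a small state function. Everything here is hypothetical (the function, element IDs, and state shape are invented); the point is that Escape closes the dialog and restores focus to its trigger, while Tab and Shift+Tab wrap focus inside the dialog rather than reopening it.

```javascript
// Hypothetical sketch of predictable modal keyboard behavior:
// - Escape closes the dialog and returns focus to the triggering element.
// - Tab / Shift+Tab wrap focus among the dialog's focusable elements,
//   never escaping the dialog or reopening it.
function handleModalKey(key, shiftKey, state) {
  if (key === 'Escape') {
    return { open: false, focus: state.triggerId }; // close, restore focus
  }
  if (key === 'Tab') {
    const i = state.focusables.indexOf(state.focusedId);
    const n = state.focusables.length;
    const next = shiftKey ? (i - 1 + n) % n : (i + 1) % n; // wrap around
    return { open: true, focus: state.focusables[next] };
  }
  return { open: true, focus: state.focusedId }; // ignore other keys
}
```

This mirrors the standard dialog pattern keyboard users already know, which is the "lean on existing patterns" point below.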

Setting a new precedent in the web space is a difficult task, especially when you peel off layers of context clues: remove anything based on contrast, or introduce a new expectation of what you would want a button to do. You would have to level the playing field with a tour or helper text, but that might be solutioning, and we want to lean away from that right now. We want to lean on existing patterns; that is the easiest path to success, versus falling down a hill. (Eli)

a11y Assistance

We are available async and in Slack.

We can help provide input, and/or bring it to the accessibility community of practice, to help define that definition of done.

Other Chatbots

Joy

  • Do we have resources from other departments in the VA on these specific concerns around accessibility and chatbots? There are multiple chatbots in the VA; it would be good to learn from someone else’s successes and failures.
  • We can ask whether there are any other POs on chatbots to make those connections.
  • Should we plan to join the CAIA Office Hours Weekly?
    • Larger practice of accessibility we can bring this to. (Eli)
    • We can also book Eli’s time or schedule time. (Eli)
    • Angela Fowler has office hours and is a full-time screen reader user. Not sure if her contract would allow for this; it should, because she can do MHV.
    • Angela could demonstrate the experience of using a screen reader.
    • Eli’s goal in suggesting they visit Angela’s office hours is for the team to gain context on how assistive tech functions in real life, since that seemed to be a new paradigm.

Testing Tools

  • Automated tools are concerned with semantics (e.g., heading levels such as H1 and H2).
  • Manual testing moves you up a layer, catching problems that only surface while interacting (e.g., elements that don’t make sense when they share the same title).
  • Don’t rely on a single testing tool; mix and match to learn these patterns.
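The kind of semantic check automated tools run can be illustrated with a tiny heading-order rule: heading levels should not skip (e.g., an H1 followed directly by an H3). This is a simplified sketch, not code from any specific tool; the function name and output shape are invented.

```javascript
// Illustrative sketch of one automated semantic check: report places where
// a document's heading levels skip (e.g. H1 jumping straight to H3).
// Input is an array of heading levels in document order, e.g. [1, 2, 3, 2].
function findHeadingSkips(levels) {
  const skips = [];
  for (let i = 1; i < levels.length; i++) {
    // A heading may go deeper by at most one level at a time.
    if (levels[i] > levels[i - 1] + 1) {
      skips.push({ index: i, from: levels[i - 1], to: levels[i] });
    }
  }
  return skips;
}
```

Rules like this are cheap to automate, which is why automated tools focus on semantics; interaction problems like the arrow-key and modal issues above still require manual testing.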
@sara-amanda
Contributor Author

Closing Ticket

Meeting Action Items Have Been Aggregated and Included in Task Lists Located in Ticket #68026

@eli-oat @SarahKay8 @coforma-terry
