
Customer Effort Score v1 (Survey via email) #10542

Closed · 4 of 7 tasks
lpciferri opened this issue Apr 25, 2019 · 16 comments
Labels: Epic · Priority: Low (Reported issue, not a blocker. "Last in" priority for new work.)


lpciferri commented Apr 25, 2019

Background

Every month, the team needs to report metrics about Caseflow for OIT and OMB. One of these metrics has to look at "customer satisfaction." OIT proposed the following: "Average System Usability Scale (SUS) score of all systems in the investment that are currently in the target architecture." This would be reported annually. If you're not familiar (I wasn't): https://www.usability.gov/how-to-and-tools/methods/system-usability-scale.html

Instead, our team is proposing the Customer Effort Score (CES).

Customer Effort Score (CES) is very similar, but instead of asking how satisfied the customer was, you ask them to gauge the ease of their experience.

You're still measuring satisfaction, but in this way, you're gauging effort (the assumption being that the easier it is to complete a task, the better the experience). As it turns out, making an experience a low-effort one is one of the greatest ways to reduce frustration and disloyalty.

A CES survey may look something like this:
[image: example CES survey question]

The first month we have to report this for OMB is July 2019. However, we could administer this survey via email instead of in Caseflow itself, so we don't necessarily need to have anything built before July 2019.

We think we can use this beyond the quarterly metrics that OIT and OMB gather, as an internal measure to better understand how easily our users can complete their tasks.

When administering the survey, we could also ask users to share more feedback about Caseflow with us.

To do

Idea

We think the design could include:

  1. explanatory text (the leanest option is that this text is the same everywhere)
  2. the survey itself
  3. a free-text box for more feedback

How to determine what tasks/users to survey?

| Goal | Option |
| --- | --- |
| Reach the most users | Survey users when they log into their individual queue |
| Difficult tasks | Survey Litigation Support; survey Caseflow Intake at the end of the flow |
| Common tasks | Survey mail users after they create mail tasks |
| Time-consuming tasks | Survey VLJ Support Staff when they open an admin action |

Documentation

CES Survey research protocol

lpciferri (Contributor Author) commented:

Goal: draft email by 6/10.

Target audience (to be decided by next week 6/10):

  • Caseflow Intake, at the end of the flow
  • Attorneys
  • VLJ Support Staff

@lpciferri lpciferri added this to Current work (now through +6 weeks) in Caseflow Project Dashboard - timelines are estimates only Jun 3, 2019
@lpciferri lpciferri changed the title Idea: Customer Effort Score Customer Effort Score v1 (Survey via email) Jun 3, 2019
lpciferri (Contributor Author) commented:

Note: We need to give union notification for all users to be able to answer this email survey.

sneha-pai (Contributor) commented:

I've outlined 3 proposed approaches and 3 audiences (going from leanest to slightly more complex):

Customer Effort Score

What is it?

Instead of asking how satisfied the user was, you ask users to gauge the ease of their experience.

Approach 1:

Channel:

Email a SurveyMonkey link to users

Assumptions/Hypothesis:

By emailing Queue users from different branches at BVA the same question about customer effort:

  • We'll get an early signal about our users' general experience/difficulty using Caseflow
  • They may not understand the difference between Caseflow Queue and another product in Caseflow.
  • They'll organically and voluntarily surface reasons why they find it easy-neutral-hard to use
  • We will be able to compare and contrast various teams' experiences (and synthesize the whys they've written)

Strategy:

Start with multiple branches, but one product.

Criteria and sample:

Randomize 20%-30% of the total population of the following teams (in the order that Caseflow was given to them); a sampling sketch follows this list:

  • Attorneys
  • Judges
  • VLJ Support
  • Hearings
  • AOD
  • FOIA
  • etc.
  • In this approach, let's pick a product that's been in broad use for some time, say from February through June 2019: Caseflow Queue
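
A minimal sketch of how that randomization could be done, assuming we can export each team's user list. The team names come from the list above; the rosters and the 25% fraction are hypothetical placeholders:

```python
# Sketch only: randomly select 20%-30% of each team's users to survey.
# Team rosters here are hypothetical placeholders.
import random

SAMPLE_FRACTION = 0.25  # anywhere in the proposed 20%-30% range

def sample_team(users, fraction=SAMPLE_FRACTION):
    """Randomly pick a fraction of a team's users (at least one)."""
    k = max(1, round(len(users) * fraction))
    return random.sample(users, k)

teams = {
    "Attorneys": ["attorney1", "attorney2", "attorney3", "attorney4"],
    "Judges": ["judge1", "judge2", "judge3"],
    "VLJ Support": ["support1", "support2", "support3", "support4"],
}

recipients = {name: sample_team(users) for name, users in teams.items()}
for name, sampled in recipients.items():
    print(f"{name}: survey {len(sampled)} of {len(teams[name])} users")
```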

Method:

  • Send out the following in SurveyMonkey, via email.
    1. Intro to why this survey is happening:
      • We're trying to learn more about how to improve Caseflow Queue. Please take 5 minutes to complete a brief survey about your recent experience → Link to SurveyMonkey survey.
    2. "Overall, since you've had access to Caseflow, how easy has it been for you to process your work using Caseflow Queue?"
      • Very difficult
      • Difficult
      • Neither
      • Easy
      • Very easy
    3. Can you share more about why you chose the above response?
      • OPEN TEXT FIELD (unlimited word count)

Approach 2:

Channel:

When a VLJ support staff member marks an administrative action complete, trigger an email to them containing a SurveyMonkey link.

Assumptions/Hypothesis:

By asking VLJ support users who've just completed an admin action to take a survey (via email trigger, or as a link invitation on the confirmation screen of Queue):

  • we'll capture, as soon after the admin action as possible, how VLJ support staff feel about the effort of their work in total (from getting assigned an action to sending correspondence and, optionally, to completing the task).

Criteria and sample:

100% of the population of:

  • VLJ support users, when they have just completed an admin action
  • By posting a link to SurveyMonkey on the admin action confirmation page, for each unique VLJ support user, once.
  • Once they've clicked on the link, we will no longer show them the survey link in future admin action processing.

Method:

  • Posting a prominent SurveyMonkey link on the admin action confirmation page in Queue, or triggering the email when the action is marked complete (with some framing language):
    1. We're trying to learn more about how to improve Caseflow Queue. Please take 5 minutes to complete a brief survey about your experience with the admin action you just completed → Link to survey.
    2. How easy was it to process this most recent admin action in Caseflow Queue?
      • Very difficult
      • Difficult
      • Neither
      • Easy
      • Very easy
    3. Can you share more about why you chose the above response? (optional but encouraged)
      • OPEN TEXT FIELD (unlimited word count)

Approach 3:

Channel:

Posting a prominent SurveyMonkey link on the confirmation page of Intake

Assumptions/Hypothesis:

By asking Intake users who've just completed an Intake to take a survey:

  • we'll capture, while the experience is fresh, how easy or difficult they found that particular intake.
  • we'll be able to track, over time, the variation/range in difficulty scoring by intake team members
  • if we keep the survey link persistently available, intake over intake, we'll get a new score for each intake, and an aggregate mean score for ease of use of Intake.

Criteria and sample:

100% of the population of:

  • Intake users, when they just finish Intaking a case
  • By posting a link to SurveyMonkey on the Intake confirmation page, for each unique Intake user, once.
  • Once they've clicked on the link, we will no longer show them the survey link in future Intake processing (a sketch of this once-per-user rule follows this list).
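
A minimal sketch of that once-per-user rule, under stated assumptions: the in-memory set and function names are hypothetical stand-ins for what would really be a per-user flag in the application database, and the survey URL is a placeholder:

```python
# Sketch only: show the survey link until a user has clicked it once.
# In the real app this flag would live in the database, not in memory.
SURVEY_URL = "https://www.surveymonkey.com/r/EXAMPLE"  # placeholder

_clicked = set()  # user IDs that have already followed the link

def survey_link_for(user_id):
    """Return the survey URL for the confirmation page, or None once
    the user has clicked it in an earlier session."""
    return None if user_id in _clicked else SURVEY_URL

def record_click(user_id):
    """Call when the user follows the link; hides it from then on."""
    _clicked.add(user_id)
```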

Method:

  • Posting a prominent SurveyMonkey link on the confirmation page of Intake (with some framing language):
    1. We're trying to learn more about how to improve Caseflow Intake. Please take 5 minutes to complete a brief survey about your experience with the case you just finished intaking → Link to survey.
    2. How easy was it to process this most recent Veteran's forms in Caseflow Intake?
      • Very difficult
      • Difficult
      • Neither
      • Easy
      • Very easy
    3. Can you share more about why you chose the above response? (optional but encouraged)
      • OPEN TEXT FIELD (unlimited word count)


lpciferri commented Jun 27, 2019

Connected with JW today. She proposes sending this survey out to the entire BVA listserv so it's truly random, anonymous, and optional. She approves of us adding a question about the user's role/team.

We can create the SurveyMonkey now!

lpciferri (Contributor Author) commented:

hey @sneha-pai and @carodew - I know you both have been doing work on this! Could you give me an update on how the SurveyMonkey is going? I can take the next step in communicating this with our BVA stakeholders, who will send the survey out.

lpciferri (Contributor Author) commented:

Posting comments here to be sure they come through!

My only suggestions are for question 1. Can we:

  • add Supervisory Senior Counsel
  • add Mail Management Branch
  • add Litigation Support Branch
  • add Quality Review team
  • rename "Dispatch team" to "Decision Management Branch"
  • rename to "VLJ Support Staff"
  • rename "Intake" to "Case Review / Intake" team


sneha-pai commented Jul 19, 2019

Hello @laurjpeterson and @carodew ,
Lauren, the survey has been edited to reflect all teams mentioned above.

Getting BVA feedback

If BVA stakeholders would like a walkthrough, we can share this link with them to help them see what we'd like to roll out, with room for their comments before it actually gets sent out. If they'd like to add any extra language that is standard to BVA outgoing emails, we can manage that as well.

@carodew please have a look at the survey link pasted above and/or the email shared with your Nava email, and provide any feedback or questions you have on your initial understanding of the progression of questions.

Regarding administering the survey

In terms of administering the survey, I recommend that we work very closely with them to send out the survey, since the account management is all within the Caseflow SurveyMonkey account as of now. If they could give us listservs of all those who should be contacted, we can administer the survey and manage incoming responses and their analysis.


lauren commented Jul 19, 2019

@sneha-pai Wrong Lauren.

lpciferri (Contributor Author) commented:

@sneha-pai - I like the idea about sending BVA stakeholders the link, as you mentioned! I'll do that when @carodew acknowledges she's taken her pass.

They plan to send this to a massive listserv of all BVA employees, and I don't think we will be allowed to administer the survey by sending it out. Are you asking if we could get the list of people who are in that listserv?


carodew commented Jul 23, 2019

Added my comments to the survey. Fair warning: I got pretty nit-picky on some of the text; the source for some of my suggestions was https://content-guide.18f.gov

I will also repeat here that I get nervous when people use Likert scales, since statistically they're notoriously hard to analyze correctly, especially for changes over time. I wonder if we would have an easier time if it were just a statement with an agree/disagree option?

For more on Likert scales: https://statisticscafe.blogspot.com/2011/05/how-to-use-likert-scale-in-statistical.html

@carodew carodew self-assigned this Jul 23, 2019

carodew commented Jul 23, 2019

I did some more reading on how to analyze Likert scale results, and it seems like a paired t-test is reasonably good enough if the results more or less follow a normal distribution.

How to analyze Likert Scale data: https://statisticsbyjim.com/hypothesis-testing/analyze-likert-scale-data/
tl;dr – you're probably ok with a paired t-test but understand that the test is really for continuous data, and Likert scales are ordinal data, so YMMV. You accept some risk of inaccurate results.

How to do a paired t-test: https://www.statisticssolutions.com/manova-analysis-paired-sample-t-test/

There appear to be numerous tutorials on how to do a t-test in common spreadsheet programs, so I don't think we'd need fancy statistics software to run this. We would just need to be able to export a numerical score for all the responses. I'd want to be sure we write up a good 'how to interpret this data' section when we present the results.
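
A minimal sketch of that analysis, assuming exported responses can be matched across two survey rounds by a respondent ID (which a truly anonymous survey wouldn't allow; an unpaired test such as scipy.stats.ttest_ind on the two groups would be the fallback). The CSV filenames and column names are hypothetical:

```python
# Sketch only: score Likert responses 1-5 and compare two survey rounds
# with a paired t-test. CSV layout and filenames are hypothetical.
import csv
from scipy import stats

SCORE = {
    "Very difficult": 1,
    "Difficult": 2,
    "Neither": 3,
    "Easy": 4,
    "Very easy": 5,
}

def load_scores(path):
    """Read one round's export: columns respondent_id, response."""
    with open(path, newline="") as f:
        return {row["respondent_id"]: SCORE[row["response"]]
                for row in csv.DictReader(f)}

round1 = load_scores("ces_round1.csv")
round2 = load_scores("ces_round2.csv")

# Pair by respondent; only users who answered both rounds count.
common = sorted(round1.keys() & round2.keys())
before = [round1[r] for r in common]
after = [round2[r] for r in common]

# Per the caveat above, this treats ordinal scores as continuous data;
# eyeball the distribution of (after - before) for rough normality first.
t_stat, p_value = stats.ttest_rel(before, after)
print(f"n={len(common)}, t={t_stat:.3f}, p={p_value:.3f}")
```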


carodew commented Jul 24, 2019

I started a research protocol document to start gathering all the information in a place that won't get lost when we close this issue: https://github.com/department-of-veterans-affairs/appeals-team/tree/master/Project%20Folders/Caseflow%20Projects/CES-survey

I'll also add this link to the topmost comment.

@carodew carodew added the Design: size-dragon 🐉 design team estimation - too big (5) label Aug 5, 2019

carodew commented Aug 6, 2019

@laurjpeterson I broke this card down into several new tickets because that helps me focus on what things need my attention first. However, I did not create new tickets for the last two items (engineering LOE and prioritization), since those seem to be more of a future effort. Let me know if you'd rather those get broken out too.

I'm also not sure if we want to convert this card to an epic or just delete it. I find myself really wanting three levels of hierarchy in the cards (epics, "stories", and then smaller tasks), but it doesn't seem like ZenHub supports that. /shrug /end grumpiness


lpciferri commented Aug 6, 2019

Sounds good! I think we can convert it to an epic (I thought I already did that!). And once we send the survey out, we can close it, I think.

In terms of dev for CES, I agree that comes later. Perhaps we can create a completely separate epic, CES v2, where that lives. If we do put something in our app, it requires product, design, and eng work, so it will be at the epic level again, I think.


carodew commented Aug 7, 2019

Great! I'll clean these tickets up then.

@carodew carodew added Epic and removed Design: size-dragon 🐉 design team estimation - too big (5) labels Aug 7, 2019
@jimruggiero jimruggiero moved this from Current work (now through +6 weeks) to Incoming/Unprioritized Stakeholder Requests in Caseflow Project Dashboard - timelines are estimates only Mar 30, 2020
@jimruggiero jimruggiero added the Priority: Low label (Reported issue, not a blocker. "Last in" priority for new work.) and removed the Priority: Medium label Apr 25, 2020
@jimruggiero jimruggiero added this to Unrefined Backlog in Caseflow Program Priorities Jun 4, 2020
@mkhandekar mkhandekar assigned mkhandekar and unassigned sneha-pai Oct 14, 2020
alisan16 (Contributor) commented:

Closing, as we are no longer tracking CES, based on @FredAllen6608's conversations with OIT.
