Customer Effort Score v1 (Survey via email) #10542
Goal: draft email by 6/10. Target audience (to be decided by next week, 6/10):
Note: We need to give union notification for all users to be able to answer this email survey.
I've outlined 3 proposed approaches and 3 audiences (going from leanest to slightly more complex):

Customer Effort Score

What is it? Instead of asking how satisfied the user was, you ask users to gauge the ease of their experience.

Approach 1

Channel: Email a SurveyMonkey link to users.
Assumptions/Hypothesis: By emailing Queue users from different branches at BVA the same question about customer effort:
Strategy: Start with multiple branches, but one product.
Criteria and sample: Randomization of 20%-30% of the total population of the following teams (in the order that Caseflow was given to them):
Method:
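The randomized sampling described above (20%-30% of each team's population) could be sketched roughly like this; the team rosters, addresses, and 25% fraction are placeholders, not real Caseflow data:

```python
import random

def sample_recipients(users, fraction=0.25, seed=None):
    """Randomly select a fraction of a team's users to receive the survey.

    A `fraction` between 0.2 and 0.3 matches the 20%-30% target above.
    Pass a `seed` to make the draw reproducible for auditing.
    """
    rng = random.Random(seed)
    k = max(1, round(len(users) * fraction))
    return rng.sample(users, k)

# Hypothetical team rosters; the real listservs would come from BVA.
teams = {
    "team_a": [f"user{i}@example.va.gov" for i in range(40)],
    "team_b": [f"user{i}@example.va.gov" for i in range(40, 70)],
}
recipients = {name: sample_recipients(members, 0.25, seed=42)
              for name, members in teams.items()}
```

Sampling per team (rather than from the pooled population) keeps each branch represented even if team sizes differ.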
Approach 2

Channel: When a VLJ support staff member marks an administrative action complete, trigger an email to them containing a SurveyMonkey link.
Assumptions/Hypothesis: By asking VLJ support users who've just completed an admin action to take a survey (via email trigger or as a link invitation on the confirmation screen of Queue):
Criteria and sample: 100% of the population of:
Method:
Approach 3

Channel: Post a prominent SurveyMonkey link on the confirmation page of Intake.
Assumptions/Hypothesis: By asking Intake users who've just completed an Intake to take a survey:
Criteria and sample: 100% of the population of:
Method:
Connected with JW today. She proposes sending this survey out to the entire BVA listserv so it's truly random, anonymous, and optional. She approves of us adding a question about the user's role/team. We can create the SurveyMonkey now!
hey @sneha-pai and @carodew - I know you both have been doing work on this! Could you give me an update on how the SurveyMonkey is going? I can take the next step of communicating this with our BVA stakeholders, who will send the survey out.
Posting comments here to be sure they come through! My only suggestions are for question 1. Can we:
Hello @laurjpeterson and @carodew,

Getting BVA feedback

If BVA stakeholders would like a walkthrough, we can share this link with them to help them see what we'd like to roll out, with room for their comments before it actually gets sent out. If they'd like to add any extra language that is standard to BVA outgoing emails, we can manage that as well. @carodew please have a look at the survey link pasted above and/or the email shared with your Nava email, and provide any feedback or questions you have on your initial understanding of the progression of questions.

Regarding administering the survey

In terms of administering the survey, I recommend that we work very closely with them to send it out, since the account management is all within the Caseflow SurveyMonkey account as of now. If they could give us listservs of everyone who should be contacted, we can administer the survey and manage incoming responses and their analysis.
@sneha-pai Wrong Lauren.
@sneha-pai - I like the idea about sending BVA stakeholders the link, as you mentioned! I'll do that when @carodew acknowledges she's taken her pass. They plan to send this to a massive listserv of all BVA employees, and I don't think we will be allowed to administer the survey by sending it out. Are you asking if we could get the list of people who are in that listserv?
Added my comments to the survey. Fair warning, I got pretty nit-picky on some of the text, the source of some of which was https://content-guide.18f.gov. I will also repeat here that I get nervous when people use Likert scales, since statistically they're notoriously hard to analyze correctly, especially changes over time. I wonder if we would have an easier time if it were a simple statement with an agree/disagree option? For more on Likert scales: https://statisticscafe.blogspot.com/2011/05/how-to-use-likert-scale-in-statistical.html
I did some more reading on how to analyze Likert scale results, and it seems like a paired t-test is reasonably good if the results more or less follow a normal distribution.
How to analyze Likert scale data: https://statisticsbyjim.com/hypothesis-testing/analyze-likert-scale-data/
How to do a paired t-test: https://www.statisticssolutions.com/manova-analysis-paired-sample-t-test/
There appear to be numerous tutorials on how to do a t-test in common spreadsheet programs, so I don't think we'd need fancy statistics software to run this. We would just need to be able to export a numerical score for all the responses. I'd want to be sure we write up a good "how to interpret this data" section when we present the results.
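For reference, the paired t-test described above needs nothing beyond the exported numerical scores. A minimal sketch in plain Python (the sample scores are made up for illustration; a spreadsheet's T.TEST function would give the same statistic):

```python
import math
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t-test statistic for two matched samples of Likert scores.

    Returns (t, degrees_of_freedom). Compare |t| against a t-table
    (or compute the p-value in a spreadsheet) at df = n - 1.
    Assumes the per-user differences are roughly normally distributed.
    """
    assert len(before) == len(after), "samples must be paired per user"
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)  # standard error of the mean difference
    return mean(diffs) / se, n - 1

# Hypothetical example: scores (1-5) from the same 8 users in two rounds.
round1 = [3, 2, 4, 3, 2, 3, 4, 2]
round2 = [4, 3, 4, 4, 3, 3, 5, 3]
t, df = paired_t(round1, round2)
```

A large negative t here would mean round-2 scores are significantly higher than round 1, i.e. perceived effort went down.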
I started a research protocol document to start gathering all the information in a place that won't get lost when we close this issue: https://github.com/department-of-veterans-affairs/appeals-team/tree/master/Project%20Folders/Caseflow%20Projects/CES-survey I'll also add this link to the topmost comment.
@laurjpeterson I broke this card down into several new tickets because that helps me focus on what things need my attention first. However, I did not create new tickets for the last two items (engineering LOE and prioritization) since those seem to be more of a future effort. Let me know if you'd rather those get broken out too. I'm also not sure if we want to convert this card to an epic or just delete it. I find myself really wanting three levels of hierarchy in the cards: epics, "stories", and then smaller tasks, but it doesn't seem like ZenHub supports that. /shrug /end grumpiness
Sounds good! I think we can convert it to an epic (I thought I already did that!). And once we send the survey out, we can close it, I think. In terms of dev for CES, I agree that comes later. Perhaps we can create a completely separate epic, CES v2, where that lives. If we do put something in our app, it requires product, design, and eng work, so it will be at the epic level again, I think.
Great! I'll clean these tickets up then. |
Closing as we are no longer tracking CES, based on @FredAllen6608's conversations with OIT.
Background
Every month, the team needs to report metrics about Caseflow for OIT and OMB. One of these metrics has to look at "customer satisfaction." OIT proposed the following: "Average System Usability Scale (SUS) score of all systems in the investment that are currently in the target architecture." This would be reported annually. If you're not familiar (I wasn't): https://www.usability.gov/how-to-and-tools/methods/system-usability-scale.html
Instead, our team is proposing the Customer Effort Score (CES).
Customer Effort Score (CES) is very similar, but instead of asking how satisfied the customer was, you ask them to gauge the ease of their experience.
You're still measuring satisfaction, but in this way, you're gauging effort (the assumption being that the easier it is to complete a task, the better the experience). As it turns out, making an experience a low-effort one is one of the greatest ways to reduce frustration and disloyalty.
A CES survey may look something like this:
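(The example image from the original issue is not reproduced here.) As a rough sketch, a CES question asks the user to rate an ease statement on a numbered scale and the score is the mean of the responses; the question wording and 1-5 scale below are illustrative assumptions, not Caseflow's final survey:

```python
# Hypothetical CES question and response scale (not the final survey text).
QUESTION = "Caseflow made it easy for me to complete my task."
SCALE = {1: "Strongly disagree", 2: "Disagree", 3: "Neutral",
         4: "Agree", 5: "Strongly agree"}

def ces_score(responses):
    """Average CES: mean of valid 1-5 responses; higher means lower effort."""
    valid = [r for r in responses if r in SCALE]
    return sum(valid) / len(valid)

score = ces_score([4, 5, 3, 4, 2, 5])  # made-up responses
```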
The first month we have to report this for OMB is July 2019. However, we could administer this survey via email instead of in Caseflow itself, so we don't necessarily need to have anything built before July 2019.
We think we can use this beyond the quarterly metrics that OIT and OMB gather - as an internal measure to better understand how easily our users can complete their tasks.
When asking for this, we could also ask users to share more feedback about Caseflow with us.
To do
Idea
We think the design could include
How to determine what tasks/users to survey?
- Caseflow Intake, at the end of the flow
Documentation
CES Survey research protocol