
US Digital Services Playbook

Kay Chung edited this page Mar 2, 2017 · 11 revisions

In developing this prototype, we're following the plays identified in the US Digital Services Playbook:

1. Understand what people need

Our approach to building digital services and products prioritizes understanding and meeting user needs. During the short development period of this prototype project, we have:

  • spent time with current and prospective public users of emergency alert systems
  • used qualitative and quantitative research methods to understand goals, needs and behaviors, while valuing the time of participants
  • performed user research to identify the state administrative user of the emergency alert system
  • tested prototypes of our work with real people
  • documented the findings about user goals, needs, behaviors and preferences
  • shared findings informally and formally with our team
  • created a prioritized list of tasks that we understand our users need to accomplish, as user story cards in our Kanban board and as Issues
  • regularly tested the service as time has allowed during development to ensure it meets user needs

Answers to key questions

  • Our primary users are: a) California residents and b) State-employed administrators who have the job of informing residents about emergencies. We have documented these primary users in the form of Personas
  • The user needs our service will address are detailed in our persona documentation
  • The people who may have the most difficulty using this service include a) those who are unfamiliar with web services, b) those who do not regularly use the Internet, and c) people with limited English proficiency, because we have not yet provided the service in other languages
  • The research methods we used were: a) user surveys, b) personas, c) interviews, and d) interactive user testing
  • The key findings of our research are documented in our research journal and were discussed regularly in our team Slack and in person
  • We are testing with real people during every sprint, see our research journal

2. Address the whole experience, from start to finish

The point of this play is to make sure that the team considers all the different journeys a user might take to meet their goals or have their needs met. By taking an approach of understanding and meeting user needs, we try to account for every type of encounter or interaction so that we can help users achieve their goals.

We have:

  • Thought about the different ways in which people might encounter and use this service. In the case of this prototype, many of these considerations are out of scope - for example, the communications people might encounter that make them aware of the service and the value it provides. See also our work on personas, which details how an administrative user might encounter and engage with the admin interface to this service. The user stories and issues in our Kanban board also reflect the thought we have put into considering how users might interact with this service.
  • Identified pain points in the current way users interact with the service, and prioritized these according to user needs: there is no SMS state-wide emergency notification system, so the lack of such a service is a pain point. Our research also indicated that people find out about emergencies after the fact, in newspapers. Further, in researching the job description of possible admin users, we saw that there is a high likelihood that they have to use legacy Windows and MS-DOS systems.
  • Designed the digital parts of the service so that they are integrated with the offline touch points people use to interact with the service: this is out of scope of the prototype, see above. Our consideration would include communication about the service.
  • Developed metrics that will measure how well the service is meeting user needs at each step of the service. See our notes about performance and metrics and user stories about tracking and visualizing data that have been tagged as either MVP or stretch for implementation.

Answers to key questions:

Our user research journal, personas, and Performance and metrics documentation provide responses to the key questions of this play.

3. Make it simple and intuitive

Our goal is to build digital services and products that are simple and intuitive enough that users succeed the first time, unaided.

To do this, we have:

  • Used the US Web Design Standards
  • Used the Design style guide consistently
  • At each stage of the design process made sure that content and copy are designed to be simple, clear and easy to understand

In the design of our prototype:

  • We have identified the primary tasks that the users are trying to accomplish
  • We have checked and validated through user research and remote user testing that we are using plain and universal language
  • Due to the time constraints of the prototype development period, our service is offered in English only, but our template engine allows for localization and fully supports Unicode.
  • If a user needs help using the service, we have provided agency contact information in the prototype design. Otherwise, designing and delivering full customer support is out of scope for this prototype.
  • We have used the CA.gov logo and colors as well as the mandated US Draft Web Design Standards to ensure consistency, trust and authority with end users
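As a sketch of how the template engine's Unicode support could enable localization later, the example below keeps one alert template per language and falls back to English when a translation is missing. This is a hypothetical illustration; the catalog structure, language codes, and function names are assumptions, not part of the prototype's actual codebase.

```python
# Hypothetical sketch: per-language alert templates keyed by ISO 639-1 code.
# The prototype ships English only; supporting a language means adding one entry.
from string import Template

ALERT_TEMPLATES = {
    "en": Template("Emergency alert for ZIP $zipcode: $message"),
    "es": Template("Alerta de emergencia para el código postal $zipcode: $message"),
}

def render_alert(lang: str, zipcode: str, message: str) -> str:
    """Render an alert in the requested language, falling back to English."""
    template = ALERT_TEMPLATES.get(lang, ALERT_TEMPLATES["en"])
    return template.substitute(zipcode=zipcode, message=message)

print(render_alert("es", "95814", "Terremoto"))
# → Alerta de emergencia para el código postal 95814: Terremoto
```

Because messages are plain Unicode strings substituted into templates, translations with accented or non-Latin characters need no special handling.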

4. Build the service using agile and iterative practices

We have used an agile, iterative approach to deliver this prototype. We have:

  • shipped an MVP (the prototype submission) in a short timescale
  • run regular user research on completed features at the end of each sprint and reflected insights and learnings for the next sprint/iteration of design and development
  • communicated regularly and frequently as a team, both in person and online, through a war room and a Slack channel, and through practices such as requiring peer review of pull requests and merges. We have used techniques such as daily standups and sprint reviews.
  • maintained a Kanban board and backlog that reflect a prioritized list of features and bugs, supporting our lightweight Scrum process and based on data gathered during user testing
  • used a source code version control system and given the entire project team access to it and our issue tracker

Answers to key questions:

  • the MVP took 4 weeks to ship
  • production deployments are automatically performed in under 5 minutes
  • each iteration/sprint is 5 days
  • we are using Git branches (hosted on GitHub) for version control
  • we are using GitHub Issues and the Kanban board to log and track work (bugs and user stories). During this short prototype development period, bugs were logged as Issues and prioritized by the product owner to ensure the highest-value fixes were delivered first.
  • the feature backlog is managed in the project Kanban
  • we reviewed and re-prioritized the feature and bug backlog at the beginning of each sprint, and due to the short timescale of prototype delivery and the short backlog, we also allowed for review and re-prioritization during the morning standup meeting if required.
  • user feedback was collected during each sprint. It was collected and documented in the research journal and the findings used to update Issues as required
  • our user research journal documents the gaps that we identified and the steps and considerations we took to address those needs

5. Structure budgets and contracts to support delivery

This play does not apply to the scope and exercise of prototype delivery.

6. Assign one leader and hold that person accountable

The project/product leader is identified in the README documentation.

7. Bring in experienced teams

Our team has direct, industry-recognized, award-winning experience in the fields required to design and deliver modern digital services that meet user needs.

8. Choose a modern technology stack

Our README includes a summary of our modern technology stack in item (l).

9. Deploy in a flexible hosting environment

Our README includes a description of how the prototype has been deployed in a flexible hosting environment in items (m) and (r).

10. Automate testing and deployments

Our README includes a description of how we have automated testing and deployments in items (n) and (o).

11. Manage security and privacy through reusable processes

The prototype relies on collecting and using personal information from users. We have made sure not to collect more information than is necessary. For example, the prototype collects only the following information:

  • phone number (for SMS notification)
  • ZIP code (for location)
  • notification preference (opt-in for non-emergency notifications)
  • optional email address if the user would prefer notification by email

We have decided that there is no need to collect:

  • name
  • address
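The minimal-collection decision above can be sketched as a subscriber record that stores only the four listed fields, with name and address deliberately absent. The class and field names here are hypothetical illustrations, not the prototype's actual schema.

```python
# Hypothetical sketch of the minimal subscriber record implied by the list above:
# only four fields are stored; name and address are deliberately absent.
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class Subscriber:
    phone_number: str                    # for SMS notification
    zipcode: str                         # for location targeting
    non_emergency_opt_in: bool = False   # opt-in for non-emergency notifications
    email: Optional[str] = None          # optional, if the user prefers email

sub = Subscriber(phone_number="+15551234567", zipcode="95814")
print(sorted(asdict(sub)))  # the complete set of stored field names
# → ['email', 'non_emergency_opt_in', 'phone_number', 'zipcode']
```

Keeping the schema this small limits the impact of any data breach and simplifies privacy review, since there is simply no field in which to store extra personal information.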

The prototype provides agency contact details that public users can use to report security issues.

12. Use data to drive decisions

Due to the scope and timeline of this prototype, we have only user research data. The short development time means there has been no opportunity to collect meaningful usage data that would inform subsequent design and development. However:

  • we have implemented automatic and continuous monitoring
  • we have implemented Google Analytics to measure user behavior

13. Default to open

We believe in open. It makes things better. We have:

  • included a link in the footer of page templates so that public users can report bugs and issues
  • licensed our work under the MIT license
  • documented our work in the open