Prototype A: CDT Procurement Demo - Lab Zero
The prototype is running at this URL: https://adpq.labzero.com/
- User: admin
- Pass: admin
- User: user
- Pass: user
- You can create additional Requester accounts by logging in with any unique username and the password “user”. This may be helpful for cart & reporting testing.
- A quick-access walkthrough confirms how Lab Zero's prototype meets the functional requirements stated in the Prototype A RFI.
Table of Contents
Installation of requirements
- Elixir 1.4.1 (Erlang/OTP 19 [erts-8.2])
- Phoenix Framework 1.2.1
- postgres (PostgreSQL) 9.6.2
- Node.js 7.5.0
- React 15.4.2
macOS dev environment
- Install Homebrew if not already installed
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
- Update Homebrew
brew update
- Install postgresql
brew install postgresql
- Install node
brew install node
- Install elixir
brew install elixir
- Install mix (mix ships with Elixir, so no separate install is typically needed)
- Create PostgreSQL role
createuser -d adpq
- Create and migrate schema
mix ecto.create && mix ecto.migrate
- To add seed data to your database:
mix run priv/repo/seeds.exs
Starting the application
- Install dependencies with
mix deps.get
- Install Node.js dependencies with
npm install
- Start Phoenix endpoint with
mix phoenix.server
Now you can visit localhost:4000 from your browser.
The Lab Zero team’s approach to product development and agile software delivery mirrors the U.S. Digital Services Playbook as shown in the Playbook Adherence section below and fully illustrated within the Docs folder in this repo. Our team kicked off design by interviewing target users to understand their needs and to test solution ideas. User feedback informed design iterations, user stories in the backlog, and prioritization during the sprint cycles. Collaboration enabled the team to optimize design iterations that could be feasibly delivered within the timeline. Our engineers chose modern tools that supported our need to bring features together quickly and deliver them continually with a high degree of quality. The team’s high level of rigor in engineering—gleaned from years of experience delivering mission-critical applications—results in code that is easy to adapt to meet evolving business needs for the State of California.
This web application consists of a modern React.js app (Single Page Application) that consumes a JSON API backend written in Elixir using the Phoenix framework backed by a Postgres database. We considered using Shopify or Spree but ultimately decided to build the prototype from scratch. This decision enabled us to demonstrate our ability to develop an easy-to-use application designed in light of careful and deliberate conversations with real users.
- React Components
- JS REST access
- JS routes (defining client side URLs)
- JSON serialization
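As an illustrative sketch of the "JS REST access" layer above, a small client module can centralize authenticated JSON calls from the React SPA to the Phoenix API. The endpoint path and the helper names here are assumptions for illustration, not taken from the repo; the username-as-Authorization scheme is the one described in the Swagger section below.

```javascript
// Minimal sketch of a JSON API client for the React SPA.
// API_BASE and the /catalog_items route are illustrative assumptions.
const API_BASE = '/api';

// Build request options for an authenticated JSON call.
// The prototype authorizes by placing the username in the
// Authorization header (see the Swagger section of this README).
function jsonRequest(method, username) {
  return {
    method,
    headers: {
      'Content-Type': 'application/json',
      Authorization: username,
    },
  };
}

// Fetch catalog items from the JSON API. "catalog_items" mirrors the
// create_catalog_item migration linked below; the route is assumed.
function fetchCatalogItems(username) {
  return fetch(`${API_BASE}/catalog_items`, jsonRequest('GET', username))
    .then((res) => {
      if (!res.ok) throw new Error(`API error: ${res.status}`);
      return res.json();
    });
}
```

Keeping header construction in one helper means components only deal with parsed JSON and never hand-build auth headers.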
We use the GitFlow branching model and create feature branches off of the develop branch for all new changes. All commits should adhere to the guidelines described in our commit guide. Each feature branch is pushed to GitHub, and a pull request is created, built, and tested in CircleCI before peer review is performed by other developers on the team. Upon final approval by the dev lead, the branch is squash-merged back into develop.
The CI service checks all Pull Requests and looks for success for all of these steps:
- Compilation and Docker container build
- Credo (code quality/style analyzer)
- Unit tests
The delivery process relies upon automated movement of code and assets into the test environment triggered by commits to the develop branch.
- Commits to develop trigger deployment to our Test environment
- Upon deployment, post-deploy automated testing is performed
Using GitFlow tooling, we create a release branch and tag. The tag is then used to create a new container image. A job in CircleCI is used to deploy the tagged container to ECS in AWS.
We built the application in a cloud-first manner on AWS, but deployed it in a Docker container to allow cloud portability. However, if AWS offers a managed service for something we need, we prefer the managed service to rolling our own infrastructure (e.g., Postgres via RDS instead of running our own Postgres servers on EC2).
We maintain our VPC and security blueprints as CloudFormation templates checked into Git.
Database table definition/migrations https://github.com/labzero/adpq/blob/master/priv/repo/migrations/20170217185137_create_catalog_item.exs
Our prioritized Prototype Design and Prototype Dev backlogs within GitHub show the activities in our iterative and collaborative process from discovery to delivery and deployment. You may also find reference to the Playbook activity within many cards in the Product Design backlog (noted as “PB”).
The list below associates key activities and artifacts with the Digital Service Plays:
1: Understand what people need
- Drafted open-ended discovery interview scripts for key personas Requester Interview Script, Admin Interview Script
- Interviewed representative users; shared learnings informed project goals and designs Dennis Baker, Robert Lee, Ned Holets
- Utilized existing large-scale quantitative eCommerce research through the Baymard Institute Ecommerce Usability Guidelines, Shopping & Procurement Research
- Outlined a Full Persona List to note all roles likely involved in the full experience Link
- Focused on and defined State Agency Requester as the primary persona Link
- Focused on and defined Lead Purchasing Org web admin as the secondary persona Link
- Captured & prioritized needs as user stories Link
- Regularly tested to validate problem/solution fit Robert Lee, Tracey Thompson
2: Address the whole experience, from start to finish
- Illustrated on- & off-line touch points and aligned the team on key points of impact & focus Service Map
- Stated project summary, goals, & metrics to ensure the effort meets needs Product Speclet
3: Make it simple and intuitive
- Consistently utilized US Web Design Standards
- Followed accessibility best practices Section G of Requirements List
- Leveraged login to provide users with a way to exit and return later to complete the process
- Improved readability by re-formatting and adjusting sample data Data Spreadsheet
4: Build the service using agile and iterative practices
- Shipped a functioning MVP
- Frequently ran usability tests to identify improvements Interviews, User Testing
- Facilitated team alignment & communication through daily standups, weekly demos/retros, & Slack channel
- Kept the delivery team flat & focused Kickoff deck
- Drafted a prioritized features backlog and reviewed it with the team Link
6: Assign one leader and hold that person accountable
- See Requirements List, Section A
7: Bring in experienced teams
- See Requirements List, Section B
8: Choose a modern technology stack
- See Requirements List, Section L
9: Deploy in a flexible hosting environment
- See Requirements List, Section M
10: Automate testing and deployments
- See Requirements List, Section O
12: Use data to drive decisions
- See Requirements List, Section Q
13: Default to open
- Utilized open source as documented in the Open Source Technology Audit
#### A. Assigned one (1) leader and gave that person authority and responsibility and held that person accountable for the quality of the prototype submitted
Aaron Cripps, Product Owner
#### B. Assembled a multidisciplinary and collaborative team that includes, at a minimum, five (5) of the labor categories as identified in Attachment B: PQVP DS-AD Labor Category Descriptions
The majority of the team is based in the San Francisco Bay Area; one member is in Tucson, AZ, and one in Little Rock, AR. Our team collaborates using tools like Slack, Google Hangouts, Screen Hero, GoToMeeting, and Google Docs.
- Product Manager - Aaron Cripps
- Technical Architect - Sasha Voynow, Matt Wilson
- Interaction Designer - Dean Baker, Clayton Hopkins
- Visual Designer - Jim Ochsenreiter
- Front End Web Developer - Adam Ducker, Jeffrey Carl Faden
- Backend Web Developer - Sasha Voynow
- DevOps Engineer - Brien Wankel, Dave O’Dell
#### C. Understood what people needed, by including people in the prototype development and design process
Informed by our initial persona attributes, we found three individuals whose job activities aligned with or related to the Lead Purchasing Organization Administration and State Agency IT Requester roles.
- Dennis Baker, State of California Assembly Reprographics Manager
- Robert Lee, Startup Office Manager
- Ned Holets, Lead Software Engineer who has worked on CMS projects
#### D. Used at least a minimum of three (3) “user-centric design” techniques and/or tools
Human-centered design is a core aspect of our process. We consider each idea to be a hypothesis which should be tested and proven. You can find a richer explanation of our findings here. Key activity examples below:
- Customer Development
- Stating and prioritizing learning goals (hypotheses)
- Open-ended interviews with people who met our target personas to understand their needs and goals
- In-person usability testing to validate solution ideas/hypotheses
- Clickable prototypes to support usability testing
- ‘Think aloud’ qualitative user tests of prototype
- Accessibility testing
- Leveraging existing usability research
- Baymard Institute, an ecommerce usability research firm who uses qualitative and quantitative research methods.
#### E. Used GitHub to document code commits
Yes, we’ve used GitHub fully for peer review and as our sole code repository.
#### F. Used Swagger to document the RESTful API, and provided a link to the Swagger API
Yes, we've implemented Swagger. You can view the test UI or point your own UI at the raw JSON describing the API. When testing, you can authorize in the Swagger UI by putting your username in the Authorization header.
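Because the raw Swagger JSON is available, tooling can consume it directly. As a hedged sketch (the swagger.json URL below is an assumption, not a documented endpoint of this prototype), a few lines of JavaScript can enumerate the documented routes:

```javascript
// Sketch: enumerate endpoints from a raw Swagger 2.0 document.
// Swagger 2.0 keeps its endpoint definitions under the "paths" key.
function listPaths(swaggerDoc) {
  return Object.keys(swaggerDoc.paths || {});
}

// Usage against a live deployment might look like the following
// (URL and auth value are illustrative assumptions):
// fetch('https://adpq.labzero.com/api/swagger.json', {
//   headers: { Authorization: 'user' }, // username-only auth, per above
// })
//   .then((res) => res.json())
//   .then((doc) => console.log(listPaths(doc)));
```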
#### G. Complied with Section 508 of the Americans with Disabilities Act and WCAG 2.0
Yes, we have used HTML and CSS in a manner that complies with the ADA and WCAG 2.0.
#### H. Created or used a design style guide and/or a pattern library
- Utilized the US Web Design Standards for user experience, visual design and responsive guidelines and patterns.
- Leveraged the Baymard Institute’s research-based user interaction guidelines for eCommerce product lists, homepages and checkout.
#### I. Performed usability tests with people
We performed “Think Aloud” qualitative user tests of functional prototypes with the individuals listed in Section C.
#### J. Used an iterative approach, where feedback informed subsequent work or versions of the prototype
We began by clarifying the business case and target outcomes without proposing solutions. This sets the stage for each activity to be oriented around learning and empowers each team member to bring their expertise and creativity into the solutions which are iteratively built and tested. Learnings from each activity are fed back into subsequent iterations, cross-functionally.
- Product Owner led goal-oriented kickoff and drafted a first version of the “Speclet” to align and hold the team accountable to high-level key outcomes and measurements.
- Explorations improved in fidelity based on our learning needs
- Key learnings from user interviews informed the project summary, goals, and measurements and allowed us to apply improvements to our designs and development.
- Team story time for formal technical review of prioritized backlog. Development feedback assisted in clarifying prototype behavior and story decomposition.
- Validated design concepts through prototypes with people outside the team. User feedback informed design and development work.
- Shared design, development, and product ideas daily through informal conversations and standups
- Utilized Scrum framework for frequent inspection and adaptation
#### K. Created a prototype that works on multiple devices, and presents a responsive design
Our prototype has been designed, developed and tested to work on desktop browsers, iOS and Android phones.
#### L. Used at least five (5) modern and open-source technologies, regardless of architectural layer (frontend, backend, etc.)
We utilized many modern open-source technologies:
- Phoenix Framework
- Ecto (data layer)
#### M. Deployed the prototype on an Infrastructure as a Service (IaaS) or Platform as Service (PaaS) provider, and indicated which provider they used
Our prototype has been deployed to AWS as a Docker container running in ECS, using RDS for its datastore.
#### N. Developed automated unit tests for their code
The Engineering Team delivered stories with working code and some level of automated testing. All tests are run in the continuous integration loop with each commit.
#### O. Setup or used a continuous integration system to automate the running of tests and continuously deployed their code to their IaaS or PaaS provider
Our use of a CI server drives automated tests and our deployment pipeline. All new pull requests are tested. We used CircleCI for both CI and CD.
#### P. Setup or used configuration management
We generate CloudFormation templates and build Docker containers, adhering to the twelve-factor app (https://12factor.net/) approach. CloudFormation templates for staging and production environments can be found in the docs/12-CloudFormation directory.
#### Q. Setup or used continuous monitoring
We set up Honeybadger.io for error reporting and Pingdom for uptime monitoring.
#### R. Deployed their software in an open source container, such as Docker (i.e., utilized operating-system-level virtualization)
We build Docker containers in our CI/CD process and deploy them to ECR/ECS in AWS.
#### S. Provided sufficient documentation to install and run their prototype on another machine
Please see the Setup Instructions section in this document. All engineers used these steps to set up their development environments.
#### T. Prototype and underlying platforms used to create and run the prototype are openly licensed and free of charge
All systems used to create and run the prototype are open source and free of charge for use.