Sprint 3: NIBRS Snapshot

Research plan

User group

A mix of journalists and members of law enforcement

Priorities

Key questions

Snapshot

  • Given the limitations of the data, how can the CDE best demonstrate the value of NIBRS reporting?
  • What aspects of incident data are most compelling to novice and advanced users alike?
  • Are there unique ways of visualizing NIBRS data (other than a time series, as seen with summary data) that help drive insights?
  • What are the most effective ways of presenting multiple dimensions and datasets on the same page?
  • How can we help people understand the relationships between NIBRS data, summary data, and estimations for a particular crime?

CSV

  • Do users explore/assess a dataset prior to deciding whether or not to download it? If so, what aspects are important in making this determination?
  • What level of data shaping/filtering is necessary, from both a technical and usability perspective, prior to the generation of a CSV?
  • How can we best present hierarchical & nested NIBRS data in a tabular/flat-file format, such as a CSV? (The sketch after this list illustrates the trade-offs.)
    • Are narrower, pre-filtered results generally preferable to raw, but more expansive, data?
    • Can an aggregated approach to incidents (e.g., offenses = 3, rather than breaking out the types of offenses reported) adequately represent the data? If so, does this approach require more filtering prior to download in order to be effective?
    • What are the implications for usability and file size of showing more data in very wide tables?
    • Can we effectively select which aspects to break out individually and which to aggregate in order to limit file size and width?
  • To what extent does the design of the CSV matter if most users are going to import the data into another tool for analysis?
  • What aspects of the data (e.g., definitions, footnotes) are important to explain in a readme or within the CSV?
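
To make the trade-off concrete, here is a minimal sketch of the two flat-file shapes under discussion. The incident structure and field names are illustrative placeholders, not official NIBRS element names.

```python
import csv
import io

# Hypothetical nested incident, loosely shaped like NIBRS data;
# field names are placeholders, not official NIBRS elements.
incident = {
    "incident_id": "A-1",
    "offenses": ["robbery", "assault", "weapon law violation"],
}

# Option 1: aggregated row. The file stays narrow, but the offense
# types are lost, so users may need to filter before downloading.
aggregated = {
    "incident_id": incident["incident_id"],
    "offense_count": len(incident["offenses"]),
}

# Option 2: wide row. One column per offense slot preserves detail,
# but the table widens with the maximum offenses per incident.
wide = {"incident_id": incident["incident_id"]}
for i, offense in enumerate(incident["offenses"], start=1):
    wide[f"offense_{i}"] = offense

# Emit both shapes as CSV to compare header width and content.
for row in (aggregated, wide):
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(row.keys()))
    writer.writeheader()
    writer.writerow(row)
    print(buf.getvalue())
```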

API

  • Based on our documentation, is it clear to users what they can and can’t do with the API? If so, how does this align with their expectations?
  • Is the API easy to set up and use? Does the documentation adequately explain how the data is structured and what various terms/definitions mean?
  • What types of use cases does the API not appear to support?
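
To ground these questions in a concrete task, here is a minimal sketch of the kind of request a tester might write. The host, path, and parameter names here are placeholders for illustration, not the documented CDE API.

```python
import requests

# Placeholder host, path, and parameters for illustration only;
# the real endpoints and fields come from the CDE API documentation.
BASE_URL = "https://crime-data-api.example/v1"

response = requests.get(
    f"{BASE_URL}/incidents",
    params={"offense": "rape", "state": "OH", "year": 2015, "api_key": "YOUR_KEY"},
    timeout=30,
)
response.raise_for_status()

# Print a few results to sanity-check the structure of the response.
for incident in response.json().get("results", [])[:5]:
    print(incident)
```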

Scenario

Snapshot

As a woman from Ohio, I want to understand the frequency of rape incidents in recent years and see how those statistics relate to national trends and reporting in general. I also want to understand how the stats break down by age, gender, and relationship.

Dimensions

  • Counts
  • Rates
  • Offender: age, sex, race, ethnicity
  • Victim: age, sex, race, ethnicity, relationship to offender
  • Location type (house/park/church/etc.)
  • Time of day (aka incident hour)
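
One way to picture a single drilled-down record carrying these dimensions is a sketch like the following; the field names are our own shorthand for the list above, not official NIBRS data element names, and counts/rates would be computed by aggregating such records.

```python
from dataclasses import dataclass

# Shorthand field names for one flattened incident record; counts and
# rates are derived by aggregating records like this, not stored on them.
@dataclass
class IncidentRecord:
    offender_age: int
    offender_sex: str
    offender_race: str
    offender_ethnicity: str
    victim_age: int
    victim_sex: str
    victim_race: str
    victim_ethnicity: str
    victim_relationship_to_offender: str
    location_type: str   # house, park, church, etc.
    incident_hour: int   # time of day, 0-23
```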

Navigation

After gathering internal feedback on high-level wireframes, we will map out the navigation into a basic text outline and run a Treejack test with federal employees to see how users react to our assumptions about navigation and information architecture.

CSV

We are working on a series of questions that will inform CSV design. It would be great if we could show what happens when a user drills down into a NIBRS highlight presented within the “snapshot” and downloads this data as a CSV. To evaluate potential design approaches, we will show two different types of CSVs: one with aggregated columns and another with more detail but a wider table (as in the sketch under the key questions above). The CSVs may be dynamically generated or prepared beforehand, but they should represent the data and scenario shown as part of the snapshot. The CSVs will also include a readme that explains certain elements of the data.

Key quotes

“A lot of times I’ll just start playing with the data and if I have questions, I’ll come back to the terms and definitions to fill in the gaps.”

“When it goes out to the public, give us time to review—I’m not a bigger state, but a week isn’t enough for drill-down comparisons and corrections.”

“I don’t know if I’d use time of day. Officers fill out the data, and with the issues I run into audit-wise, I refrain from publishing it because so many end up at 12am, or they’ll just put in 00. Sergeants don’t have time to check this level of detail.”

What we learned

  • A user-facing distinction between data sources doesn’t seem necessary for comprehension. People didn’t seem to care about SRS vs. NIBRS; we could likely address this within a page by adding an attribution link within the visual.

    • When we present elements of summary and NIBRS data together, advanced users will want to know the extent to which they are related so they can analyze the data responsibly.
    • Summary data seems to anonymize crime, but NIBRS data started to humanize it by showing who was affected by crime and how.
  • State agencies are excited about what we’re thinking and want to use the same level of data viz/stats that we’re thinking of sharing with the general public.

  • Keeping the charts simple and easily understandable will make the data more widely accessible and usable. (This also makes the data feel more transparent.)

  • Context isn’t just about the data.

    • Defining all the terms is just as important for making this product usable. People loved the glossary feature.
    • We need to link everything together better. We intentionally left things unlinked for the test, but people want to click through to definitions, caveats, the data declaration, and deeper views of the data.
    • People are quick to make inferences and find patterns in the data; we have to be careful to present only what the data shows and not mislead people about what it does or does not imply.
  • Downloads will still be key.

  • We need to send the CSV out to more participants, but early conversations suggest that people want the full data in the CSV, accompanied by good readmes and data descriptions.

  • In terms of the CSV, people care about the overall definition/integrity of the data more than its presentation; they’re used to manipulating the data in order to work with it.

Next steps

  • Incorporate NIBRS into main page template.
  • Refine navigation based on the Treejack results.
  • Map out the content needs for the MVP.