Caseflow metrics instrumentation #11994

Closed
lpciferri opened this issue Sep 6, 2019 · 1 comment
Labels
Eng: Data, Epic, Priority: Low, Team: Delta 🔺, Type: Metrics or Reporting

Comments

lpciferri (Contributor) commented Sep 6, 2019

Ticket to capture all the Caseflow metrics. We have a lot of disparate tools for metrics, which we may want to consolidate.

Subsequent sections of this ticket will capture metrics and where they should go (if we know at this time).

Goals

My overall goals for metrics instrumentation:

  • Concise - not too many tools, so it's not hard for any Caseflower to view metrics
  • Appropriately capture product metrics that roll up to business metrics (when possible, and not too abstracted)

Goals for this ticket:

  • Document tools, existing metrics and where they live, so we can make decisions about whether they should live somewhere else, and where.

Data sources

There are 2 relevant data stores:

  1. Our databases (Caseflow, efolder, VACOLS)
  2. DataDog’s database

Depending on the metric we’re trying to report, either one could be used.

Tools

Caseflow team owns:

  • DataDog
  • New Relic (app performance only)
  • Google Analytics (user patterns only)
  • Appeals data notebooks Chris used to use

VA teams own:

  • Looker/Tableau - BVA
  • Tableau - CSEM user report
  • Tableau - VBA/PA&I

NOTE: all the VA tools currently rely on a Redshift DB that effectively mirrors the Caseflow and VACOLS DBs.

Metrics

Groups of metrics included below:

  1. Required OIT metrics (required)
  2. Business metrics (we think we should track, not required)
  3. Product & performance metrics
  4. Team/process metrics

Required OIT metrics

These are metrics we are required to report to OIT and OMB. They are a subset of all the metrics that we as a Caseflow team would like to capture. But because they're reported externally, they have to be a priority.

Key OIT stakeholder contacts for these: Geoffrey Stienblock, Susan Kagan

| Status | Metric ID | Metric Description | Unit of Measure | Performance Measurement Category Mapping | Agency Baseline Capability | 2018 Target | 2019 Target | Measurement Condition | Reporting Frequency |
|---|---|---|---|---|---|---|---|---|---|
| Not Started | 1709076050 | Caseflow - Percentage of all cases certified with Caseflow, which assists Agency Original Jurisdiction in uploading all required documents of record to a shared electronic file repository and automatically prepares the Certification of Appeal to transfer the appeal to the Board of Veterans' Appeals. | (Number of appeals transferred to the Board using Caseflow) / (Total number of appeals transferred to the Board: manual certifications of paper appeals and Caseflow certifications) | Strategic and Business Results | 85.7 | 90 | 90 | Over target | Monthly |
| Not Started | 1709076052 | Caseflow Dispatch - Percent of non-denial decisions with an EP created within 7 days. Caseflow facilitates the transfer of cases to the Regional Office following issuance of a Board of Veterans' Appeals decision, with an end product (EP) created to track work. Not creating an EP increases the possibility of miscommunication and slows average processing times, delaying ratings adjustment and working of remand orders. | (Number of EPs with date <= 7 days after outcoding date) / (Number of non-denial decisions) | Strategic and Business Results | 79.6 | 80 | 80 | Over target | Quarterly |
| Not Started | 1812216039 | Increase Veteran show rates at Board hearings using Caseflow Hearing Schedule, which enables a standardized scheduling process providing the ability to schedule hearings at both regional offices and alternate locations closer to Veterans' homes. | (Total scheduled hearings held - postponed hearings) / (Total scheduled hearings from Caseflow Hearing Schedule) | Strategic and Business Results | | N/A | 75 | Over target | Quarterly |
| Done 9/30/2019 | 1812216040 | Percentage of AMA reviews established within 7 days | Date in Intake appeal established minus date entered | Strategic and Business Results | | N/A | 90% | Over target | Monthly |
| Not Started | 1812216042 | Adoption of Caseflow Reader for appeal decisions (percentage) | Date of decision minus date of appeal establishment | Customer Satisfaction (Results) | | N/A | 98% | Over target | Annual |
| Not Started | 812216043 | Average Customer Effort Score (CES) | 5 out of 7 positive responses or better | Customer Satisfaction (Results) | | N/A | 5 out of 7 or better | Over target | Annual |
| Not Started | 1812216044 | Mean (average) time to recovery (minutes) | Outage recovery time minus outage report time | Innovation | | N/A | 30 minutes or less | Under target | Quarterly |
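As an illustration, several of the metrics above reduce to simple date arithmetic. Here's a minimal Python sketch of metric 1812216040 (percentage of AMA reviews established within 7 days); the record shape and field names are hypothetical, not the actual Caseflow schema:

```python
from datetime import date

# Hypothetical intake records: the date a review request entered the system
# and the date its appeal/review was established. Illustrative data only.
intakes = [
    {"entered": date(2019, 9, 1), "established": date(2019, 9, 3)},
    {"entered": date(2019, 9, 1), "established": date(2019, 9, 12)},
    {"entered": date(2019, 9, 2), "established": date(2019, 9, 8)},
]

def pct_established_within(intakes, days=7):
    """Share of reviews established within `days` of entry, as a percentage."""
    within = sum(
        1 for i in intakes if (i["established"] - i["entered"]).days <= days
    )
    return 100.0 * within / len(intakes)
```

In practice this would be a query against the Caseflow DB (or its Redshift mirror) rather than in-memory records, but the "date established minus date entered" comparison is the same.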

Business metrics (appeals/decision review process metrics)

These are the hardest to capture, given the nature of the appeals process. We'd like to influence the length of time it takes for an appeal to move through the system. Since that takes years, and there are a number of other variables (union quotas, multiple appeals systems, multiple dockets), it's difficult to tie the impact of specific features back to those long-term business outcomes. That doesn't mean we shouldn't try!

I think all of the metrics below should live in BVA's instance of Tableau.

Things we don't capture yet:

  • Appeals process timeliness, by docket
  • Number of appeals in each phase's backlog (e.g. waiting for distribution, waiting for hearings, at judges and attorneys, etc.)

Things we have captured or know we need to capture:

| Metric | Definition | Source | Note |
|---|---|---|---|
| Appeal establishment (AMA only) | Percentage of AMA reviews established within 7 days | | |
| Caseflow Certification - data improvements | Percentage of cases with mismatched documents | ? | Chris used to capture this |
| Hearings show rate (AMA + legacy) | (Total scheduled hearings held - postponed hearings) / total scheduled hearings | Testing in Looker at https://caseflow-looker.va.gov/looks/193 (though I think this is just AMA hearings) | Required OIT metric |
| Alternate hearing location utilization | Percentage of hearings held at alternate locations by year | ? | Chris used to capture this |
| Reducing hearings backlog | Held or cancelled hearings per year. A held or cancelled hearing resolves the hearing request, so this measure is the number of hearing requests that the Board has capacity to address in a given year. Hearings at which the Veteran did not appear at the scheduled time are treated as cancellations and are thus included here. | ? | Chris used to capture this |
| Held hearings | Held hearings per year | ? | Chris used to capture this |
| Caseflow Queue - time from case storage to attorney assignment (legacy and AMA) | Time from case storage to attorney assignment | ? | Chris used to capture this |
| Attorney/VLJ case output | Monthly case output per attorney/VLJ | ? | Chris used to capture this |
| Caseflow Queue - dispatch | Time from signed decision to outcoding | ? | Chris used to capture this |
| Caseflow Dispatch metric (legacy cases only) | Percent of non-denial decisions with an EP created within 7 days | ? | Blocked; requires CorpDB. Should be BVA Tableau. Required OIT metric |
| Caseflow Dispatch - dispatch process timeliness | Time from outcoding to claim establishment | ? | Requires CorpDB data, so long query times. Blocked. Chris used to capture this. |

  • Section 5 reporting requirements in AMA legislation --> a whole project in and of itself.
  • Productivity reporting
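The hearings show rate formula above can be sketched directly. This is a minimal illustration of the formula as stated in this ticket, (held - postponed) / scheduled; the function name and percentage convention are assumptions, not an existing Caseflow or Looker implementation:

```python
def hearing_show_rate(held, postponed, scheduled):
    """Show rate as defined in this ticket:
    (total scheduled hearings held - postponed hearings) / total scheduled hearings,
    returned as a percentage."""
    if scheduled == 0:
        raise ValueError("no scheduled hearings in this period")
    return 100.0 * (held - postponed) / scheduled
```

For example, 80 held and 5 postponed out of 100 scheduled hearings yields a 75% show rate, matching the 2019 target for OIT metric 1812216039.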

Product & performance metrics

Adoption & usage

| Metric | Definition | Source | Note |
|---|---|---|---|
| Caseflow Intake adoption | Percentage of legacy appeals and AMA review requests established by Intake | ? | Chris used to capture this |
| Caseflow Intake usage | Number of review requests established by Intake | ? | Chris used to capture this |
| Reader adoption | # decisions where Reader was used: appeals opened in Reader / total decisions | ? | Required OIT metric |
| Caseflow Certification adoption (legacy cases only) | Percentage of paperless cases certified with Caseflow (eligible cases) | ? | Chris used to capture this |
| Caseflow Certification adoption (legacy cases only) | Percentage of all cases certified with Caseflow | ? | Chris used to capture this |
| Caseflow Dispatch adoption (legacy cases only) | Percentage of non-denial decisions with a Caseflow-established EP | ? | Chris used to capture this |
| Monthly active users | # users by user role | CSEM Tableau report or ? | |
| Appeals Status adoption | Percentage of Veterans with active appeals accessing Caseflow status information | ? | Chris used to capture this |
| Appeals Status monthly active users (Veterans) | # unique Veterans who have accessed appeals status via VA.gov per month | ? | Chris used to capture this. Does DSVA/VA.gov still? |
| eFolder usage | Veteran folders downloaded by month | ? | Chris used to capture this |

User breakdown

  • BVA teams
  • VBA teams
  • non-compensation business line teams (?)
  • VSO teams (VSOs, private attorneys, agents)

Usability

| Metric | Definition | Source | Note |
|---|---|---|---|
| Customer Effort Score (to be started) | | Survey/email to start | Required OIT metric |

Performance

DataDog dashboard - service level indicators

| Metric | Definition | Source | Note |
|---|---|---|---|
| Page load times | | | |
| Uptime | | | |
| Request time | | | |
| Latency | | | |

Team process metrics

Old Caseflow Stats doc where we hacked together incident response metrics, CI metrics, and data integrity metrics.

| Metric | Definition | Source | Note |
|---|---|---|---|
| Incident response | Mean (average) time to recovery (minutes) | | Required OIT metric |
| Velocity | (we don't currently track beyond point-in-time estimation sessions) | | |
| Continuous integration success rate | | | |
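The incident response metric is defined above (and in the OIT table) as the mean of outage recovery time minus outage report time. A minimal sketch, assuming a hypothetical incident log of (report, recovery) timestamp pairs:

```python
from datetime import datetime

# Hypothetical incident log: (time outage was reported, time it was recovered).
# Illustrative data only, not pulled from any real incident tracker.
incidents = [
    (datetime(2019, 9, 1, 10, 0), datetime(2019, 9, 1, 10, 20)),  # 20 min
    (datetime(2019, 9, 5, 14, 0), datetime(2019, 9, 5, 14, 40)),  # 40 min
]

def mean_time_to_recovery_minutes(incidents):
    """Mean of (recovery - report) across incidents, in minutes."""
    total_seconds = sum(
        (recovered - reported).total_seconds() for reported, recovered in incidents
    )
    return total_seconds / len(incidents) / 60.0
```

The OIT target (metric 1812216044) is 30 minutes or less, measured under target, quarterly.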

Open questions

  • What metrics do we track that are missing?
  • What tools do we use that are missing?
  • For metrics that we want to track but don't yet, what tool should we use?
lpciferri added the Epic label Sep 6, 2019
alisan16 mentioned this issue Nov 19, 2019
D-L-Ware added the Eng: Data and Priority: Low labels Dec 19, 2019
alisan16 (Contributor) commented Apr 1, 2020

Closing this as the remaining work isn't actionable. OIT metrics tracked in #12748

alisan16 closed this as completed Apr 1, 2020