This repository has been archived by the owner on Aug 1, 2023. It is now read-only.

Release Notes Reviewer #174

Open
stackatron opened this issue Nov 13, 2019 · 3 comments

stackatron commented Nov 13, 2019

Overall problem:

  • Solving challenges for user retention is taking longer than anticipated.
  • App Miners are concerned that abandoned apps are ranking high because the reviewers do not account for app improvements.
  • What we need is a way to reward meaningful progress on apps. Progress is hard to measure objectively, and any objective measure would likely be game-able.

Game-able objective measures:

Here are the suggestions from the proof-of-progress thread simply to illustrate the point:

  • Posting tweets: Obviously game-able.
  • Fixing bugs: Bugs can be purposely introduced and then fixed.
  • Adding new features: 100 new features in each app does not really equate to apps getting better.
  • Doing partnerships: Interesting, but it is easy to imagine bogus or pointless partnerships. And how would a reviewer even verify a partnership?

Our true objective:

In my opinion our true objective is to reward apps that provide value to users as measured by retention.

Since we are delayed on retention, I'll propose a temporary, secondary objective: Reward apps that make quality improvements that benefit users. This is a subjective goal, and so I suggest we use subjective measures for scoring.

Release Notes Reviewer

Boot up:

  • PBC finds a Release Notes Reviewer
  • PBC and Release Notes Reviewer source five Evaluators.
  • In both cases we are looking for:
    • Familiar with building/evaluating technology products.
    • Responsible, thoughtful, and free of bias.

Monthly run:

  • Apps publish release notes before the 1st of the month using a new feature on App.co. This is intended as a communication from the app creators to the app users.
  • App.co will export a snapshot of all app release notes and share with Release Notes Reviewer on the 1st of each month.
  • Evaluators will evaluate the release notes, attempt to verify the claims in the notes with the app, compare to past notes, and then score apps.
  • Release Notes Reviewer will average the individual Evaluator scores and order retests where needed.

Scoring:

  • Value: Release notes describe improvements that are valuable to users (scored 1 to 10)
  • Accuracy: Changes in notes are consistent with current state of the app (scored 0 to 1)
    • 0 = Evaluator believes the release notes contain false claims, the app is recycling bugs/features from previous notes, or the app is doing anything else to undermine accurate scoring.
    • 0.1–1 = Evaluator's discretion. For example, if 1 of 2 claims is true, score 0.5.
  • Final raw score: Value*Accuracy
    • If there are no new release notes in the last 30 days, score is -1
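The scoring above can be sketched as a small function. This is just an illustration of the arithmetic; the names `value`, `accuracy`, and `has_new_notes` are hypothetical, since no interface has been agreed:

```python
def raw_score(value, accuracy, has_new_notes):
    """Combine one Evaluator's Value (1-10) and Accuracy (0-1) scores.

    Apps with no new release notes in the last 30 days score -1.
    """
    if not has_new_notes:
        return -1
    return value * accuracy


def final_score(evaluator_raw_scores):
    """Average the individual Evaluators' raw scores for one app."""
    return sum(evaluator_raw_scores) / len(evaluator_raw_scores)
```

For example, an app judged Value 8 with half of its claims verified (Accuracy 0.5) would get a raw score of 4.0.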

Abstract considerations:

This system has some downsides, but I think it would encourage:

  • Fixing bugs and making sure they are described accurately and fully fixed in production.
  • Releasing a few very valuable features and making sure they work in the product and users can find them.
  • Honest release notes, since exaggeration and lying are penalized.

On the process side of things:

  • Unlike the user retention reviewer, this reviewer is ready to go with no new tech.
  • We can shut down this reviewer once retention metrics are working properly. This won't scale well and that is OK.

I'm not super attached to this idea. Just suggesting a path forward that I think could work given all the constraints and serve as a temporary patch for rewarding App Miners who are shipping improvements each month. Feedback please 🙏

@stackatron stackatron self-assigned this Nov 13, 2019
This was referenced Nov 13, 2019
@sdsantos

I like the idea of rewarding apps that are actively worked on, but I don't like building a full score out of something so subjective and app-specific.

I would suggest something simpler:

  • The release notes go through a binary check: "Was there at least a minor change in the product?"

And the score could be something like:

  • Change in the last month: 1
  • Change only in the previous month: 0.50
  • Change only two months ago: 0.25
  • Change three or more months ago: 0

As a bonus, open source apps would only need to link to their CHANGELOG file instead of writing something on app.co.
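The binary check plus time-decay score above could be sketched as follows (the function name is illustrative, not an agreed interface):

```python
def recency_score(months_since_last_change):
    """Score an app by how recently it shipped at least a minor change.

    0 = a change this past month, 1 = a change only the month before,
    2 = a change only two months ago; anything older scores 0.
    """
    decay = {0: 1.0, 1: 0.50, 2: 0.25}
    return decay.get(months_since_last_change, 0.0)
```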

@larrysalibra

I concur with the need for something that incentivizes app devs to continue working on apps; it's also important that these changes are useful for users.

Subjective score will be controversial but it might help "good" apps if the evaluators are seen as fair and are respected by app developers.

> Evaluators will evaluate the release notes, attempt to verify the claims in the notes with the app, compare to past notes, and then score apps.

One thing I'd like to point out is the time this will require. If you have 5 evaluators and each evaluator evaluates every app's changes, that's 1250 release note evaluations to conduct. At 3 minutes per evaluation, that's 62.5 person-hours, or roughly a day and a half of work per evaluator. And I imagine they'd need to spend much more than 3 minutes per app to do any sort of useful review of claimed features.
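The back-of-envelope workload math above can be reproduced as follows; the app count of 250 is an assumption implied by the stated 1250 evaluations across 5 evaluators, not a figure from the thread:

```python
evaluators = 5
apps = 250  # assumed: 1250 total evaluations / 5 evaluators
minutes_per_evaluation = 3

total_evaluations = evaluators * apps                          # 1250
total_hours = total_evaluations * minutes_per_evaluation / 60  # 62.5
hours_per_evaluator = total_hours / evaluators                 # 12.5, ~1.5 work days
```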

@larrysalibra

I wrote up some thoughts on how measuring the value an app delivers to users removes the need to measure "meaningful progress on apps," based on a call with @pstan26: https://forum.blockstack.org/t/measuring-user-value-hodling-stx-and-one-click-payments/9418?u=larry
