
Update guidelines for Audit Period Changes #158

Closed
GinaAbrams opened this issue Sep 27, 2019 · 12 comments

@GinaAbrams
Contributor

What is the problem you are seeing? Please describe.
If apps are re-tested during the audit period, we need to make sure they are tested against what they submitted at the beginning of the month, not against changes made mid-month.

We also need to publish a guide to what qualifies for a retest.

How is this problem misaligned with goals of app mining?
App Mining is a system designed to serve all app founders, and we want to ensure we're serving them fairly in the audit period as well.

What is the explicit recommendation you’re looking to propose?
Gather feedback and change proposals from the community, publish, and implement.

@friedger
Contributor

In #159 I suggest:

  • using GitHub issues for reporting, and
  • not accepting incorrect audience settings in TMUI as changes.

@friedger
Contributor

Highlight that the audit period is for finding systematic errors, not for improvement or decreasing the rank of individual apps.

For example, if an audience setting in TMUI is wrong, then the fix should be that settings are made public at the beginning, before tests begin.

If the months since launch on PH are wrong, then PBC could try to formalize how relaunches are handled.

@njordhov

If the goal is to prevent major changes before a TMUI retest, the results could be published immediately after completion, with a short window (say, 24h) in which to request retesting.

@GinaAbrams
Contributor Author

The main item for discussion here is primarily TryMyUI-related. Product Hunt months since launch is simple data that can be audited.

I'm not sure how making the review period shorter would improve the situation.

I think there can be more global rules, such as:

  • Apps cannot be re-tested because of audience preferences. TryMyUI can only re-test apps that were not tested correctly (e.g. the wrong app was reviewed).

How should we handle changes that might be NIL-related? We've had requests from folks saying their Gaia score needs to be updated, but we don't want to reward apps for changes made mid-review.

@njordhov

njordhov commented Oct 4, 2019

> I'm not sure how making the review period shorter would improve the situation.

The idea is to provide the TMUI score immediately upon completion, with a short review period, so that if a retest is done there isn't much time to make major changes to the product.

@friedger
Contributor

friedger commented Oct 5, 2019

> Highlight that the audit period is for finding systematic errors, not for improvement or decreasing the rank of individual apps.

The audit period should not be used to handle retests (whether by TMUI or NIL or any other reviewer) for an individual app. Only errors in the algorithms should be accepted, not errors in the reviewers' judgment.

Errors in reviews, such as whether a reviewer is biased or not qualified, and how the quality of a reviewer could be improved, should be discussed in the monthly calls. The audit period should not be used for that. Handling review errors in the audit period could lead to manipulation of individual scores and does not scale. We should treat the reviewers' data input to the algorithms as immutable.

Examples of systematic errors could be (checks for both are sketched below):

  • last month's data was used instead of the current month's
  • apps flagged as ineligible are included in the ranking
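
To make these two checks concrete, here is a minimal sketch in TypeScript. The `RankingInput` shape and its `appId`, `month`, and `eligible` fields are assumptions for illustration only, not part of any actual App Mining pipeline.

```typescript
// Hypothetical shape of one app's ranking input; field names are invented.
interface RankingInput {
  appId: string;
  month: string;     // e.g. "2019-10"
  eligible: boolean; // set by reviewers / eligibility rules
}

// Systematic error 1: last month's data used instead of the current month's.
function findWrongMonth(inputs: RankingInput[], currentMonth: string): string[] {
  return inputs
    .filter((app) => app.month !== currentMonth)
    .map((app) => `${app.appId}: data is for ${app.month}, expected ${currentMonth}`);
}

// Systematic error 2: apps flagged as ineligible included in the ranking.
function findIneligible(inputs: RankingInput[]): string[] {
  return inputs
    .filter((app) => !app.eligible)
    .map((app) => `${app.appId}: flagged ineligible but present in ranking`);
}

// Example run over made-up data.
const inputs: RankingInput[] = [
  { appId: "app-a", month: "2019-10", eligible: true },
  { appId: "app-b", month: "2019-09", eligible: true },  // stale month
  { appId: "app-c", month: "2019-10", eligible: false }, // ineligible
];
console.log([...findWrongMonth(inputs, "2019-10"), ...findIneligible(inputs)]);
```

Failures from checks like these would be grounds for an audit-period correction; individual review judgments would not be.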

@Walterion01

From another point of view, limiting the audit period to systematic errors will not be perfect. Since we will probably add or switch reviewers, and they are mostly human, they will make mistakes like the one behind this issue. We should not let the developer take the loss for a reviewer's fault.
Keep in mind, too, that the way mining rankings flow, once an app goes down it is very hard to bring it back up. So if we let the developer take the hit and only fix the issue for the next month, that does not help an app built with hard work.

Instead of dismissing the question, I propose optimising the flow and the reviewers' work to minimise faults. I assume that this way no one can abuse it: if there is no issue in the process, no one can object to it.

For more clarity: it is not wise to send someone to jail with no chance to defend themselves because the judge made a mistake, and then say: "OK, we will fix that for the next person in line."

So I propose that instead we work on the issues in the reviewers' process and flow and try to fix them, i.e. fix the judge.

@friedger
Contributor

friedger commented Oct 5, 2019

My view on this is that we are not talking about going to jail but about getting a free lunch. Everybody gets nothing by default (i.e. is not in jail); the algorithms determine the free lunch for your app based on the data they receive. If you don't like the outcome, you can fix the algorithm or fix the data. You should probably not mix the two fixes.

@Walterion01 Your concerns about the impact of last month's score should be discussed in a separate issue.

I do agree that issues happening for reviewers should be fixed, but not in the audit period.

@Walterion01

It is a metaphor, dear friedger.
When you have worked on something for months and it gets an unfair reward through no fault of your own, that is discouraging. And one of the flaws of the proposal is the effect of LastRoundScore, so it should be discussed here, since it is already in effect.

An example for even more clarity: say in future months we have 500 apps, NIL checks them all, and some of them load a new version of blockstack.js with a bug that happens to affect, say, 30 of them. What should we do in that case? Call for an emergency? Under this proposal, we would fix the bug, not let them ask for a retest, and tell them: "Try next month."
Shouldn't we instead have a good structure: report the issue, e.g. on GitHub or a website for developers, and, once it is verified by PBC or another reviewer, let those apps have their rightful justice?

Honestly, I don't understand the motive behind letting someone else take the damage for a judge's mistake instead of improving the workflow. Do we want to go the easy way or the right way?

@Walterion01

@GinaAbrams I propose having a way to report such issues in the app dashboard or on GitHub, with PBC or a reviewer verifying the issue against some clear rules.
Sample rules (a sketch of how they could be checked follows the list):

  • The cause of the issue must be a matter controlled solely by PBC or the reviewers.
  • The developer must act and provide all requested information within the timing declared by PBC.
  • The developer should report such issues within the first two days of the audit period, to allow time for checking; otherwise, examination is not guaranteed.
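
As a sketch of how these rules might be checked mechanically: the `AuditIssueReport` shape and its field names below are hypothetical, and the two-day window is taken from the proposal above; nothing here is an existing tool.

```typescript
// Hypothetical issue-report shape; fields mirror the proposed rules.
interface AuditIssueReport {
  appId: string;
  causeControlledBy: "pbc" | "reviewer" | "developer";
  reportedAt: Date;            // when the developer filed the report
  auditPeriodStart: Date;      // start of the audit period
  infoProvidedOnTime: boolean; // developer met PBC's declared timing
}

const TWO_DAYS_MS = 2 * 24 * 60 * 60 * 1000;

// Returns true only if all three proposed rules are satisfied.
function examinationGuaranteed(report: AuditIssueReport): boolean {
  const rule1 = report.causeControlledBy !== "developer";
  const rule2 = report.infoProvidedOnTime;
  const rule3 =
    report.reportedAt.getTime() - report.auditPeriodStart.getTime() <= TWO_DAYS_MS;
  return rule1 && rule2 && rule3;
}

// Example: a report filed three days into the audit period fails rule 3.
console.log(examinationGuaranteed({
  appId: "app-a",
  causeControlledBy: "reviewer",
  reportedAt: new Date("2019-10-04"),
  auditPeriodStart: new Date("2019-10-01"),
  infoProvidedOnTime: true,
})); // false
```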

@ViniciusBP

ViniciusBP commented Oct 5, 2019

IMO, for TryMyUI, each new retest should be requested here on GitHub, and the old and new videos should be reviewed to see whether the developer made significant changes to the app. Just look at what happened last month: will the old score of Arcane Maps be fixed if the videos prove that the app changed a lot? It is easy to check and see that the app changed a lot and had an unfair advantage; it is the same as granting a special deadline with almost one additional month of development time. Even worse now that TryMyUI scores are only supposed to change every two months.

I don't like the idea of developers contacting reviewers directly and fixing their issues during audit time.

@GinaAbrams
Contributor Author

The app mining process is mostly manual these days, but it is progressing toward more automation. We have to treat app reviewers as the source of truth for app reviews, while also ensuring a fair process for all. I propose the simple solution listed above, which we can revisit when the process is more programmatic.

Apps will not be re-reviewed unless something was blatantly wrong on the part of the app reviewer. If, for example, the wrong app was reviewed, then we can re-test. But if it's a matter of audience preferences, the app will not be re-reviewed.
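
Expressed as a decision rule, a sketch only; the reason categories are invented to illustrate the policy, not taken from any existing system:

```typescript
// Invented reason categories for illustration.
type RetestReason = "wrong-app-reviewed" | "audience-preferences" | "other";

// Per the rule above, only a blatant reviewer error such as reviewing
// the wrong app qualifies for a re-review.
function isRetestEligible(reason: RetestReason): boolean {
  return reason === "wrong-app-reviewed";
}

console.log(isRetestEligible("audience-preferences")); // false
console.log(isRetestEligible("wrong-app-reviewed"));   // true
```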

This should be revisited in a few weeks, when we launch the updated maker portal.
