TryMyUI Scaling Proposal #153
Better spread the recurring reviews out to encourage continuous improvements.
I also agree that it would be better to decrease the number of tests per month but keep them monthly, rather than spreading them apart.
By spreading I mean accumulating results from as far back as needed until the total number of tests is >= 10. @GinaAbrams is that feasible with TryMyUI?
The norm is that a volume of tests (best practice is 20 or more) is needed to produce accurate results, which is why we suggested bi-monthly reviews with higher reliability. @friedger I'm not quite sure what you're laying out here, can you break it down further?
@GinaAbrams I tried to understand how spreading could work. Currently, it is not clear what it means for apps to be tested bi-monthly. Will the score only change in November, January, and so on for all apps? Will some apps get a new score in the months in between? My suggestion was to accumulate test scores from a time window large enough to contain 10 test results.
In the proposed model, apps would be tested every other month: their scores would remain the same for months 1 and 2, and they would be re-tested in month 3.
Can I ask how it will work for October? Which apps will be checked, even or odd ones? If I may, I want to suggest an improvement to this proposal:
Just wanted to reiterate: I believe it would be better to keep the tests monthly but decrease them to 2 or 3 per month after the first month. We would still incentivise frequent improvements while decreasing the TryMyUI load.
We've discussed with TryMyUI and they recommend a volume of tests on a monthly basis. We're going to implement this change for October and can iterate as needed. Apps that were tested in September will next be tested in November, and their TryMyUI scores will remain the same for October.
What does that mean?
On average, projects need around 20 user tests to get actionable and reliable feedback. If we decreased the number to, say, 2 or 3, the results would vary a lot more; the risk would be introducing volatility rather than reliability into the rankings. I think the best performing apps are going to be the ones making constant improvements regardless! 🚀
We could still start with 10 user tests and then continue with 2 or 3 each month. The last 10, 15, or 20 user tests would count toward the ranking.
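To make the suggestion above concrete, here is a minimal sketch of the rolling-window idea: keep adding monthly batches of results, newest first, until at least 10 tests are included, then average them for the ranking. The function name, the batch format, and the example scores are all hypothetical, not anything TryMyUI actually provides.

```python
MIN_TESTS = 10  # assumed minimum window size from the discussion

def rolling_score(monthly_batches):
    """Average the most recent test scores, reaching back month by
    month until at least MIN_TESTS results are included.

    monthly_batches: list of (month, [scores]) sorted newest first.
    Returns None if there are no results at all.
    """
    included = []
    for month, scores in monthly_batches:
        included.extend(scores)
        if len(included) >= MIN_TESTS:
            break  # window is large enough; ignore older months
    if not included:
        return None
    return sum(included) / len(included)

# Hypothetical example: 3 + 3 + 4 tests reach the 10-test window,
# so July's results are left out of the ranking.
batches = [
    ("2019-10", [78, 82, 75]),
    ("2019-09", [80, 79, 81]),
    ("2019-08", [70, 74, 76, 72]),
    ("2019-07", [60, 65]),  # excluded: window already has 10 tests
]
print(rolling_score(batches))  # -> 76.7
```

Under this scheme an app's score would still move every month (new batches push old ones out of the window) while the per-month testing load stays small.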
@GinaAbrams has notified everyone by email on this. We can try every other month for a couple of cycles and then reevaluate. Moving to review.
@sdsantos I like your idea; if you want to rally support for it, can you please start a new ticket?
TryMyUI, as an app reviewer, has a proposal, given the number of tests that must be set up, run, and reviewed, each carrying a fixed production cost.
The proposal: starting in October, test each app bi-monthly. Once we pass 300 apps, move to quarterly testing.
In this model, apps will have more time to implement changes, and TryMyUI can continue to quality control as they have been.