
Reviewer: App Awareness #30

Closed
pstan26 opened this issue Feb 15, 2019 · 14 comments
@pstan26 (Member) commented Feb 15, 2019

Right now it is too easy to game a metric like daily active users, and apps are not currently incentivized to grow their user base. A solution could be to add an App Reviewer that measures an app's awareness by reach.

@pstan26 (Member Author) commented Feb 15, 2019

Currently in talks with Awario. They feel confident they could reduce gaming of this system to an acceptable degree by evaluating the true "reach" of an app rather than "mentions" (which are easier to game). They seem interested in moving forward as an App Reviewer, though we'd still like to compare other options.

@pstan26 (Member Author) commented Feb 15, 2019

Also, I like Friedger's idea that "Maybe have the app review consists of one score from centralized services and one from decentralized services and over time shift the weights towards the latter." Starting 100% in the former camp sounds reasonable at this stage, however.

@AC-FTW commented Feb 27, 2019

@pstan26 If your initial suggestion evolved into something like measuring recurring active user growth, that would be useful. If there were a decentralized analytics company you could rely on for all Blockstack apps, that would be ideal.

Reach and mentions concern me because they both seem easy to game and don't really measure the true utility of an app, just the founder's social media and PR prowess.

Perhaps your concern, though, is that recurring active user growth would also be easy to game without capturing identifying information in the analytics?

@cuevasm (Collaborator) commented Feb 27, 2019

Hey, good thoughts, wanted to add some context here since I use Awario regularly. The raw mentions number could certainly be gamed (just have a bot tweet out your company name a bunch), but reach is much harder to game, and that's why we plan to focus on it as the key metric. Gaming reach in any effective way would require getting large, legitimate-looking accounts that manage to escape Awario's filtering to mention your brand. Not only does Awario do a great job of assessing whether accounts are real, they also offer blacklisting tools, so if an account were ever set up to game the system AND managed to amass a large following/potential reach, we could very easily remove it from the counts. Furthermore, the idea is that all the mention and reach data will be available for everyone to look at and help us identify these likely rare cases; this would be in addition to audits by Awario and Blockstack PBC.

More generally speaking, any product or service needs to be marketed if it is to be successful, and in the crypto world especially, makers should be thinking about this earlier on. App Mining should encourage the things that are proven to set you up for success, and users can't be acquired without awareness. I do agree that this type of App Reviewer gets even stronger when/if a true user count is introduced (my opinion is that it should be). The two scores could work together to reward something like high conversion and to catch any cases where an app generates a ton of awareness but very low usage (probably unlikely, but possible if they are marketing far outside their market). In the meantime, though, it's important for apps to build their awareness and expose their offering to potential users.

@cuevasm (Collaborator) commented Mar 1, 2019

We're going to proceed with Awario but do a dry run first - this will give everyone a chance to see what type of data we'll be getting and how it would affect scores before it becomes official. Work is ongoing to finalize how the score will be normalized.

Open question: at a high level, do you think we should reward recent growth in this area fairly heavily? I.e., instead of focusing strictly on volume, the score would look at growth compared to the previous period. That introduces a notion of momentum and makes the score a bit more relative across apps.

@hstove (Collaborator) commented Mar 6, 2019

Related to Mitchell's last post, we've been discussing potential algorithms for use with Awario. Here is my proposal:

Reach Score: Based on total reach. Your 'score' is log10(total_reach). So if you've reached 10 people, your score is 1; 100 is 2, 1000 is 3, and so on. This is much better than using your raw reach directly, because outliers would totally skew the distribution. No matter what your reach is, you need to grow it 10x to increase this score by 1. Using log10 is also similar to how we handle the 'theta' function in the algorithm: the higher your score, the more you need to improve to bump it.

Growth Score: Month-over-month growth in your total reach (not log 10). If you went from a reach of 1000 to 1500, your MoM growth is 0.5 (or 50%). If this is your first month in app mining, this score is not included.

Like all the other reviewers, we first calculate the z-score for each of these metrics, and then average your z-scores. Then we apply the theta function, and you have your 'final' Awario score.
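
To make the mechanics concrete, here's a minimal sketch of how this could be computed across apps, using made-up app names and numbers; the exact 'theta' function isn't defined in this thread, so it's left out of the code:

```python
# Illustrative sketch only - not the official App Mining implementation.
import math
from statistics import mean, pstdev

def reach_score(total_reach):
    # log10 of total reach: 10 people -> 1, 100 -> 2, 1000 -> 3, ...
    return math.log10(total_reach) if total_reach > 0 else 0.0

def growth_score(current_reach, previous_reach):
    # Month-over-month growth on raw reach (not log10).
    # Returns None for an app's first month, so the metric is excluded.
    if previous_reach is None or previous_reach == 0:
        return None
    return (current_reach - previous_reach) / previous_reach

def z_scores(values):
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma if sigma else 0.0 for v in values]

# Hypothetical apps: (reach this month, reach last month or None)
apps = {"app_a": (1500, 1000), "app_b": (40000, 35000), "app_c": (900, None)}

reach = {a: reach_score(cur) for a, (cur, prev) in apps.items()}
growth = {a: growth_score(cur, prev) for a, (cur, prev) in apps.items()}

reach_z = dict(zip(reach, z_scores(list(reach.values()))))
with_growth = [a for a, g in growth.items() if g is not None]
growth_z = dict(zip(with_growth, z_scores([growth[a] for a in with_growth])))

for app in apps:
    zs = [reach_z[app]] + ([growth_z[app]] if app in growth_z else [])
    combined = mean(zs)  # average of the available z-scores
    # The theta function (same one used by other reviewers) would then be
    # applied to `combined`; it isn't specified here, so it's omitted.
    print(app, round(combined, 3))
```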

This is just my proposal, and is not final.

@cuevasm (Collaborator) commented Mar 6, 2019

Definitely makes sense to me. I like how using log10 and including the month-over-month growth score further reduces gameability and controls for any small amount of error we might have in picking up reach that wasn't actually about the project. It's also positive that newer applications will be able to compete even with well-established projects.

@jeffdomke (Member) commented Mar 8, 2019

@hstove @pstan26 @cuevasm we should have a chat about finalizing this plan.

@avthars commented Mar 20, 2019

Love this idea. We should def incorporate this!

@cuevasm (Collaborator) commented Mar 21, 2019

Everything is now in place to do the dry run with Awario. Today we sent over a list of all current apps, and the Awario team will be adding them to the tracking system. After enough data has been collected, I'll post here! An announcement should be coming soon as well; we'll be working with their team on the exact rollout. The plan is to proceed with Hank's proposed scoring methodology for the dry run and then take feedback once there is data to look at. Post here with any questions or concerns, thanks!

@cuevasm (Collaborator) commented Apr 3, 2019

Hey Miners! Excited to introduce Awario. Here's the full announcement, and remember, this first month will not count; we'll provide the data and have a chance to review and implement feedback before the scores become official. Take a look at their announcement as well!

Details on scoring:

First things first, as you dive in here, I highly recommend checking out Awario's docs; they should answer most specific questions about how the platform works - https://awario.com/help/.

Second, please note that Awario is finishing up work on a way for you to easily log in to the interface and run your own reports, slice and dice data, see your Mentions, etc. In the interim we'll be providing these manually via PDF and CSV, but we don't anticipate that being the case for more than a month or two. Part of our agreement with Awario is to provide this feature, as we felt it was really important for you to have direct access so that a) everything is as transparent as possible and b) the information is accessible and actionable for you and your team.

With that, here's how we're proposing the score will work and how it will be computed for the dry run. There will be an opportunity to provide feedback before it becomes a part of your official score.

The Awareness Score

At a high-level, Awario focuses on two major aspects of awareness:

  • Mentions: These are captured mentions of a brand or app online, on social networks, and on news sites. A ‘mention’ is registered when the name of the brand or app appears publicly. Example: a tweet mentioning the app name.
  • Reach: The estimated reach of the combined mentions collected for your brand. Example: How far the tweet about said app traveled online.

For the purposes of App Mining, the focus will be on Reach. Mentions themselves will still be captured and provided to App Miners, but Reach is the much less gameable of the two numbers and thus more suitable for App Mining. For example, it would be fairly easy to create many fake individual Mentions (e.g. a Twitter bot), but it is unlikely those ‘fake’ Mentions would generate much, if any, actual Reach.

Here’s how the scoring will work in more detail:

Reach Score: Based on the total reach of all your eligible Mentions for the previous calendar month. Your 'score' is log10(total_reach). So if you've reached 10 people, your score is 1; 100 is 2, 1000 is 3, and so on. This is much better than using your raw reach directly, because outliers would totally skew the distribution. No matter what your reach is, you need to grow it 10x to increase this score by 1. Using log10 is also similar to how we handle the 'theta' function in the algorithm: the higher your score, the more you need to improve to bump it.

Growth Score: Month-over-month growth in your total reach (not log 10). If you went from a reach of 1000 to 1500, your MoM growth is 0.5 (or 50%). If this is your first month in app mining, this score is not included.

There's more conversation above about why we chose this approach.

Other scoring notes:

  • The query and search parameters for ‘mention alerts’ are really important, which is why we have the Awario team handling them directly using their expertise on their own platform. For unique brand or app names, the query is fairly straightforward: the name is loaded into Awario and it begins crawling sites and networks for public instances of it. For common names, or names where context is important, such as a brand name like Stealthy (hey guys!), it’s important to filter out non-relevant results like someone simply using the word ‘stealthy’. Agora is another good example, as there are other projects with the same name. For these cases, Awario sets up much more complex queries to trim the results down to the ones actually related to the project. Here are some details on all the operators. Additionally, there is no score for the first month, as the Awario team will be honing the query by setting up a first version of the mention alert, watching it, and then updating it as necessary to further zero in on only relevant mentions. We will have the Awario team here and on calls to answer more detailed questions on this should you have them.
  • App Miners will not receive a score from Awario in their first eligible run. As mentioned above, this is because setting up the Mention queries and optimizing them to capture only relevant Mentions takes some time, and they should be finely tuned before the Mentions are counted.
  • ‘Websites’ are excluded from Reach totals. While this is generally pretty accurate and useful in a normal business use case, it can be gamed because of the way Awario estimates the Reach of a website: the site’s Alexa rank is used in the Reach calculation, meaning, for example, a mention on GitHub would register as massive Reach even if that particular Mention didn’t really spread that far.
  • Like all the other reviewers, the z-score is first calculated for each of these metrics, and the z-scores are then averaged. Then the theta function is applied, resulting in a 'final' Awario score.
  • Mentions and associated ‘Reach’ from Blockstack accounts will not be counted. This is so that Blockstack PBC can continue to support apps publicly without worrying that its support needs to be evenly distributed across apps, which simply isn’t possible. (A rough sketch of how these exclusions might be applied when tallying Reach follows this list.)
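
For anyone who wants to sanity-check the exclusions against the raw data, here's a rough sketch of how they could be applied when tallying monthly Reach; the field names and the blacklist contents are placeholders, not Awario's actual export format:

```python
# Illustrative only - field names and blacklist are assumptions.
import math

BLACKLISTED_AUTHORS = {"blockstack"}  # official Blockstack accounts, per the notes above

def monthly_reach(mentions):
    """Sum Reach over eligible mentions only."""
    total = 0
    for m in mentions:
        if m["source"] == "web":  # 'Websites' are excluded from Reach totals
            continue
        if m["author"].lstrip("@").lower() in BLACKLISTED_AUTHORS:
            continue              # Blockstack accounts are excluded
        total += m["reach"]
    return total

mentions = [
    {"source": "twitter", "author": "@some_fan",   "reach": 1200},
    {"source": "web",     "author": "example.com", "reach": 50000},  # ignored
    {"source": "twitter", "author": "@blockstack", "reach": 80000},  # ignored
]
total = monthly_reach(mentions)
print(total, round(math.log10(total), 2))  # 1200 -> reach score ~3.08
```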

Auditing
As part of the monthly process for generating these scores, not only will the query be optimized for mentions of the eligible apps, Awario will also help audit the incoming Mentions to be doubly sure none that shouldn’t count make their way in. Awario is confident in their ability to collect and count only relevant Mentions through their search operators and will be available to answer any questions or concerns App Miners may have. Last, with Awario’s platform it is extremely easy to remove individual Mentions (and thus the associated Reach count) or to blacklist any accounts found to be fraudulent or accounting for false-positive Mentions.

@cuevasm (Collaborator) commented Apr 4, 2019

An additional scoring note: we'll also be excluding the handles of all Blockstack PBC employees, meaning any Mention and accompanying Reach generated by a PBC employee will not count toward anyone's score. This is so everyone at Blockstack PBC can continue to freely support applications without running the risk of unintentionally biasing the results, or deciding not to support an app publicly for fear of impacting them.

@GinaAbrams closed this Apr 12, 2019

@cuevasm (Collaborator) commented Apr 19, 2019

Notes about Awario Scores for April

Basics:

  • Your score is based on data captured from March 1st to March 31st, 2019
  • You’re getting two reports
    • Raw CSV of all Mentions we’ve captured to date (more on this below)
    • Mention/Reach/Influencer/etc. Summary PDFs (nicely formatted report including additional data about who spoke about you and how)
  • If there’s no spreadsheet or PDF linked, there was no data for you yet, go get some Mentions! :)
  • Reminder, this is how scoring works for Awario

Materials:

Important Notes:

  • Mentions from Blockstack PBC employees will not count toward your scores; we’re in the process of blacklisting those accounts. This is so the team can continue supporting Blockstack apps without fear of biasing the results. This is the same rule we applied to official Blockstack accounts like @blockstack. Please note: you will see mentions from the team included in the scores for this month only - since it isn’t impacting payouts, we prioritized other items, but we’ll be sure to have those accounts excluded before next month. And yes, they’ll be removed from the previous month (this month’s data), so growth calculations will still be accurate.
  • I’ve made your reports accessible to anyone with the link. This was for ease of sharing and so everyone can look at all the data (there’s no personal data being exposed). If this is an issue, please let me know. This won’t be a problem in the coming months, as we’ll have a direct login to Awario for you so you can access it yourself, slice and dice the data in the way that works best for you, and generate any reports you need.
  • You will not have a ‘Growth’ score this month since this is the first month capturing data. The same will apply to every new app, as described here. Please remember we excluded ‘Web’, as described in the scoring breakdown.
  • Reporting a Mention: if you’re looking through your data or someone else’s and see a Mention that isn’t about the app, you can let us know by sending us the URL available in the CSV. We’ll investigate and remove it if it’s a false positive. As we move forward, this will become a lot easier: we’ll be able to browse through Mentions right in Awario and likely use the ‘star’ function in the interface to mark mentions you believe are in error. We anticipate these cases to be pretty minimal, and any that do sneak through to be pretty inconsequential in the overall scoring picture, but it’ll be great to have us all watching in case there are outliers.
  • I’ve left Web mentions in your export of Mentions AND given you all the Mentions we’ve collected so far (not just the March ones). Even though not all of these are included in your score, they’re still really helpful data and may show you some nice Mentions you missed (I recommend following up to see if those accounts will talk about you more, work with you, or otherwise boost your app!). With this data, you can filter the sheet to the current scoring period (March 1-31) and exclude websites if you want to see what makes up your current month’s score - a quick sketch of that filter is below.
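
For example, a quick filter along these lines could reproduce the scored subset from the export (column names here are assumptions; adjust them to match the actual CSV headers):

```python
# Illustrative sketch - assumes columns named "Date", "Source", and "Reach",
# and a nonzero March reach total.
import math
import pandas as pd

df = pd.read_csv("awario_mentions_export.csv", parse_dates=["Date"])

march = df[(df["Date"] >= "2019-03-01") & (df["Date"] <= "2019-03-31")]
scored = march[march["Source"].str.lower() != "web"]  # 'Web' excluded from Reach

total_reach = scored["Reach"].sum()
print("March reach:", total_reach,
      "reach score:", round(math.log10(total_reach), 2))
```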