Investigate new method for calculating score #283
If you are not mathematically inclined, this explanation of the same algorithm may provide some intuition: http://www.redditblog.com/2009/10/reddits-new-comment-sorting-system.html As I understand it, you don't accept that a thing really is particularly bad, or good, until you have a lot of voters agreeing on it. While there are still only a small number of votes, the score (in either direction) is reduced to compensate for the lack of evidence.
Yup, that's how I understand it too. We don't have nearly as many votes as reddit, so I'm not sure how well it'll work. I'll post a sample when I get a chance.
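For reference, the algorithm from the linked article is the lower bound of the Wilson score confidence interval. A minimal sketch in Python (z = 1.96 for 95% confidence; the function name and no-ratings fallback are my own, not from this thread):

```python
import math

def wilson_lower_bound(positive, total, z=1.96):
    """Lower bound of the Wilson score confidence interval.

    positive: number of positive ratings; total: total ratings.
    Returns a score in [0, 1]. With no ratings, returns 0.
    """
    if total == 0:
        return 0.0
    phat = positive / total                      # observed fraction positive
    centre = phat + z * z / (2 * total)          # shifted centre of the interval
    spread = z * math.sqrt(
        phat * (1 - phat) / total + z * z / (4 * total * total)
    )
    return (centre - spread) / (1 + z * z / total)

# A single positive rating scores only ~0.21: not enough evidence yet.
print(round(wilson_lower_bound(1, 1), 2))  # -> 0.21
```

Note how more votes at the same ratio raise the score: `wilson_lower_bound(90, 100)` is well above `wilson_lower_bound(9, 10)`, which is the "reduced to compensate for the lack of evidence" behaviour described above.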
Here's the effect on the top 200 scripts:
Same thing, but if "OK" ratings count as 0.5 positive and 0.5 negative, sorted by the new score.
Would also need to decide how things with no ratings fit in here. Here's what happens at very few ratings: 1 pos: 0.21. I think no ratings fits in well below "1 OK".
When displaying the score to the public, maybe it's better as base 10 with 1 d.p.? E.g.
The numbers seem fair for ranking purposes, but for display purposes, I'm not so sure. If a script has one positive rating, is it fair to say it's 2.1/10? I don't think so... Really, the number means "I am 95% sure that if everyone rated this, it would have a score of at least X (out of 1)".
Multiply by 5 and truncate after the first decimal point. Then you have 0.21 * 5 = 1.05 => 1.0 for 1 like, and the score would be 1.0/10. What would be your suggestion? Because displaying the "old" rating in combination with the new order would confuse many people.
Well, two problems with that. 1/10 says to me something is utter crap, while the ratings it got do not support that. Also, the top style would be approaching 50/10, which is nonsense... I don't intend on having both old and new rating. If I had a suggestion, I'd tell you it. I think it requires more thought. Perhaps a scale where we don't explicitly mention the top amount?
No. The best would be 4.6/5 in my suggestion and 9.3/10 with Jixun's => a higher first amount is not possible ;) The problem is that you always have a low score for scripts with a small number of positive ratings and a high score for scripts with a large number of positive ratings. So you will also have a small bar for scripts with few ratings and a high one for scripts with many. => I see no option to change this :/
I suggest that the display score should be different from the ranking score, which would be more readable and user-friendly. And I'll try to give a formula here: Here is the result of this formula:
(Don't ask me why the formula is like that, I just randomly wrote something...)
There is an issue with this kind of implementation. For example, in my script AntiAdware, a user asked for some new features, and so put the 'OK' rating.
I'd say that this is very clearly:
So only if the user changes the rating is it not neutral. But you're right about the problem that users don't change their rating afterwards. => If you receive only a bad rating from a user, it is not overwritten and stays there persistently. We suggested enabling the closing of threads (or marking them as fixed) to be able to fix this problem in #234, but Jason was not convinced of it. So the solution at the moment is to convince your user to give a good rating or change the old one, as a positive rating has a bigger value than a bad rating. So a good rating protects against a bad rating.
I think that we should emphasize the separation between the rating of a script and discussions about it. Many users are confused and believe that if a feature is missing, they should put 'OK' or 'Bad', because in their mind the script 'works, but could use improvement' (the feature they are asking for would fix that), or it even "doesn't work" (indeed, the feature they are asking for doesn't work, because it's still not implemented).
If you want to discuss how things get rated, I think you should put it in another issue. This one is about how to aggregate the ratings we get. |
To not let perfect be the enemy of good, I've gone with the "out of 100" score, with OK counting as half good and half bad. |
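That decision can be sketched as follows, reusing the Wilson lower bound from the linked article and counting each OK rating as 0.5 positive and 0.5 negative. The function name and rounding choice are illustrative assumptions; the actual implementation may differ:

```python
import math

def display_score(good, ok, bad, z=1.96):
    """Score out of 100: Wilson lower bound with each 'OK' rating
    counted as half good and half bad (a sketch, not the exact code)."""
    positive = good + 0.5 * ok       # OK contributes half a positive vote
    total = good + ok + bad
    if total == 0:
        return 0                     # assumed fallback for unrated scripts
    phat = positive / total
    lower = (phat + z * z / (2 * total)
             - z * math.sqrt(phat * (1 - phat) / total
                             + z * z / (4 * total * total))) \
            / (1 + z * z / total)
    return round(lower * 100)

print(display_score(1, 0, 0))  # one good rating -> 21
```

A single OK rating (`display_score(0, 1, 0)`) lands far below a single good one, consistent with the "1 OK" comparison earlier in the thread.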
Crazy math in http://www.evanmiller.org/how-not-to-sort-by-average-rating.html