[Feature request] A quantitative measure of cheating #57
Comments
Why not use the variance of p? Cheating algorithms tend to provide a small range of predictions, so the variance would be small, too.
I think my measure is more intuitive. You can add both if you want to; you'll have to re-run the benchmark anyway.
Yes, but we have to do this for all 20 000 collections and compare averages. We can't decide whether an algorithm is cheating based on a single collection.
The main problem is that we don't know the real distribution of retrievability. Your idea and mine both assume that the real distribution is flatter than the distribution predicted by a cheating algorithm.
Btw, don't forget about #55
If we still rank models by RMSE(bins), I tend not to include it. If we rank models by log loss, I will include it.
Hmmm. Ok, let's sort by log-loss then. |
I have an idea how to measure the degree of "cheatiness" of an algorithm.
Do the same procedure that you do for plotting the calibration graph.
Record the number of values in the densest bin, aka the highest bar. Example:
[image: calibration histogram; the tallest bar marks the densest bin]
Divide it by the total number of reviews. For a cheating algorithm this will be 100%: all of its predictions fall into a single bin, so that bin contains 100% of the reviews.
Do this for every user for a given algorithm.
Calculate the (unweighted) average.
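The steps above could be sketched like this (assumptions: 20 equal-width bins over [0, 1], which may differ from the benchmark's actual calibration binning, and `densest_bin_fraction` is a hypothetical helper name):

```python
import numpy as np

def densest_bin_fraction(p, n_bins=20):
    """Share of predicted R values that fall into the single densest bin
    of the calibration histogram."""
    counts, _ = np.histogram(p, bins=n_bins, range=(0.0, 1.0))
    return counts.max() / len(p)

# Per-user metric, then an unweighted average across users (toy data).
users = [
    np.full(500, 0.9),                                # cheat-like: one bin
    np.random.default_rng(0).uniform(0.0, 1.0, 500),  # spread out
]
print(np.mean([densest_bin_fraction(p) for p in users]))
```

A cheat-like user contributes a fraction of 1.0, while well-spread predictions contribute roughly 1/n_bins, so the average separates the two regimes.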
From a theoretical point of view, the issue is that the cutoff will be arbitrary. If the average is 90%, meaning that on average 90% of predicted R values fall within the same bin, is it cheating? What about 80%? Or 50%?
From a practical point of view, this will require re-running every single algorithm, since this information cannot currently be obtained from the .json result files. At the very least, you will have to re-run FSRS-4.5, ACT-R and DASH[ACT-R]: we are sure that FSRS-4.5 isn't cheating, and the ACT-R-based algorithms are the main suspects. But of course, to get a better idea of which values of this metric are good and which are bad, you need to re-run the entire benchmark.
Also, this is not intended to be included in the readme. It's for our internal testing.