
[Feature request] A quantitative measure of cheating #57

Closed
Expertium opened this issue Feb 16, 2024 · 9 comments

Expertium commented Feb 16, 2024

I have an idea for how to measure the degree of "cheatiness" of an algorithm; a rough code sketch follows the steps below.

  1. Do the same procedure that you do for plotting the calibration graph.

  2. Record the number of values in the densest bin, i.e. the highest bar in the calibration histogram. Example:
    [example calibration graph]

  3. Divide it by the total number of reviews. For a cheating algorithm, this will be 100%, since all of its predictions fall into a single bin.

  4. Do this for every user for a given algorithm.

  5. Calculate the (unweighted) average.
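
A minimal sketch of what I mean (the fixed-width bins on [0, 1], the bin count, and the function names here are just placeholders, not necessarily the benchmark's actual binning):

```python
import numpy as np

def densest_bin_fraction(predictions, n_bins=20):
    """Share of one user's predicted R values that land in the single most
    populated bin; a constant-prediction (cheating) algorithm gives 1.0."""
    counts, _ = np.histogram(predictions, bins=n_bins, range=(0.0, 1.0))
    return counts.max() / counts.sum()

def cheatiness(per_user_predictions, n_bins=20):
    """Unweighted average of the densest-bin fraction over all users."""
    return float(np.mean([densest_bin_fraction(p, n_bins) for p in per_user_predictions]))
```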

From a theoretical point of view, the issue is that the cutoff will be arbitrary. If the average is 90%, meaning that on average 90% of predicted R values fall within the same bin, is it cheating? What about 80%? Or 50%?
From a practical point of view, this will require re-running every single algorithm since this information cannot be obtained from .json result files right now. At the very least, you will have to re-run FSRS-4.5, ACT-R and DASH[ACT-R], since we are sure that FSRS-4.5 isn't cheating, and ACT-R algorithms are the main suspects. But of course, to get a better idea of what values of this metric are good and what values are bad, you need to re-run the entire benchmark.

Also, this is not intended to be included in the readme. It's for our internal testing.

@L-M-Sherlock

Why not use the variance of p? Cheating algorithms tend to provide a small range of predictions, so the variance would be small, too.
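
Roughly something like this (just a sketch; the helper name is only for illustration):

```python
import numpy as np

def prediction_spread(predictions):
    """Variance and standard deviation of one user's predicted R values.
    A cheating algorithm outputs a narrow range of p, so both are near zero."""
    p = np.asarray(predictions, dtype=float)
    return p.var(), p.std()
```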

@Expertium

I think my measure is more intuitive. You can add both if you want to; you'll have to re-run the benchmark anyway.

L-M-Sherlock commented Feb 17, 2024

I found a typical cheating case:

[benchmark results for FSRS-4.5 and DASH[ACT-R], each evaluated with its own binning and with the other's]

When the algorithm is measured by itself, FSRS-4.5 is worse than DASH[ACT-R]. But when they are measured by each other, FSRS-4.5 is better than DASH[ACT-R]. Anyway, the log loss is uncheatable.
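
For reference, a minimal sketch of per-review log loss (assuming y is the 0/1 recall outcome and p the predicted retention; the clipping epsilon is arbitrary):

```python
import numpy as np

def log_loss(y, p, eps=1e-15):
    """Mean negative log-likelihood over reviews. Every prediction is scored
    against its own outcome, so collapsing predictions into one bin doesn't help."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0 - eps)
    y = np.asarray(y, dtype=float)
    return float(np.mean(-(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))))
```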

Here are their calibration graphs.

FSRS-4.5:

[calibration graph]

DASH[ACT-R]:

[calibration graph]

Expertium commented Feb 17, 2024

Yes, but we have to do this for all 20,000 collections and compare averages. We can't decide whether an algorithm is cheating or not based on one collection.
I would suggest implementing both my original suggestion and your stdev of p.

@L-M-Sherlock

The main problem is that we don't know the real distribution of retrievability. Both your idea and mine assume that the real distribution is flatter than the distribution predicted by a cheating algorithm.

Expertium commented Feb 17, 2024

True. Well, do you think DASH[ACT-R] should be included in the table now?
EDIT: I think it's reasonable to assume that the true distribution is a beta distribution.
[plots of beta distributions for different values of alpha and beta]

The thing is, a beta distribution can look like what we're seeing with ACT-R, depending on alpha and beta. We could also add UM as a third metric, with FSRS-4.5 as the comparison, but that would be difficult to interpret.
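
If we wanted to sanity-check that, one rough option (just a sketch using scipy; the clipping and the fixed [0, 1] support are assumptions) would be to fit a beta distribution to a model's predictions and compare the shape parameters:

```python
import numpy as np
from scipy import stats

def fit_beta(predictions, eps=1e-6):
    """Fit Beta(alpha, beta) on [0, 1] to a model's predicted R values.
    Roughly, sharply peaked predictions give large alpha and beta, while a
    spread-out, ACT-R-like distribution gives much smaller shape parameters."""
    p = np.clip(np.asarray(predictions, dtype=float), eps, 1.0 - eps)
    alpha, beta, _, _ = stats.beta.fit(p, floc=0, fscale=1)
    return alpha, beta
```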

@Expertium

Btw, don't forget about #55

@L-M-Sherlock

> Well, do you think DASH[ACT-R] should be included in the table now?

If we still rank models by RMSE(bins), I would prefer not to include it. If we rank models by log loss, I will include it.

@Expertium

Hmmm. Ok, let's sort by log-loss then.
