
[Question] Some more details from a ML perspective #86

Closed
lars76 opened this issue Apr 18, 2024 · 8 comments

Comments

@lars76

lars76 commented Apr 18, 2024

Hey, first of all, thank you for the dataset. I was wondering if you could provide some more details for people who want to train their own ML algorithms but might not be familiar with the internals of Anki.

Let me see if I understand the dataset correctly:

  • The dataset consists of n * m time series, where n is the number of users and m is the number of cards.
  • Each time series has 3 features: days since the last review, a rating from 1-4, and the current number of reviews.

What is the target variable? And how are you handling the time series for RNNs?

@L-M-Sherlock
Member

The training process is very similar to GPT's next-token prediction. Assuming you have 10 reviews belonging to the same card, the optimizer trains FSRS on the 1st review and treats the 2nd review as the target. Then reviews 1-2 are used as the features and the 3rd review is the target, and so on.
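For illustration, here is a minimal sketch of how such expanding-window pairs could be built from one card's review history (the helper is hypothetical, not code from srs-benchmark):

def make_training_pairs(reviews):
    # reviews: chronological list of (delta_t, rating) tuples for one card.
    # Each prefix of the history becomes a feature sequence; the review
    # that follows it is the target, analogous to next-token prediction.
    pairs = []
    for i in range(1, len(reviews)):
        pairs.append((reviews[:i], reviews[i]))
    return pairs

# Three reviews yield two training pairs.
for features, target in make_training_pairs([(0, 3), (2, 3), (5, 1)]):
    print(features, "->", target)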

@lars76
Author

lars76 commented Apr 19, 2024

I am still not 100% sure I follow, so maybe an example helps.

card_id,review_th,delta_t,rating
6,22,-1,1
6,24,0,1
6,27,0,3
6,38,0,3
6,103,2,3
6,201,4,1

Let us say we simplify the ratings: 2, 3 and 4 become 1, and 1 (Again) becomes 0.

time series: [0 0 1 1 1 0]

Then the RNN could predict 0.8 as the next token. However, how do you know the number of days this 0.8 corresponds to?

Or are you converting delta_t to absolute days and then letting the RNN predict t+k tokens, where each k corresponds to a single day? Then we would know the number of days for rescheduling?

@L-M-Sherlock
Member

L-M-Sherlock commented Apr 19, 2024

time series: [0 0 1 1 1 0]

The real time series is [(0, 0), (2, 1), (4, 0)], because short-term reviews are removed in the current model.

srs-benchmark/other.py

Lines 1051 to 1057 in 51c6021

outputs, _ = self.model(sequences)
stabilities = outputs[
    seq_lens - 1,
    torch.arange(real_batch_size, device=device),
    0,
]
retentions = self.model.forgetting_curve(delta_ts, stabilities)

The RNN doesn't predict the probability of recall directly. It outputs the stability, and the last delta_t is used to calculate the probability.

For example, [(0, 0), (2, 1)] are the inputs to the RNN, which then outputs a stability, e.g. 5. The last delta_t (4) and the predicted stability (5) are used to calculate the probability in the forgetting_curve function.
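To make this concrete, here is a sketch of one common forgetting-curve parameterization (exponential, with stability defined as the interval at which the probability of recall is 90%); the exact curve used in srs-benchmark may differ:

def forgetting_curve(delta_t, stability):
    # Probability of recall after delta_t days, given stability in days.
    # By this definition, recall probability is exactly 0.9 at delta_t == stability.
    return 0.9 ** (delta_t / stability)

# The example above: last delta_t = 4, predicted stability = 5.
print(forgetting_curve(4, 5))  # ~0.92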

@lars76
Author

lars76 commented Apr 19, 2024

Thank you, this makes it much clearer. Have you also tried modelling the problem directly, without relying on this curve?

@L-M-Sherlock
Member

L-M-Sherlock commented Apr 19, 2024

The DASH-series models can predict the probability directly.

srs-benchmark/other.py

Lines 1020 to 1029 in 51c6021

if isinstance(self.model, ACT_R):
    outputs = self.model(sequences)
    retentions = outputs[
        seq_lens - 2, torch.arange(real_batch_size, device=device), 0
    ]
elif isinstance(self.model, DASH_ACTR):
    retentions = self.model(sequences)
elif isinstance(self.model, DASH):
    outputs = self.model(sequences.transpose(0, 1))
    retentions = outputs.squeeze(1)

@L-M-Sherlock
Member

L-M-Sherlock commented Apr 19, 2024

Have you also tried directly modelling the problem without relying on this curve?

Without the curve, general models like RNNs, LSTMs and GRUs may make some weird predictions. For example, the forgetting curves they generate may not decay over time.
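To illustrate the point (a sketch, reusing the exponential curve assumed above): plugging a predicted stability into a decaying curve guarantees retention falls monotonically with delta_t, whereas a free-form network output has no such constraint.

def forgetting_curve(delta_t, stability):
    return 0.9 ** (delta_t / stability)

stability = 5.0  # whatever the model predicts, the curve still decays
retentions = [forgetting_curve(t, stability) for t in range(10)]
assert all(a >= b for a, b in zip(retentions, retentions[1:]))  # monotone decay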

@lars76
Author

lars76 commented Apr 19, 2024

Thank you for the information. I think a lot of feature engineering has to be done to get good results with direct modelling, for example grouping similar users or cards, and then having only a single model. I will also do some experiments.

However, it would be easier to evaluate performance if the scripts were split up some more. For datasets such as COCO, you have a single training and test set. You could just create a script that takes a single prediction CSV file as input and outputs the metrics (like on Kaggle). While it is good to have a cross-validation split, people might use different splits for their training.

@L-M-Sherlock
Member

Our benchmark doesn't have a cross-validation split. It only uses a time-series split to avoid data leakage.
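For readers unfamiliar with the idea, a hedged sketch of a time-series split on the dataset's review_th column (the global review ordinal): train on earlier reviews and test on later ones, so no future information leaks into training. The file name and the 80/20 ratio are illustrative assumptions, not the benchmark's actual configuration.

import pandas as pd

df = pd.read_csv("revlog.csv")  # hypothetical file name; columns include review_th
df = df.sort_values("review_th")
cutoff = df["review_th"].quantile(0.8)  # assumed 80/20 split, for illustration only
train = df[df["review_th"] <= cutoff]   # earlier reviews
test = df[df["review_th"] > cutoff]     # later reviews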

lars76 closed this as completed Apr 19, 2024