
Feat/flat power forgetting curve #134

Merged 10 commits into main from Feat/flat-power-forgetting-curve on Dec 18, 2023
Conversation

L-M-Sherlock (Member)

Idea: open-spaced-repetition/fsrs4anki#461 (comment)

Python Implementation: open-spaced-repetition/fsrs-optimizer@8ac5d5b

Help wanted: there are duplicated functions named power_forgetting_curve. Is it possible to refactor them into one function?
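
For context, a minimal scalar sketch of the flat power forgetting curve, assuming the constants from the linked fsrs4anki discussion (DECAY = -0.5, FACTOR = 19/81, chosen so that retrievability is 0.9 when elapsed time equals stability):

```rust
// Constants assumed from the linked discussion:
// FACTOR = 0.9^(1/DECAY) - 1 = 19/81, so that
// power_forgetting_curve(s, s) == 0.9 for any stability s.
const DECAY: f64 = -0.5;
const FACTOR: f64 = 19.0 / 81.0;

fn power_forgetting_curve(t: f64, stability: f64) -> f64 {
    (1.0 + FACTOR * t / stability).powf(DECAY)
}

fn main() {
    // Retrievability after `stability` days is 0.9 by construction.
    assert!((power_forgetting_curve(5.0, 5.0) - 0.9).abs() < 1e-12);
}
```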

@L-M-Sherlock added the enhancement (New feature or request) label on Dec 16, 2023
@asukaminato0721 (Collaborator)

linux:

```
---- pre_training::tests::test_loss stdout ----
thread 'pre_training::tests::test_loss' panicked at src/pre_training.rs:327:9:
assertion `left == right` failed
  left: 14.5771
 right: 14.577101
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```

win:

```
---- pre_training::tests::test_loss stdout ----
thread 'pre_training::tests::test_loss' panicked at src\pre_training.rs:327:9:
assertion `left == right` failed
  left: 14.5771
 right: 14.577101
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```

floating point again :(
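
One standard way to make a test like this platform-independent is to compare with a tolerance instead of exact equality; a minimal sketch (the actual fix in the repo may differ):

```rust
// assert_eq! on floats fails on last-digit differences like the ones
// above; an absolute-tolerance comparison sidesteps that.
fn assert_approx_eq(left: f32, right: f32, eps: f32) {
    assert!(
        (left - right).abs() < eps,
        "left: {left}, right: {right}"
    );
}

#[test]
fn test_loss_approx() {
    // The two values from the failing output above agree within 1e-4.
    assert_approx_eq(14.5771, 14.577101, 1e-4);
}
```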

@dae (Collaborator) commented Dec 17, 2023

I can't see an easy way to combine the three power_forgetting_curves, as you have one that uses raw floats, one that uses NdArray, and one that uses tensors. You could potentially just have a raw float one and then convert the tensors to floats and back again, but it might be a bit slower if it's called frequently.
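
A rough sketch of that convert-to-floats-and-back idea; `FakeTensor` below is a hypothetical placeholder for a real tensor type, since the exact extraction and construction calls depend on the tensor library:

```rust
// Hypothetical placeholder for a real tensor type; only the shape of the
// float round trip matters here.
struct FakeTensor(Vec<f32>);

// Scalar curve with the constants assumed from the linked discussion.
fn power_forgetting_curve(t: f64, s: f64) -> f64 {
    (1.0 + 19.0 / 81.0 * t / s).powf(-0.5)
}

// Extract raw floats, apply the scalar curve, rebuild the tensor. With a
// real tensor library both conversions allocate and copy, which is the
// possible slowdown when this is called frequently.
fn power_forgetting_curve_tensor(t: &FakeTensor, s: &FakeTensor) -> FakeTensor {
    FakeTensor(
        t.0.iter()
            .zip(s.0.iter())
            .map(|(&t, &s)| power_forgetting_curve(t as f64, s as f64) as f32)
            .collect(),
    )
}

fn main() {
    let r = power_forgetting_curve_tensor(&FakeTensor(vec![5.0]), &FakeTensor(vec![5.0]));
    assert!((r.0[0] - 0.9).abs() < 1e-6);
}
```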

src/inference.rs: review thread (outdated, resolved)
@asukaminato0721 (Collaborator)

> Is it possible to refactor them into one function?

They have different types, not something like `Tensor<B, 1>` vs `Tensor<B, 2>`; instead, `&Array1` vs `Array1` vs `f32` vs `f64` vs `Tensor`, so it's hard to use a trait to merge them.

Maybe first we need to unify the array type...

@L-M-Sherlock merged commit f0715e0 into main on Dec 18, 2023 (3 checks passed)
@L-M-Sherlock deleted the Feat/flat-power-forgetting-curve branch on December 18, 2023 02:59
@asukaminato0721 (Collaborator) commented Dec 21, 2023

> Is it possible to refactor them into one function?
>
> They have different types, not something like `Tensor<B, 1>` vs `Tensor<B, 2>`; instead, `&Array1` vs `Array1` vs `f32` vs `f64` vs `Tensor`, so it's hard to use a trait to merge them.
>
> Maybe first we need to unify the array type...

A small thought, maybe (I am not sure how to write the trait; this is a draft):

```rust
fn power_forgetting_curve<T1, T2>(a: T1, b: T2) -> T1
where
    // `Xxx` stands for the still-undecided right-hand-side type
    T1: Add<Xxx, Output = T1> + Mul<Xxx, Output = T1> + Pow<Xxx, Output = T1>,
    T2: Add<Xxx, Output = T2> + Mul<Xxx, Output = T2> + Pow<Xxx, Output = T2>,
{
    a.some_operator(b)
}
```
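
For what it's worth, the two scalar variants (`f32` and `f64`) could already be merged with num-traits' `Float` bound; a sketch under that assumption (the `Array1` and `Tensor` variants would still need their own paths, which is the unification problem above):

```rust
use num_traits::Float;

// Generic over any float type: covers the f32 and f64 duplicates, but not
// the ndarray Array1 or tensor versions discussed above. Constants are
// again assumed from the linked discussion.
fn power_forgetting_curve<F: Float>(t: F, stability: F) -> F {
    let factor = F::from(19.0 / 81.0).unwrap();
    let decay = F::from(-0.5).unwrap();
    (F::one() + factor * t / stability).powf(decay)
}

fn main() {
    assert!((power_forgetting_curve(5.0f32, 5.0f32) - 0.9).abs() < 1e-6);
    assert!((power_forgetting_curve(5.0f64, 5.0f64) - 0.9).abs() < 1e-12);
}
```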
