
Optimize new queue? #14

Closed
ghost opened this issue Feb 20, 2022 · 14 comments
@ghost

ghost commented Feb 20, 2022

Hey! Hope you're doing fine! :)
I was wondering if it's possible to optimize the new-card queue based on their content and on what one has already learned (is:review).
If one learns sentences and they're all pretty similar in content, one ends up learning too much of the same...
Just an idea!
Have a great day

@thiswillbeyourgithub
Owner

Hi!

That's not a bad idea, I've had it in the past but can't exactly remember why I didn't pursue it :/.

  • You can actually directly try it using the arguments highjack_due_query and highjack_rated_query. I think you should try 'deck:"my_deck" is:new' and 'deck:"my_deck" is:review'.
  • The reference_order should be order_added or possibly relative_overdueness.
  • You would also have to adjust score_adjustment_factor to something like (1, 0.5) to make sure you don't spread the cards way too much.

Would you mind trying and reporting back?

Btw the tone of your message made it really nice to read and made me happy, have a great day too!

@ghost
Author

ghost commented Feb 21, 2022

Cool! Editing the file I noticed this:
stopwords_lang=["swedish", "english", "french"],

I'm currently learning Chinese and Swedish; do I need to edit this line accordingly, or something else?
Also, where do I put the deck:"my_deck" part? This is how my file looks atm:


                 deckname=None,
                 reference_order="order_added",  # any of "lowest_interval", "relative overdueness", "order_added"
                 task="filter_review_cards", # any of "filter_review_cards", "bury_excess_review_cards", "bury_excess_learning_cards"
                 target_deck_size="80%",  # format: 80%, 0.8, "all"
                 stopwords_lang=["swedish", "english", "french"],
                 rated_last_X_days=4,
                 score_adjustment_factor=(1, 0.5),
                 field_mappings="field_mappings.py",
                 acronym_file="acronym_file.py",
                 acronym_list=None,

                 # others:
                 minimum_due=15,
                 highjack_due_query=True,
                 highjack_rated_query=True,
                 log_level=2,  # 0, 1, 2
                 replace_greek=True,
                 keep_OCR=True,
                 tags_to_ignore=None,
                 tags_separator="::",
                 fdeckname_template=None,
                 show_banner=True,
                 skip_print_similar=False,

@thiswillbeyourgithub
Owner

Hi!

  • Stopwords are, for example, the common filler words "this", "is", "the", "since", and "a" in "this is the best thing since a recent event".

    You should always try to add the stopwords of the language in question when using AnnA, but I don't know enough about Chinese to know whether it's actually relevant here.
    One issue is that stopwords are removed before running TF_IDF, so also before tokenization. This could be a problem for Chinese, but I don't know...
    Either way, keeping stopwords is actually not very penalizing.
    It should also rarely be an issue to add too many languages to the stopwords list.

  • I see you set the highjack values to True; this is not how it works at all. I edited the README.md, hoping it is now a bit clearer :)
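As an aside on the stopword point, here is a minimal illustration (not AnnA's actual code) of what removing stopwords before vectorization looks like, using a tiny hand-picked stopword set:

```python
# Minimal sketch of stopword removal before vectorization.
# STOPWORDS here is a small hand-picked sample, not a real language list.
STOPWORDS = {"this", "is", "the", "since", "a"}

def remove_stopwords(text):
    """Drop stopwords from a whitespace-tokenized sentence."""
    return " ".join(w for w in text.lower().split() if w not in STOPWORDS)

print(remove_stopwords("this is the best thing since a recent event"))
# -> best thing recent event
```

For a language like Chinese, which is not whitespace-segmented, this word-level filtering would not apply directly, which is the caveat about tokenization.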

@ghost
Author

ghost commented Feb 21, 2022

Cool, I understand stopwords now! What would I need to set the highjack values to? I read the README but it doesn't list the possible values.

@thiswillbeyourgithub
Owner

thiswillbeyourgithub commented Feb 21, 2022

Open Anki's browser and look for cards using a search query, for example deck:"my_deck" is:due -rated:14 flag:1.

This query is the way you ask anki to find cards.

The highjack arguments are set to None by default, which disables them, but they can contain the same kind of queries as strings:

  • highjack_rated_query replaces the query originally used to find the cards that you rated in the last few days; if you highjack it, you can set it to whatever you want.
  • highjack_due_query replaces the query originally used to find which cards are due.

Tell me if that's clearer, in which case I'll link this issue from the README.
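To make the None-vs-string behavior concrete, here is an illustrative sketch (not AnnA's actual code; the function name is made up) of how such highjack arguments typically resolve:

```python
# Sketch of the usual "highjack" pattern: None keeps the built-in query,
# while a query string replaces it entirely.
def resolve_query(highjack, default_query):
    return default_query if highjack is None else highjack

print(resolve_query(None, "is:due"))                     # default kept
print(resolve_query('deck:"Swedish" is:new', "is:due"))  # highjacked query used
```

This is also why setting the arguments to True cannot work: the value must be either None or an Anki search string.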

@ghost
Author

ghost commented Feb 21, 2022

Oh! Perfectly understood now... hahaha, I didn't get it at first. They're now like this:

                 highjack_due_query='deck:"Swedish" is:new',
                 highjack_rated_query='deck:"Swedish" is:review',

@thiswillbeyourgithub
Owner

You might want to add something like rated:14 to the rated query, depending on the size of your deck.

Don't forget to tell me if it works :) I suggest lowering the score adjustment factor to (1, 0.1) to try and see if it's better.

@ghost
Author

ghost commented Feb 21, 2022

I've got approximately 15,000 sentence cards which I use to mine the language, so I'll try these out and report back! Thanks so much for your time :)

@ghost
Author

ghost commented Feb 22, 2022

Working!! Swedish worked flawlessly :) Will report back with my Chinese deck.

@ghost
Author

ghost commented Feb 22, 2022

Hey again! So this error pops up when running the script on my Chinese deck. I'm probably running out of memory, because my notebook only has 4 GB of RAM. I googled it and it seems to be a Python problem and not your script's. Anyway, this may happen to other people too, so maybe you need to handle it here?


Vectorizing text using TFIDF: 100%|███████████████████████████████████████████████████████████████████████████| 23366/23366 [00:03<00:00, 6952.34it/s]

Reducing dimensions to 100 using SVD... Explained variance ratio after SVD on Tf_idf: 98.2%

Computing distance matrix on all available cores...
Killed
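For scale, assuming the distance matrix is dense and stored as float64 (an assumption, since the script's internals aren't shown here), the 23,366 cards from the log above would need roughly:

```python
# Back-of-the-envelope memory estimate for a dense pairwise distance
# matrix over 23,366 cards, one float64 (8 bytes) per pair of cards.
n_cards = 23366
bytes_needed = n_cards * n_cards * 8
print(f"{bytes_needed / 1024**3:.1f} GiB")  # -> 4.1 GiB
```

That alone exceeds the 4 GB of RAM mentioned above, which is consistent with the OOM killer's "Killed" message.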

@thiswillbeyourgithub
Owner

thiswillbeyourgithub commented Feb 22, 2022

Hi,

I implemented the argument "low_power_mode". If you set it to True, the tokenizer will use unigrams instead of n-grams of length 1 to 5.

This should considerably reduce the amount of computation.

It's currently only in the dev branch; if you test it and it works, I'll merge it into main.

Another thing you might want to test afterwards is lowering TFIDF_dim: currently 100 dimensions is enough for 98.2% of the variance, which means you are way overdoing it.
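The unigram-vs-n-gram difference and the SVD variance check can be sketched with scikit-learn (an assumption about the tooling; AnnA's actual tokenizer may differ, and the corpus here is made up):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

corpus = [
    "jag lär mig svenska varje dag",
    "hon läser en bok om svensk historia",
    "vi pratar svenska med våra vänner",
    "boken handlar om sveriges historia",
]

# The low_power_mode idea: unigrams only vs. n-grams of length 1 to 5.
uni = TfidfVectorizer(ngram_range=(1, 1)).fit(corpus)
full = TfidfVectorizer(ngram_range=(1, 5)).fit(corpus)
print(len(uni.vocabulary_), len(full.vocabulary_))  # n-gram vocabulary is much larger

# Reducing dimensions with SVD and checking the retained variance.
X = TfidfVectorizer(ngram_range=(1, 1)).fit_transform(corpus)
svd = TruncatedSVD(n_components=3, random_state=0).fit(X)
print(f"{svd.explained_variance_ratio_.sum():.1%} of variance kept")
```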

@ghost
Author

ghost commented Feb 23, 2022

Reporting back!
Working splendidly after allocating more swap to the computer :)
low power mode and TFIDF_dim=60 resulted in python3 not being killed when analyzing a subdeck with 5k cards.
Trying either TFIDF_dim=100 or 60, with or without low power mode, on my main deck of 23k cards caused a kill; it never works.
Thanks so much for your help!!

@thiswillbeyourgithub
Owner

Thank you so much for your message! (btw, the name of this software comes from an Argentinian person :) )

I think it's better to use low_power_mode than to reduce the number of dimensions drastically.

That being said, the number of dimensions can and should be reduced anyway if you see it's keeping more than, say, 70% of the variance IMO.
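That rule of thumb can be turned into a dimension choice directly: pick the smallest number of components whose cumulative explained variance reaches the target. The ratios below are made-up numbers for illustration:

```python
import numpy as np

# Hypothetical per-component explained-variance ratios from an SVD.
ratios = np.array([0.40, 0.20, 0.10, 0.05, 0.04, 0.03])
target = 0.70

# Smallest number of dimensions whose cumulative variance reaches the target.
n_dims = int(np.searchsorted(np.cumsum(ratios), target) + 1)
print(n_dims)  # -> 3
```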

@ghost
Author

ghost commented Feb 23, 2022

Oh! That's so cool :)
Okay, I'll write down 70%... luckily it's working, keeping no less than 95% of the variance with dim 60, and it's speeding up the process a lot :D
