The code is inefficient in memory usage: it unnecessarily consumes extra storage by saving a separate embedding file for every document via the create_document_embeddings function of the embeddings.py script. Last week, when I ran the compact code on the de.NBI cloud, I could get the results of all 18 sets at once, but with this modified code I only had enough space to get the results of the first 6 sets at once.
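For reference, a minimal sketch of how the per-document files could be replaced by a single stacked array (the function name, the toy input, and the data layout here are hypothetical, not the actual embeddings.py code):

```python
import numpy as np

def save_all_embeddings(doc_vectors, out_path):
    """Stack per-document vectors into one array and save a single file,
    instead of writing one small file per document."""
    ids = sorted(doc_vectors)                      # fix a row order
    matrix = np.stack([doc_vectors[d] for d in ids])
    np.save(out_path, matrix)                      # one .npy for the whole corpus
    return ids, matrix

# toy stand-in for a model's document vectors: three 4-dimensional vectors
vecs = {f"doc{i}": np.full(4, float(i)) for i in range(3)}
ids, matrix = save_all_embeddings(vecs, "/tmp/embeddings.npy")
print(matrix.shape)  # (3, 4)
```

A single array keeps the per-file overhead out and makes it easy to memory-map the whole corpus later with np.load(..., mmap_mode="r").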
And I really enjoyed using the show_avg.py script, which produces a summary table of the average results of all sets for each evaluation metric (the same as the tables we fill in the spreadsheets) and saves it as a TSV file. Until now, I had filled in all my tables in the spreadsheets row by row :)
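As a rough illustration of what such a summary step could look like (show_avg.py itself may work differently; the results layout and metric names below are made up):

```python
import csv
from statistics import mean

def write_avg_table(results, out_path):
    """Hypothetical sketch: average each evaluation metric over all runs
    of a hyperparameter set and write the summary as a TSV."""
    # collect every metric name that appears in any run
    metrics = sorted({m for runs in results.values() for run in runs for m in run})
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerow(["set"] + metrics)
        for set_id, runs in sorted(results.items()):
            writer.writerow(
                [set_id] + [round(mean(r[m] for r in runs), 4) for m in metrics]
            )

# toy input: two runs for one hyperparameter set
results = {1: [{"accuracy": 0.80, "f1": 0.70},
               {"accuracy": 0.90, "f1": 0.80}]}
write_avg_table(results, "/tmp/avg.tsv")
```

This produces one row per hyperparameter set and one column per metric, i.e. the same shape as the spreadsheet tables.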
Everything works fine.
My only concern is the order of the hyperparameter sets, which differs from hyperparameters_doc2vec.json (the same JSON file I used for the Word2Vec models, with sg replaced by dm). For instance, here the first 9 hyperparameter sets are for dm=0, while in hyperparameters_doc2vec.json the even sets are for dm=0.
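To compare the two orderings, one could inspect which positions in each file use dm=0; the file layout assumed below (a plain list of hyperparameter dicts) is a guess, not necessarily the real structure of hyperparameters_doc2vec.json:

```python
import json

# Assumed layout: a JSON list with one hyperparameter dict per set.
sets = json.loads("""
[
 {"dm": 0, "vector_size": 100},
 {"dm": 1, "vector_size": 100},
 {"dm": 0, "vector_size": 200},
 {"dm": 1, "vector_size": 200}
]
""")

# indices of the sets that use the DBOW mode (dm=0)
dm0_positions = [i for i, s in enumerate(sets) if s["dm"] == 0]
print(dm0_positions)  # [0, 2]
```

Running this check on both JSON files would show immediately whether the dm=0 sets are grouped first or interleaved at the even positions.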