The main goal of the project is to extend the Ukrainian tonal (sentiment) dictionary. At first we tried to do this by looking for words similar to ones with known tonality; word2vec and LexVec models are used to find similar words. Then we built a neural-network classifier and trained it on the word embeddings and the existing tonal dictionary.
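The classifier step above can be sketched as follows. This is only an illustration of the idea, not the project's actual network: scikit-learn's MLPClassifier stands in for it, and the data here is random rather than real embeddings.

```python
# Sketch: train a small neural-network classifier on word embeddings.
# The real project uses its own network (see train/ and predict/); here
# scikit-learn's MLPClassifier stands in, and the data is random.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
dim = 300                         # embedding dimensionality (assumed)
X = rng.normal(size=(200, dim))   # word vectors for seed-dictionary words
y = rng.integers(0, 2, size=200)  # 1 = positive tone, 0 = negative

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X, y)

# Score unseen words by the probability of the positive class.
probs = clf.predict_proba(rng.normal(size=(5, dim)))[:, 1]
print(probs.shape)
```

Words scoring near 1 or near 0 become candidates for the extended dictionary.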
split_to_chunks/subsample.py - takes a small piece of each file so that it can be opened in a text editor such as Notepad:
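The idea behind subsampling is simply copying the first few thousand lines of a huge file into a small one. A minimal sketch (the actual script in the repo may differ; file names here are illustrative):

```python
# Sketch of the subsampling step: copy the first n_lines of a huge file
# into a small one that a plain text editor can open.
from itertools import islice

def subsample(src_path, dst_path, n_lines=5000):
    """Copy the first n_lines of src_path into dst_path."""
    with open(src_path, encoding="utf-8") as src, \
         open(dst_path, "w", encoding="utf-8") as dst:
        dst.writelines(islice(src, n_lines))
```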
utils.py - contains helper methods and folder paths
We have several sources of text: CSV, TXT and Wikipedia dumps. Each source has its own preprocessing scripts.
- split raw CSV data into chunks and save them:
Results are saved in data\chunks
- read chunk items, tokenize the text and save lists of sentences:
Results are saved in data\sents
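The chunking step can be sketched like this, assuming a one-text-per-row CSV; the real column layout, chunk size and output paths live in the repo's scripts and utils.py:

```python
# Sketch: split a raw CSV file into fixed-size chunk files.
import csv
import os

def split_csv_to_chunks(csv_path, chunk_dir, rows_per_chunk=10000):
    """Write every rows_per_chunk rows of csv_path to its own chunk file."""
    os.makedirs(chunk_dir, exist_ok=True)
    with open(csv_path, encoding="utf-8", newline="") as f:
        rows = list(csv.reader(f))
    for idx in range(0, len(rows), rows_per_chunk):
        out = os.path.join(chunk_dir, f"chunk_{idx // rows_per_chunk}.csv")
        with open(out, "w", encoding="utf-8", newline="") as f:
            csv.writer(f).writerows(rows[idx:idx + rows_per_chunk])
```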
split_to_chunks/txt_to_sentences.py - tokenizes text and saves chunks with lists of sentences
Results are saved in data\sents
split_to_chunks/wiki_to_sentences.py - tokenizes Wikipedia text and saves chunks with lists of sentences
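The tokenization these scripts perform turns raw text into lists of token lists, one list per sentence. The project most likely uses a Ukrainian-aware tokenizer; the naive regex splitter below is only a self-contained stand-in to show the expected output shape:

```python
# Naive sketch of the tokenization step: split text into sentences, then
# each sentence into lowercase word tokens. A real Ukrainian-aware
# tokenizer handles abbreviations, apostrophes, etc. far better.
import re

def to_sentences(text):
    """Return a list of sentences, each a list of lowercase tokens."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [re.findall(r"\w+", s.lower()) for s in sentences if s]
```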
train/clean.py - reads all existing sentence files and cleans the words
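The exact cleaning rules are in train/clean.py; a minimal sketch of the idea, assuming cleaning means dropping non-alphabetic tokens and lowercasing the rest:

```python
# Sketch of the cleaning step: keep only purely alphabetic tokens,
# lowercased. The repo's clean.py defines the actual rules.
def clean_sentence(tokens):
    return [t.lower() for t in tokens if t.isalpha()]
```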
train/build_w2v_dict.py - builds the word2vec model's dictionary (vocabulary)
train/learn.py - trains the word2vec model
LexVec was run on the same data with settings identical to word2vec's to compute a second set of embeddings.
If you don't want to compute the word vectors yourself, you can obtain them from the http://lang.org.ua/models website or download them from Google Drive (https://drive.google.com/file/d/0B9adEr6qDus4TjVVUW9CcEkzSjQ/view, https://drive.google.com/open?id=0B9adEr6qDus4dkRpaDZ4bWZCc2M)
predict/build_joined_vect_dict.py - concatenates the vectors of the two models (LexVec and word2vec) into a joined vector dictionary
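The joining amounts to concatenating, for each word present in both models, its word2vec and LexVec vectors. A sketch using plain dicts of numpy arrays in place of the real model files:

```python
# Sketch: build a joined vector dictionary by concatenating the two
# models' vectors for every word they share.
import numpy as np

def join_vect_dicts(w2v, lexvec):
    """w2v, lexvec: dict word -> numpy vector. Returns the joined dict."""
    common = w2v.keys() & lexvec.keys()
    return {w: np.concatenate([w2v[w], lexvec[w]]) for w in common}
```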
predict/predict.py - runs prediction, saves the whole result set and a subsample of it
predict/save_best.py - takes the best negative and positive candidates
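Selecting the best candidates can be sketched as ranking words by the classifier's positive-tone score and keeping both ends of the scale (the real script's selection criteria may differ):

```python
# Sketch: from words scored by the classifier (probability of positive
# tone), keep the strongest candidates at both ends of the scale.
def best_candidates(scored, n=3):
    """scored: dict word -> positive-tone score in [0, 1].
    Returns (top positive words, top negative words)."""
    ranked = sorted(scored, key=scored.get, reverse=True)
    return ranked[:n], ranked[-n:][::-1]
```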
Oleksandr Marykovskyi and Vyacheslav Tykhonov provided the seed dictionary
Serhiy Shehovtsov wrote the code and ran numerous experiments
Oles Petriv created and trained the neural network model
Vsevolod Dyomkin proof-read the result and prepared it for publishing
Dmitry Chaplinsky led the project :)