Word2Vec

  • Create an instance of `WordEmbedding`: `embed = WordEmbedding(100, Word2Vec.random_inited, Word2Vec.huffman_tree, subsampling = subsampling)`
  • To train sequentially: `train(embed, inputfile)`
  • Alternatively, to train in parallel:
    • Add worker nodes: `addprocs(N)`
    • Chunk the input file using Blocks: `b = Block(File(inputfile), nworkers())`
    • Start training on the chunks, providing a filename that will be used to exchange data between the workers and the master node: `train(embed, b, "/tmp/emb")`
  • After successful training, query for similar words: `find_nearest_words(embed, "query words")`
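Taken together, the steps above can be sketched as a single script. This is a sketch against the API as described in this README; the vector size (100), corpus path `corpus.txt`, subsampling threshold, worker count, and query word are all illustrative placeholders:

```julia
using Word2Vec

# Sequential training: 100-dimensional vectors, random initialization,
# and a Huffman tree for hierarchical softmax, with subsampling enabled.
subsampling = 1e-4                     # illustrative threshold
embed = WordEmbedding(100, Word2Vec.random_inited, Word2Vec.huffman_tree,
                      subsampling = subsampling)
train(embed, "corpus.txt")             # illustrative input file

# Parallel variant: add workers, chunk the corpus with Blocks, and train,
# using a scratch file to exchange data between workers and master.
# addprocs(4)
# b = Block(File("corpus.txt"), nworkers())
# train(embed, b, "/tmp/emb")

# Query the trained model for neighbors of a word.
find_nearest_words(embed, "king")      # illustrative query
```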

This is still a work in progress. Parallel training with weight averaging does not yield very good results; it may be necessary to implement the asynchronous stochastic gradient descent used in Mikolov et al. (2013).
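For reference, the weight-averaging merge that the parallel path relies on amounts to something like the following. `average_weights` is a hypothetical helper written for illustration, not a function exported by this package:

```julia
# Hypothetical sketch: merge the weight matrices returned by the workers
# by plain averaging. Averaging blurs together updates that different
# workers made to different words on different chunks, which is one
# plausible reason for the weak results noted above; asynchronous SGD
# instead applies each worker's gradient updates directly to a shared
# parameter store without averaging.
function average_weights(weight_matrices::Vector{Matrix{Float64}})
    sum(weight_matrices) / length(weight_matrices)
end
```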

Datasets

Credits

This package is based on the original code by Zhixuan Yang (https://github.com/yangzhixuan/embed).

About

Word2Vec in Julia
