icha024/word2vec
================

Automatically exported from code.google.com/p/word2vec
This is a mirror of the Google project: https://code.google.com/archive/p/word2vec/
===================================================================================

Tools for computing distributed representations of words
---------------------------------------------------------

We provide an implementation of the Continuous Bag-of-Words (CBOW) and the Skip-gram model (SG), as well as several demo scripts.

Given a text corpus, the word2vec tool learns a vector for every word in the vocabulary using the Continuous Bag-of-Words or the Skip-Gram neural network architectures. The user should specify the following (a sketch of a typical invocation appears below):
 - desired vector dimensionality
 - the size of the context window for either the Skip-Gram or the Continuous Bag-of-Words model
 - training algorithm: hierarchical softmax and / or negative sampling
 - threshold for downsampling the frequent words
 - number of threads to use
 - the format of the output word vector file (text or binary)

Usually, the other hyper-parameters such as the learning rate do not need to be tuned for different training sets.

The script demo-word.sh downloads a small (100MB) text corpus from the web, and trains a small word vector model. After the training is finished, the user can interactively explore the similarity of the words (see the distance example below).

More information about the scripts is provided at https://code.google.com/p/word2vec/
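A minimal sketch of a training run that exercises these options, modeled on demo-word.sh (the text8 corpus name and the exact flag values here are illustrative; running ./word2vec with no arguments prints the full usage text for your build):

  # train CBOW vectors on the text8 corpus and write them in binary format
  ./word2vec -train text8 -output vectors.bin -cbow 1 -size 200 -window 8 \
             -negative 25 -hs 0 -sample 1e-4 -threads 20 -binary 1 -iter 15

Here -size sets the vector dimensionality, -window the context size, -hs and -negative choose between hierarchical softmax and negative sampling (the two can be combined), -sample is the downsampling threshold for frequent words, -threads the number of training threads, and -binary the output format (1 = binary, 0 = text). Setting -cbow 0 selects the Skip-gram model instead.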
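After training finishes, the interactive exploration in demo-word.sh is done with the bundled distance tool, which loads the saved vectors and prints the words closest to each query by cosine similarity (a sketch, assuming the binaries were built with make):

  # query nearest neighbors interactively; type EXIT at the prompt to quit
  ./distance vectors.bin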