# Using NLP techniques to improve word relevance results
In natural language modeling, word embeddings are a popular way to capture semantic similarity between words. However, because a standard embedding assigns a single, generalized vector to each word, it cannot capture the local context in which the word appears. We use a context-aware biLSTM model with adaptive word embeddings to improve word relevance results for a given target word.
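To illustrate the core idea (this is a simplified sketch, not the project's actual model), the following NumPy example uses a toy vocabulary and untrained, randomly initialized weights: a biLSTM reads the sentence in both directions and adapts each word's static embedding into a context-aware vector, so the same word receives different representations in different sentences. All names, dimensions, and example sentences are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    # One LSTM step: input/forget/output gates plus candidate cell state.
    i, f, o, g = np.split(W @ x + U @ h + b, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def run_lstm(xs, params, d_h):
    # Run an LSTM over a sequence of vectors, returning all hidden states.
    W, U, b = params
    h, c = np.zeros(d_h), np.zeros(d_h)
    states = []
    for x in xs:
        h, c = lstm_step(x, h, c, W, U, b)
        states.append(h)
    return states

D_EMB, D_HID = 8, 6  # toy sizes, chosen arbitrarily

def init_params(d_in, d_h):
    # Untrained random weights for all four gates, stacked row-wise.
    return (rng.normal(0, 0.1, (4 * d_h, d_in)),
            rng.normal(0, 0.1, (4 * d_h, d_h)),
            np.zeros(4 * d_h))

fwd_params = init_params(D_EMB, D_HID)
bwd_params = init_params(D_EMB, D_HID)

# Toy static embeddings: one fixed vector per word, regardless of context.
vocab = ["bank", "river", "money", "deposit", "flooded"]
static_emb = {w: rng.normal(size=D_EMB) for w in vocab}

def contextual_embeddings(sentence):
    """Adapt each word's static embedding using a biLSTM over the sentence."""
    xs = [static_emb[w] for w in sentence]
    h_fwd = run_lstm(xs, fwd_params, D_HID)              # left-to-right pass
    h_bwd = run_lstm(xs[::-1], bwd_params, D_HID)[::-1]  # right-to-left pass
    # Concatenating both directions gives a context-aware vector per token.
    return [np.concatenate([f, b]) for f, b in zip(h_fwd, h_bwd)]

# "bank" has one static vector, but two different contextual vectors:
bank_money = contextual_embeddings(["money", "deposit", "bank"])[2]
bank_river = contextual_embeddings(["river", "flooded", "bank"])[2]
print(np.allclose(bank_money, bank_river))  # False: context changed the vector
```

Relevance between a target word and candidates can then be scored with cosine similarity on these contextual vectors rather than on the static embeddings, which is the gap the abstract above describes.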
Team:
- Aniruddh Nautiyal
- Jae II Lee
- Nathaniel Schub
Repository layout:
- ./Code: development Jupyter notebooks, Python code, and analysis
- ./Report: final project report and presentation slides
- ./datasets: datasets used for evaluating the models
- ./materials: background research papers and materials that informed the choice of approaches
- ./pretrained: links to large training datasets and pretrained models used