Python implementation of an N-gram language model with Laplace smoothing and sentence generation.
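As context for the technique this description names, here is a minimal sketch of add-one (Laplace) smoothing for a bigram model; the function names and toy corpus are illustrative, not code from the repository:

```python
from collections import Counter

def train_bigram_counts(tokens):
    """Collect unigram and bigram counts from a token list."""
    return Counter(tokens), Counter(zip(tokens, tokens[1:]))

def laplace_prob(prev, word, unigrams, bigrams, vocab_size):
    """P(word | prev) with add-one smoothing:
    (count(prev, word) + 1) / (count(prev) + |V|)."""
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

# Toy corpus, purely for illustration
tokens = "the cat sat on the mat".split()
unigrams, bigrams = train_bigram_counts(tokens)
print(laplace_prob("the", "cat", unigrams, bigrams, len(unigrams)))  # 2/7 ≈ 0.286
```

Sentence generation would then amount to repeatedly sampling the next word from these smoothed conditional distributions.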
Recurrent neural network implementations for protein secondary structure prediction and language models
UNB Fall-2018 NLP Assignments 💬
NLP course: language models, word tokenization, Levenshtein distance, and a Naive Bayes example
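As a quick reference for one of the topics listed above, a textbook dynamic-programming sketch of Levenshtein (edit) distance; this is a generic version, not code from the assignments:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance with unit-cost insert/delete/substitute, two-row DP."""
    prev = list(range(len(b) + 1))               # distances from "" to prefixes of b
    for i, ca in enumerate(a, start=1):
        curr = [i]                               # distance from a[:i] to ""
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,         # delete ca
                            curr[j - 1] + 1,     # insert cb
                            prev[j - 1] + cost)) # substitute (or match)
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```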
Combines language models and web scraping into a robust named-entity normalization engine that builds a hierarchical graph data structure.
Language models are open knowledge graphs (unofficial implementation)
Transformer-based Turkish language models
Zipf's Law, Heaps' Law, word clouds, and language models
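For the first item in that list, a small self-contained illustration of Zipf's Law (the frequency of the r-th most common word is roughly proportional to 1/r); the toy sentence is made up:

```python
from collections import Counter

def zipf_table(tokens, top=5):
    """Rank-frequency table; under Zipf's Law, freq * rank stays roughly constant."""
    for rank, (word, freq) in enumerate(Counter(tokens).most_common(top), start=1):
        print(f"{rank:>4}  {word:<10} freq={freq}  freq*rank={freq * rank}")

zipf_table("to be or not to be that is the question".split())
```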
Neural network language model that generates text based on The Lord of the Rings. Built with PyTorch.
Natural language processing project that visualizes word-choice patterns in coronavirus-related articles and computes the average perplexity of language models built from those articles when evaluated on tweets about the same subject.
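Since that project centers on perplexity, here is the standard definition as a tiny sketch; the probabilities below are made up. Perplexity is the exponential of the average negative log-probability the model assigns per token:

```python
import math

def perplexity(token_log_probs):
    """exp(-(1/N) * sum_i log P(w_i | context)) over N tokens."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

# Three tokens, each assigned probability 0.25 by some hypothetical model
print(perplexity([math.log(0.25)] * 3))  # 4.0
```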
This repository contains our path generation framework Co-NNECT, in which we combine two models for establishing knowledge relations and paths between concepts from sentences, as a way of making implicit knowledge explicit: COREC-LM (COmmonsense knowledge RElation Classification using Language Models), a relation classification system that we …
Keras implementations of three language models: a character-level RNN, a word-level RNN, and a Sentence VAE (Bowman, Vilnis, et al., 2016).
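As an illustration of the first of those three models, a minimal character-level RNN language model in Keras; the vocabulary size, window length, and layer widths are assumptions, not the repository's actual configuration:

```python
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, seq_len = 50, 40  # illustrative values only

# Predict the next character id from a fixed-length window of character ids.
model = keras.Sequential([
    keras.Input(shape=(seq_len,), dtype="int32"),
    layers.Embedding(vocab_size, 32),                # character embeddings
    layers.SimpleRNN(128),                           # recurrent encoder
    layers.Dense(vocab_size, activation="softmax"),  # next-character distribution
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.summary()
```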
Re-Evaluating GermEval 2017: Document-Level and Aspect-Based Sentiment Analysis Using Pre-Trained Language Models
Code for equipping pretrained language models (BART, GPT-2, XLNet) with commonsense knowledge for generating implicit knowledge statements between two sentences, by (i) fine-tuning the models on corpora enriched with implicit information, and (ii) constraining the models with key concepts and the commonsense knowledge paths connecting them.
Data utilities for BERT models in the Sentiment Attitude Extraction task
Official resource of the paper "Knowledge Enhanced Masked Language Model for Stance Detection", NAACL 2021
Train the Bi-LM model and use it for feature extraction
Python source code for the EMNLP 2020 paper "Reusing a Pretrained Language Model on Languages with Limited Corpora for Unsupervised NMT".
[ICCV 2021] On the hidden treasure of dialog in video question answering