A Structured Self-attentive Sentence Embedding
TensorFlow implementation of "A Structured Self-Attentive Sentence Embedding"
Implementation of the paper "A Structured Self-Attentive Sentence Embedding", published at ICLR 2017
Attempt at implementing "Memory Architectures in Recurrent Neural Network Language Models" as part of the ICLR 2018 reproducibility challenge
Python implementation of n-gram models, log-linear and neural language models, backpropagation and self-attention, HMM, PCFG, CRF, EM, and VAE
Re-implementation of "A Structured Self-Attentive Sentence Embedding" by Lin et al., 2017
Structured self-attention implementation in TensorFlow
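The paper these entries implement computes an annotation matrix A = softmax(Ws2 tanh(Ws1 H^T)) over the LSTM hidden states H and uses M = AH as a matrix sentence embedding. As a rough illustration only, a minimal NumPy sketch of that computation might look like the following; the shapes and names follow the paper's notation and are not taken from any of the repositories above:

    import numpy as np

    def structured_self_attention(H, Ws1, Ws2):
        """Structured self-attention from Lin et al. (2017).

        H:   (n, 2u)  LSTM hidden states for a sentence of n tokens
        Ws1: (da, 2u) first projection
        Ws2: (r, da)  r attention hops
        Returns M = A @ H, an (r, 2u) sentence embedding matrix.
        """
        scores = Ws2 @ np.tanh(Ws1 @ H.T)                  # (r, n) unnormalized weights
        e = np.exp(scores - scores.max(axis=1, keepdims=True))
        A = e / e.sum(axis=1, keepdims=True)               # row-wise softmax over tokens
        return A @ H                                       # (r, 2u)

    # Toy shapes: n tokens, 2u-dim hidden states, da attention units, r hops.
    rng = np.random.default_rng(0)
    n, two_u, da, r = 5, 6, 4, 2
    H = rng.normal(size=(n, two_u))
    M = structured_self_attention(H, rng.normal(size=(da, two_u)), rng.normal(size=(r, da)))
    print(M.shape)  # (2, 6)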
TensorFlow-based framework providing attentive implementations of conventional neural network models (CNN- and RNN-based), applicable to relation-extraction classification tasks, along with an API for custom model implementation
This repository provides a basic implementation of self-attention, demonstrating how attention mechanisms work in predicting the next word in a sequence. It captures the core concept of attention but lacks the complexity of more advanced models such as Transformers.
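A minimal sketch of what such a basic self-attention layer might look like, assuming a single head and plain NumPy; the function and variable names here are illustrative, not the repository's actual code:

    import numpy as np

    def softmax(x, axis=-1):
        # Numerically stable softmax.
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, Wq, Wk, Wv):
        """Single-head scaled dot-product self-attention.

        X:  (seq_len, d_model) input embeddings
        Wq, Wk, Wv: (d_model, d_k) projection matrices
        """
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise token similarities
        # For next-word prediction, a causal mask would set scores for
        # future positions to -inf before the softmax.
        weights = softmax(scores, axis=-1)        # attention distribution per position
        return weights @ V                        # weighted sum of value vectors

    # Toy example: 4 tokens, 8-dimensional embeddings.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(4, 8))
    Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
    out = self_attention(X, Wq, Wk, Wv)
    print(out.shape)  # (4, 8)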
Jatext Classification