Neural Machine Translation with Attention (Dynet)
Updated Feb 26, 2017 - Python
Support material and source code for the model described in "A Recurrent Encoder-Decoder Approach With Skip-Filtering Connections For Monaural Singing Voice Separation"
Source Code Generation Based On User Intention Using LSTM Networks
Design and build a chatbot in Keras using data from the Cornell Movie Dialogues corpus
Classic spy Encoder-Decoder console game made with Python
LSTM autoencoder model for query-by-example spoken word detection
A sequential encoder-decoder implementation of neural machine translation using Keras
Noise removal from images using a convolutional autoencoder
This is an implementation of the paper "Show and Tell: A Neural Image Caption Generator".
📺 An Encoder-Decoder Model for Sequence-to-Sequence learning: Video to Text
A simple encoder-decoder-based chatbot
This repository contains RNN-, CNN-, and Transformer-based Seq2Seq implementations.
An implementation of the paper "Context-aware Captions from Context-agnostic Supervision"
An implementation of surfaces in OpenGL using a hybrid of Bezier curves and first degree B-splines.
Encoder-Decoder for Face Completion based on Gated Convolution
An Implementation of Encoder-Decoder model with global attention mechanism.
An encoder-decoder translation model with or without attention
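Several of the repositories above add a global attention mechanism on top of the encoder-decoder architecture. As a minimal, framework-free sketch (plain NumPy, with hypothetical toy vectors; real implementations in the listed repos also learn projection matrices), Luong-style global dot-product attention scores each encoder state against the current decoder state and returns a weighted context vector:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def global_attention(decoder_state, encoder_states):
    """Luong-style global dot-product attention (a simplified sketch).

    decoder_state:  (d,)   current decoder hidden state
    encoder_states: (T, d) hidden states for all T source timesteps
    Returns the context vector (d,) and attention weights (T,).
    """
    scores = encoder_states @ decoder_state   # (T,) dot-product alignment scores
    weights = softmax(scores)                 # (T,) normalized attention weights
    context = weights @ encoder_states        # (d,) weighted sum of encoder states
    return context, weights

# Toy example: 4 encoder timesteps, hidden size 3 (made-up values)
enc = np.array([[1., 0., 0.],
                [0., 1., 0.],
                [0., 0., 1.],
                [1., 1., 0.]])
dec = np.array([1., 0., 0.])
ctx, w = global_attention(dec, enc)
```

In a full model, `context` is concatenated with the decoder state and fed through an output layer to predict the next target token; "without attention" variants instead condition the decoder only on the encoder's final state.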