GRU: Text_Generation

Text generation using a GRU recurrent network.

Introduction

What is a GRU?

The Gated Recurrent Unit (GRU) is a simplified variant of the LSTM: the input gate and the forget gate are merged into a single "update gate".
As in the LSTM, this gating makes it easy to retain features of events that occurred many time steps earlier, because the gates effectively create shortcut paths that bypass individual time steps.
Errors can therefore back-propagate easily through these shortcuts during training, which mitigates the vanishing-gradient problem.
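
To make the gating concrete, here is a minimal NumPy sketch of a single GRU step (weight names and sizes are illustrative, and biases are omitted for brevity):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU time step. z is the update gate that merges the LSTM's
    input and forget gates; r is the reset gate."""
    z = sigmoid(x @ Wz + h_prev @ Uz)             # update gate
    r = sigmoid(x @ Wr + h_prev @ Ur)             # reset gate
    h_cand = np.tanh(x @ Wh + (r * h_prev) @ Uh)  # candidate state
    # (1 - z) * h_prev is the shortcut path: where z is near 0 the old
    # state passes through unchanged, so errors back-propagate easily.
    return (1.0 - z) * h_prev + z * h_cand

# Toy usage: 8-dim input, 16-dim hidden state.
rng = np.random.default_rng(0)
Wz, Wr, Wh = [0.1 * rng.standard_normal((8, 16)) for _ in range(3)]
Uz, Ur, Uh = [0.1 * rng.standard_normal((16, 16)) for _ in range(3)]
h = gru_step(rng.standard_normal(8), np.zeros(16), Wz, Uz, Wr, Ur, Wh, Uh)
print(h.shape)  # (16,)
```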

When is a GRU a better choice than an LSTM?

The biggest practical difference is that a GRU is faster and simpler to run than an LSTM, though it is less expressive.
In practice the trade-off often evens out: recovering an LSTM's expressive power may require a larger GRU network, which sacrifices some of the speed advantage. When a task does not demand that extra expressive power, a GRU performs as well as or better than an LSTM.
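
A quick way to see the size difference is to compare parameter counts at the same hidden width (a sketch assuming TensorFlow's bundled Keras; the layer sizes are arbitrary):

```python
from tensorflow.keras import Input, Model, layers

inp = Input(shape=(None, 128))            # (timesteps, features)
gru = Model(inp, layers.GRU(256)(inp))
lstm = Model(inp, layers.LSTM(256)(inp))

print("GRU parameters: ", gru.count_params())
print("LSTM parameters:", lstm.count_params())
# The GRU holds roughly 3/4 of the LSTM's weights (3 gate blocks vs. 4),
# which is where its speed advantage comes from.
```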

Technical Preferences

Title        Detail
Environment  macOS Mojave 10.14.3
Language     Python
Library      Keras, scikit-learn, NumPy, Matplotlib, pandas, seaborn
Dataset      News Category Dataset
Algorithm    GRU network
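
Given that stack, the end-to-end pipeline can be sketched as a word-level generator trained on headlines. This is a minimal, hypothetical sketch (the file path, column name, and hyperparameters are illustrative and may differ from the actual notebook); the Kaggle News Category Dataset ships as JSON lines with a `headline` field:

```python
import pandas as pd
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Hypothetical path; adjust to wherever the dataset is stored.
df = pd.read_json("News_Category_Dataset_v2.json", lines=True)
texts = df["headline"].astype(str).tolist()[:10000]

tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts)
vocab_size = len(tokenizer.word_index) + 1

# Turn each headline into n-gram prefixes that predict the next word.
sequences = []
for seq in tokenizer.texts_to_sequences(texts):
    for i in range(1, len(seq)):
        sequences.append(seq[: i + 1])
max_len = max(len(s) for s in sequences)
sequences = pad_sequences(sequences, maxlen=max_len)
X, y = sequences[:, :-1], sequences[:, -1]

model = models.Sequential([
    layers.Embedding(vocab_size, 64),   # word embeddings
    layers.GRU(128),                    # GRU encoder over the prefix
    layers.Dense(vocab_size, activation="softmax"),  # next-word distribution
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=5, batch_size=128)
```

Sampling from the softmax output one word at a time, feeding each prediction back in as input, then yields new headline-like text.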


