Autocomplete Generator

Utilizing Python's Machine Learning libraries to implement an Autocomplete Generator.

Table of Contents
  1. About The Project
  2. Usage
  3. License
  4. Sources

About The Project

A Recurrent Neural Network (RNN) sequence-to-sequence (seq2seq) model was implemented to predict text. The main Python libraries used were TensorFlow and Keras. The well-known Hutter Prize Wikipedia dataset was used for the training, validation, and test data. Given a sequence of 30 characters, the model predicts the next 10 characters. The model achieved 89.52% accuracy.
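
As a concrete illustration of this setup (a minimal sketch with made-up text, not the repository's code), each training example pairs a 30-character prompt with the 10 characters that follow it in the corpus:

```python
# Minimal sketch: slice raw text into (30-character prompt, next 10 characters) pairs.
# The sample string and step size are illustrative, not the repository's values.
INPUT_LEN, TARGET_LEN = 30, 10

def make_pairs(text, step=3):
    """Slide a window over the text, pairing each 30-char prompt with the next 10 chars."""
    pairs = []
    for i in range(0, len(text) - INPUT_LEN - TARGET_LEN, step):
        prompt = text[i:i + INPUT_LEN]
        target = text[i + INPUT_LEN:i + INPUT_LEN + TARGET_LEN]
        pairs.append((prompt, target))
    return pairs

sample = "The quick brown fox jumps over the lazy dog and runs away."
print(make_pairs(sample)[0])
# ('The quick brown fox jumps over', ' the lazy ')
```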

Built With:

  • Python
  • Keras
  • TensorFlow
  • CSV

Simplified seq2seq model example.

System design of the implemented seq2seq model.

Usage

The input data is first parsed. Each unique character is tokenized to be used later in the model.
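
A minimal sketch of this step (an assumed approach, not the repository's exact code): every unique character is mapped to an integer id.

```python
# Character-level tokenization: map each unique character to an integer id.
# The sample text stands in for the Hutter Prize Wikipedia corpus.
text = "The quick brown fox jumps over the lazy dog."

chars = sorted(set(text))                         # unique characters in the corpus
char_to_id = {c: i for i, c in enumerate(chars)}  # character -> integer token
id_to_char = {i: c for i, c in enumerate(chars)}  # integer token -> character

def tokenize(sequence):
    """Convert a string into a list of integer token ids."""
    return [char_to_id[c] for c in sequence]

print(tokenize("fox"))  # the ids depend on the corpus
```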

The model is then created and fed the tokenized data.
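
The sketch below shows one common way to build such a character-level encoder-decoder in Keras; the layer sizes and optimizer are assumptions, not the repository's exact configuration. `num_chars` is the vocabulary size from the tokenization step.

```python
# Character-level seq2seq model: an LSTM encoder summarizes the 30-char prompt,
# and an LSTM decoder generates the continuation from the encoder's final state.
from tensorflow import keras

num_chars = len(chars)   # vocabulary size from the tokenization step
latent_dim = 256         # assumed hidden size

# Encoder: reads the one-hot-encoded input sequence and keeps only its final state.
encoder_inputs = keras.Input(shape=(None, num_chars))
_, state_h, state_c = keras.layers.LSTM(latent_dim, return_state=True)(encoder_inputs)

# Decoder: starts from the encoder state and predicts a distribution over
# characters at every step.
decoder_inputs = keras.Input(shape=(None, num_chars))
decoder_lstm = keras.layers.LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=[state_h, state_c])
decoder_outputs = keras.layers.Dense(num_chars, activation="softmax")(decoder_outputs)

model = keras.Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer="rmsprop", loss="categorical_crossentropy", metrics=["accuracy"])
```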

The model is then trained for a set number of epochs, where an epoch is one full pass of the learning algorithm through the entire training dataset.
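
Training might look like the sketch below; the array names, batch size, and epoch count are illustrative assumptions, and the three arrays are one-hot encodings built from the tokenized pairs.

```python
# Train with teacher forcing: the decoder sees the target sequence shifted by one step.
history = model.fit(
    [encoder_input_data, decoder_input_data],  # 30-char prompts and decoder inputs
    decoder_target_data,                       # decoder inputs shifted ahead by one character
    batch_size=64,
    epochs=100,            # an epoch is one full pass over the training set
    validation_split=0.2,  # hold out a fraction of the data for validation
)
```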

The model outputs a decoded sequence, which is the predicted text. The predicted text is shown in bold below.
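
Decoding typically follows the standard Keras seq2seq inference pattern, sketched below under the assumption that separate `encoder_model` and `decoder_model` objects have been built from the trained layers and that a start token (here a tab character) is part of the vocabulary: the encoder state seeds the decoder, which then greedily predicts one character at a time for 10 steps.

```python
import numpy as np

def decode_sequence(input_seq, encoder_model, decoder_model, start_char="\t"):
    """Greedy character-by-character decoding of a single one-hot-encoded prompt."""
    states = encoder_model.predict(input_seq)       # encode the 30-char prompt
    target = np.zeros((1, 1, num_chars))
    target[0, 0, char_to_id[start_char]] = 1.0      # seed the decoder with the start token
    decoded = ""
    for _ in range(10):                             # predict the next 10 characters
        output, h, c = decoder_model.predict([target] + states)
        next_id = int(np.argmax(output[0, -1, :]))  # greedy: take the most likely character
        decoded += id_to_char[next_id]
        target = np.zeros((1, 1, num_chars))
        target[0, 0, next_id] = 1.0                 # feed the prediction back in
        states = [h, c]                             # carry the decoder state forward
    return decoded
```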

The model's accuracy and loss plots are shown below. The model achieved 89.52% accuracy with a loss of 0.3433.
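
A plot like this can be produced from the Keras training history, for example with matplotlib (a sketch, assuming the `history` object returned by `model.fit` above):

```python
import matplotlib.pyplot as plt

# Plot the training/validation accuracy and loss recorded by model.fit.
fig, (ax_acc, ax_loss) = plt.subplots(1, 2, figsize=(10, 4))
ax_acc.plot(history.history["accuracy"], label="train")
ax_acc.plot(history.history["val_accuracy"], label="validation")
ax_acc.set_title("Accuracy")
ax_acc.legend()
ax_loss.plot(history.history["loss"], label="train")
ax_loss.plot(history.history["val_loss"], label="validation")
ax_loss.set_title("Loss")
ax_loss.legend()
plt.show()
```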

Sources

Images

License

Distributed under the MIT License. See LICENSE for more information.
