
Hermes - Deep Natural Language Processing Framework

Hermes is a library built on top of TensorFlow 2 designed to provide simple abstractions for natural language processing using end-to-end deep learning models and reusable modules. Hermes implements recent natural language processing algorithms, either entirely in TensorFlow or as thin wrappers around TensorFlow components. Hermes is a model zoo of NLP algorithms implemented in an object-oriented style for easy encapsulation and understanding.

Natural language processing is a subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human languages, in particular, programming computers to process and analyze large amounts of natural language data. Deep neural network-style machine learning methods are widespread in natural language processing and can achieve state-of-the-art results in many natural language tasks.

Some of the popular NLP tasks implemented in Hermes include text classification, natural language inference, machine translation, spoken language understanding, and semantic parsing.

Text classification is the task of assigning one of a set of predefined categories to a piece of text. Text classification in Hermes focuses on sentiment analysis: a natural language processing technique used to interpret and classify the emotions expressed in subjective data. For sentiment analysis, the models are trained on the IMDB dataset.

  • Feedforward Attention based Text Classification Models
  • Parallel Recurrent based Text Classification Models
  • BERT based Text Classification Model
  • ALBERT based Text Classification Model
  • RoBERTa based Text Classification Model
  • XLNet based Text Classification Model
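As an illustration of the kind of model behind these classifiers, the sketch below builds a minimal bidirectional-LSTM sentiment classifier in TensorFlow 2. This is not Hermes's actual code: the function name and hyperparameters are illustrative assumptions, and the model would be trained on tokenized IMDB reviews with binary labels.

```python
import tensorflow as tf

def build_sentiment_classifier(vocab_size=20000, embed_dim=64, hidden_dim=64):
    """Minimal BiLSTM sentiment classifier (illustrative, not Hermes's API)."""
    inputs = tf.keras.Input(shape=(None,), dtype="int32")       # token ids, 0 = padding
    x = tf.keras.layers.Embedding(vocab_size, embed_dim, mask_zero=True)(inputs)
    x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(hidden_dim))(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # P(positive review)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```

With `mask_zero=True`, padded positions are ignored by the recurrent layers, so variable-length reviews can be batched together.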

Natural language inference is the task of determining whether a “hypothesis” is true (entailment), false (contradiction), or undetermined (neutral) given a “premise”. For natural language inference, the models are trained on the Stanford Natural Language Inference (SNLI) dataset.

  • Decomposable Attention model for Sentence Inference
  • MatchPyramid model for Sentence Inference (text matching as image recognition)
  • Enhanced Sequential Inference Model (ESIM) for Sentence Inference
  • BERT based model for Sentence Inference
  • ALBERT based model for Sentence Inference
  • RoBERTa based model for Sentence Inference
  • XLNet based model for Sentence Inference
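A minimal sketch of a sentence-pair model for SNLI-style inference, assuming a shared BiLSTM encoder and the common [premise; hypothesis; difference; product] feature combination. Names and sizes below are illustrative assumptions, not Hermes's API.

```python
import tensorflow as tf

def build_nli_model(vocab_size=20000, embed_dim=64, hidden_dim=64):
    """Siamese BiLSTM over premise/hypothesis with a 3-way entailment head."""
    premise = tf.keras.Input(shape=(None,), dtype="int32")
    hypothesis = tf.keras.Input(shape=(None,), dtype="int32")
    embed = tf.keras.layers.Embedding(vocab_size, embed_dim, mask_zero=True)
    encoder = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(hidden_dim))
    p = encoder(embed(premise))                    # shared weights for both sentences
    h = encoder(embed(hypothesis))
    diff = tf.keras.layers.Subtract()([p, h])      # element-wise difference
    prod = tf.keras.layers.Multiply()([p, h])      # element-wise product
    feats = tf.keras.layers.Concatenate()([p, h, diff, prod])
    x = tf.keras.layers.Dense(hidden_dim, activation="relu")(feats)
    outputs = tf.keras.layers.Dense(3, activation="softmax")(x)  # entail/contra/neutral
    return tf.keras.Model([premise, hypothesis], outputs)
```

Sharing the encoder halves the parameter count and forces both sentences into the same representation space before comparison.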

Neural machine translation (NMT) is an approach to machine translation that uses an artificial neural network to predict the likelihood of a sequence of words, typically modeling entire sentences in a single integrated model.

  • RNN based Encoder-Decoder Seq2Seq Model
  • Bidirectional RNN based Encoder-Decoder Seq2Seq Model
  • GRU based Encoder-Decoder Seq2Seq Model
  • Bidirectional GRU based Encoder-Decoder Seq2Seq Model
  • LSTM based Encoder-Decoder Seq2Seq Model
  • Bidirectional LSTM based Encoder-Decoder Seq2Seq Model
Dataset to be added
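The encoder-decoder pattern shared by these models can be sketched as a GRU sequence-to-sequence training graph with teacher forcing: the decoder receives the shifted target sequence as input and predicts the next token at every step. The function name and sizes are illustrative assumptions, not Hermes's actual code.

```python
import tensorflow as tf

def build_seq2seq(src_vocab=8000, tgt_vocab=8000, embed_dim=64, hidden_dim=128):
    """GRU encoder-decoder (training graph only; decoding needs a separate loop)."""
    enc_in = tf.keras.Input(shape=(None,), dtype="int32")   # source token ids
    dec_in = tf.keras.Input(shape=(None,), dtype="int32")   # shifted target ids
    enc_emb = tf.keras.layers.Embedding(src_vocab, embed_dim, mask_zero=True)(enc_in)
    # The encoder's final hidden state initializes the decoder.
    _, state = tf.keras.layers.GRU(hidden_dim, return_state=True)(enc_emb)
    dec_emb = tf.keras.layers.Embedding(tgt_vocab, embed_dim, mask_zero=True)(dec_in)
    dec_out = tf.keras.layers.GRU(hidden_dim, return_sequences=True)(
        dec_emb, initial_state=state)
    logits = tf.keras.layers.Dense(tgt_vocab)(dec_out)      # per-step vocabulary logits
    return tf.keras.Model([enc_in, dec_in], logits)
```

At inference time the same weights would be driven step by step, feeding each predicted token back in as the next decoder input.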

Spoken language understanding (SLU) is an emerging field at the intersection of speech processing and natural language processing. The term has largely been coined for the targeted understanding of human speech directed at machines. For spoken language understanding, the models are trained on the ATIS dataset.

  • Bidirectional LSTM based Spoken Language Understanding Model
  • Transformer based Spoken Language Understanding Model
  • Bidirectional GRU based Spoken Language Understanding Model
  • BERT based Spoken Language Understanding Model
  • ALBERT based Spoken Language Understanding Model
  • RoBERTa based Spoken Language Understanding Model
  • XLNet based Spoken Language Understanding Model
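SLU on ATIS is commonly framed as joint intent detection and slot filling: one encoder, two output heads. The sketch below assumes a BiLSTM encoder whose per-token outputs feed the slot tagger and whose final states feed the intent classifier; names and sizes are illustrative, not Hermes's API.

```python
import tensorflow as tf

def build_slu_model(vocab_size=1000, num_intents=26, num_slots=120,
                    embed_dim=64, hidden_dim=64):
    """BiLSTM with two heads: per-token slot tags and an utterance-level intent."""
    tokens = tf.keras.Input(shape=(None,), dtype="int32")
    x = tf.keras.layers.Embedding(vocab_size, embed_dim, mask_zero=True)(tokens)
    seq, fwd_h, fwd_c, bwd_h, bwd_c = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(hidden_dim, return_sequences=True,
                             return_state=True))(x)
    # Slot filling: a tag distribution for every input token.
    slots = tf.keras.layers.Dense(num_slots, activation="softmax", name="slots")(seq)
    # Intent detection: classify from the concatenated final hidden states.
    final = tf.keras.layers.Concatenate()([fwd_h, bwd_h])
    intent = tf.keras.layers.Dense(num_intents, activation="softmax",
                                   name="intent")(final)
    return tf.keras.Model(tokens, [slots, intent])
```

Training both heads jointly lets the slot and intent tasks regularize each other, which is the usual motivation for this architecture on ATIS.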

Semantic parsing is the task of converting a natural language utterance to a logical form: a machine-understandable representation of its meaning. Semantic parsing can thus be understood as extracting the precise meaning of an utterance. Applications of semantic parsing include machine translation, question answering, ontology induction, automated reasoning, and code generation.

  • Sequence to Sequence GRU based Semantic Parsing Model
  • Sequence to Sequence LSTM based Semantic Parsing Model
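Treating the logical form as a target token sequence, a sequence-to-sequence semantic parser can be sketched as an LSTM encoder-decoder, structurally the same recipe as neural machine translation but with a logical-form vocabulary on the output side. The function name and vocabularies below are illustrative assumptions, not Hermes's actual code.

```python
import tensorflow as tf

def build_semantic_parser(word_vocab=2000, logic_vocab=200,
                          embed_dim=64, hidden_dim=128):
    """LSTM encoder-decoder mapping an utterance to logical-form tokens."""
    utterance = tf.keras.Input(shape=(None,), dtype="int32")
    lf_in = tf.keras.Input(shape=(None,), dtype="int32")    # shifted logical form
    enc = tf.keras.layers.Embedding(word_vocab, embed_dim, mask_zero=True)(utterance)
    # LSTM carries two states (hidden and cell); both seed the decoder.
    _, h, c = tf.keras.layers.LSTM(hidden_dim, return_state=True)(enc)
    dec = tf.keras.layers.Embedding(logic_vocab, embed_dim, mask_zero=True)(lf_in)
    dec_out = tf.keras.layers.LSTM(hidden_dim, return_sequences=True)(
        dec, initial_state=[h, c])
    logits = tf.keras.layers.Dense(logic_vocab)(dec_out)    # next logical-form token
    return tf.keras.Model([utterance, lf_in], logits)
```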
Work in progress.

Hermes was the ancient Greek god of language: one of the cleverest and most mischievous of the Olympian gods, and the inventor of the lyre.
