Merge pull request #92 from IlyaKrotov/develop
Fix grammar: Update README.md
tkornuta-ibm committed Jul 5, 2019
2 parents f2d5ef0 + 9db698c commit 3518de3
Showing 1 changed file (README.md) with 7 additions and 7 deletions.

## Description

PyTorchPipe (PTP) is a component-oriented framework that facilitates the development of computational _multi-modal pipelines_ and the comparison of diverse neural network-based models.

PTP frames training and testing procedures as _pipelines_ consisting of many components communicating through data streams.
Each such pipeline can consist of several components, including a single task instance (providing batches of data), any number of trainable components (models), and additional components providing required transformations and computations.
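
To make the data flow concrete, here is a minimal, self-contained sketch (not PTP's actual API; the component and stream names are illustrative) of components that communicate by reading and writing named data streams in a shared batch:

```python
# Illustrative sketch of a component-oriented pipeline: every component
# reads and writes named data streams stored in a shared dict-like batch.
from typing import Any, Dict


class Component:
    """Base class: a component consumes and produces named streams."""
    def __call__(self, streams: Dict[str, Any]) -> None:
        raise NotImplementedError


class Tokenizer(Component):
    def __call__(self, streams):
        # Reads the "questions" stream, writes the "tokens" stream.
        streams["tokens"] = [q.lower().split() for q in streams["questions"]]


class LengthStatistics(Component):
    def __call__(self, streams):
        streams["lengths"] = [len(t) for t in streams["tokens"]]


pipeline = [Tokenizer(), LengthStatistics()]

batch = {"questions": ["What color is the ball?"]}
for component in pipeline:
    component(batch)        # each component extends the shared batch
print(batch["lengths"])     # -> [5]
```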
The framework offers full flexibility and it is up to the programmer to choose t…
Such a decomposition enables one to easily combine many components and models into pipelines, while the framework supports loading pretrained models, freezing them during training, saving them to checkpoints, etc.
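
The snippet below sketches, in plain PyTorch rather than PTP's own interface, the three operations mentioned above: loading a pretrained model, freezing it during training, and saving it to a checkpoint (the `encoder` module and the file names are illustrative assumptions):

```python
# Plain-PyTorch sketch of what the framework automates for trainable
# components; the model and file names are illustrative, not PTP's API.
import torch
import torch.nn as nn

encoder = nn.Linear(512, 256)

# Load pretrained weights from an earlier run.
encoder.load_state_dict(torch.load("encoder_pretrained.pt"))

# Freeze: exclude the component's parameters from optimization.
for param in encoder.parameters():
    param.requires_grad = False

# Save a checkpoint after training.
torch.save(encoder.state_dict(), "encoder_checkpoint.pt")
```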

**Model/Component Zoo:**
PTP provides several ready-to-use, out-of-the-box components, ranging from general-purpose ones to very specialised ones:

* Feed Forward Network (Fully Connected layers with activation functions and dropout, variable number of hidden layers, general usage; see the sketch after this list)
* Torch Vision Wrapper (wrapping several models from Torch Vision, e.g. VGG-16, ResNet-50, ResNet-152, DenseNet-121, general usage)
* Recurrent Neural Network (different kernels with activation functions and dropout, a single model can work as either an encoder or a decoder, general usage)
* Seq2Seq (Sequence to Sequence model, classical baseline)
* Attention Decoder (RNN-based decoder implementing Bahdanau-style attention, classical baseline)
* Sentence Embeddings (encodes words using an embedding layer, general usage)
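
As an illustration, the following sketch shows the kind of model the Feed Forward Network component above wraps: a stack of fully connected layers with activations and dropout, and a configurable number of hidden layers (the sizes and dropout rate here are illustrative, not PTP defaults):

```python
# Illustrative feed-forward network builder: fully connected layers with
# ReLU activations and dropout, and a variable number of hidden layers.
import torch.nn as nn


def feed_forward(input_size, hidden_sizes, output_size, dropout=0.5):
    layers, in_size = [], input_size
    for h in hidden_sizes:
        layers += [nn.Linear(in_size, h), nn.ReLU(), nn.Dropout(dropout)]
        in_size = h
    layers.append(nn.Linear(in_size, output_size))
    return nn.Sequential(*layers)


mlp = feed_forward(300, [500, 500], 10)   # two hidden layers of size 500
```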

Currently, PTP offers the following models useful for multi-modal fusion and reasoning:

* VQA Attention (simple question-driven attention over the image)
* Element-Wise Multiplication (Multi-modal Low-rank Bilinear pooling, MLB; see the sketch after this list)
* Multimodal Compact Bilinear Pooling (MCB)
* Multimodal Factorized Bilinear Pooling
* Relational Networks
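
For illustration, here is a hedged sketch of the element-wise multiplication (MLB) fusion listed above: both modalities are projected into a common low-rank space, fused by element-wise multiplication, and projected to the output (all dimensions are illustrative assumptions, not PTP defaults):

```python
# Illustrative MLB-style fusion: project each modality to a joint space,
# fuse by element-wise multiplication, then classify. Dimensions are
# illustrative assumptions.
import torch
import torch.nn as nn


class MLBFusion(nn.Module):
    def __init__(self, image_dim=2048, question_dim=1024,
                 joint_dim=1200, output_dim=1000):
        super().__init__()
        self.image_proj = nn.Linear(image_dim, joint_dim)
        self.question_proj = nn.Linear(question_dim, joint_dim)
        self.classifier = nn.Linear(joint_dim, output_dim)

    def forward(self, image_features, question_features):
        # Element-wise multiplication in the joint low-rank space.
        joint = (torch.tanh(self.image_proj(image_features))
                 * torch.tanh(self.question_proj(question_features)))
        return self.classifier(joint)
```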

The framework also offers several components useful when working with text:
and several general-purpose components, from tensor transformations (List to Tensor, …).
**Workers:**
PTP workers are Python scripts that are _agnostic_ to the tasks/models/pipelines that they are supposed to work with.
Currently, the framework offers three workers:

* ptp-offline-trainer (a trainer relying on the classical methodology of interlacing training and validation at the end of every epoch; it creates separate instances of the training and validation tasks and trains the models by feeding the created pipeline with batches of data, relying on the notion of an _epoch_; see the sketch after this list)

* ptp-online-trainer (a flexible trainer that creates separate instances of the training and validation tasks and trains the models by feeding the created pipeline with batches of data, relying on the notion of an _episode_)

* ptp-processor (performing a single pass over all samples returned by a given task instance, useful for collecting scores on a test set, generating answers for submission to competitions, etc.)
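
The distinction between the two trainers can be summarized with a generic sketch (not the workers' actual code; `model.train_step` and `validate` are hypothetical stand-ins): the offline trainer validates after each full pass over the training set (an epoch), while the online trainer counts individual batches (episodes) and validates at a chosen interval:

```python
# Generic sketch of the epoch/episode distinction; `model.train_step`
# and `validate` are hypothetical stand-ins, not the workers' real API.

def offline_training(model, train_loader, validate, num_epochs):
    for epoch in range(num_epochs):
        for batch in train_loader:   # one epoch = full pass over the data
            model.train_step(batch)
        validate(model)              # validation interlaced after each epoch


def online_training(model, batch_iterator, validate, num_episodes, val_every):
    for episode in range(num_episodes):      # one episode = one batch
        model.train_step(next(batch_iterator))
        if (episode + 1) % val_every == 0:   # validate at a chosen interval
            validate(model)
```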


