OpenI Dolphin Documentation
===========================
Welcome to project Dolphin, an open-source framework, based on PyTorch, of deep learning algorithms for several fields of computer vision: Object Detection, Generative Adversarial Networks, Video Action Analysis, Monocular Depth Estimation, Active Learning, Object Tracking, and Segmentation. It aims to promote learning of deep-learning-based computer vision algorithms while simplifying research experiments.
Note
This documentation covers only the project's API; for installation instructions, please check our repo.
- Wide Coverage
A variety of computer vision algorithms are integrated; each covered field includes at least one specific algorithm:
- Object Detection: Faster RCNN
- Video Action Analysis: SlowFast for action recognition, BMN for temporal action localization, MOC-Detector for temporal and spatial action detection
- Generative Adversarial Network: CycleGAN
- Monocular Depth Estimation: FCRN
- Active Learning: Query-based Entropy Sampling
- Object Tracking: FairMOT
- Segmentation: FCN
- Modular Design
The workflow of an algorithm is separated into modules such as dataset construction, data augmentation, and model building, which makes customization and combination convenient. Moreover, all module settings and hyperparameters can be specified in a simple configuration file.
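As an illustration of what such a modular configuration might look like, here is a hedged sketch in Python dict form. All field names, module names, and values below are hypothetical examples, not the actual Dolphin configuration schema.

```python
# Hypothetical configuration sketch: every key and value here is
# illustrative only, not the real Dolphin config schema.
config = dict(
    dataset=dict(
        type="CocoDataset",        # which dataset module to build (hypothetical name)
        root="data/coco",          # dataset location
    ),
    augmentation=[
        dict(type="RandomFlip", prob=0.5),        # example augmentation steps
        dict(type="Resize", size=(800, 1333)),
    ],
    model=dict(
        type="FasterRCNN",                        # which model module to build
        backbone=dict(type="ResNet", depth=50),
    ),
    hyperparameters=dict(lr=0.02, epochs=12, batch_size=2),
)

# Each top-level section maps onto one workflow module, so swapping a
# dataset, an augmentation pipeline, or a model is a one-line change.
print(config["model"]["type"])
```

The value of this layout is that modules stay independent: replacing `FasterRCNN` with another detector does not require touching the dataset or augmentation sections.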
- Flexible Engine
To accommodate special algorithms such as GANs and Active Learning, a flexible workflow engine is provided. It supports controlling the order in which different models are updated within an iteration (GANs) as well as special workflow phases (query-based Active Learning algorithms).
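The per-iteration update control described above can be sketched as follows. This is a minimal illustrative loop, not the actual Dolphin engine API; the `Engine` class, `update_order`, and `run_iteration` names are assumptions for the example.

```python
# Minimal sketch of a flexible engine loop (hypothetical, not the real
# Dolphin engine). Within one iteration the engine follows an update
# schedule, so a GAN can update its discriminator before its generator.

class Engine:
    def __init__(self, update_order):
        # e.g. ["discriminator", "generator"] for a typical GAN schedule
        self.update_order = update_order
        self.log = []

    def run_iteration(self, steps):
        # `steps` maps a model name to its update function; the engine,
        # not the models, decides the order of updates per iteration.
        for name in self.update_order:
            steps[name]()
            self.log.append(name)

engine = Engine(update_order=["discriminator", "generator"])
engine.run_iteration({
    "discriminator": lambda: None,   # placeholder update steps
    "generator": lambda: None,
})
print(engine.log)  # ['discriminator', 'generator']
```

A query-based Active Learning phase would fit the same pattern by inserting an extra named step (e.g. a sample-selection step) into the schedule rather than hard-coding it into the training loop.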
The details of this system are provided below; you can find out more in this documentation.
Module Configuration </configuration/index>
Module of Model </model/index>
Module of Dataset </dataset/index>
Module of Engine </engine/index>