

PipelineAI Quick Start (CPU + GPU)

Train and Deploy your ML and AI Models in the Following Environments:

Having Issues? Contact Us Anytime... We're Always Awake.

PipelineAI Community Events

PipelineAI Home


PipelineAI Features

Consistent, Immutable, Reproducible Model Runtimes

Consistent Model Environments

Each model is built into a separate Docker image with the appropriate Python, C++, and Java/Scala Runtime Libraries for training or prediction.

Use the same Docker image from your local laptop to production to avoid dependency surprises.
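
As an illustration of this build-once, run-anywhere idea (a hedged sketch using the Docker SDK for Python, not PipelineAI's own tooling), the image tag and port below are placeholder assumptions:

```python
# Hedged sketch: build a model image once and run it identically everywhere.
# The tag "my-model:1.0" and port 8080 are illustrative assumptions, not
# PipelineAI conventions.
import docker

client = docker.from_env()

# Build the image from a local Dockerfile that bundles the model and its runtime.
image, build_logs = client.images.build(path=".", tag="my-model:1.0")

# Run the exact same image locally; the same tag can be pushed and run in production.
container = client.containers.run(
    "my-model:1.0",
    ports={"8080/tcp": 8080},  # expose the model server port
    detach=True,
)
print(container.id)
```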

Sample Machine Learning and AI Models

Click HERE to view model samples for the following (a minimal Scikit-Learn sketch follows this list):

  • Scikit-Learn
  • TensorFlow
  • Keras
  • Spark ML (formerly called Spark MLlib)
  • XGBoost
  • PyTorch
  • Caffe/2
  • Theano
  • MXNet
  • PMML/PFA
  • Custom Java/Python/C++ Ensembles
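
The model samples live in a separate repository; as a rough, hedged illustration of what a Scikit-Learn sample involves (train a model, then serialize it so it can be packaged into a runtime image), the file name `model.pkl` below is an assumption rather than a PipelineAI convention:

```python
# Minimal Scikit-Learn sketch: train a classifier and serialize it for packaging.
# "model.pkl" is an illustrative file name, not a PipelineAI requirement.
import pickle

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

model = LogisticRegression(max_iter=1000)
model.fit(X, y)

with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# Later, the serving code inside the Docker image loads the same artifact.
with open("model.pkl", "rb") as f:
    restored = pickle.load(f)
print(restored.predict(X[:3]))
```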


Supported Model Runtimes (CPU and GPU)

  • Python (Scikit-Learn, TensorFlow, etc.)
  • Java
  • Scala
  • Spark ML
  • C++
  • Caffe2
  • Theano
  • TensorFlow Serving (see the request sketch after this list)
  • Nvidia TensorRT (TensorFlow, Caffe2)
  • MXNet
  • CNTK
  • ONNX
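
One of the listed runtimes, TensorFlow Serving, exposes a standard REST predict endpoint; the hedged sketch below shows a client-side request against it. The host, port (8501 is TensorFlow Serving's default REST port), and the model name `mnist` are assumptions:

```python
# Hedged sketch: call a TensorFlow Serving REST endpoint.
# Host, port, and the model name "mnist" are illustrative assumptions.
import requests

url = "http://localhost:8501/v1/models/mnist:predict"

# TensorFlow Serving's REST API expects a JSON body with an "instances" list.
payload = {"instances": [[0.0] * 784]}  # one flattened 28x28 image of zeros

response = requests.post(url, json=payload, timeout=10)
response.raise_for_status()

# The response contains a "predictions" list, one entry per instance.
print(response.json()["predictions"][0])
```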

Supported Streaming Engines

  • Kafka (see the sketch after this list)
  • Kinesis
  • Flink
  • Spark Streaming
  • Heron
  • Storm
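
Of the streaming engines above, Kafka is a common way to move prediction requests and results; the hedged sketch below uses the `kafka-python` client to publish and consume a prediction stream. The broker address and topic name are assumptions:

```python
# Hedged sketch: publish and consume a prediction stream with kafka-python.
# The broker "localhost:9092" and topic "predictions" are illustrative assumptions.
import json

from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("predictions", {"features": [5.1, 3.5, 1.4, 0.2], "prediction": 0})
producer.flush()

consumer = KafkaConsumer(
    "predictions",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,  # stop iterating if no new messages arrive
)
for message in consumer:
    print(message.value)
```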

Advanced PipelineAI Product Features

  • Click HERE to compare PipelineAI Products.

Drag N' Drop Model Deploy


Generate Optimized Model Versions Upon Upload

Automatic Model Optimization and Native Code Generation
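
PipelineAI performs this optimization automatically when a model is uploaded. As a hedged, stand-in illustration of the kind of post-training optimization involved (not PipelineAI's internal mechanism), the sketch below uses TensorFlow's TF-TRT converter to target Nvidia TensorRT, which appears in the runtime list above; the SavedModel paths are assumptions:

```python
# Hedged sketch: optimize a TensorFlow SavedModel with TF-TRT (Nvidia TensorRT).
# A generic illustration, not PipelineAI's internal optimization pipeline.
# Both directory paths are illustrative assumptions.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(input_saved_model_dir="saved_model")
converter.convert()                # rewrite supported subgraphs as TensorRT ops
converter.save("saved_model_trt")  # write the optimized SavedModel
```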

Distributed Model Training and Hyper-Parameter Tuning

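PipelineAI distributes training and tuning across a cluster; as a single-node, hedged illustration of the hyper-parameter search concept, here is a Scikit-Learn grid search that parallelizes across local cores. The parameter grid is an arbitrary assumption:

```python
# Hedged sketch: hyper-parameter tuning with a parallel grid search.
# Single-node illustration of the concept; the parameter grid is arbitrary.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

param_grid = {
    "C": [0.1, 1.0, 10.0],
    "gamma": ["scale", 0.01, 0.001],
}

search = GridSearchCV(SVC(), param_grid, cv=3, n_jobs=-1)  # n_jobs=-1: use all cores
search.fit(X, y)

print(search.best_params_, search.best_score_)
```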

Continuously Deploy Models to Clusters of PipelineAI Servers

PipelineAI Weavescope Kubernetes Cluster
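
As a hedged sketch of what continuously deploying a model-server image to a Kubernetes cluster can look like with the official Kubernetes Python client (not PipelineAI's own deploy flow), the names, image tag, and replica count below are illustrative assumptions:

```python
# Hedged sketch: create a Kubernetes Deployment of a model-server image using
# the official kubernetes Python client. Names, image tag, and replica count
# are illustrative assumptions, not PipelineAI's deploy mechanism.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig credentials

container = client.V1Container(
    name="model-server",
    image="my-model:1.0",
    ports=[client.V1ContainerPort(container_port=8080)],
)
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "model-server"}),
    spec=client.V1PodSpec(containers=[container]),
)
spec = client.V1DeploymentSpec(
    replicas=3,
    selector=client.V1LabelSelector(match_labels={"app": "model-server"}),
    template=template,
)
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="model-server"),
    spec=spec,
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```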

View Real-Time Prediction Stream


Compare Both Offline (Batch) and Real-Time Model Performance


Compare Response Time, Throughput, and Cost-Per-Prediction

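As a hedged, back-of-the-envelope illustration of these three metrics (not PipelineAI's measurement pipeline), the sketch below times repeated requests to a predict endpoint and derives p95 latency, throughput, and cost per prediction; the endpoint URL and hourly instance cost are assumptions:

```python
# Hedged sketch: measure response time, throughput, and cost-per-prediction
# against a predict endpoint. URL, payload, and hourly cost are assumptions.
import time

import requests

URL = "http://localhost:8080/predict"        # illustrative endpoint
PAYLOAD = {"instances": [[5.1, 3.5, 1.4, 0.2]]}
HOURLY_INSTANCE_COST = 0.90                  # assumed USD per hour for the server
N_REQUESTS = 200

latencies = []
start = time.perf_counter()
for _ in range(N_REQUESTS):
    t0 = time.perf_counter()
    requests.post(URL, json=PAYLOAD, timeout=10)
    latencies.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start

latencies.sort()
p95_latency = latencies[int(0.95 * len(latencies)) - 1]
throughput = N_REQUESTS / elapsed                          # predictions per second
cost_per_prediction = HOURLY_INSTANCE_COST / (throughput * 3600)

print(f"p95 latency: {p95_latency * 1000:.1f} ms")
print(f"throughput:  {throughput:.1f} predictions/s")
print(f"cost:        ${cost_per_prediction:.8f} per prediction")
```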

Shift Live Traffic to Maximize Revenue and Minimize Cost

PipelineAI Traffic Shifting: Multi-Armed Bandit to Maximize Revenue and Minimize Cost
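
Traffic shifting here is framed as a multi-armed bandit problem; the hedged sketch below shows a generic epsilon-greedy bandit that routes traffic toward the model variant with the best observed reward (revenue minus cost per prediction). The reward numbers are simulated assumptions, not PipelineAI's algorithm:

```python
# Hedged sketch: epsilon-greedy traffic shifting across model variants.
# A generic bandit illustration with simulated rewards, not PipelineAI's algorithm.
import random

VARIANTS = ["model-a", "model-b", "model-c"]
EPSILON = 0.1  # fraction of traffic kept for exploration

counts = {v: 0 for v in VARIANTS}
total_reward = {v: 0.0 for v in VARIANTS}

def simulated_reward(variant):
    """Stand-in for observed revenue minus serving cost per prediction."""
    true_means = {"model-a": 0.010, "model-b": 0.014, "model-c": 0.008}
    return random.gauss(true_means[variant], 0.005)

def choose_variant():
    if random.random() < EPSILON or not all(counts.values()):
        return random.choice(VARIANTS)                                    # explore
    return max(VARIANTS, key=lambda v: total_reward[v] / counts[v])       # exploit

for _ in range(10_000):
    v = choose_variant()
    counts[v] += 1
    total_reward[v] += simulated_reward(v)

for v in VARIANTS:
    print(v, counts[v], round(total_reward[v] / counts[v], 4))
```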

Continuously Fix Borderline Predictions through Crowd Sourcing

Borderline Prediction Fixing and Crowd Sourcing
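
One hedged way to surface "borderline" predictions for human review is to flag those whose top-class confidence falls below a threshold and queue them for labeling; the sketch below illustrates the idea with a Scikit-Learn classifier. The threshold and review queue are illustrative assumptions, not PipelineAI's crowd-sourcing workflow:

```python
# Hedged sketch: flag low-confidence ("borderline") predictions for human review.
# The 0.6 confidence threshold and the review queue are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

probabilities = model.predict_proba(X)
confidence = probabilities.max(axis=1)      # top-class probability per sample

CONFIDENCE_THRESHOLD = 0.6
review_queue = np.where(confidence < CONFIDENCE_THRESHOLD)[0]

print(f"{len(review_queue)} of {len(X)} predictions queued for human review")
```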
