PipelineAI Quick Start (CPU + GPU)
Train and Deploy your ML and AI Models in both CPU and GPU Environments.
Having Issues? Contact Us Anytime... We're Always Awake.
- Slack: https://joinslack.pipeline.ai
- Email: firstname.lastname@example.org
- Web: https://support.pipeline.ai
- YouTube: https://youtube.pipeline.ai
- Slideshare: https://slideshare.pipeline.ai
- Workshop: https://workshop.pipeline.ai
- Troubleshooting Guide
PipelineAI Community Events
- PipelineAI Monthly Webinar (TensorFlow + Spark + GPUs + TPUs)
- Advanced Spark and TensorFlow Meetup (Global)
Consistent, Immutable, Reproducible Model Runtimes
Each model is built into a separate Docker image that bundles the exact Python, C++, and Java/Scala runtime libraries required for training or prediction.
Running the same Docker image from your local laptop all the way to production avoids dependency surprises.
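As a minimal sketch of what such a per-model image might look like (the base image name, `requirements.txt`, and `predict.py` below are illustrative assumptions, not PipelineAI's actual build output — the PipelineAI CLI generates these images for you):

```dockerfile
# Illustrative per-model image: pin the base image and all runtime
# libraries so training and serving use identical dependencies.
FROM python:3.10-slim

WORKDIR /opt/model

# requirements.txt pins exact library versions (e.g. scikit-learn==1.3.2)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# The model artifact and its prediction entry point travel together.
COPY model.pkl predict.py ./

CMD ["python", "predict.py"]
```

Because the artifact and its dependencies are baked into one immutable image, the laptop, CI, and production all run byte-identical runtimes.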
Sample Machine Learning and AI Models
Click HERE to view model samples for the following:
- Spark ML (the DataFrame-based successor to Spark MLlib)
- Custom Java/Python/C++ Ensembles
Supported Model Runtimes (CPU and GPU)
- Python (scikit-learn, TensorFlow, etc.)
- Spark ML
- TensorFlow Serving
- Nvidia TensorRT (TensorFlow, Caffe2)
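At its core, a Python model runtime amounts to a serialized model artifact plus a predict entry point. The toy `ThresholdModel` below is a stand-in for illustration only (a fitted scikit-learn or TensorFlow model would fill the same role in a real PipelineAI deployment):

```python
import pickle

# A stand-in model: any picklable object with a predict() method works here.
class ThresholdModel:
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, values):
        # Classify each value as 1 if it meets the threshold, else 0.
        return [1 if v >= self.threshold else 0 for v in values]

# "Training" side: build the model and freeze it into an immutable artifact.
model = ThresholdModel(threshold=0.5)
artifact = pickle.dumps(model)

# "Serving" side: the prediction runtime loads the exact same artifact.
restored = pickle.loads(artifact)
print(restored.predict([0.2, 0.7, 0.5]))  # -> [0, 1, 1]
```

Shipping the artifact and its runtime libraries together in one Docker image is what keeps this load-and-predict step reproducible across environments.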
Supported Streaming Engines
- Spark Streaming
Advanced PipelineAI Product Features
- Click HERE to compare PipelineAI Products.