SKIL: Deep learning model lifecycle management for humans


R client for the Skymind Intelligence Layer (SKIL)

SKIL is an end-to-end deep learning platform. Think of it as a unified front-end for your deep learning training and deployment process. SKIL supports many popular deep learning libraries, such as Keras, TensorFlow and Deeplearning4J. SKIL decreases time-to-value of your AI applications by closing the common gap between experiments and production - bringing models to production fast and keeping them there. SKIL effectively acts as middleware for your AI applications and solves a range of common production problems, namely:

  • Install and run anywhere: SKIL integrates with your current cloud provider, custom on-premise solutions and hybrid architectures.
  • Easy distributed training on Spark: Bring your Keras or TensorFlow model and train it on Apache Spark without any overhead. We support a wide variety of distributed storage and compute resources and can handle all components of your production stack.
  • Seamless deployment process: With SKIL, your company's machine learning product lifecycle can be as quick as your data scientists' experimentation cycle. If you set up a SKIL experiment, model deployment is already accounted for: integrating deep learning models into a production-grade model server is simple, batteries included.
  • Built-in reproducibility and compliance: What model and data did you use? Which pre-processing steps were done? What library versions were used? Which hardware was utilized? SKIL keeps track of all this information for you.
  • Model organisation and versioning: SKIL makes it easy to keep your various experiments organised, without interfering with your workflow. Your models are versioned and can be updated at any point.
  • Keep working as you're used to: SKIL does not impose an entirely new workflow on you; just stay right where you are. Happy with your experiment and want to deploy it? Tell SKIL to deploy a service. Your prototype works and you want to scale out training with Spark? Tell SKIL to run a training job. You have a great model, but massive amounts of data for inference that your model can't process quickly enough? Tell SKIL to run an inference job on Spark.

Installation

To install SKIL itself, head over to skymind.ai. The easiest way to get started is probably with Docker:

docker pull skymindops/skil-ce
docker run --rm -it -p 9008:9008 skymindops/skil-ce bash /start-skil.sh

To use SKIL's R client, you have to install our Python client from PyPI first:

pip install skil

Next, you can install and load the R client from GitHub like this:

devtools::install_github("SkymindIO/skilr")
library("skilr")

Getting started

In this section you're going to deploy a state-of-the-art object detection application. As a first step, download the TensorFlow model we pre-trained for you and store it locally as yolo.pb. As the name suggests, this is a You Only Look Once (YOLO) model. If you haven't done so already, install and start SKIL as described in the previous section.

For this quick example you only need three (self-explanatory) concepts from SKIL. You first create a SKIL Model from the model file yolo.pb you just downloaded. This Model becomes a SKIL Service once you deploy it to a SKIL Deployment. That's all there is to it:

library("skilr")

model <- Model('yolo.pb', model_id='yolo_42', name='yolo_model')
deployment <- Deployment()
service <- model$deploy(deployment, input_names=c('input'), output_names=c('output'))

Your YOLO object detection app is now live! You can send images to it using the detect_objects method of your service. We use OpenCV, imported as cv (through Python's cv2 library, using reticulate for interfacing with R), to load, annotate and write images. The full example (including model and images) is located here for your convenience.

library(reticulate)

# Python's OpenCV (cv2), accessed from R via reticulate, handles image I/O
cv <- import("cv2")
# The YOLO annotation helper lives in the Python skil package
yolo <- import("skil.utils.yolo")

image <- cv$imread("say_yolo_again.jpg")
detection <- service$detect_objects(image)
image <- yolo$annotate_image(image, detection)
cv$imwrite('annotated.jpg', image)

Next, have a look at the SKIL UI at http://localhost:9008 to see how everything you just did is automatically tracked by SKIL. The UI is mostly self-explanatory and you shouldn't have much trouble navigating it. After logging in (use "admin" as user name and password), you will see that SKIL has created a workspace for you in the "Workspaces" tab. If you click on that workspace, you'll find a so-called experiment, which contains the YOLO model you just loaded into SKIL.

Each SKIL experiment comes with a notebook that you can work in. In fact, if you click on "Open notebook" next to the experiment, you will be redirected to a live notebook that contains another interesting example, showing how to deploy Keras and DL4J models (the former in Python, the latter in Scala, all in the same notebook). If you like notebooks and a managed environment that provides everything you need out of the box, you can use SKIL's notebooks for all your workloads. For instance, you could copy and paste the code for the above YOLO app into a SKIL notebook and it will work the same way!
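
If you'd rather set up the workspace and experiment explicitly from R instead of relying on the defaults SKIL creates for you, a minimal sketch is below. It assumes the R client mirrors the Python client's Skil, WorkSpace and Experiment constructors; the names shown here (Skil, WorkSpace, Experiment and the experiment argument to Model) are assumptions, so check the package documentation for the exact signatures.

library("skilr")

# Hypothetical sketch, assuming the R client exposes the same constructors as the Python client
skil_server <- Skil()                  # connection to the running SKIL server (assumed)
work_space <- WorkSpace(skil_server)   # explicit workspace (assumed)
experiment <- Experiment(work_space)   # explicit experiment inside that workspace (assumed)

# Attach the model to this experiment rather than letting SKIL pick defaults
model <- Model('yolo.pb', model_id='yolo_42', name='yolo_model', experiment=experiment)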

In the "Deployments" tab of the UI, you can see your deployed YOLO service, which consists of just one model, and you'll see that it is "Fully deployed". If you click on the deployment you'll see more details of it, for instance you can explicitly check the endpoints your service is available at. You could, among other things, also re-import the model again through the UI (in case you have a better version or needed to make other changes).

This completes your very first SKIL example.
