
Alteryx Promote Python Client

Library for deploying Python models to Alteryx Promote.


Examples

Hello World - a very simple model

Hello Vectorized - a vectorized version of the 'Hello World' model

Iris Classifier - use a Support Vector Classifier to predict flower types

Article Summarizer - send a url with a news article and get a summary

DB Lookup - lookup a value in a database

Ensemble Model - build and deploy an ensemble model

Naivebayes Pomegranate - a Naive Bayes model built using the pomegranate library

Weather Model - send Lat/Lon data and get real-time weather data and classify temperature



Installation

To install the promote library, execute the following command from a terminal session:

pip install promote

Please refer to the promote-r-client library for instructions on installing the R Client.


Please refer to the installation guide for instructions on installing the Promote App.


Model Directory Structure

example-model/
├── main.py
├── requirements.txt
├── promote.sh (optional)
├── helpers (optional)
│   ├── __init__.py
│   └── helper_funs.py
└── objects (optional)
    └── my_model.p

  • main.py: our primary model deployment script

  • requirements.txt: this file tells Promote which libraries to install as dependencies for the model

  • promote.sh: this file is executed before your model is built. It can be used to install low-level system packages such as Linux packages

  • helpers: use this directory to store helper scripts that can be imported by the main deployment script. This is helpful for keeping your deployment script code clean.

  • objects: use this directory to store models, data, and other artifacts that must be loaded into memory when the model is deployed


Initial Setup

Load the promote library that was previously installed:

import promote

# import pickle to deserialize your model
import pickle

# import json to parse your test data
import json

Import your saved model object:

# Previously saved model 'pickle.dump( my_model, open( "./objects/my_model.p", "wb" ) )'
my_model = pickle.load( open( "./objects/my_model.p", "rb" ) )

Model Function

The model function is used to define the API endpoint for a model and is executed each time a model is called. This is the core of the API endpoint.




  • data(list or dict): the parsed JSON sent to the deployed model


def modelFunction(data):
    # minimal example body: echo the parsed input back to the caller
    return {'response': data}


It is possible to decorate your model function with the promote.validate_json decorator. This validates that the input data to the model meets specific predefined criteria. Failure to meet these criteria will throw an error.




  • aSchema(Schema): a valid schema.Schema object


from schema import Schema, And

@promote.validate_json(Schema({
    'X1': And(list, lambda l: all(isinstance(t, int) and t > 0 for t in l)),
    'X2': And(list, lambda l: all(isinstance(t, int) and t > 0 for t in l))
}))
def modelFunction(data):
    return {'response': data}

Test Data

It is good practice to test the model function as part of the deployment script to make sure it successfully produces an output. Once deployed, the data passed to the model function will always be a Python dict or list: the incoming JSON string is parsed with the loads() method from the json library.


testdata = '{"X1":[1,2,3],"X2":[4,5,6]}'
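The deployment script can mimic that parsing step with json.loads and call the model function directly. A minimal sketch (the summing modelFunction here is a hypothetical stand-in, not the deployed model):

```python
import json

def modelFunction(data):
    # hypothetical stand-in model: sum each input vector
    return {"sums": {k: sum(v) for k, v in data.items()}}

testdata = '{"X1":[1,2,3],"X2":[4,5,6]}'

# parse the JSON string exactly as Promote will before calling the model
result = modelFunction(json.loads(testdata))
print(result)  # {'sums': {'X1': 6, 'X2': 15}}
```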


To deploy models, you'll need to add your username, API key, and URL of Promote to a new instance of the class Promote.


promote.Promote(username, apikey, url)


  • username(string): Your Promote username
  • apikey(string): Your Promote APIKEY
  • url(string): URL of your promote server


p = promote.Promote("username", "apikey", "https://promote.example.com/")


You can store custom metadata about a model version when it is deployed to the Promote servers by setting keys on p.metadata (limited to 6 key-value pairs).


  • key(string): the name of your metadata (limit 20 characters)
  • value: a value for your metadata (will be converted to string and limited to 50 characters)

Example:

p.metadata.one = 1
p.metadata["two"] = 2
p.metadata['three'] = "this is the third item"
p.metadata.array = [0, 1, 'two']
p.metadata.dict = {'a': 1, 'b': 'two'}


The deploy function captures the model function, any objects in the helpers and objects directories, the requirements.txt file, and the promote.sh file, and sends them in a bundle to the Promote servers.


p.deploy(modelName, functionToDeploy, testdata, confirm=False, dry_run=False, verbose=1)


  • modelName(string): Name of the model you're deploying (this will be the name of the endpoint for the model as well)
  • functionToDeploy(function): Function you'd like to deploy to Promote
  • testdata(list or dict): Sample data that will be used to validate your model can successfully execute
  • confirm(boolean, optional): If True, deployment will pause before uploading to the server and validate that you actually want to deploy
  • dry_run(boolean, optional): If True, deployment will exit prior to uploading to the server and will instead return the bundle
  • verbose(int, optional): Controls the amount of logs displayed. Higher values indicate more will be shown


p.deploy("MyFirstPythonModel", modelFunction, testdata, confirm=False, dry_run=False, verbose=0)


The Promote.predict() method sends data to a deployed model via REST API request and returns a prediction.


p.predict(modelName, data, username=None)


  • modelName(string): Name of the model you'd like to query
  • data(list or dict): Data you'd like to send to the model to be scored
  • username(string, optional): Username of the model you'd like to query. This defaults to the username set in the Promote constructor; override it if you'd like to query another person's model or a production model.


p.predict("MyFirstPythonModel", json.loads(testdata), username=None)


The requirements.txt file specifies the libraries that the Promote app should install when deploying the model. The promote library should always be listed, along with any other model dependencies. You can also pin the version of each library to install.
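For example, a requirements.txt for a pandas-based model might look like the following (the version pin is illustrative):

```
promote
pandas==0.23.4
```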



You can also install dependencies hosted on public or private git repositories using a well-formed https link (SSH is currently not supported). If the repository is private, you will first need to create a personal access token. Refer to the documentation for your hosting provider for best practices in structuring this link. Generally, a link of the following form will work:

git+https://<personal_access_token>@github.com/<username>/<repository>.git


You can also target a specific branch, tag, or commit SHA by appending @<ref> to the link. See the pip docs for more on how to structure these links:

git+https://<personal_access_token>@github.com/<username>/<repository>.git@<branch, tag, or SHA>


The promote.sh file can be included in your model directory. It is executed before your model is built and can be used to install low-level system packages, such as Linux packages, and other dependencies.


# Install the Microsoft SQL Server RHEL7 ODBC driver
curl https://packages.microsoft.com/config/rhel/7/prod.repo > /etc/yum.repos.d/mssql-release.repo

yum remove unixODBC-utf16 unixODBC-utf16-devel  # to avoid conflicts
ACCEPT_EULA=Y yum install msodbcsql17
# optional: for bcp and sqlcmd
ACCEPT_EULA=Y yum install mssql-tools
echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile
echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc
source ~/.bashrc

helpers Directory

Users can also add a helpers directory to the root directory of their project to store helper scripts that are used by the deployment script. Please refer to the Model Directory Structure above for an example. Adding an __init__.py file to the helpers directory makes the Python files in the directory discoverable via the import statement.


from helpers import helper_funs
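As a sketch, a helpers/helper_funs.py module might contain something like the following (the clean function is hypothetical, not part of the promote library):

```python
# helpers/helper_funs.py (hypothetical example)
def clean(values):
    """Keep only the positive numeric entries of a list."""
    return [v for v in values if v > 0]
```

The deployment script would then call helper_funs.clean(...) after the import above.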

objects Directory

Users can also add an objects directory to the root directory of their project to store pickled models, data, and other artifacts used by the deployment script. Please refer to the Model Directory Structure above for an example. The objects directory is a great place to put pretrained models and other model dependencies. It is a best practice to train models outside of the deployment script and to save the trained model to the objects directory. This prevents the Promote app from attempting to retrain the model on each redeploy.


my_model = pickle.load( open( "./objects/my_model.p", "rb" ) )
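The save side of that round trip, run once outside the deployment script, can be sketched like this (the dict stands in for a trained model object):

```python
import os
import pickle

# stand-in for a trained model object
my_model = {"weights": [0.1, 0.2, 0.3]}

# train-time step: serialize the model into the objects directory
os.makedirs("./objects", exist_ok=True)
with open("./objects/my_model.p", "wb") as f:
    pickle.dump(my_model, f)

# deploy-time step: the deployment script loads it back into memory
with open("./objects/my_model.p", "rb") as f:
    loaded = pickle.load(f)

print(loaded == my_model)  # True
```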


Currently, the only way to deploy a Python model is to execute the deployment script from a command-line terminal. To do so, open a terminal window, navigate to the root project directory, and run the script, for example:

python main.py