5.4 Serving the churn model with Flask

Slides

Notes

In this session, we talked about adding prediction functionality to our churn web service and making it usable in a development environment.

  • To make the web service predict the churn value for each customer, we need to combine the Flask code from session 3 with the model code from the previous chapters. Below we can see how the prediction code works.
  • In order to predict, we first need to load the previously saved model and use it in a prediction function behind a dedicated route.
    • To load the previously saved model, we use the code below:
    • import pickle
      
      with open('churn-model.bin', 'rb') as f_in:  ## load the DictVectorizer and the model saved in the previous session
        dv, model = pickle.load(f_in)
      
    • As before, to predict the churn probability for a single customer we need a function like the one below:
    • def predict_single(customer, dv, model):
        X = dv.transform([customer])  ## apply the one-hot encoding to the customer's features
        y_pred = model.predict_proba(X)[:, 1]  ## probability of churning
        return y_pred[0]
      
    • Then, at last, we write the function that creates the web service route:
    • @app.route('/predict', methods=['POST'])  ## to send the customer information, the client has to POST its data
      def predict():
          customer = request.get_json()  ## web services work best with JSON, so we read the customer data from the JSON body of the request

          prediction = predict_single(customer, dv, model)
          churn = prediction >= 0.5

          result = {
              'churn_probability': float(prediction),  ## convert the NumPy float into a Python float so Flask can serialize it
              'churn': bool(churn),  ## same as above: convert the NumPy bool into a Python bool
          }

          return jsonify(result)  ## send the result back to the user in JSON format
      
    • The whole code above is available at this link: churn_serving.py. A minimal sketch of how these pieces fit together in a single file is also shown after this list.
    • At last, run your code. Since the /predict route only accepts POST requests, we can't test it with a simple request in the web browser. Instead, we can run the code below to post a new customer's data and see the response:
    • import requests  ## we use the requests library to send a POST request

      ## information about a new customer
      customer = {
        'customerid': '8879-zkjof',
        'gender': 'female',
        'seniorcitizen': 0,
        'partner': 'no',
        'dependents': 'no',
        'tenure': 41,
        'phoneservice': 'yes',
        'multiplelines': 'no',
        'internetservice': 'dsl',
        'onlinesecurity': 'yes',
        'onlinebackup': 'no',
        'deviceprotection': 'yes',
        'techsupport': 'yes',
        'streamingtv': 'yes',
        'streamingmovies': 'yes',
        'contract': 'one_year',
        'paperlessbilling': 'yes',
        'paymentmethod': 'bank_transfer_(automatic)',
        'monthlycharges': 79.85,
        'totalcharges': 3320.75
      }

      url = 'http://localhost:9696/predict'  ## the route we created for prediction
      response = requests.post(url, json=customer)  ## post the customer information in JSON format
      result = response.json()  ## parse the server's JSON response
      print(result)
      
  • Until here we saw how to build a simple web server that predicts the churn value for every customer. When you run your app, you will see a warning that the built-in Flask server is not a production WSGI server and is not suitable for production environments. There are several ways to fix this and run the app as a production server.
    • One way to create a production WSGI server is to use gunicorn. To install it, use the command pip install gunicorn, and to run the server, use the command gunicorn --bind 0.0.0.0:9696 churn:app. Note that in churn:app, churn is the name of the file that contains the code app = Flask('churn') (for example churn.py) and app is the name of the Flask application variable inside it; you may need to change churn to whatever you named your Flask app file.
    • Windows users may not be able to use gunicorn, because Windows does not support some of the library's dependencies. To run a production server on a Windows machine there is an alternative library, waitress; to install it, use the command pip install waitress.
    • To run the waitress WSGI server, use the command waitress-serve --listen=0.0.0.0:9696 churn:app. (Waitress can also be started from Python; see the sketch after this list.)
    • To test it, you can run the requests code above again; the result is the same.
  • So far you have been able to make a production server that predicts the churn value for new customers. In the next session we will see how to solve library version conflicts across machines and manage dependencies for production environments.
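
To see how all of these snippets fit together, below is a minimal sketch of the whole service in a single file. It is only a sketch: it follows the names used in these notes (a file churn.py containing app = Flask('churn') and the model saved as churn-model.bin), and the actual churn_serving.py in the repository may be organized slightly differently.

```python
# churn.py -- minimal sketch of the churn prediction service in one file
import pickle

from flask import Flask, request, jsonify

# load the previously saved DictVectorizer and model
with open('churn-model.bin', 'rb') as f_in:
    dv, model = pickle.load(f_in)

app = Flask('churn')


def predict_single(customer, dv, model):
    X = dv.transform([customer])           # apply the one-hot encoding to the customer's features
    y_pred = model.predict_proba(X)[:, 1]  # probability of churning
    return y_pred[0]


@app.route('/predict', methods=['POST'])
def predict():
    customer = request.get_json()  # read the customer data from the JSON body of the request

    prediction = predict_single(customer, dv, model)
    churn = prediction >= 0.5

    result = {
        'churn_probability': float(prediction),  # convert NumPy types into plain Python types
        'churn': bool(churn),
    }

    return jsonify(result)  # send the result back as JSON


if __name__ == '__main__':
    # development server only; use gunicorn or waitress for production
    app.run(debug=True, host='0.0.0.0', port=9696)
```

With a file like this you can start the development server with python churn.py, or the production servers described above with gunicorn --bind 0.0.0.0:9696 churn:app or waitress-serve --listen=0.0.0.0:9696 churn:app.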
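
As an alternative to the waitress-serve command, waitress can also be started from a small Python script. This is only a sketch; the module name churn and the address 0.0.0.0:9696 are taken from the examples above.

```python
# serve.py -- start the churn service with waitress from Python instead of the CLI
from waitress import serve

from churn import app  # import the Flask app defined in churn.py

serve(app, host='0.0.0.0', port=9696)  # listen on the same address as the examples above
```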

Add notes from the video (PRs are welcome)

⚠️ The notes are written by the community.
If you see an error here, please create a PR with a fix.

Navigation