
Sentim Python API

An emotion recognition API that tells you the emotion of a text, not just its connotation.

This Python package is automatically generated by the OpenAPI Generator project:

  • API version: 1.0.0
  • Package version: 1.0.0
  • Build package: org.openapitools.codegen.languages.PythonClientCodegen


Requirements

Python 2.7 and 3.5+

Installation & Usage

pip install

If the Python package is hosted on a repository, you can install it directly with:

pip install git+

(you may need to run pip with root permission: sudo pip install git+)

Then import the package:

import sentim


Install via Setuptools.

python setup.py install --user

(or sudo python setup.py install to install the package for all users)

Then import the package:

import sentim

Getting Started

Please follow the installation procedure first. The following basic example will get you started; remember to change path_to_credentials to the path of your actual credentials file.

View the documentation for more information.

import time
import sentim
from sentim.rest import ApiException
from pprint import pprint

configuration = sentim.Configuration()
# Create an instance of the API class to request an API token
api_instance = sentim.DefaultApi(sentim.ApiClient(configuration))

path_to_credentials = "/path/to/credentials"
try:
  with open(path_to_credentials, "r") as f:
    client_id, client_secret = f.readlines()[1].split(",")

  # Configure OAuth2 access token for authorization: sentim_auth
  configuration.access_token = api_instance.get_access_token(client_id, client_secret)

  # Override the instance with the access token now that we've authenticated
  api_instance = sentim.DefaultApi(sentim.ApiClient(configuration))

  # Detect the emotion of a list of strings
  textlist = ['string 1', 'string 2', 'string 3']  # list[str] | List of text to classify
  lang = 'Eng'  # str | Language of the input text.

  batch_text = sentim.BatchText(textlist, lang)
  api_response = api_instance.detect_batch_emotion(batch_text=batch_text)
  pprint(api_response)
  # Note: the access token will eventually expire, so long-running programs should handle reauthentication
except ApiException as e:
  print("Exception when calling DefaultApi->detect_batch_emotion: %s\n" % e)
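The example above reads the client ID and secret from the second line of the credentials file, split on a comma; that layout is inferred from the f.readlines()[1].split(",") call, so check it against your actual credentials file. A minimal self-contained sketch of that parsing, using a hypothetical in-memory file:

```python
import io

# Hypothetical credentials file: a header line, then "client_id,client_secret"
# on the second line -- the layout implied by f.readlines()[1].split(",") above.
fake_credentials = io.StringIO("id,secret\nmy-client-id,my-client-secret\n")

# .strip() removes the trailing newline that readlines() keeps, which would
# otherwise end up inside the secret.
client_id, client_secret = fake_credentials.readlines()[1].strip().split(",")
print(client_id)      # my-client-id
print(client_secret)  # my-client-secret
```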

API Endpoints

All URIs are relative to

Class       Method                       HTTP request                        Description
DefaultApi  detect_batch_emotion         POST /emotion/batch                 Detect the emotion of a list of strings
DefaultApi  detect_emotion               POST /emotion/single                Detect emotion of a conversation
DefaultApi  detect_emotion_conversation  POST /emotion/conversation          Detect the emotion of every user message in a conversation
DefaultApi  get_access_token             POST /token                         OAuth 2.0 authentication handler
DefaultApi  score_chatbot_conversation   POST /chatbot_effectiveness/batch   Score the effectiveness of every chatbot message in a conversation
DefaultApi  score_chatbot_effect         POST /chatbot_effectiveness/single  Score the effectiveness of the last chatbot message in a conversation


All documentation is available at


Checking For And Reauthenticating

By default, access tokens are only valid for 24 hours (though that may change), so an error is thrown if you try to use an access token that has expired. The error is the same one you would get for an access token that never existed: Unauthorized, "Invalid authorization header", status code 401.

It's simple to check for this and reauthenticate in long-running programs:

try:
  # try to use an access token that has expired, e.g.
  invalid_out = api_instance.detect_batch_emotion(batch_text=batch_text)
except ApiException as e:
  # if unauthorized error, reauthenticate; otherwise re-raise the original error
  if e.status == 401:
    # use client_id and client_secret from your credentials file
    configuration.access_token = api_instance.get_access_token(client_id, client_secret)
    api_instance = sentim.DefaultApi(sentim.ApiClient(configuration))
  else:
    raise
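The same check-and-retry pattern can be wrapped in a small helper. The sketch below is plain Python with a stand-in exception class so it runs without the generated package; it assumes only that the real ApiException carries a .status attribute, as in the snippet above. with_reauth and the stand-in class are hypothetical names, not part of the generated client:

```python
class ApiException(Exception):
    # Stand-in for the generated client's exception; assumes a .status attribute.
    def __init__(self, status):
        super().__init__("HTTP status %d" % status)
        self.status = status

def with_reauth(call, reauthenticate):
    """Run call(); on a 401, run reauthenticate() once and retry.
    Any other error (or a second 401) propagates to the caller."""
    try:
        return call()
    except ApiException as e:
        if e.status != 401:
            raise
        reauthenticate()
        return call()

# Tiny demo: the first call fails with 401, the retried call succeeds.
state = {"authed": False}

def fake_detect_batch_emotion():
    if not state["authed"]:
        raise ApiException(401)
    return "batch result"

result = with_reauth(fake_detect_batch_emotion, lambda: state.update(authed=True))
print(result)  # batch result
```

In real code, call() would be a lambda wrapping the sentim API call and reauthenticate() would refresh configuration.access_token as shown above.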

Understanding Conversations Greater Than Max Input Size

One issue you might run into is a conversation that goes on for longer than the maximum input size, where you want to avoid processing the same part of the conversation multiple times. This is what the ignore_first parameter on the conversation object is for; with it, a conversation of any size can be processed with a fairly simple implementation:

def process_conversation(api_instance, all_messages, lang, is_emotion=False):
  """A processor to score or get the emotion for all messages in long conversations.

  Note: this method assumes no errors occur during its execution
  (e.g. no input data errors or server 500 errors).
  It should handle errors if used in production.

  Args:
    api_instance: pre-authenticated sentim api instance
    all_messages: the really long list of messages in our conversation
    lang: language used, e.g. "eng"
    is_emotion: whether to call detect_emotion_conversation or score_chatbot_conversation.
      Default: False (i.e. call score_chatbot_conversation)

  Returns:
    The object the desired function would normally return,
    i.e. BatchEmotionResponse or ConversationResponse, but with the data from the whole conversation.
  """
  def add_to_response(response, error_list, result_list, base_index):
    """Fix the indices of the current data and add the data to our response object.

    Args:
      response: the out_response object
      error_list: the errors to add to our response
      result_list: the results to add to our response
      base_index: where in the conversation we started processing this data
    """
    for error_item in error_list:
      error_item.index = error_item.index + base_index
    response.error_list.extend(error_list)

    for item in result_list:
      item.index = item.index + base_index
    response.result_list.extend(result_list)

  if is_emotion:
    conversation_fn = api_instance.detect_emotion_conversation
    out_response = sentim.BatchEmotionResponse([], [])
  else:
    conversation_fn = api_instance.score_chatbot_conversation
    out_response = sentim.ConversationResponse([], [])

  max_input_size = 25  # current maximum size of conversation to send to the api
  ignore_first = False  # initially we want to process all parts of the conversation

  # TODO handle error responses during any of these calls
  # very short conversations would skip the loop below, so send them in a single call
  if len(all_messages) < 5:
    conv = sentim.Conversation(all_messages, lang=lang, ignore_first=ignore_first)
    return conversation_fn(conv)

  messages_counter = 0
  # plus 3 because we don't want to send an extra input call if we have already processed all of the data
  while messages_counter + 3 < len(all_messages):
    conv = sentim.Conversation(all_messages[messages_counter:messages_counter + max_input_size],
      lang=lang, ignore_first=ignore_first)
    api_response = conversation_fn(conv)

    # no longer want all the messages, instead ignore the ones we have already processed
    # Note: for detect_emotion_conversation this will ignore the first two whereas
    # for score_chatbot_conversation this will only ignore the first one - this is
    # because score_chatbot_conversation scores the 2nd and 4th messages (indices 1 and 3)
    # whereas detect_emotion_conversation gets the emotions for the
    # 1st, 3rd, and 5th messages (indices 0, 2, and 4)
    ignore_first = True

    # add conversation data to our response object
    add_to_response(out_response, api_response.error_list, api_response.result_list, messages_counter)

    # increment the counter (22 = max_input_size minus 3 messages of context)
    # so that we include context for the next calls
    messages_counter += 22

  return out_response
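To see how the 25-message window and 22-message stride above cover every message while re-sending three already-processed messages as context, here is a small self-contained sketch of just the slicing arithmetic (no API calls; conversation_windows is a hypothetical helper name):

```python
def conversation_windows(n_messages, max_input_size=25, context_overlap=3):
    """Return the (start, end) slices process_conversation would send,
    where consecutive windows share `context_overlap` messages of context."""
    step = max_input_size - context_overlap  # the 22-message stride used above
    windows = []
    start = 0
    while start + context_overlap < n_messages:
        windows.append((start, min(start + max_input_size, n_messages)))
        start += step
    return windows

# A 60-message conversation takes three calls; consecutive windows overlap by
# three messages so later calls keep the conversational context.
print(conversation_windows(60))  # [(0, 25), (22, 47), (44, 60)]
```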



License

Apache 2.0

