
Emotion aware conversational interface - Text to Color

This project was supported by Deep Learning Camp Jeju 2018, organized by the TensorFlow Korea User Group.

This is a text-to-color demo.
To give conversational interfaces more of the information available in face-to-face communication,
it recognizes the user's emotion from text and displays it as a color.

I ran this web page on Google Cloud Platform during the final presentation of Jeju DL Camp 2018 [presentation slide] and let the audience access the demo page from their own devices (laptop, smartphone, etc.). The website has since been shut down, but you can still run it locally with this code and access the demo page.

Code Overview

  • deepmoji/ contains the underlying code for using the DeepMoji model.
  • models/ contains the pretrained model and vocabulary.
  • templates/ contains the HTML files for the Text to Color demo web page.
  • the main file that runs the Text to Color demo web page.


Requirements

  • Python 3.x
  • Emoji 0.5
  • Flask 0.12
  • Requests 2.14.2
  • H5py 2.7.0
  • Text-unidecode 1.2
  • Keras 2.1.2

I ran this code with:

  • Tensorflow (cpu-only) 1.8.0
  • Tensorflow-gpu 1.4.0 & CUDA Toolkit 8.0 & CuDNN v6.0

How to run

  1. Clone this repository.
  2. Run the main file.
  3. When you see the message "* Running on http://localhost:5000/ (Press CTRL+C to quit)", open http://localhost:5000 in your browser.
  4. Enter a sentence and test it.

How it works

The text is classified into emojis (used here as emotion labels), and the emojis are mapped to colors.

Text to Emoji

I use the DeepMoji model from the MIT Media Lab as the emotion classifier.
It was trained on 1,246 million tweets, each containing one of 64 common emojis.

The model has an embedding layer that projects each word into a vector space
(a tanh activation constrains each embedding dimension to the range -1 to 1),
two bidirectional LSTM layers that capture the context of each word,
and an attention layer that lets the model decide the importance of each word for the prediction.
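The attention step above can be sketched in plain NumPy. This is a minimal illustration of attention pooling (score each word, softmax the scores, take the weighted sum), not DeepMoji's exact implementation; the score vector `w` and the toy hidden states are made up for the example.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(hidden_states, w):
    """Weight each word's hidden state by a learned importance score,
    then sum the weighted states into one sentence representation."""
    scores = hidden_states @ w        # one scalar score per word
    weights = softmax(scores)         # normalize into a distribution over words
    return weights @ hidden_states, weights

# toy example: a 4-word sentence with hidden size 3
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 3))           # stand-in for the BiLSTM outputs
w = rng.normal(size=3)                # stand-in for the learned attention vector
sentence_vec, attn = attention_pool(h, w)
print(sentence_vec.shape)             # one vector for the whole sentence
print(attn)                           # per-word importance, sums to 1
```

The attention weights are what make the model interpretable: words with high weight are the ones driving the emoji prediction.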

Emoji to Color

The color code I use is rgba (the a channel defines the opacity).

I map colors (rgb) based on the dendrogram, which shows how the model learned to group emojis by emotional content.
The y-axis is the distance on the correlation matrix of the model's predictions, measured using average linkage.
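The grouping described above can be reproduced on a toy scale with SciPy's hierarchical clustering. The 4×4 correlation matrix below is invented for illustration (the real one is 64×64, over the model's emoji predictions); only the method, average linkage on a correlation-derived distance, matches the text.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# toy correlation matrix for 4 "emojis" (stand-in for the model's 64)
corr = np.array([
    [1.0, 0.9, 0.1, 0.2],
    [0.9, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.8],
    [0.2, 0.1, 0.8, 1.0],
])
dist = 1.0 - corr                     # turn correlation into a distance
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist), method="average")  # average linkage, as in the dendrogram
groups = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into 2 groups
print(groups)                         # the two highly correlated pairs cluster together
```

Emojis that land in the same branch of the dendrogram get assigned similar rgb colors.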

The output of the model is a probability for each of the 64 emojis.
I take the top 3 probabilities and normalize them to define the opacity of three color layers.
These three layers are overlapped to determine the color of the screen.
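The top-3 blending can be sketched as follows. The palette, the 5-class toy distribution, and the back-to-front alpha compositing over a black background are assumptions for illustration; the repo's actual emoji-to-rgb mapping and layering may differ.

```python
import numpy as np

def blend_top3(probs, palette):
    """Pick the 3 most probable emojis, renormalize their probabilities
    into per-layer opacities, and alpha-composite the layers' rgb colors."""
    top3 = np.argsort(probs)[-3:][::-1]          # indices of the 3 best emojis
    alphas = probs[top3] / probs[top3].sum()     # normalized probability -> opacity
    screen = np.zeros(3)                         # assumed black background
    for idx, a in zip(top3, alphas):
        # standard "source over" compositing with opacity a
        screen = a * np.asarray(palette[idx], dtype=float) + (1 - a) * screen
    return screen.astype(int)

# toy example: 5 emoji classes with made-up rgb colors
palette = {0: (255, 0, 0), 1: (0, 255, 0), 2: (0, 0, 255),
           3: (255, 255, 0), 4: (128, 0, 128)}
probs = np.array([0.05, 0.40, 0.25, 0.20, 0.10])
print(blend_top3(probs, palette))                # final screen color as rgb
```

Because the opacities are renormalized over only the top 3, a confident prediction dominates the screen color, while an ambiguous one produces a visible mix.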


Citation

@inproceedings{felbo2017,
  title={Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm},
  author={Felbo, Bjarke and Mislove, Alan and S{\o}gaard, Anders and Rahwan, Iyad and Lehmann, Sune},
  booktitle={Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year={2017}
}