Emotion aware conversational interface - Text to Color

This work was supported by Deep Learning Camp Jeju 2018, organized by the TensorFlow Korea User Group.

This is a text-to-color demo.
To make the conversational interface feel more face-to-face, it is designed to recognize the user's emotion from text and display it as color.

Code Overview

  • deepmoji/ contains the underlying code for the DeepMoji model.
  • models/ contains the pretrained model and vocabulary.
  • templates/ contains the HTML files for the Text to Color demo web page.
  • app.py is the main file that runs the Text to Color demo web page.

Dependencies

  • Python 3.x
  • Emoji 0.5
  • Flask 0.12
  • Requests 2.14.2
  • H5py 2.7.0
  • Text-unidecode 1.2
  • Keras 2.1.2

I ran this code with either of:

  • Tensorflow (CPU-only) 1.8.0
  • Tensorflow-gpu 1.4.0 with CUDA Toolkit 8.0 and CuDNN v6.0

How to run

  1. Clone the repository.
  2. Run app.py.
  3. When you see the message "* Running on http://localhost:5000/ (Press CTRL+C to quit)", open http://localhost:5000 in your browser.
  4. Enter a sentence and test it.

How it works

The text is classified into emojis (used here as emotion labels), and the emojis are mapped to colors.
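The two-stage pipeline can be sketched as follows. This is a toy illustration, not the demo's actual code: `predict_emoji_probs` and the emoji-to-RGB table are hypothetical stand-ins for the DeepMoji classifier and the dendrogram-based color mapping described below.

```python
# Hypothetical sketch of the text -> emoji -> color pipeline.

# A tiny emoji-to-RGB table standing in for the dendrogram-based mapping.
EMOJI_TO_RGB = {
    "joy": (255, 212, 59),
    "sadness": (66, 135, 245),
    "anger": (220, 53, 69),
}

def predict_emoji_probs(text):
    """Stand-in for the DeepMoji classifier: returns a probability per label."""
    # A real implementation would run the pretrained model on `text`.
    return {"joy": 0.7, "sadness": 0.2, "anger": 0.1}

def text_to_color(text):
    probs = predict_emoji_probs(text)
    # Pick the most likely emotion label and look up its color.
    top_emoji = max(probs, key=probs.get)
    return EMOJI_TO_RGB[top_emoji]
```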

Text to Emoji

I use the DeepMoji model from the MIT Media Lab as the emotion classifier.
It was trained on 1,246 million tweets, each containing one of 64 common emojis.

The model has an embedding layer that projects each word into a vector space
(a tanh activation constrains each embedding dimension to the range -1 to 1),
two bidirectional LSTM layers that capture the context of each word,
and an attention layer that lets the model decide the importance of each word for the prediction.
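As a rough illustration of the attention step (not the actual DeepMoji implementation), the pooling over word representations can be sketched with numpy: each word gets a scalar importance score, the scores are softmax-normalized, and the sentence vector is the weighted sum of word vectors.

```python
import numpy as np

def attention_pool(hidden, w):
    """Attention pooling over word representations.

    hidden: (seq_len, dim) word representations from the BiLSTM stack.
    w: (dim,) learned attention weight vector (hypothetical shape).
    Returns a (dim,) sentence representation.
    """
    scores = hidden @ w                      # one importance score per word
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ hidden                  # weighted sum of word vectors

# Toy example: 3 words with 4-dimensional representations.
h = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
w = np.array([5.0, 0.0, 0.0, 0.0])  # strongly attends to the first word
sent = attention_pool(h, w)
```

The output stays close to the first word's vector because that word receives nearly all of the attention mass.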

Emoji to Color

The color code I use is rgba, where a (alpha) defines the opacity.

I map colors (rgb) based on the dendrogram that shows how the model learned to group emojis by emotional content.
The y-axis of the dendrogram is the distance on the correlation matrix of the model's predictions, measured using average linkage.

The model outputs a probability for each of the 64 emojis.
I take the top 3 probabilities and normalize them to define the opacities of three color layers.
These three layers are then overlapped to determine the color of the screen.
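A minimal sketch of this step, assuming a vector of 64 class probabilities from the model. The compositing here is standard alpha blending over a white background, which may differ from how the demo's CSS actually stacks the layers.

```python
def top3_opacities(probs):
    """Take the top-3 emoji probabilities and normalize them to opacities."""
    top3 = sorted(enumerate(probs), key=lambda p: p[1], reverse=True)[:3]
    total = sum(p for _, p in top3)
    return [(i, p / total) for i, p in top3]

def composite(layers):
    """Alpha-blend (r, g, b, a) layers bottom-to-top over a white background."""
    color = (255.0, 255.0, 255.0)
    for r, g, b, a in layers:
        color = tuple(a * c + (1 - a) * bg for c, bg in zip((r, g, b), color))
    return tuple(round(c) for c in color)

# Toy model output: three emojis carry all the probability mass.
probs = [0.0] * 64
probs[3], probs[10], probs[42] = 0.5, 0.3, 0.2
ops = top3_opacities(probs)  # [(3, 0.5), (10, 0.3), (42, 0.2)]
```

Because the top-3 probabilities are renormalized, the three opacities always sum to 1 regardless of how confident the model is overall.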

Citation

@inproceedings{felbo2017,
  title={Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm},
  author={Felbo, Bjarke and Mislove, Alan and S{\o}gaard, Anders and Rahwan, Iyad and Lehmann, Sune},
  booktitle={Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year={2017}
}
