
DeText: A Deep Neural Ranking Framework with Text Understanding

Relax like a sloth, let DeText do the understanding for you

What is it

DeText is a deep neural ranking framework with text understanding. It leverages semantic matching with deep neural networks to understand member intents in search and recommender systems. As a general ranking framework, DeText can currently be applied to many tasks, including search & recommendation ranking and query understanding.

Highlight

Design principles for DeText framework:

  • Natural language understanding powered by state-of-the-art deep neural networks

    • Automatic feature extraction with deep models
    • End-to-end training
    • Interaction modeling between ranking sources and targets
  • A general framework with great flexibility to meet the requirements of different production ranking systems.

    • Flexible deep model types
    • Multiple loss function choices
    • User defined source/target fields
    • Configurable network structure (layer sizes and #layers)
    • Tunable hyperparameters ...
  • Reaching a good balance between effectiveness and efficiency to meet industry requirements.

The framework

The DeText framework contains multiple components:

Word embedding layer. It converts a sequence of n words into a d-by-n matrix of d-dimensional word embeddings.

CNN/BERT/LSTM text embedding layer. It takes the word embedding matrix as input and maps the text data into a fixed-length embedding. It is worth noting that we adopt representation-based methods over interaction-based methods. The main reason is computational complexity: the time complexity of interaction-based methods is at least O(mnd), which is one order higher than the max(O(md), O(nd)) of representation-based methods.
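The complexity argument above can be made concrete with a small standalone NumPy sketch (not DeText code; the sizes and the pooling/scoring functions are illustrative assumptions) for a query of m tokens and a document of n tokens with embedding dimension d:

```python
import numpy as np

# Hypothetical sizes: query of m tokens, document of n tokens, embedding dim d.
m, n, d = 8, 100, 64
rng = np.random.default_rng(0)
Q = rng.normal(size=(m, d))  # query word embeddings
D = rng.normal(size=(n, d))  # document word embeddings

# Representation-based: pool each text to one vector, then compare once.
# Cost: O(m*d) + O(n*d) for pooling, plus O(d) for the match score.
q_vec = Q.mean(axis=0)
d_vec = D.mean(axis=0)
rep_score = float(q_vec @ d_vec)

# Interaction-based: score every (query word, document word) pair.
# Cost: O(m*n*d) just to build the m-by-n interaction matrix.
interaction = Q @ D.T
int_score = float(interaction.max(axis=1).mean())

assert interaction.shape == (m, n)
```

For online ranking, where many documents are scored per query, avoiding the O(mnd) term per query-document pair is what makes the representation-based design practical.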

Interaction layer. It generates deep features based on the text embeddings. Many options are provided, such as concatenation, cosine similarity, etc.

Traditional Feature Processing. We combine the traditional features with the interaction features (deep features) in a wide & deep fashion.

MLP layer. The MLP layer combines the traditional features and the deep features into a final score.

It is an end-to-end model where all the parameters are jointly updated to optimize the click probability.
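The pipeline above can be sketched as a single forward pass in plain NumPy. This is an illustrative toy, not DeText's actual API: mean pooling stands in for the CNN/BERT/LSTM encoder, and all names, sizes, and weights are made up:

```python
import numpy as np

rng = np.random.default_rng(42)
d = 16  # embedding dimension

def encode(word_embeddings):
    """Stand-in text encoder: pool a (num_words, d) matrix to one d-vector."""
    return word_embeddings.mean(axis=0)

# Word embedding layer output for a query and a document (random for the demo).
query_words = rng.normal(size=(5, d))
doc_words = rng.normal(size=(40, d))

# Text embedding layer: a fixed-length embedding for each text field.
q = encode(query_words)
t = encode(doc_words)

# Interaction layer: cosine similarity between the two text embeddings.
cos_sim = float(q @ t / (np.linalg.norm(q) * np.linalg.norm(t)))

# Traditional feature processing: join hand-crafted features with the deep
# feature, wide & deep style.
traditional = np.array([0.3, 1.2, 0.0])
features = np.concatenate([traditional, [cos_sim]])

# MLP layer: one hidden ReLU layer, then a sigmoid click probability.
W1 = rng.normal(size=(features.size, 8))
w2 = rng.normal(size=8)
hidden = np.maximum(features @ W1, 0.0)
click_prob = 1.0 / (1.0 + np.exp(-(hidden @ w2)))

assert 0.0 < click_prob < 1.0
```

In the real framework every stage (encoder weights, MLP weights, and optionally the word embeddings) is trained jointly by backpropagating the ranking loss.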

Model Flexibility

DeText is a general ranking framework that offers great flexibility for clients to build customized networks for their own use cases:

LTR layer: in-house LTR loss implementation, or tf-ranking LTR loss.

MLP layer: customizable number of layers and number of dimensions.

Interaction layer: support Cosine Similarity, Outer Product, Hadamard Product, and Concatenation.
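The four interaction options can be sketched for two d-dimensional embeddings q (source) and t (target) in plain NumPy; this is illustrative only and not DeText's internal implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
q = rng.normal(size=d)  # source (e.g. query) embedding
t = rng.normal(size=d)  # target (e.g. document) embedding

# Cosine similarity: a single scalar feature.
cosine = float(q @ t / (np.linalg.norm(q) * np.linalg.norm(t)))

# Outer product: a d x d matrix of pairwise dimension interactions.
outer = np.outer(q, t)

# Hadamard product: element-wise multiplication, a d-vector.
hadamard = q * t

# Concatenation: stack the two embeddings into a 2d-vector.
concat = np.concatenate([q, t])

assert outer.shape == (d, d)
assert hadamard.shape == (d,)
assert concat.shape == (2 * d,)
```

The options trade expressiveness for output size: cosine similarity yields one feature, Hadamard and concatenation yield O(d) features, and the outer product yields O(d²).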

Text embedding layer: support CNN, BERT, LSTM-Language-Model with customized parameters on filters, layers, dimensions, etc.

Continuous feature normalization: element-wise scaling, value normalization.

Categorical feature processing: modeled as entity embedding.
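These two feature-processing steps can be sketched as follows (a minimal NumPy illustration with made-up statistics and table sizes, not DeText's configuration interface): element-wise scaling for continuous features, and an embedding-table row lookup for categorical features:

```python
import numpy as np

rng = np.random.default_rng(7)

# Continuous feature normalization: element-wise scaling with per-feature
# means and standard deviations (the values here are made up).
raw = np.array([120.0, 0.4, 7.0])
mean = np.array([100.0, 0.5, 5.0])
std = np.array([20.0, 0.1, 2.0])
normalized = (raw - mean) / std

# Categorical feature processing: an entity embedding is a row lookup in a
# learned (vocab_size, embed_dim) table, trained jointly with the model.
vocab_size, embed_dim = 10, 4
embedding_table = rng.normal(size=(vocab_size, embed_dim))
category_id = 3
entity_embedding = embedding_table[category_id]

assert normalized.shape == (3,)
assert entity_embedding.shape == (embed_dim,)
```

Entity embeddings let the network learn a dense representation for each category value instead of relying on sparse one-hot inputs.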

All of these can be customized via hyperparameters in the DeText template. Note that tf-ranking is supported in the DeText framework, i.e., users can choose the LTR losses and metrics defined in tf-ranking.

How to use it

Setup dev environment

  1. Create & source your virtualenv
  2. Run setup for DeText:
python setup.py develop

Run tests

Run all tests:

pytest 

Running DeText model training toy example

The main script for DeText model training is src/detext/run_detext.py. Users can customize the hyperparameters based on the requirements of specific tasks. Please refer to TRAINING.md for more details on the training data format and hyperparameter descriptions. For a test run on a small sample dataset, please check out the following script:

cd src/detext/resources
bash run_detext.sh

DeText training manual

Users have full control when designing custom DeText models. In the training manual (TRAINING.md), users can find information about the following:

  • Training data format and preparation
  • Key parameters to customize and train DeText models
  • Detailed information about all DeText training parameters for full customization

Contributing

Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.

License

This project is licensed under the BSD 2-Clause License. See the LICENSE.md file for details.
