A platform for warehousing and evaluating neural open-domain chatbot models.
Code and data for the SIGdial 2019 paper "Investigating Evaluation of Open-Domain Dialogue Systems With Human Generated Multiple References"
NLG and NLU for dialogue processing
D-score framework for open-domain automatic dialogue evaluation
Scripts for ChatEval and Dialog Annotation
Code for the paper "Learning an Unreferenced Metric for Online Dialogue Evaluation", ACL 2020
A suite of tools for managing crowdsourcing tasks from the inception through to data packaging for research use
Public evaluation tool for non-task-driven neural open-domain chatbots
Efficient Annotation of Scalar Labels
Code to publish HITs on Mechanical Turk to collect human baselines
Evaluate your dialog model with 17 metrics! (see paper)
All experiments and evaluation code for the decoding diversity project
Microservice for the automatic evaluation of neural chatbot models, supporting multiple automated evaluation methods (including embedding-based metrics).
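As a rough illustration of what an embedding-based metric computes, here is a minimal sketch of the common "embedding average" score: average the word vectors of the hypothesis and reference, then compare the means by cosine similarity. The function name and the assumption that word vectors have already been looked up are illustrative, not part of the microservice's actual API.

```python
import numpy as np

def embedding_average_score(hyp_vecs, ref_vecs):
    """Cosine similarity between the mean word embeddings of a
    hypothesis response and a reference response.

    hyp_vecs, ref_vecs: sequences of word-embedding vectors
    (hypothetical inputs -- in practice these come from a lookup
    table such as GloVe or word2vec).
    """
    hyp = np.mean(np.asarray(hyp_vecs, dtype=float), axis=0)
    ref = np.mean(np.asarray(ref_vecs, dtype=float), axis=0)
    return float(np.dot(hyp, ref) /
                 (np.linalg.norm(hyp) * np.linalg.norm(ref)))

# Identical responses score 1.0; orthogonal mean embeddings score 0.0.
```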
Chatbot comparison webapp built using React.
The dataset and code released with the NAACL 2018 paper "RankME: Reliable Human Ratings for Natural Language Generation"