Koustuv Sinha edited this page Oct 20, 2017 · 11 revisions

ConvAI RLLBot

Status

| Model | Author | Public Repository | Trained | Wrapper created | CPU | GPU | RAM | Data provided |
|---|---|---|---|---|---|---|---|---|
| HRED Twitter | Mike | | | | | | | |
| HRED Reddit | Mike | | | | | | | |
| Dual Encoder | Nicolas | | | | | | | |
| Dual Encoder | Prasanna | | | | | | | |
| Follow up questions | Koustuv | | | | | | | |
| DrQA | Peter | | | | | | | |

N.B. A check in the CPU column indicates that inference for this model can run only on the CPU.

System architecture

The Docker container runs bot.py on startup, which loads all of our models into RAM. We therefore need to send an estimate of total RAM usage to Valentine (organizer) soon. The model_selection.py script handles which model's response to select.
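A minimal sketch of what the selection logic in model_selection.py could look like. This is an assumption, not the actual implementation: it supposes each model returns a (name, response, confidence) tuple and the highest-confidence candidate wins; the function and field names are hypothetical.

```python
# Hypothetical sketch of model_selection.py: every loaded model proposes
# a candidate response with a confidence score, and the bot replies with
# the highest-confidence one.

def select_response(candidates):
    """Pick the candidate response with the highest confidence score.

    candidates: list of (model_name, response, confidence) tuples.
    Returns the chosen response string, or a fallback if no model answered.
    """
    if not candidates:
        return "Sorry, I don't have an answer for that."
    # max() over the confidence field (index 2) of each candidate
    best = max(candidates, key=lambda c: c[2])
    return best[1]

candidates = [
    ("hred_twitter", "Hi there!", 0.4),
    ("drqa", "The capital of France is Paris.", 0.9),
]
print(select_response(candidates))
```

In practice the real script may combine scores differently (e.g. reranking or rule-based overrides), but the single-winner scheme above matches the "which response to select" role described.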

Dependency Requirements

Update the following list with any dependencies you need that are not already on it. Checked items have already been added to the Dockerfile.

- Theano
- Tensorflow
- Lasagne
- Pytorch
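For reference, a hypothetical Dockerfile fragment installing the dependencies above. The base image, package versions, and install commands are assumptions, not our actual configuration; in particular, PyTorch may need a platform-specific wheel rather than a plain pip install.

```dockerfile
# Hypothetical fragment, not the team's actual Dockerfile.
FROM python:2.7

# Dependencies from the list above (versions unpinned here; pin before use).
RUN pip install theano tensorflow Lasagne
# PyTorch may require a platform-specific wheel URL instead of this.
RUN pip install torch
```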