# Home
| Model | Author | Public Repository | Trained | Wrapper created | CPU | GPU | RAM | Data provided |
|---|---|---|---|---|---|---|---|---|
| HRED Twitter | Mike | ❌ | ✅ | ✅ | ✅ | ✅ | | |
| HRED Reddit | Mike | ❌ | ✅ | ✅ | ✅ | ✅ | | |
| Dual Encoder | Nicolas | ✅ | ✅ | ✅ | ✅ | ✅ | | |
| Dual Encoder | Prasanna | ⌛ | ⌛ | | | | | |
| Follow up questions | Koustuv | ✅ | ✅ | ✅ | ✅ | ✅ | | |
| DrQA | Peter | ✅ | ⌛ | ⌛ | | | | |
N.B.: a ✅ in the CPU column means that the model's inference can run on CPU alone (no GPU required).
The Docker container runs `bot.py` on startup, which loads all of our models into RAM, so we need to send an estimate of total RAM usage to Valentine (the organizer) soon. The `model_selection.py` script handles which model's response is selected; a rough sketch of that flow is given below.
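As an illustration only: the `MODELS` registry, the `get_response()` signature, and the confidence-based ranking below are assumptions for the sketch, not the actual contents of `model_selection.py`.

```python
# Hypothetical sketch of the selection flow in model_selection.py.
# Assumes each wrapper exposes get_response(context, article) returning
# a (reply, confidence) pair -- this interface is an assumption.

MODELS = {}  # filled by bot.py at startup, e.g. {"hred_twitter": wrapper, ...}

def select_response(context, article):
    """Query every loaded model and return the highest-confidence reply."""
    candidates = []
    for name, model in MODELS.items():
        reply, confidence = model.get_response(context, article)
        candidates.append((confidence, name, reply))
    if not candidates:
        # No model produced a reply; fall back to a canned response.
        return "Could you tell me more about that?"
    confidence, name, reply = max(candidates)
    return reply
```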
Update the following list with any dependencies you need that are not listed yet. Checked items have already been added to the Dockerfile.
- Theano
- TensorFlow
- Lasagne
- PyTorch
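For reference, adding a dependency usually means a `pip install` line in the Dockerfile. The excerpt below is only a sketch: the base image and the unpinned package versions are placeholders, not the project's actual Dockerfile.

```dockerfile
# Hypothetical Dockerfile excerpt; base image and versions are placeholders.
FROM python:2.7

# Dependencies from the list above (PyPI package names).
RUN pip install Theano Lasagne tensorflow
# PyTorch needs a platform-specific wheel; see pytorch.org for the install
# command matching the container's Python and CUDA versions.
```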