Labels: question (further information is requested)
Description
I stumbled upon a blog post by one of the creators of AI Dungeon, in which he explains how they used Cortex for their scalable model serving (https://medium.com/@aidungeon/how-we-scaled-ai-dungeon-2-to-support-over-1-000-000-users-d207d5623de9). This inspired me to try deploying my own large fine-tuned GPT-2 models with Cortex, but I've been facing difficulties implementing this.
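For context, the kind of predictor I'm working from looks roughly like the sketch below. It assumes the standard Cortex `PythonPredictor` interface and the Hugging Face `transformers` GPT-2 classes; the checkpoint path and generation parameters are placeholders, not my exact setup:

```python
# predictor.py -- minimal sketch of a Cortex PythonPredictor serving a
# fine-tuned GPT-2 model (checkpoint path and sampling settings are placeholders).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer


class PythonPredictor:
    def __init__(self, config):
        # Load the fine-tuned checkpoint once when the API starts.
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        self.tokenizer = GPT2Tokenizer.from_pretrained("./finetuned-gpt2")
        self.model = GPT2LMHeadModel.from_pretrained("./finetuned-gpt2").to(self.device)
        self.model.eval()

    def predict(self, payload):
        # Generate the entire completion in a single request/response cycle;
        # there is no way here to stream tokens back as they are produced.
        input_ids = self.tokenizer.encode(payload["text"], return_tensors="pt").to(self.device)
        with torch.no_grad():
            output_ids = self.model.generate(
                input_ids,
                max_length=input_ids.shape[1] + 60,
                do_sample=True,
                top_p=0.9,
            )
        # Return only the newly generated continuation as plain text.
        return self.tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
```

This works for one-shot completions, but the whole response only comes back once generation has finished.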
AI Dungeon displays GPT-2 output in real time, which is the part I've had the most difficulty implementing so far, since Cortex doesn't yet support WebSockets for streaming the output. How have the creators managed to make AI Dungeon so interactive using Cortex, and is it possible to replicate those results?