Generate Python code based on the evaluation paper Co-Writing Screenplays and Theatre Scripts with Language Models, using the Hugging Face Transformers library, any available pretrained large language model checkpoint, and Python. The functions should be divided into 3 hierarchical layers of abstraction.

The highest layer is user input, a single sentence describing the central dramatic conflict. 
The middle layer generates character descriptions, a plot outline (a sequence of high-level scene descriptions together with corresponding locations), and location descriptions. 
The bottom layer is generated character dialogue for the text of the script.
Here is sample code that implements the three-layer hierarchy described in the evaluation paper:


import transformers
import torch

class ScriptGenerator:
    def __init__(self):
        # "gpt2" is a causal language model suitable for the text-generation
        # pipeline (xlm-roberta-base is a masked LM and cannot generate free text)
        self.model = transformers.pipeline("text-generation", model="gpt2")

    def generate_script(self, central_conflict):
        # highest layer: user input
        characters, plot_outline, locations = self.middle_layer(central_conflict)
        script_text = self.bottom_layer(characters, plot_outline, locations)
        return script_text

    def middle_layer(self, central_conflict):
        # generate character descriptions from the central conflict
        characters = self.generate_characters(central_conflict)
        # generate plot outline and locations
        plot_outline, locations = self.generate_plot_outline(central_conflict)
        return characters, plot_outline, locations

    def bottom_layer(self, characters, plot_outline, locations):
        # generate character dialogue
        script_text = self.generate_dialogue(characters, plot_outline, locations)
        return script_text

    def generate_characters(self, central_conflict):
        prompt = f"Generate descriptions of the characters for a play or movie with the central conflict: {central_conflict}"
        character_descriptions = self.model(prompt, max_length=1024, num_return_sequences=1)[0]["generated_text"]
        return character_descriptions

    def generate_plot_outline(self, central_conflict):
        prompt = f"Generate a plot outline for a play or movie with the central conflict: {central_conflict}"
        plot_outline = self.model(prompt, max_length=1024, num_return_sequences=1)[0]["generated_text"]
        # extract locations from the plot outline
        locations = self.extract_locations(plot_outline)
        return plot_outline, locations

    def extract_locations(self, plot_outline):
        # code to extract locations from the plot outline
        locations = []
        # dummy placeholder values; replace with an actual implementation
        locations.append("A quiet coffee shop")
        locations.append("A busy city street")
        return locations

    def generate_dialogue(self, characters, plot_outline, locations):
        prompt = f"Generate dialogue for a play or movie with the following characters: {characters}, plot outline: {plot_outline}, and locations: {locations}"
        dialogue = self.model(prompt, max_length=1024, num_return_sequences=1)[0]["generated_text"]
        return dialogue

if __name__ == "__main__":
    generator = ScriptGenerator()
    central_conflict = "Two estranged brothers must reunite to save their family business from financial ruin."
    script_text = generator.generate_script(central_conflict)
    print(script_text)


This code uses the Hugging Face Transformers library and a pre-trained language model checkpoint to generate script text. The functions are organized into three hierarchical layers, with the highest layer accepting a single sentence describing the central dramatic conflict, the middle layer generating character descriptions, a plot outline, and location descriptions, and the bottom layer generating character dialogue for the script.
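The extract_locations function above is only a stub with hard-coded values. One minimal way to make it real, assuming the plot-outline prompt instructs the model to tag each scene's setting with a "Location:" prefix (an assumed convention, not something the paper mandates), is a simple regex over the outline text:

```python
import re

def extract_locations(plot_outline):
    # Pull settings from lines like "Location: A quiet coffee shop".
    # Assumes each scene's setting is tagged with a "Location:" prefix,
    # which the plot-outline prompt would need to request explicitly.
    return re.findall(r"^Location:\s*(.+)$", plot_outline, flags=re.MULTILINE)

outline = (
    "Scene 1: The brothers meet for the first time in years.\n"
    "Location: A quiet coffee shop\n"
    "Scene 2: They argue over the family business.\n"
    "Location: A busy city street\n"
)
print(extract_locations(outline))  # ['A quiet coffee shop', 'A busy city street']
```

Because generated outlines are free-form text, a tagging convention like this is much more reliable than trying to guess which phrases in untagged prose are locations.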
Do I have to train the model myself or can I get one already trained?
This code uses a pre-trained language model checkpoint to generate script text. That means you are downloading a model that is already trained, so you do not need to train it yourself. You can still fine-tune the pre-trained model if you need it to produce different results that are more applicable to your production pipeline. Otherwise, as long as the pre-trained model meets your usage criteria, it is ready to use as-is.
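As a minimal sketch of what "already trained" means in practice, the snippet below loads a pretrained checkpoint and generates text with no training step at all. "gpt2" is used here only as one example of a freely available causal-LM checkpoint; the first call downloads the weights from the Hugging Face Hub, after which they are cached locally.

```python
from transformers import pipeline

# The pipeline downloads pretrained weights on first use and caches them;
# no training loop is involved anywhere in this script.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Two estranged brothers must reunite",
    max_new_tokens=30,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```

Swapping in a different checkpoint is just a matter of changing the model argument, as long as the checkpoint supports the text-generation task.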