What's necessary in order to train dialogues longer than 1023 tokens? #5

Closed
dimeldo opened this issue Oct 29, 2019 · 6 comments

Comments

@dimeldo

dimeldo commented Oct 29, 2019

I know the context length supported by GPT-2 is 1024 tokens, but I assume there's some technique they used to train and generate dialogues longer than that in their results. I've also seen many GPT-2-based repos training on text longer than 1024 tokens. Can you please explain what's necessary to train on longer dialogues? And would you consider implementing it?

@qywu
Owner

qywu commented Oct 29, 2019

You can set a fixed window size and shift it during generation. That way, you can keep the length under 1024.
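
For illustration, here is a minimal sketch of that sliding-window generation loop, assuming the HuggingFace transformers GPT-2 model rather than this repo's code; the 768-token window and greedy decoding are arbitrary choices.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

WINDOW = 768  # keep the context well under GPT-2's 1024-token limit

def generate_next_token(token_ids):
    # only the most recent WINDOW tokens are fed to the model,
    # so the running dialog itself can grow arbitrarily long
    context = torch.tensor([token_ids[-WINDOW:]])
    with torch.no_grad():
        logits = model(context).logits
    # greedy decoding for simplicity; sampling works the same way
    return int(logits[0, -1].argmax())

token_ids = tokenizer.encode("Hello, how are you?")
for _ in range(50):
    token_ids.append(generate_next_token(token_ids))
print(tokenizer.decode(token_ids))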

@dimeldo
Author

dimeldo commented Oct 30, 2019

That way, you can keep the length under 1024.

Longer than 1024, you mean?

Also, would you be able to support this in your code, if it's not too much effort? I lack the ability to implement it myself.

@qywu
Owner

qywu commented Nov 1, 2019

There is a sampler that samples dialogs sequentially. I have attached a version that implements random sampling; you can just modify it.

import random

import numpy as np


class DialogFragmentSampler:
    def __init__(self, max_len=1024):
        """Sample a contiguous fragment of turns from a dialog."""
        self.max_tokens_len = max_len

    def __call__(self, dialog):
        """`dialog` is a dict with keys "token_ids" and "text", each a list of turns."""
        dialog_fragment = {}

        lengths = np.array([len(item) for item in dialog['token_ids']])

        # if the entire dialog already fits within the max length, keep it as is
        if lengths.sum() <= self.max_tokens_len:
            return dialog

        cumsum_len = lengths.cumsum()
        reverse_cumsum_len = cumsum_len[-1] - cumsum_len

        # based on the reverse cumsum, select start turns that still leave
        # more than max_tokens_len tokens after them
        start_turns = np.arange(
            len(reverse_cumsum_len))[reverse_cumsum_len > self.max_tokens_len]
        # keep only even indices so a fragment always starts with the first speaker
        start_turns = [idx for idx in start_turns if idx % 2 == 0]
        # fall back to the very first turn if no start turn qualifies
        if not start_turns:
            start_turns = [0]
        # randomly choose one start turn
        random_start_turn = random.choice(start_turns)
        # token counts measured from the beginning of the chosen start turn
        # (the start turn itself counts towards the budget)
        new_cumsum_len = cumsum_len - (cumsum_len[random_start_turn] -
                                       lengths[random_start_turn])

        # find the largest end turn after the start (odd index, i.e. the second
        # speaker) that keeps the fragment under max_tokens_len
        random_end_turn = random_start_turn + 1
        for i in reversed(range(random_start_turn + 1, len(new_cumsum_len))):
            if i % 2 == 1 and new_cumsum_len[i] < self.max_tokens_len:
                random_end_turn = i
                break

        dialog_fragment["text"] = dialog['text'][random_start_turn:
                                                 random_end_turn + 1]
        dialog_fragment["token_ids"] = dialog['token_ids'][random_start_turn:
                                                           random_end_turn + 1]

        return dialog_fragment
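
For reference, a hypothetical usage example of the sampler above; the dialog dict below uses dummy token ids, but a real dialog is assumed to follow the same "text" / "token_ids" layout.

sampler = DialogFragmentSampler(max_len=1024)

dialog = {
    "text": ["turn %d" % i for i in range(20)],
    # dummy token ids: 20 turns of 100 tokens each, 2000 tokens in total
    "token_ids": [[0] * 100 for _ in range(20)],
}

fragment = sampler(dialog)
# the fragment keeps whole turns, starts on an even turn,
# and stays under 1024 tokens
print(len(fragment["text"]), sum(len(t) for t in fragment["token_ids"]))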

@dimeldo
Author

dimeldo commented Nov 2, 2019

Isn't sampling for testing? I want the model to be able to learn to represent dialogs longer than 1024 tokens, not just generate them. In any case, I don't think I fully understand your code or what you're saying.

@qywu
Owner

qywu commented Nov 4, 2019

The current model can't encode more than 1024 tokens. You can try applying the idea to models like XLNet, which might solve your problem.
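
For illustration, a rough sketch of that segment-level recurrence idea, assuming the HuggingFace transformers XLNet model; the mem_len / mems / use_mems names are that library's and may differ across versions, so treat this as a sketch rather than a drop-in solution.

import torch
from transformers import XLNetLMHeadModel, XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
# mem_len controls how many cached hidden states are kept between segments
model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased", mem_len=512)
model.eval()

long_dialog = "hello there " * 2000  # stand-in for a dialog far longer than 1024 tokens
token_ids = tokenizer.encode(long_dialog)

SEGMENT = 512
mems = None
with torch.no_grad():
    # feed the dialog in fixed-size segments; mems carries cached hidden
    # states from earlier segments, so later segments can attend to context
    # beyond a single window
    for start in range(0, len(token_ids), SEGMENT):
        segment = torch.tensor([token_ids[start:start + SEGMENT]])
        outputs = model(segment, mems=mems, use_mems=True)
        mems = outputs.mems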

@dimeldo
Author

dimeldo commented Nov 4, 2019

Gotcha, thanks a lot.

@dimeldo dimeldo closed this as completed Nov 4, 2019