What's necessary in order to train dialogues with length longer than 1023? #5
Comments
You can set a fixed window size and shift it during generation. That way you can keep the context length under 1024.
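A minimal sketch of that sliding-window idea, assuming `model` behaves like a Hugging Face GPT2LMHeadModel whose output exposes .logits; neither the model wrapper nor the function name below comes from this repo, they are only for illustration:

import torch

MAX_CONTEXT = 1024  # GPT-2's positional limit

def generate_with_sliding_window(model, token_ids, num_new_tokens):
    """Generate num_new_tokens continuations while keeping the model
    input inside the 1024-token GPT-2 context window."""
    token_ids = list(token_ids)
    for _ in range(num_new_tokens):
        # condition only on the most recent MAX_CONTEXT tokens
        window = token_ids[-MAX_CONTEXT:]
        input_ids = torch.tensor([window])
        logits = model(input_ids).logits        # shape (1, len(window), vocab)
        next_id = int(logits[0, -1].argmax())   # greedy pick for simplicity
        token_ids.append(next_id)
    return token_ids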
Longer than 1024, you mean? Also, would you be able to support this in your code, if it's not too much effort? I lack the ability to implement it myself.
You have a sampler that samples dialogs sequentially. I have attached a version that implements random sampling; you can just modify it.

import random

import numpy as np


class DialogFragmentSampler:
    def __init__(self, max_len=1024):
        """Sample dialog fragments from a dialog."""
        self.max_tokens_len = max_len

    def __call__(self, dialog):
        """dialog is a dict with keys "token_ids" and "text", each a list of turns."""
        dialog_fragment = {}
        lengths = np.array([len(item) for item in dialog['token_ids']])
        # if the entire dialog already fits within the max length, keep it as-is
        if lengths.sum() < self.max_tokens_len:
            return dialog
        cumsum_len = lengths.cumsum()
        # number of tokens remaining after each turn
        reverse_cumsum_len = cumsum_len[-1] - cumsum_len
        # turns with more than max_tokens_len tokens remaining are valid starts
        start_turns = np.arange(
            len(reverse_cumsum_len))[reverse_cumsum_len > self.max_tokens_len]
        # keep only even-indexed turns so fragments always start on the same speaker
        start_turns = [idx for idx in start_turns if idx % 2 == 0]
        # randomly choose one start turn
        random_start_turn = random.choice(start_turns)
        new_cumsum_len = cumsum_len - cumsum_len[random_start_turn]
        # find the last odd-indexed end turn that still fits in the token budget
        for i in reversed(range(len(new_cumsum_len))):
            if i % 2 == 1 and new_cumsum_len[i] < self.max_tokens_len:
                random_end_turn = i
                break
        dialog_fragment["text"] = dialog['text'][random_start_turn:
                                                 random_end_turn + 1]
        dialog_fragment["token_ids"] = dialog['token_ids'][random_start_turn:
                                                           random_end_turn + 1]
        return dialog_fragment
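A quick usage sketch of the sampler above; the turn texts and token ids below are made up purely for illustration:

# toy dialog with alternating turns; token ids are placeholders
dialog = {
    "text": ["hi", "hello", "how are you?", "fine, thanks"],
    "token_ids": [[1, 2], [3, 4, 5], [6, 7, 8, 9], [10, 11]],
}

sampler = DialogFragmentSampler(max_len=8)
fragment = sampler(dialog)
# the fragment starts on an even turn, ends on an odd turn,
# and its total token count stays under max_len
print(fragment["text"], sum(len(t) for t in fragment["token_ids"]))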
Isn't sampling only for testing? I want the model to learn to represent dialogs longer than 1024 tokens, not just to generate them. In any case, it seems I'm not able to follow your code or what you're suggesting.
The current model can't encode more than 1024 tokens. You can try applying the same idea to models like XLNet, which might solve your problem.
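For context, the 1024 limit comes from GPT-2's learned position embeddings; models like XLNet build on Transformer-XL's segment-level recurrence, which carries a memory of hidden states across segments. A rough sketch of that idea using the Transformer-XL classes that older Hugging Face transformers releases ship (availability and exact names depend on your transformers version; this is only an illustration, not something this repo provides):

import torch
from transformers import GPT2Config, TransfoXLLMHeadModel, TransfoXLTokenizer

print(GPT2Config().n_positions)  # 1024: GPT-2's hard positional limit

tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")

long_text = " ".join(["hello there"] * 2000)     # stand-in for a very long dialog
ids = tokenizer(long_text, return_tensors="pt").input_ids

mems = None
for segment in torch.split(ids, 512, dim=1):     # feed 512-token segments
    # mems carries hidden states from earlier segments, so the model can
    # condition on context far beyond a single segment's length
    outputs = model(segment, mems=mems)
    mems = outputs.mems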
Gotcha, thanks a lot.
I know the context supported by GPT-2 is 1024 tokens, but I assume they used some technique to train and generate dialogues longer than that in their results. I have also seen many GPT-2-based repos training on text longer than 1024 tokens. Can you please explain what's necessary to train on longer dialogues? And would you consider implementing it?
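For reference, many of those GPT-2-based repos handle long training texts not by extending the model's context but by slicing each text into overlapping chunks of at most 1024 tokens (a strided sliding window) and treating each chunk as a separate training example. A minimal sketch of that preprocessing step, with the chunk size and stride chosen arbitrarily here:

def chunk_token_ids(token_ids, max_len=1024, stride=512):
    """Split one long token sequence into overlapping training examples,
    each no longer than max_len, so a fixed-context model can train on it."""
    chunks = []
    for start in range(0, len(token_ids), stride):
        chunks.append(token_ids[start:start + max_len])
        if start + max_len >= len(token_ids):
            break
    return chunks

# e.g. a 3000-token dialog becomes overlapping <=1024-token training examples
examples = chunk_token_ids(list(range(3000)))
print([len(c) for c in examples])  # [1024, 1024, 1024, 1024, 952]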