
Improve training data #2

Closed · 4 tasks done
0x000011b opened this issue Jan 21, 2023 · 8 comments

Assignees: 0x000011b
Labels: enhancement (New feature or request)

Comments

0x000011b (Collaborator) commented Jan 21, 2023

I haven't been able to make any significant improvements to the models by twiddling around with hyperparameters and training objectives since around experiment 2, so I'm going to shift my focus to improving the training data instead.

Some relevant points to consider:

  • I'd like to improve the handling of example dialogue.
    • As of now this is being done in a really stupid way: during training the data is mostly discarded, and at inference time it's handled as regular chat history. Obviously not ideal. (A rough sketch of a prompt format that keeps example dialogue around follows this list.)
  • I'd like the model to stick closer to the example dialogues, even if the user responds in a completely different format.
    • As of now, if you give a character a short greeting, it gets stuck responding with short messages.
    • If you add example dialogue where the character is very descriptive and detailed, it stays that way for the first few messages and then degrades to short responses again as the example dialogue is pushed out of the chat history.
    • Ideally, characters should follow the format in their example dialogue more closely even after a lot of conversation.
  • It might be worth looking into new data, even if it's non-conversational.
    • I'm thinking this might help the model generate more creative and interesting responses, given that dialogue datasets are usually boring as hell (hey. how have you been? good. thanks. ok nice talking to you).
  • It would be nice to add some special tokens so we can inject external knowledge for the model to ground its responses in.
    • That way, Kobold users can make use of Author's Notes and World Info, and we could make use of internet search, long-term memory stores, or whatever else on the official service. (The same sketch below shows where such a grounding block could slot in.)
    • This might imply looking for new datasets to add (the BlenderBot 3 paper might be useful here thanks to its retrieval and grounding modules and their relevant datasets) or generating synthetic data, so this is more of a medium/long-term goal.
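To make the example-dialogue and grounding points above a bit more concrete, here's a minimal sketch of one possible prompt/training-example layout. The delimiter tokens (`<<EXAMPLE_DIALOGUE>>`, `<<KNOWLEDGE>>`, `<<CHAT>>`) and the `build_prompt` helper are purely illustrative; nothing in the codebase currently defines them:

```python
def build_prompt(persona: str, example_dialogue: str, knowledge: str,
                 chat_history: list[str], char_name: str) -> str:
    """Assemble a single prompt where example dialogue and external
    knowledge live in pinned, delimited sections.

    Token names and section ordering are placeholders for illustration.
    """
    parts = [
        f"{char_name}'s Persona: {persona}",
        "<<EXAMPLE_DIALOGUE>>",
        example_dialogue,
        "<<KNOWLEDGE>>",  # Author's Note / World Info / search results would go here
        knowledge,
        "<<CHAT>>",
    ]
    # Only the chat history gets truncated when the prompt grows too long,
    # so the style reference and grounding info are never pushed out.
    parts.extend(chat_history[-32:])
    parts.append(f"{char_name}:")
    return "\n".join(parts)
```

The key design choice is that the example dialogue and knowledge sections are pinned: when the prompt grows too long, only the chat history is truncated, so the character's style reference never gets pushed out of context.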
0x000011b self-assigned this Jan 21, 2023
0x000011b added the enhancement (New feature or request) label Jan 21, 2023
Silver267 commented

I have a feeling that this step is going to be the key to approaching CAI, especially the idea of injecting external knowledge...

0x000011b (Collaborator, Author) commented

Another report I got: the model calling the user by random, unrelated names. It might be worth running an NLP toolkit over the data to see if there are any significant imbalances towards certain names, and trying to clean those up.
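For reference, a minimal sketch of what that check could look like using spaCy's NER; the model name and the idea of feeding in raw utterance strings are assumptions on my part, not how the data toolbox currently works:

```python
from collections import Counter
import spacy

# Small English model; assumes `en_core_web_sm` has been downloaded.
nlp = spacy.load("en_core_web_sm")

def count_person_names(utterances):
    """Count PERSON entities across a list of utterance strings."""
    counts = Counter()
    for doc in nlp.pipe(utterances, batch_size=256):
        for ent in doc.ents:
            if ent.label_ == "PERSON":
                counts[ent.text] += 1
    return counts

# e.g. count_person_names(utterances).most_common(50) to eyeball which names dominate.
```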

0x000011b (Collaborator, Author) commented

As for non-conversational data: a recent paper by Google seems to indicate that starting from an instruction-tuned model, rather than a regular pre-trained LM, might actually improve downstream task performance when fine-tuning further. There's a Twitter thread about this, plus the relevant arXiv paper and code repository.

lloorree commented Feb 5, 2023

With respect to more data sources, some thoughts:

  • Non-conversational sources could be parsed into conversational ones. Having used Common Crawl before, I think it would be feasible to pick some set of forums, social media sites, et cetera, and parse them out so that, for example, the original forum post becomes the scenario and each user becomes a speaker. This would also give longer, more detailed text than a lot of casual conversational datasets. (A rough sketch of this kind of parsing follows below.)
  • Television and movie scripts. There are a lot of publicly available ones, and for TV episodes the blurbs could serve as the scenario. With a much bigger time commitment you could even go video -> generated script -> input data. I've been playing with this using the full scripts of a show, and even having a full series in the character definition JSON helps a lot.

I have time to help out with either of these if it would be useful.
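Riffing on the forum idea above, here's a rough sketch of how a parsed thread could be flattened into a scenario-plus-turns episode. The `ForumPost` structure and the output dict keys are made up for illustration; the real format would follow whatever the training pipeline expects:

```python
from dataclasses import dataclass

@dataclass
class ForumPost:
    author: str
    body: str

def thread_to_episode(title: str, posts: list[ForumPost]) -> dict:
    """Turn one forum thread into a single conversational training episode.

    The first post becomes the scenario, and every subsequent post becomes
    a turn attributed to its author.
    """
    scenario = f"{title}\n{posts[0].body.strip()}"
    turns = [f"{p.author}: {p.body.strip()}" for p in posts[1:]]
    return {"scenario": scenario, "turns": turns}
```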

0x000011b (Collaborator, Author) commented

Hey @lloorree! Indeed, forums seem like they might be a good source. The community has contributed around 350 MB of forum posts that I'll attempt to write some parsing code for. It's all in SQLite databases, so the code will be somewhat similar to the Discord DHT parsing stuff I wrote. If you're interested in helping out with that, let me know and I can send you a sample.
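For anyone curious what that parsing might look like, here's a minimal sketch using Python's built-in sqlite3 module. The `posts` table and its `thread_id`, `author`, `body`, and `id` columns are placeholders, since the actual schema depends on how the dumps were exported; the grouped output could then feed something like the `thread_to_episode` sketch above:

```python
import sqlite3
from collections import defaultdict

def load_threads(db_path: str) -> dict[int, list[tuple[str, str]]]:
    """Group (author, body) pairs by thread, ordered by post id.

    Table and column names here are illustrative placeholders.
    """
    conn = sqlite3.connect(db_path)
    threads: dict[int, list[tuple[str, str]]] = defaultdict(list)
    for thread_id, author, body in conn.execute(
        "SELECT thread_id, author, body FROM posts ORDER BY id"
    ):
        threads[thread_id].append((author, body))
    conn.close()
    return threads
```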

As for TV stuff, it didn't seem that great to me because a big portion of the context is the stuff that's happening on screen, so if you take just the text/dialogue, it's usually pretty bland and uninteresting.

lloorree commented Feb 6, 2023

Hey! That would be perfect, thanks. The other main thing I'd need to get started is a few lines from one of the correctly parsed output files, to double-check that what I get running locally is right.

Fair enough about the dialogue.

0x000011b (Collaborator, Author) commented

@lloorree: I created an issue to track the implementation of the forum datasets (#4). Let me know if you're interested in giving it a shot; if you reach out to me on Matrix, I can help get you situated with the data-toolbox repo and share some example files.

TearGosling (Contributor) commented

Old discussion, very informative, but closing it so that I can tidy up this repo.
