## How It Works

1. **Pre-processing:** Run `pre_processing.py` to load, format, and tokenize your joke dataset, saving it in the `tokenized_jokes_dataset` directory.
2. **Training:** Once your dataset is ready, run `train_lora.py` to load the tokenized dataset, apply LoRA layers to your pretrained model, and start training. After training, the model is saved in the `fine_tuned_lora_model` directory.
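The two steps above can be sketched roughly as follows. This is a minimal, self-contained illustration, not the contents of `pre_processing.py` or `train_lora.py`: the directory name comes from the steps above, while the toy whitespace tokenizer and the tiny pure-Python LoRA arithmetic are assumed stand-ins (the real scripts would typically use a model's own tokenizer and a library such as Hugging Face `peft`).

```python
import json
import os
import tempfile

# --- Step 1: pre-processing (stand-in for pre_processing.py) ---
# A real script would use the pretrained model's tokenizer; a toy
# whitespace tokenizer keeps this sketch dependency-free.
def tokenize(text, vocab):
    """Map each word to an integer id, growing the vocab as needed."""
    return [vocab.setdefault(word, len(vocab)) for word in text.lower().split()]

jokes = [
    "Why did the chicken cross the road?",
    "I used to be a banker but I lost interest.",
    "Time flies like an arrow; fruit flies like a banana.",
]
vocab = {}
records = [{"text": j, "input_ids": tokenize(j, vocab)} for j in jokes]

# Save the tokenized dataset to disk (one JSON record per line).
out_dir = os.path.join(tempfile.mkdtemp(), "tokenized_jokes_dataset")
os.makedirs(out_dir, exist_ok=True)
with open(os.path.join(out_dir, "data.jsonl"), "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# --- Step 2: the idea behind the LoRA layers in train_lora.py ---
# LoRA freezes a pretrained weight matrix W and trains only a low-rank
# update B @ A, so the effective weight is W + (alpha / r) * (B @ A).
def matmul(X, Y):
    """Plain-Python matrix multiply."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen pretrained weight (2x2)
B = [[0.5], [1.0]]             # trainable down-projection (2x1), rank r = 1
A = [[2.0, 0.0]]               # trainable up-projection (1x2)
alpha, r = 2, 1

delta = matmul(B, A)
scale = alpha / r
effective = [[w + scale * d for w, d in zip(w_row, d_row)]
             for w_row, d_row in zip(W, delta)]
print(effective)  # the weight the model actually uses at inference time
```

Because only `B` and `A` are trained, the number of trainable parameters is tiny compared to `W`, which is what makes LoRA fine-tuning cheap.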
# jonamar/got-jokes
## About

An ongoing experiment to train an LLM to be funny. Come back tomorrow for jokes.