
jonamar/got-jokes


How It Works:

1. Pre-processing: Run pre_processing.py to load, format, and tokenize the joke dataset, saving the result in the tokenized_jokes_dataset directory.
2. Training: Once the dataset is ready, run train_lora.py to load the tokenized dataset, apply LoRA layers to the pretrained model, and start training. After training, the model is saved in the fine_tuned_lora_model directory.
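A minimal sketch of how these two steps could look, assuming a Hugging Face stack (datasets, transformers, peft); the base model name, raw data file, column name, and hyperparameters below are illustrative assumptions, not values taken from this repository:

```python
# pre_processing.py (sketch): tokenize the raw joke dataset and save it to disk.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # assumed base model
tokenizer.pad_token = tokenizer.eos_token

# "jokes.json" and the "text" column are assumptions about the raw data layout.
dataset = load_dataset("json", data_files="jokes.json")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset["train"].column_names)
tokenized.save_to_disk("tokenized_jokes_dataset")
```

```python
# train_lora.py (sketch): wrap the base model with LoRA adapters and fine-tune
# on the tokenized jokes. Hyperparameters here are placeholder assumptions.
from datasets import load_from_disk
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Attach low-rank adapter layers; only these small matrices are trained.
lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                         task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)

dataset = load_from_disk("tokenized_jokes_dataset")
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="fine_tuned_lora_model",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=dataset["train"],
    data_collator=collator,
)
trainer.train()
trainer.save_model("fine_tuned_lora_model")
```

With peft, saving typically writes only the adapter weights and config rather than a full model copy, which keeps the fine_tuned_lora_model artifact small and lets one base model host multiple adapters.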

About

Ongoing experiment to train an LLM to be funny. Come back tomorrow for jokes.
