
Fine-tuning experiments

This repo contains experiments from April–May 2024 on tuning open and proprietary LLMs to write in the style of a particular author. A sample of about 3000 diary entries, written at different ages, was used as training data.

The objective, beyond the experiment itself, was to understand how fine-tuning LLMs works in different environments: AWS SageMaker Studio on GPU instances, SageMaker used only for the training runs, AWS Bedrock, locally on an Apple M1, OpenAI, and Azure.
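For the hosted-API environments, the common first step is turning the raw diary entries into a training file. The sketch below (not the repo's actual pipeline; file names and prompt wording are hypothetical) shows one way to convert plain-text entries into the JSONL chat format that the OpenAI fine-tuning API expects:

```python
# Minimal sketch: convert plain-text diary entries into JSONL chat
# records for OpenAI-style fine-tuning. Directory and file names,
# and the prompt wording, are assumptions for illustration.
import json
from pathlib import Path


def build_training_file(entries_dir: str, out_path: str) -> None:
    """Write one JSONL record per diary entry, where the entry text
    becomes the assistant turn the model learns to imitate."""
    with open(out_path, "w", encoding="utf-8") as out:
        for entry in sorted(Path(entries_dir).glob("*.txt")):
            text = entry.read_text(encoding="utf-8").strip()
            record = {
                "messages": [
                    {"role": "system",
                     "content": "You write diary entries in the author's style."},
                    {"role": "user",
                     "content": "Write a diary entry."},
                    {"role": "assistant", "content": text},
                ]
            }
            out.write(json.dumps(record, ensure_ascii=False) + "\n")


if __name__ == "__main__":
    build_training_file("diary_entries", "training_data.jsonl")
```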
