RLHF pipeline for Hinge bio generation — human preferences → reward model → PPO alignment
Updated Mar 23, 2026 · Jupyter Notebook
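The reward-model stage of the pipeline above can be illustrated with a toy sketch: a linear reward fit to pairwise human preferences with the Bradley-Terry loss, i.e. minimizing `-log sigmoid(r_chosen - r_rejected)`. Everything here (the 3-dimensional features, `w_true`, the synthetic preference pairs) is hypothetical; the actual repo would train a reward head on a language model and then run PPO against it.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_reward_model(pairs, dim, lr=0.5, steps=300):
    """Fit a linear reward r(x) = w @ x on (chosen, rejected) feature pairs
    by gradient descent on the Bradley-Terry loss
    -log sigmoid(r_chosen - r_rejected)."""
    w = np.zeros(dim)
    for _ in range(steps):
        grad = np.zeros(dim)
        for x_c, x_r in pairs:
            d = w @ x_c - w @ x_r
            grad += (sigmoid(d) - 1.0) * (x_c - x_r)  # d/dw of the loss
        w -= lr * grad / len(pairs)
    return w

# Toy preference data: a hidden "annotator" reward decides which bio is chosen.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0, 0.5])  # hypothetical annotator preference
pairs = []
for _ in range(200):
    a, b = rng.normal(size=3), rng.normal(size=3)
    chosen, rejected = (a, b) if w_true @ a > w_true @ b else (b, a)
    pairs.append((chosen, rejected))

w = train_reward_model(pairs, dim=3)
# Fraction of pairs the learned reward ranks the same way as the annotator.
acc = float(np.mean([w @ c > w @ r for c, r in pairs]))
```

In the full pipeline this learned reward would then be the scalar signal PPO maximizes, with a KL penalty keeping the policy close to the supervised model.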
LoRA fine-tuning of LFM2.5-1.2B to improve spatial reasoning on StepGame — AIPI 590.03 Intelligent Agents, Project 1
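The LoRA idea in the second project can be sketched independently of LFM2.5-1.2B: the pretrained weight `W` stays frozen and only a low-rank correction `(alpha/r) * B @ A` is trained, which can later be merged back into `W` for inference. The class and dimensions below are illustrative, not the repo's code.

```python
import numpy as np

class LoRALinear:
    """Frozen base weight W plus a trainable low-rank update (alpha/r) * B @ A,
    the core mechanism behind LoRA fine-tuning."""

    def __init__(self, W, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        d_out, d_in = W.shape
        self.W = W                                        # frozen pretrained weight
        self.A = rng.normal(scale=0.01, size=(r, d_in))   # trainable down-projection
        self.B = np.zeros((d_out, r))                     # zero init: adapter starts as a no-op
        self.scale = alpha / r

    def forward(self, x):
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

    def merged_weight(self):
        # After training, fold the adapter back into W for zero-overhead inference.
        return self.W + self.scale * (self.B @ self.A)

rng = np.random.default_rng(1)
layer = LoRALinear(rng.normal(size=(6, 8)))
x = rng.normal(size=8)
```

Because `B` is zero-initialized, the adapted layer reproduces the base model exactly at the start of fine-tuning; only `A` and `B` (here `r * (d_in + d_out)` parameters instead of `d_in * d_out`) receive gradients.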