RLHFlow

Code for the Workflow of Reinforcement Learning from Human Feedback (RLHF)

Popular repositories

  1. RLHF-Reward-Modeling

    Recipes to train reward models for RLHF.

    Python · 1.4k stars · 100 forks

  2. Online-RLHF

    A recipe for online RLHF and online iterative DPO.

    Python · 519 stars · 49 forks

  3. Online-DPO-R1

    Codebase for iterative DPO using rule-based rewards.

    Python · 247 stars · 31 forks

  4. Self-rewarding-reasoning-LLM

    Recipes to train self-rewarding reasoning LLMs.

    Python · 222 stars · 10 forks

  5. Minimal-RL

    Python · 206 stars · 10 forks

  6. Directional-Preference-Alignment

    Directional Preference Alignment

    57 stars · 3 forks

Repositories

The organization has 10 public repositories.

People

This organization has no public members.
