kdeng03/sr-chatbot

Self-Rewarding LLM

This is a general framework for a self-rewarding LLM: the model first generates questions and answers to those questions itself; then the same model, after being fine-tuned for reward modeling, acts as a reward LLM that reviews and judges the answers.
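The loop above can be sketched as follows. This is a minimal illustration, not the repository's actual code: `generate` and `judge` are hypothetical helpers standing in for the same LLM playing its two roles.

```python
# Minimal sketch of one self-rewarding data-collection step. `generate` and
# `judge` are hypothetical stand-ins for the same LLM in its two roles.

def generate(model, prompt, n=2):
    # Hypothetical: sample n candidate answers from the model.
    return [model(f"{prompt} [sample {i}]") for i in range(n)]

def judge(model, prompt, answer):
    # Hypothetical LLM-as-a-Judge: the same model scores its own answer 0-5.
    # (A toy scoring rule here; the real recipe parses a rubric-based rating.)
    return len(model(f"Score: {prompt} -> {answer}")) % 6

def self_reward_step(model, seed_prompts):
    """Generate answers to self-posed prompts, then score them with the model itself."""
    scored = []
    for prompt in seed_prompts:
        for answer in generate(model, prompt):
            scored.append((prompt, answer, judge(model, prompt, answer)))
    return scored

# Toy stand-in model, just to make the sketch runnable.
toy_model = lambda text: text.upper()
data = self_reward_step(toy_model, ["What is DPO?"])
```

The scored `(prompt, answer, score)` triples are the raw material for building preference pairs in the next step.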

Using the DPO method, we apply preference learning over the judged answers to build a stronger model (in both instruction following and rewarding) over successive iterations.

This project explores Self-Rewarding Language Models (Yuan et al., 2024), using LLM-as-a-Judge to let a model improve itself. It also integrates Low-Rank Adaptation (LoRA; Hu et al., 2021) to adapt the model efficiently without full fine-tuning.
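The core LoRA idea can be shown in a few lines: the pretrained weight matrix W stays frozen, and only a low-rank update A·B is trained. This is a dependency-free sketch with plain Python lists; real implementations (e.g. the `peft` library) wrap the model's linear layers instead.

```python
# Minimal sketch of a LoRA forward pass on a frozen weight matrix,
# using plain lists of lists as matrices for illustration.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def lora_forward(x, W, A, B, alpha=1.0):
    # y = x @ (W + alpha * A @ B): W (d x d) stays frozen; only the low-rank
    # factors A (d x r) and B (r x d), with r << d, are trained.
    delta = matmul(A, B)
    W_eff = [[w + alpha * d for w, d in zip(w_row, d_row)]
             for w_row, d_row in zip(W, delta)]
    return matmul(x, W_eff)
```

Because only A and B receive gradients, the number of trainable parameters drops from d² to 2·d·r per adapted matrix, which is what makes iterated fine-tuning rounds affordable.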

The code base is from here.
