On the Limited Generalization Capability of the Implicit Reward Model Induced by Direct Preference Optimization


Reinforcement Learning from Human Feedback (RLHF) is an effective approach for aligning language models with human preferences. Central to RLHF is learning a reward function for scoring human preferences. Two main approaches for learning a reward model are 1) training an explicit reward model as in RLHF, and 2) using an implicit reward learned from preference data through methods such as Direct Preference Optimization (DPO). Prior work has shown that the implicit reward model of DPO can approximate a trained reward model, but it is unclear to what extent DPO generalizes under distribution shift, which can arise from limited preference data or from drift in the language generated by the policy as it trains. We address this question by comparing the accuracy of DPO’s implicit reward and RLHF reward models at distinguishing preferred from rejected answers. Our findings indicate that DPO’s implicit reward performs similarly to RLHF reward models on in-distribution data but severely under-performs them under distribution shift. Across five out-of-domain settings, DPO suffers a mean accuracy drop of 3% and a maximum drop of 7%, highlighting the shortcomings of its implicit reward model for preference optimization. These findings show that DPO’s implicit reward model has limited generalization ability and substantiate the integration of an explicit reward model in iterative DPO approaches.
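For context, DPO’s implicit reward is defined as r(x, y) = β log(π_θ(y|x) / π_ref(y|x)), where π_θ is the DPO-trained policy and π_ref the frozen reference model. Below is a minimal sketch of the kind of evaluation the abstract describes: scoring preference pairs with this implicit reward and measuring ranking accuracy. The function names and the assumption of precomputed sequence log-probabilities are illustrative, not taken from the paper.

```python
import torch


def implicit_reward(policy_logprob: torch.Tensor,
                    ref_logprob: torch.Tensor,
                    beta: float = 0.1) -> torch.Tensor:
    """DPO's implicit reward: r(x, y) = beta * log(pi_theta(y|x) / pi_ref(y|x)).

    Both inputs are summed log-probabilities of a full response y given
    prompt x, under the policy and the frozen reference model respectively.
    """
    return beta * (policy_logprob - ref_logprob)


def preference_accuracy(chosen_policy_lp: torch.Tensor,
                        chosen_ref_lp: torch.Tensor,
                        rejected_policy_lp: torch.Tensor,
                        rejected_ref_lp: torch.Tensor,
                        beta: float = 0.1) -> float:
    """Fraction of pairs where the implicit reward ranks the preferred
    (chosen) answer above the rejected one -- the accuracy metric used
    to compare reward models on preference data."""
    r_chosen = implicit_reward(chosen_policy_lp, chosen_ref_lp, beta)
    r_rejected = implicit_reward(rejected_policy_lp, rejected_ref_lp, beta)
    return (r_chosen > r_rejected).float().mean().item()
```

Evaluating this accuracy on held-in versus out-of-domain preference pairs is what separates in-distribution performance from generalization under distribution shift.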


