AI Heap
VinePPO: Refining Credit Assignment in RL Training of LLMs

arXiv:2410.01679 (https://arxiv.org/abs/2410.01679)
Authors: Amirhossein Kazemnejad, Milad Aghajohari, Eva Portelance, Alessandro Sordoni, Siva Reddy, Aaron Courville, Nicolas Le Roux
Affiliations: Mila - Quebec AI Institute, University of Montreal
Large language models (LLMs) are increasingly applied to complex reasoning tasks that require executing several steps before receiving any reward. Properly assigning credit to these steps is essential for improving model performance. Proximal Policy Optimization (PPO), a common reinforcement learning (RL) algorithm for LLM finetuning, relies on value networks to tackle credit assignment. However, recent approaches achieve strong results without them, raising questions about the efficacy of value networks in practice. In this work, we systematically evaluate value networks and reveal their significant shortcomings in reasoning-heavy LLM tasks, showing that they often produce poor estimates of the expected return and barely outperform a random baseline when comparing alternative steps. This motivates our key question: can improved credit assignment enhance RL training for LLMs? To address this, we propose VinePPO, a straightforward approach that leverages the flexibility of language environments to compute unbiased Monte Carlo-based value estimates. Our method consistently outperforms PPO and other baselines across the MATH and GSM8K datasets while using less wall-clock time (up to 3.0x less). Crucially, it achieves higher test accuracy for a given training accuracy, capturing more generalization signal per sample. These results emphasize the importance of accurate credit assignment in RL training of LLMs.
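To make the core idea concrete, here is a minimal sketch of Monte Carlo value estimation for intermediate reasoning steps, assuming a terminal-reward setting with no discounting. It illustrates the general technique rather than the authors' implementation; `generate_completion` and `reward` are hypothetical stand-ins for a sampling/decoding routine and an answer verifier, not functions from the paper's codebase.

```python
def mc_value_estimate(policy, prefix, num_rollouts, generate_completion, reward):
    """Unbiased Monte Carlo estimate of V(prefix): average terminal reward
    over independent rollouts sampled from the current policy (gamma = 1 assumed)."""
    total = 0.0
    for _ in range(num_rollouts):
        completion = generate_completion(policy, prefix)  # sample a continuation from the policy
        total += reward(prefix + completion)              # e.g. 1.0 if the final answer is correct
    return total / num_rollouts


def step_advantages(policy, steps, num_rollouts, generate_completion, reward):
    """Per-step advantages A_t = V(prefix + step_t) - V(prefix), with each value
    estimated by Monte Carlo rollouts instead of a learned value network."""
    advantages = []
    prefix = ""
    v_prev = mc_value_estimate(policy, prefix, num_rollouts, generate_completion, reward)
    for step in steps:
        prefix += step
        v_next = mc_value_estimate(policy, prefix, num_rollouts, generate_completion, reward)
        advantages.append(v_next - v_prev)
        v_prev = v_next
    return advantages


# Toy usage with dummy stand-ins (illustration only):
if __name__ == "__main__":
    gen = lambda policy, prefix: " ... final answer: 42"       # dummy decoder
    rew = lambda text: 1.0 if text.endswith("42") else 0.0     # dummy verifier
    print(step_advantages(None, ["Step 1.", " Step 2."], 4, gen, rew))
```

In this framing, the resulting per-step advantages would take the place of the value network's estimates in the standard PPO update; rollouts can be sampled in parallel by the inference engine, which is what keeps the approach practical despite the extra generation.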