@zstevenwu
nice....
I was lucky enough to be invited to give a talk on our new paper on the value of RL in fine-tuning at Cornell last week! Because of my poor time management skills, the talk isn't as polished as I'd like, but I think the "vibes" are accurate enough to share: youtu.be/E4b3cSirpsg.
1.5 yrs ago, we set out to answer a seemingly simple question: what are we *actually* getting out of RL in fine-tuning? I'm thrilled to share a pearl we found on the deepest dive of my PhD: the value of RL in RLHF seems to come from *generation-verification gaps*. Get ready to 🤿:
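(Editor's toy illustration, not the paper's method: a generation-verification gap means scoring a candidate answer is much easier than producing one, so a weak generator plus a cheap verifier, here simple best-of-n reranking, already beats the generator alone. All names and numbers below are made up.)

```python
# Toy sketch of a generation-verification gap: verifying is easy, generating is hard,
# so reranking generator samples with a verifier (best-of-n) closes much of the gap.
import random

random.seed(0)
TARGET = 17  # hypothetical "correct" answer the generator rarely produces directly


def generate() -> int:
    """Weak generator: mostly wrong guesses, occasionally the right answer."""
    return TARGET if random.random() < 0.1 else random.randint(0, 99)


def verify(candidate: int) -> float:
    """Cheap verifier: checking a candidate is easy even when generating is hard."""
    return 1.0 if candidate == TARGET else 0.0


def best_of_n(n: int) -> int:
    """Sample n candidates and return the one the verifier scores highest."""
    candidates = [generate() for _ in range(n)]
    return max(candidates, key=verify)


trials = 1000
single = sum(verify(generate()) for _ in range(trials)) / trials
reranked = sum(verify(best_of_n(16)) for _ in range(trials)) / trials
print(f"generator alone:       {single:.2f} accuracy")
print(f"verifier + best-of-16: {reranked:.2f} accuracy")
```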
can you present other people's results :-)
that makes sense to me.... i should go to bed....
@gswamy.bsky.social et al. propose SPO, which builds a game from preferences and solves for the minimax winner. Handles non-Markovian, intransitive, and stochastic preferences. Nice empirical eval ranging from small demonstrative domains to huge RL domains (MuJoCo).
arxiv.org/abs/2401.04056
2/3.
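(Editor's hand-rolled sketch of the minimax-winner idea from the SPO post above, not SPO's actual training loop: pairwise preference probabilities define a symmetric zero-sum game, and multiplicative-weights self-play approximates its equilibrium mixed strategy, which is well defined even for intransitive preferences. The preference matrix P below is invented for illustration.)

```python
# Compute an approximate minimax (von Neumann) winner of a preference matrix
# by multiplicative-weights self-play on the induced symmetric zero-sum game.
import numpy as np

# P[i, j] = probability that response i is preferred to response j.
# This toy matrix is intransitive (rock-paper-scissors style): 0 beats 1,
# 1 beats 2, 2 beats 0, so no single response is a deterministic winner.
P = np.array([
    [0.5, 0.9, 0.1],
    [0.1, 0.5, 0.9],
    [0.9, 0.1, 0.5],
])

A = P - P.T  # antisymmetric payoff matrix of the induced zero-sum game


def minimax_winner(A: np.ndarray, steps: int = 5000, eta: float = 0.1) -> np.ndarray:
    """Approximate the minimax winner via multiplicative-weights self-play;
    the time-averaged strategy converges to an equilibrium of the game."""
    n = A.shape[0]
    p = np.full(n, 1.0 / n)
    avg = np.zeros(n)
    for _ in range(steps):
        payoffs = A @ p           # each response's payoff vs. the current mixture
        p = p * np.exp(eta * payoffs)
        p /= p.sum()
        avg += p
    return avg / steps


p_star = minimax_winner(A)
print("minimax winner:", np.round(p_star, 3))            # ~uniform over the 3 responses
print("best pure-response edge:", np.round((A @ p_star).max(), 3))  # close to 0: nothing beats it
```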
I have become a fan of the game-theoretic approaches to RLHF, so here are two more papers in that category! (with one more tomorrow)
1. Self-Play Preference Optimization (SPO).
2. Direct Nash Optimization (DNO).
🧵 1/3.