Check out the full paper for more details. And download our code to play with the assistant!
arxiv.org/abs/2504.07091
github.com/cassidylaidl...
We conclude our paper with a vision of how AssistanceZero could be applied to post-training of LLMs. We think that our approach could remove incentives for deception and other unsafe behavior in LLMs and make them more helpful. We may or may not already be working on this.
Real human users rate our AssistanceZero assistant much higher than one trained via a pretraining+SFT pipeline! And, it enables people to build houses while placing fewer blocks than building alone.
Our new RL algorithm, AssistanceZero, trains an assistant that displays emergent helpful behaviors like *active learning* and *learning from corrections*.
In Minecraft, we use an assistance game formulation where a simulated human is given random houses to build, and an AI assistant learns via RL to help the human out. The assistant can't see the goal house, so it has to predict the goal and maintain uncertainty to be helpful.
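As a sketch, the assistant's goal prediction can be viewed as a Bayesian belief over candidate goal houses, updated after each observed human action. The `update_goal_belief` helper and the likelihood numbers below are hypothetical; the paper's assistant learns its goal predictor with a neural network.

```python
import numpy as np

def update_goal_belief(belief, likelihoods):
    """One Bayesian update of the assistant's belief over candidate goal
    houses: multiply the prior by P(observed human action | goal) and
    renormalize. Purely illustrative."""
    posterior = belief * likelihoods
    return posterior / posterior.sum()

# Uniform prior over three candidate houses; the human's last block
# placement is most likely under goal 0 (made-up numbers).
belief = np.full(3, 1.0 / 3.0)
belief = update_goal_belief(belief, np.array([0.9, 0.05, 0.05]))
```

Maintaining the full posterior, rather than committing to one guess, is what lets the assistant stay useful while the goal is still ambiguous.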
Unlike RLHF, assistance games explicitly treat the user-assistant interaction as a two-player game, where the user knows their goal but the assistant doesn't. AGs model *communication* about the goal from the user to the assistant and *collaboration* between them to achieve it.
A better assistant would maintain *uncertainty* about its goal and ask clarification questions until it really understood, leading to a better solution. Assistance games can enable this.
RLHF is great but it encourages short-term optimization: trying to solve the user's entire problem in a single response. For example, if you ask ChatGPT to "clean up some disk space," it will immediately give you a program to run without asking which files are okay to delete!
We built an AI assistant that plays Minecraft with you.
Start building a house, and it figures out what you're doing and jumps in to help.
This assistant *wasn't* trained with RLHF. Instead, it's powered by *assistance games*, a better path forward for building AI assistants. 🧵
Our work provides a more principled step towards preventing reward hacking and ensuring the safety of increasingly powerful AI. Check out the paper for all the details!
arxiv.org/abs/2403.03185
Joint with @shivs01.bsky.social and Anca Dragan
Action distribution and occupancy measure regularization are equivalent for most of today's RLHF implementations (which are effectively contextual bandits). However, once LLMs are optimized for multi-turn interaction or tool use this will no longer be the case.
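The contextual-bandit equivalence is easy to check numerically: when the occupancy measure factors as ρ(s)·π(a|s), occupancy-measure divergence is just the context-averaged action-distribution divergence. A minimal sketch with made-up numbers (all names and values are illustrative, not from our code):

```python
import numpy as np

def chi2(p, q):
    """Chi-squared divergence between discrete distributions p and q."""
    return float(np.sum(q * (p / q - 1.0) ** 2))

rho = np.array([0.4, 0.6])               # context distribution
pi = np.array([[0.7, 0.3], [0.2, 0.8]])  # optimized policy, per context
pi0 = np.array([[0.5, 0.5], [0.5, 0.5]]) # base policy, per context

# Occupancy measures factor as rho(s) * pi(a|s) in a contextual bandit.
occ, occ0 = rho[:, None] * pi, rho[:, None] * pi0
lhs = chi2(occ.ravel(), occ0.ravel())                    # occupancy divergence
rhs = sum(rho[s] * chi2(pi[s], pi0[s]) for s in range(2))  # averaged action divergence
```

In multi-turn or tool-use settings the occupancy measure no longer factors this way, and the two regularizers come apart.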
Experiments show that χ² occupancy measure regularization outperforms KL action distribution regularization in all the environments we study! Our regularization scheme allows for larger improvements in true reward compared to base policies while preventing reward hacking.
Regularization is already used to prevent reward hacking in RLHF, but our theory suggests two key changes: regularize based on occupancy measures rather than action distributions and use χ² divergence instead of KL divergence.
Our definition also leads to a principled method for preventing reward hacking: regularize optimization to the base policy based on χ² occupancy measure divergence. We prove that this regularized objective gives a lower bound on improvement in the true reward.
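In a small discrete MDP, where occupancy measures are just probability vectors over state-action pairs, the regularized objective can be sketched in a few lines. The helper names and the λ coefficient are illustrative assumptions:

```python
import numpy as np

def chi2_divergence(mu, mu0, eps=1e-12):
    """Chi-squared divergence between discrete occupancy measures:
    D_chi2(mu || mu0) = sum_x mu0(x) * (mu(x) / mu0(x) - 1)^2."""
    mu0 = np.clip(mu0, eps, None)
    return float(np.sum(mu0 * (mu / mu0 - 1.0) ** 2))

def regularized_objective(proxy_reward, mu, mu0, lam):
    """Proxy return under occupancy mu, minus lambda times the chi^2
    divergence to the base policy's occupancy mu0 (illustrative)."""
    return float(proxy_reward @ mu) - lam * chi2_divergence(mu, mu0)
```

Staying at the base policy (mu equal to mu0) incurs zero penalty, and the χ² penalty grows quadratically as the optimized occupancy drifts away, which is what makes a lower bound on true-reward improvement possible.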
We define reward hacking as when optimizing a proxy breaks the correlation, resulting in lower true reward than the base policy. Our definition captures intuitive cases of reward hacking in realistic environments, including RLHF, traffic control, and glucose monitoring.
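In a toy discrete setting this definition can be checked directly from occupancy measures: the optimized policy gains proxy reward but loses true reward relative to the base policy. A sketch with hypothetical names, simplified from the paper's formalism:

```python
import numpy as np

def is_reward_hacking(true_r, proxy_r, base_occ, opt_occ):
    """Flag reward hacking: optimizing the proxy moved the occupancy
    measure so that proxy return rose while true return fell below the
    base policy's (illustrative check only)."""
    proxy_gain = float(proxy_r @ opt_occ) - float(proxy_r @ base_occ)
    true_gain = float(true_r @ opt_occ) - float(true_r @ base_occ)
    return proxy_gain > 0 and true_gain < 0
```

The point of the definition is exactly this asymmetry: the proxy keeps improving while the true objective, which we cannot evaluate during training, silently degrades.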
We argue that a good proxy *correlates* with the true reward for states and actions sampled from some reasonable "base policy." For example, in RLHF a natural base policy is the SFT policy.
However, formally defining reward hacking is tricky because we have to define what makes a proxy reward "reasonable." If we optimize a reward function that's totally unrelated to our objective, then it's unsurprising that it doesn't work and it arguably isn't "reward hacking."
Reward hacking is when we optimize a reward function that seems reasonable, but it ceases to be a good proxy and we end up with a policy that performs poorly under the unknown "true" reward function. It's ubiquitous because real-world objectives are really hard to specify.
When RLHFed models engage in "reward hacking" it can lead to unsafe/unwanted behavior. But there isn't a good formal definition of what this means! Our new paper provides a definition AND a method that provably prevents reward hacking in realistic settings, including RLHF. 🧵
Thanks for the shoutout! And for giving me a reason to finally get on Bluesky.
We introduce the effective horizon, a property of MDPs that controls how difficult RL is. Our analysis is motivated by Greedy Over Random Policy (GORP), a simple Monte Carlo planning algorithm (left) that exhaustively explores action sequences of length k and then uses m random rollouts to evaluate each leaf node. The effective horizon combines both k and m into a single measure. We prove sample complexity bounds based on the effective horizon that correlate closely with the real performance of PPO, a deep RL algorithm, on our BRIDGE dataset of 155 deterministic MDPs (right).
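The GORP procedure described in the caption can be sketched in a few lines, assuming a deterministic `step(state, action) -> (next_state, reward)` transition function and a `rollout(state, horizon)` return estimator; both interfaces are hypothetical, not from our released code.

```python
import itertools

def gorp_action(state, actions, step, rollout, k=2, m=10, horizon=20):
    """One GORP decision: exhaustively enumerate every length-k action
    sequence, score each leaf with the mean of m random rollouts, and
    commit to the first action of the best-scoring sequence."""
    best_seq, best_value = None, float("-inf")
    for seq in itertools.product(actions, repeat=k):
        s, value = state, 0.0
        for a in seq:            # follow the candidate sequence
            s, r = step(s, a)
            value += r
        # Estimate the remaining return with m random rollouts.
        value += sum(rollout(s, horizon) for _ in range(m)) / m
        if value > best_value:
            best_seq, best_value = seq, value
    return best_seq[0]
```

Intuitively, the effective horizon is small when a modest k and m already let GORP act near-optimally, and that is exactly when deep RL tends to succeed.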
Kind of a broken record here but proceedings.neurips.cc/paper_files/...
This paper is totally fascinating in that it postulates two underlying, measurable structures that you can use to assess whether RL will be easy or hard in an environment.