Apply to CMU LTI's Summer 2026 "Language Technology for All" internship! Open to pre-doctoral students new to language tech (non-CS backgrounds welcome). 12–14 weeks in-person in Pittsburgh; travel + stipend paid. Deadline: Feb 20, 11:59pm ET. Apply: forms.gle/cUu8g6wb27Hs...
New paper: Reward Models (RMs) are used to align LLMs, but can they be steered toward user-specific value/style preferences?
With EVALUESTEER, we find even the best RMs we tested exhibit their own value/style biases, and are unable to align with a user >25% of the time.
Thanks to my collaborators @kghate.bsky.social @monadiab77.bsky.social @daniel-fried.bsky.social @atoosakz.bsky.social @maxkw.bsky.social
for their support in making this work possible!
Please reach out if you'd like to chat about this work! We hope ConflictScope helps researchers study how models handle value conflicts that matter to their communities.
Code and data: github.com/andyjliu/con...
arXiv: www.arxiv.org/abs/2509.25369
ConflictScope can also be used to evaluate different approaches to steering models. We find that including detailed target rankings in system prompts consistently improves model alignment with the target ranking under conflict, though plenty of room for improvement remains.
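For concreteness, here's a minimal sketch of that kind of steering prompt: a system prompt that spells out an explicit target value ranking. The wording and the `steering_system_prompt` helper are illustrative assumptions, not the exact prompts used in the paper.

```python
def steering_system_prompt(target_ranking):
    """Build a system prompt that states an explicit target value ranking.
    Illustrative wording only; the paper's steering prompts may differ."""
    ordered = " > ".join(target_ranking)
    return (
        "You are a helpful assistant. When values conflict, prioritize them "
        f"in this order (highest first): {ordered}. "
        "If you must sacrifice a lower-ranked value, say so briefly."
    )

# e.g. steer toward a hypothetical target ranking
print(steering_system_prompt(["harmlessness", "honesty", "helpfulness"]))
```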
We find significant shifts between modelsβ expressed and revealed preferences under conflict! Models say they prefer actions that support protective values (e.g. harmlessness) when asked directly, but support personal values (e.g. helpfulness) in more realistic evaluations.
To address issues with multiple-choice evaluation, we focus on open-ended evaluation with a simulated user. Annotation studies show strong correlation between LLM and human judgments of which action a model took in a given scenario, allowing us to automate open-ended evaluations.
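Roughly, the judging step looks like this: show an LLM judge the scenario, the candidate actions, and the assistant's open-ended reply, then ask which action the reply amounts to. A minimal sketch, where the prompt wording and the `call_llm` helper are placeholders rather than ConflictScope's actual implementation:

```python
def build_judge_prompt(scenario, actions, reply):
    """Ask an LLM judge which candidate action the assistant's
    open-ended reply effectively took."""
    options = "\n".join(f"({chr(65 + i)}) {a}" for i, a in enumerate(actions))
    return (
        f"Scenario: {scenario}\n\n"
        f"Assistant's reply to the simulated user:\n{reply}\n\n"
        f"Which action did the assistant effectively take?\n{options}\n"
        "Answer with a single letter."
    )

# judge_label = call_llm(build_judge_prompt(scenario, actions, reply))
# where call_llm is whatever chat-completion client you already use.
```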
We introduce new metrics to measure how morally challenging a dataset is for models. We find that ConflictScope produces datasets that elicit more disagreement and stronger preferences than moral dilemma datasets, while alignment data frequently elicits indifference from models.
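To give a flavor of what such metrics can look like, here is one simple way to operationalize them (these definitions are illustrative assumptions, not necessarily the paper's): preference strength as distance from a 50/50 split, and disagreement as the fraction of model pairs that support different values on the same scenario.

```python
def preference_strength(p_support_a):
    """1.0 when a model always supports one value in a scenario,
    0.0 when it is indifferent (50/50). Illustrative definition."""
    return abs(2 * p_support_a - 1)

def disagreement(choices):
    """Fraction of model pairs supporting different values on the same
    scenario; `choices` maps model name -> value it supported."""
    models = list(choices)
    pairs = [(a, b) for i, a in enumerate(models) for b in models[i + 1:]]
    return sum(choices[a] != choices[b] for a, b in pairs) / len(pairs) if pairs else 0.0
```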
Given a set of values, ConflictScope generates scenarios in which an LLM-based assistant faces a conflict between a pair of values in the set. It then evaluates which value a target LLM supports more in each scenario before combining scenario-level judgments into a value ranking.
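The aggregation step can be as simple as counting, per value, how often the target model's chosen action supported it across scenarios, then sorting. A minimal sketch with made-up values and outcomes; win-count aggregation is an assumption here, and the paper's actual aggregation may differ.

```python
from collections import defaultdict

def rank_values(judgments):
    """judgments maps a (value_a, value_b) pair to a list of outcomes,
    one per conflict scenario, naming the value the model supported more."""
    wins = defaultdict(int)
    for (a, b), outcomes in judgments.items():
        wins[a] += 0  # ensure values with no wins still appear
        wins[b] += 0
        for winner in outcomes:
            wins[winner] += 1
    return sorted(wins, key=wins.get, reverse=True)

judgments = {
    ("helpfulness", "harmlessness"): ["helpfulness", "helpfulness", "harmlessness"],
    ("helpfulness", "honesty"): ["honesty", "honesty"],
    ("harmlessness", "honesty"): ["honesty"],
}
print(rank_values(judgments))  # ['honesty', 'helpfulness', 'harmlessness']
```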
New paper: LLM developers aim to align models with values like helpfulness or harmlessness. But when these conflict, which values do models choose to support? We introduce ConflictScope, a fully-automated evaluation pipeline that reveals how models rank values under conflict.
(image: xkcd)
Placing LLMs in simulated markets helps us quantitatively and qualitatively measure their propensity to collude, as well as how environmental changes affect this. Read below or find @veronateo.bsky.social at the ICML multi-agent systems workshop to learn more!
very cool!
CMU LTI is hosting predoc interns this summer, centered around "Language Technologies for All"! Please apply and circulate! lti.cs.cmu.edu/news-and-eve...
these are great, thanks! will check them out
started Axiomatic but didn't get very far - Permutation City looks fun though, thanks
looking for 2025 book recs!
things i've previously liked, for reference -
nonfiction: the structure of scientific revolutions, cybernetic revolutionaries, seeing like a state
fiction: stories of your life and others, one hundred years of solitude, project hail mary, recursion
PRISM has preference scores for different models that you can convert into pairwise labels
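In case it helps, a minimal sketch of that conversion: take each user's scores for the responses to the same turn and emit chosen/rejected pairs. Field names here are illustrative, not PRISM's actual schema.

```python
from itertools import combinations

def to_pairwise(responses):
    """responses: list of (text, score) for the same conversation turn,
    scored by the same user. Ties are skipped."""
    pairs = []
    for (text_a, score_a), (text_b, score_b) in combinations(responses, 2):
        if score_a == score_b:
            continue
        chosen, rejected = (text_a, text_b) if score_a > score_b else (text_b, text_a)
        pairs.append({"chosen": chosen, "rejected": rejected})
    return pairs
```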
Looking for all your LTI friends on Bluesky? The LTI Starter Pack is here to help!
go.bsky.app/NhTwCVb
could I be added? thanks for curating :)