Thank you venkat!
Thank you kanishka!
If you are interested in legal language, you should also check it out, as we focus on real courtroom cross-examination dialogues.
Here is the paper website: asherz720.github.io/SDA/
We explore how strategic effectiveness can be quantified through a set of discourse properties, and evaluate a suite of LLMs on how well they understand such strategic dialogues in adversarial settings.
Very excited to share that the paper w/
@jessyjli.bsky.social @DavidBeaver
"Strategic Dialogue Assessment: The Crooked Path to Innocence" (formerly named CoBRA) was accepted at Dialogue and Discourse, Vol. 17 No. 1. Check it out! 🔗 https://journals.uic.edu/ojs/index.php/dad/article/view/14503
Current LLMs are unable to break out of cooperative principles and still show a limited understanding of strategic language. We believe this work lays the foundation for sophisticated strategic reasoning and safety monitoring in downstream tasks.
🔗 arxiv.org/abs/2506.01195
By analyzing model reasoning, we find that extra reasoning introduces overcomplication (img left), misunderstanding, and internal inconsistency (img right). This shows that current LLMs still lack sophisticated pragmatic understanding in many ways.
We evaluate a range of LLMs on how well they perceive strategic language. Models struggle with our metrics while showing an overall good understanding of Gricean principles. Model size tends to have a positive effect, while reasoning does not help.
(2) BaT and PaT are valid measures of strategic gains/losses that can, to some extent, predict conversational outcomes. In addition, our metrics are more objective: when conditioned on cases where the outcome rests on logical arguments, their predictive power rises.
We also introduce CHARM, an annotated dataset of real legal cross-examination dialogues. Applying our framework, we show that (1) cooperative and non-cooperative discourse are distinct over the identified properties (img left), and BaT and PaT show the same distributional distinction (img right).
Based on the components above, we introduce three metrics: Benefit at Turn (BaT), Penalty at Turn (PaT), and Normalized Relative Benefit at Turn (NRBaT), to measure the strategic gains, losses, and cumulative benefit at a turn.
For example, a witness can make a commitment that leads to a win for her, but violate the maxim of manner to make herself less accountable for that commitment. The commitment itself is beneficial, but the gains are reduced by the vagueness.
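To make the intuition concrete, here is a toy sketch of how such turn-level metrics could compose: a commitment contributes a base value, maxim violations subtract a compensation, and the cumulative balance is normalized. All formulas and function names here are illustrative assumptions, not the paper's actual definitions.

```python
# Toy sketch only: base values, penalty terms, and the normalization below
# are assumed for illustration; the paper defines the real BaT/PaT/NRBaT.

def benefit_at_turn(base_value: float, compensation: float) -> float:
    """BaT (toy): gain from a beneficial commitment, reduced by the cost
    of maxim violations (e.g. vagueness) used to stay less accountable."""
    return max(base_value - compensation, 0.0)

def penalty_at_turn(base_loss: float, penalty: float) -> float:
    """PaT (toy): loss from a damaging commitment, plus penalties
    incurred by maxim violations."""
    return base_loss + penalty

def nrbat(bats: list, pats: list) -> float:
    """NRBaT (toy): cumulative benefit normalized to [-1, 1]."""
    total = sum(bats) + sum(pats)
    return (sum(bats) - sum(pats)) / total if total else 0.0

# The witness example: a winning commitment (base value 1.0) made vague,
# which trades 0.25 of the gain for reduced accountability.
gain = benefit_at_turn(1.0, 0.25)
print(gain)                   # 0.75: beneficial, but reduced by vagueness
print(nrbat([gain], [0.0]))   # 1.0: purely beneficial so far
```

The point of the sketch is only the decomposition from the thread: a move's value is its commitment's base value, adjusted by penalties/compensations from maxim violations, then aggregated per turn.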
We derive non-cooperativity from both Gricean and game-theoretic pragmatics. In our framework, a strategic move is evaluated based on two components: the commitment it expresses (base value) and the violation of maxims to maintain consistency (penalties/compensations).
Language is often strategic, but LLMs tend to play nice. How strategic are they really? Probing into that is key for future safety alignment.
🐍Introducing CoBRA🐍, a framework that assesses strategic language.
Work with my amazing advisors @jessyjli.bsky.social and @David I. Beaver!
Have that eerie feeling of déjà vu when reading model-generated text, but can't pinpoint the specific words or phrases?
✨We introduce QUDsim, to quantify discourse similarities beyond lexical, syntactic, and content overlap.