
Jessy Li

@jessyjli

https://jessyli.com Associate Professor, UT Austin Linguistics. Part of UT Computational Linguistics https://sites.utexas.edu/compling/ and UT NLP https://www.nlp.utexas.edu/

2,482
Followers
467
Following
58
Posts
21.09.2023
Joined

Latest posts by Jessy Li @jessyjli

Check out our special theme: new missions for NLP research!

05.03.2026 22:39 πŸ‘ 12 πŸ” 5 πŸ’¬ 1 πŸ“Œ 1
Title card of our paper: "Which course? Discourse! Teaching Discourse and Generation in the Era of LLMs" by Junyi Jessy Li, Yang Janet Liu, Valentina Pyatkin, and William Sheffield.


Nearly 2 years ago, @jessyjli.bsky.social, @janetlauyeung.bsky.social, @valentinapy.bsky.social, and I decided that it was time to bring discourse structure to the center of NLP teaching.

05.02.2026 03:53 πŸ‘ 11 πŸ” 3 πŸ’¬ 2 πŸ“Œ 0

Check out @asher-zheng.bsky.social's work on quantifying strategic language in dialogue, just appeared in the Dialogue and Discourse journal.
We study non-cooperative moves that are subtle to capture and that modern AI still has trouble comprehending.
Work w/ David Beaver

31.01.2026 15:44 πŸ‘ 6 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Title page of our paper: "Bears, all bears, and some bears. Language Constraints on Language Models' Inductive Inferences"


β€œAll bears have a property”, β€œSome bears have a property”, β€œBears have a property” are different in terms of how the property is generalized to a specific bear – a great example of how language constrains thought!

This holds for kids, adults, and according to our new work, (V)LMs! 🧡

27.01.2026 16:16 πŸ‘ 24 πŸ” 10 πŸ’¬ 1 πŸ“Œ 1

🚨Be careful with LLMs when you ask health-related questions -- even when the model relies on "evidence"! Kaijie's paper reveals a key weakness and the tricky balance between safety and faithfulness πŸ‘‰

21.01.2026 19:28 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Accepted at EACL - excited about Morocco!

04.01.2026 14:52 πŸ‘ 5 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
Screenshot of a figure with two panels, labeled (a) and (b). The caption reads: "Figure 1: (a) Illustration of messages (left) and strings (right) in toy domain. Blue = grammatical strings. Red = ungrammatical strings. (b) Surprisal (negative log probability) assigned to toy strings by GPT-2."


New work to appear @ TACL!

Language models (LMs) are remarkably good at generating novel well-formed sentences, leading to claims that they have mastered grammar.

Yet they often assign higher probability to ungrammatical strings than to grammatical strings.

How can both things be true? πŸ§΅πŸ‘‡

10.11.2025 22:11 πŸ‘ 91 πŸ” 20 πŸ’¬ 2 πŸ“Œ 3
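The puzzle in the thread above can be made concrete with a toy model. This is a minimal sketch, not the paper's setup: it assumes a made-up unigram LM (the `toy_lm` probabilities are invented for illustration) rather than GPT-2, and shows how an ungrammatical string built from frequent tokens can receive lower total surprisal (higher probability) than a grammatical string containing a rarer token.

```python
import math

# Hypothetical toy unigram LM -- token probabilities invented for illustration.
toy_lm = {"the": 0.20, "cat": 0.05, "cats": 0.01, "sleep": 0.04}

def surprisal(tokens, lm):
    """Total surprisal (negative log probability, in nats) of a token sequence."""
    return -sum(math.log(lm[t]) for t in tokens)

grammatical = ["the", "cats", "sleep"]   # well-formed, but uses a rarer token
ungrammatical = ["the", "cat", "sleep"]  # agreement error, but frequent tokens

# The frequent-but-ungrammatical string gets lower surprisal here, even though
# the grammatical one is the well-formed sentence -- the tension the paper probes.
print(surprisal(grammatical, toy_lm))    # ~9.43 nats
print(surprisal(ungrammatical, toy_lm))  # ~7.82 nats
```

Both claims can then be true at once: the model can reliably generate well-formed strings while still ranking some ungrammatical strings above grammatical ones, because probability conflates grammaticality with token frequency and length.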

Incredibly honored to serve as #EMNLP 2026 Program Chair along with @sunipadev.bsky.social and Hung-yi Lee, and General Chair @andre-t-martins.bsky.social. Looking forward to Budapest!!

(With thanks to Lisa Chuyuan Li who took this photo in Suzhou!)

08.11.2025 02:39 πŸ‘ 18 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0

Delighted Sasha's (first year PhD!) work using mech interp to study complex syntax constructions won an Outstanding Paper Award at EMNLP!

Also delighted the ACL community continues to recognize unabashedly linguistic topics like filler-gaps... and the huge potential for LMs to inform such topics!

07.11.2025 18:22 πŸ‘ 33 πŸ” 8 πŸ’¬ 1 πŸ“Œ 0

Think your LLMs β€œunderstand” words like although/but/therefore? Think again!

They perform at chance when making inferences from certain discourse connectives expressing concession

16.10.2025 17:02 πŸ‘ 19 πŸ” 3 πŸ’¬ 0 πŸ“Œ 0
PLSemanticsBench: Large Language Models As Programming Language Interpreters As large language models (LLMs) excel at code reasoning, a natural question arises: can an LLM execute programs (i.e., act as an interpreter) purely based on a programming language's formal semantics?...

Test your models and see if they just memorize or truly understand!

PLSemanticsBench - where formal meets informal!

arxiv.org/abs/2510.03415

Team: Aditya Thimmaiah, Jiyang Zhang, Jayanth Srinivasa, Milos Gligoric

14.10.2025 02:32 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

So what's really happening⁉️
LLMs aren't interpreting rules -- they're recalling patterns.
Their "understanding" is promising... but shallow.

πŸ’‘It's time to test semantics, not just syntax.πŸ’‘
To move from surface-level memorization β†’ true symbolic reasoning.

14.10.2025 02:32 πŸ‘ 7 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Change the rules -- swap (+ with -) or replace (+ with novel symbols) operators -- and accuracy collapses.
Models that were "near-perfect" drop to single digits. 😬

14.10.2025 02:32 πŸ‘ 5 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
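The operator-swap manipulation described above can be sketched in a few lines. This is a hypothetical illustration, not the PLSemanticsBench harness: it just shows what "swap + with -" means operationally, i.e., the same program must be evaluated under explicitly stated, nonstandard semantics rather than the conventions a model may have memorized.

```python
import operator

# Standard semantics vs. semantics where "+" and "-" trade meanings.
STANDARD = {"+": operator.add, "-": operator.sub}
SWAPPED = {"+": operator.sub, "-": operator.add}

def evaluate(tokens, semantics):
    """Left-to-right evaluation of a flat expression like [3, '+', 4, '-', 1]."""
    result = tokens[0]
    for op_sym, operand in zip(tokens[1::2], tokens[2::2]):
        result = semantics[op_sym](result, operand)
    return result

expr = [3, "+", 4, "-", 1]
print(evaluate(expr, STANDARD))  # 6 under the usual rules
print(evaluate(expr, SWAPPED))   # 0 once + and - trade meanings
```

A faithful interpreter applies whichever rule table it is given; a model that collapses from near-perfect to single digits under the swap is pattern-matching on the familiar symbols instead.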

🚨 Does your LLM really understand code -- or is it just really good at remembering it?
We built **PLSemanticsBench** to find out.
The results: a wild mix.

βœ…The Brilliant:
Top reasoning models can execute complex, fuzzer-generated programs -- even with 5+ levels of nested loops! 🀯

❌The Brittle: 🧡

14.10.2025 02:32 πŸ‘ 29 πŸ” 6 πŸ’¬ 1 πŸ“Œ 3

Find my students and collaborators at COLM this week!

Tuesday morning: @juand-r.bsky.social and @ramyanamuduri.bsky.social's papers (find them if you missed them!)

Wednesday pm: @manyawadhwa.bsky.social 's EvalAgent

Thursday am: @anirudhkhatry.bsky.social 's CRUST-Bench oral spotlight + poster

07.10.2025 18:03 πŸ‘ 9 πŸ” 5 πŸ’¬ 0 πŸ“Œ 1

We’re hiring faculty as well! Happy to talk about it at COLM!

08.10.2025 01:17 πŸ‘ 9 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0

Can we quantify what makes some text read like AI "slop"? We tried πŸ‘‡

24.09.2025 13:28 πŸ‘ 8 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
Language Models Fail to Introspect About Their Knowledge of Language There has been recent interest in whether large language models (LLMs) can introspect about their own internal states. Such abilities would make LLMs more interpretable, and also validate the use of s...

I’m at #COLM2025 from Wed with:

@siyuansong.bsky.social Tue am introspection arxiv.org/abs/2503.07513

@qyao.bsky.social Wed am controlled rearing: arxiv.org/abs/2503.20850

@sashaboguraev.bsky.social INTERPLAY ling interp: arxiv.org/abs/2505.16002

I’ll talk at INTERPLAY too. Come say hi!

06.10.2025 15:57 πŸ‘ 20 πŸ” 6 πŸ’¬ 1 πŸ“Œ 0

On my way to #COLM2025 🍁

Check out jessyli.com/colm2025

QUDsim: Discourse templates in LLM stories arxiv.org/abs/2504.09373

EvalAgent: retrieval-based eval targeting implicit criteria arxiv.org/abs/2504.15219

RoboInstruct: code generation for robotics with simulators arxiv.org/abs/2405.20179

06.10.2025 15:50 πŸ‘ 12 πŸ” 4 πŸ’¬ 0 πŸ“Œ 0
Both Direct and Indirect Evidence Contribute to Dative Alternation Preferences in Language Models Language models (LMs) tend to show human-like preferences on a number of syntactic phenomena, but the extent to which these are attributable to direct exposure to the phenomena or more general propert...

Traveling to my first @colmweb.org🍁

Not presenting anything but here are two posters you should visit:

1. @qyao.bsky.social on Controlled rearing for direct and indirect evidence for datives (w/ me, @weissweiler.bsky.social and @kmahowald.bsky.social), W morning

Paper: arxiv.org/abs/2503.20850

06.10.2025 15:22 πŸ‘ 13 πŸ” 5 πŸ’¬ 1 πŸ“Œ 0

Here is a genuine one :) CosmicAI’s AstroVisBench, to appear at #NeurIPS

bsky.app/profile/nsfs...

02.10.2025 14:03 πŸ‘ 2 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

All of us (@kanishka.bsky.social @kmahowald.bsky.social and me) are looking for PhD students this cycle! If computational linguistics/NLP is your passion, join us at UT Austin!

For my areas see jessyli.com

30.09.2025 19:30 πŸ‘ 4 πŸ” 5 πŸ’¬ 0 πŸ“Œ 0

Can AI aid scientists amidst their own workflows, when those workflows are not specified step by step, and when the scientific utility a visualization would bring may not be known in advance?

Check out @sebajoe.bsky.social’s feature on ✨AstroVisBench:

25.09.2025 20:52 πŸ‘ 8 πŸ” 3 πŸ’¬ 0 πŸ“Œ 0

πŸ“£ NEW HCTS course developed in collaboration with @tephi-tx.bsky.social: AI in Health Communication πŸ“£

Explore responsible applications and best practices for maximizing impact and building trust with @utaustin.bsky.social experts @jessyjli.bsky.social & @mackert.bsky.social.

πŸ’»: rebrand.ly/HCTS_AI

04.09.2025 17:02 πŸ‘ 2 πŸ” 1 πŸ’¬ 0 πŸ“Œ 1

Would be great to chat at COLM!

16.08.2025 05:11 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
NoCha leaderboard

Long-range narrative understanding, even basic fact checking that humans easily get near-perfect on, has barely improved in LMs over the years. novelchallenge.github.io

15.08.2025 15:55 πŸ‘ 9 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0
The top shows the title and authors of the paper: "Whither symbols in the era of advanced neural networks?" by Tom Griffiths, Brenden Lake, Tom McCoy, Ellie Pavlick, and Taylor Webb.

At the bottom is text saying "Modern neural networks display capacities traditionally believed to require symbolic systems. This motivates a re-assessment of the role of symbols in cognitive theories."

In the middle is a graphic illustrating this text by showing three capacities: compositionality, productivity, and inductive biases. For each one, there is an illustration of a neural network displaying it. For compositionality, the illustration is DALL-E 3 creating an image of a teddy bear skateboarding in Times Square. For productivity, the illustration is novel words produced by GPT-2: "IKEA-ness", "nonneotropical", "Brazilianisms", "quackdom", "Smurfverse". For inductive biases, the illustration is a graph showing that a meta-learned neural network can learn formal languages from a small number of examples.


πŸ€– 🧠 NEW PAPER ON COGSCI & AI 🧠 πŸ€–

Recent neural networks capture properties long thought to require symbols: compositionality, productivity, rapid learning

So what role should symbols play in theories of the mind? For our answer...read on!

Paper: arxiv.org/abs/2508.05776

1/n

15.08.2025 16:27 πŸ‘ 101 πŸ” 16 πŸ’¬ 8 πŸ“Œ 3

Yes, at the least you'd need other data (like Echos in AI) and a quality measure (LitBench); also, what we did in QUDsim was make sure the stories came from pre-LLM posts, to rule out AI-written stories. Further, the way they measure style + semantic diversity doesn't align with how they define it (it only captures the lexical level)

15.08.2025 13:20 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I agree this thread's headline claim seems premature. Let me add our recent ACL Findings paper, with Dexter Ju and @hagenblix.bsky.social, which found syntactic simplification in at least some LMs, in a novel domain regeneration setting: aclanthology.org/2025.finding...

15.08.2025 04:35 πŸ‘ 6 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

Nice, reading level, syntactic complexity, and sentence structures are great angles to study this!!

15.08.2025 05:20 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0