Kanishka Misra

@kanishka

Assistant Professor of Linguistics, and Harrington Fellow at UT Austin. Works on computational understanding of language, concepts, and generalization. 🕸️👁️: https://kanishka.website

2,528
Followers
288
Following
298
Posts
06.07.2023
Joined

Latest posts by Kanishka Misra @kanishka

Martin = one of the kindest people I know! Don’t miss this opportunity to learn from one of the best in their field!

12.03.2026 23:41 👍 2 🔁 0 💬 1 📌 0
Laboratory Coordinator - 138788 | Careers at UC San Diego

I'm hiring a new lab manager for my lab @ UCSD! For more info on the lab, check out our website: lillab.ucsd.edu

Target start date is June 1 (flexible) and application deadline is March 26. Please share with anyone you think might be a good fit!

Apply here: employment.ucsd.edu/laboratory-c...

12.03.2026 22:46 👍 22 🔁 21 💬 0 📌 2

📢 PhD position in Developmental Language Modelling
(PLZ RT)

What can human language acquisition teach us about training language models? Join us as a PhD student!
mpi.nl/career-education/vacancies/vacancy/fully-funded-4-year-phd-position-developmental-language @carorowland.bsky.social
@mpi-nl.bsky.social

10.03.2026 13:12 👍 23 🔁 33 💬 1 📌 2

Thanks to everyone who gave us feedback: @lampinen.bsky.social, Ellie Pavlick, @glupyan.bsky.social, @phillipisola.bsky.social, and others!

Work with Tianyang Xu, @mudtriangle.com, Karen Livescu, and Greg Shakhnarovich!

10.03.2026 20:53 👍 4 🔁 0 💬 0 📌 0

This relates more broadly to the literature reconciling how meaning obtained from relational grounding in language interacts with meaning obtained from other forms of grounding (see Mollo and Millière / @raphaelmilliere.com), and lays out a research program on the role of category coherence in learning!

11/

10.03.2026 20:53 👍 2 🔁 0 💬 1 📌 0

This suggests that representations learned from language are structured to expect incoming category information to cohere in a specific way in order to support cross-modal generalization!

10/

10.03.2026 20:53 👍 2 🔁 0 💬 1 📌 0
Results from counterfactual shuffling experiments. Models tend to generalize well when coherence was preserved and poorly when it was disrupted, even in the absence of all hypernyms.

If models were generalizing arbitrarily, then we shouldn’t see any differences in their performance across these settings (i.e., no matter what, crow == bird). However, we find that models seem to only generalize when the training data preserves category coherence!

9/

10.03.2026 20:53 👍 2 🔁 0 💬 1 📌 0
Macro F1 scores on unseen images vs. visual coherence across the 53 hypernym categories for the Qwen3-1.7B backbone (at 100% ablation). r (Pearson’s correlation) = .43, indicating a positive relationship.

By coherence we mean the visual similarity between members of the same category, which we calculate using the DINOv2 embeddings used in our VLM training. Even in the original configuration, we found that models perform better on categories that are visually more coherent.
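
As a rough illustration (not the authors' actual code), a coherence measure of this kind can be computed as the mean pairwise cosine similarity of a category's image embeddings. In the sketch below, random vectors stand in for real DINOv2 embeddings:

```python
# Hedged sketch: "visual coherence" as the mean pairwise cosine similarity
# between the image embeddings of a category's members. Random vectors are
# stand-ins for DINOv2 embeddings; the function itself is generic.
import numpy as np

def visual_coherence(embeddings: np.ndarray) -> float:
    """Mean cosine similarity over all unordered pairs of rows."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T                      # (n, n) cosine similarities
    iu = np.triu_indices(len(embeddings), k=1)    # each pair counted once
    return float(sims[iu].mean())

rng = np.random.default_rng(0)
tight = rng.normal(1.0, 0.1, size=(10, 64))   # a visually similar "category"
loose = rng.normal(0.0, 1.0, size=(10, 64))   # a visually dissimilar one
assert visual_coherence(tight) > visual_coherence(loose)
```

A category whose members cluster tightly in embedding space scores close to 1; an incoherent one scores near 0.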

8/

10.03.2026 20:53 👍 2 🔁 0 💬 1 📌 0
Examples of image-leaf mappings resulting from our counterfactual shuffles, in comparison with the original configuration (top). VC indicates the visual coherence of the category under the data configuration. VC for birds in the original set: .30; for within-category shuffles: .30; for across-category shuffle: .12.

To test this, we created counterfactual data: 1) where category-label pairings were shuffled across categories (🪛 = “robin”; 🎸 = “crow”) and 2) where they were shuffled within categories (🦅 = “robin”; 🦜 = “crow”). These swaps also manipulate the categories’ visual coherence.
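
The two shuffle types can be sketched as below; the (image, leaf label, hypernym) triples and category names are illustrative stand-ins, not the paper's dataset:

```python
# Hedged sketch of the two counterfactual shuffles: within-category keeps
# each image's hypernym intact, across-category can give a guitar image a
# bird label. Toy data; names are illustrative only.
import random

data = [
    ("eagle_img", "eagle", "bird"), ("parrot_img", "parrot", "bird"),
    ("guitar_img", "guitar", "instrument"), ("drum_img", "drum", "instrument"),
]

def shuffle_within(rows, seed=0):
    """Permute leaf labels only among items sharing a hypernym."""
    rng = random.Random(seed)
    out = []
    for hyper in sorted({h for _, _, h in rows}):
        group = [r for r in rows if r[2] == hyper]
        labels = [leaf for _, leaf, _ in group]
        rng.shuffle(labels)
        out += [(img, leaf, h) for (img, _, h), leaf in zip(group, labels)]
    return out

def shuffle_across(rows, seed=0):
    """Reassign whole (leaf, hypernym) label pairs to arbitrary images."""
    rng = random.Random(seed)
    pairs = [(leaf, h) for _, leaf, h in rows]
    rng.shuffle(pairs)
    return [(img, leaf, h) for (img, _, _), (leaf, h) in zip(rows, pairs)]
```

By construction, only the across-category shuffle can break the link between an image and its hypernym, which is what disrupts visual coherence.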

7/

10.03.2026 20:53 👍 2 🔁 0 💬 1 📌 0
figure depicting two hypotheses that models might entertain – 1) arbitrary prediction of hypernyms regardless of what the input looks like during supervision; 2) sensitivity to the fact that the category (e.g., birds) is not visually coherent.

Are LMs simply executing something like “IF crow THEN bird”, regardless of what the image shows? E.g., if during supervision we label images of kayaks as “crow”, would the model still generalize to birds, or does the model expect categories to have some level of coherence?

6/

10.03.2026 20:53 👍 2 🔁 0 💬 1 📌 0
Main results (see fig 4 in the paper). Salient result: models tend to generalize to hypernyms without any evidence encountered during training, suggesting that they show cross-modal generalization.

Having established these preconditions to our task, we then find that models are also able to generalize (non-trivially) to hypernyms without ever having “seen” them explicitly, suggesting that LM representations support cross-modal generalization!

5/

10.03.2026 20:53 👍 2 🔁 0 💬 1 📌 0
Left: plot showing that models using the DINOv2 encoder, which has never seen text, tend to generalize similarly to those using the SigLIP encoder, which has. Right: table showing that both Qwen3 LMs demonstrate non-trivial hypernymy knowledge.

We establish that this paradigm works in the first place with a vision encoder that has never been trained on language data (i.e., ❌ SigLIP, ✅ DINO), that the models learn the task on the lower-level categories themselves, and that the LMs indeed have taxonomic knowledge.

4/

10.03.2026 20:53 👍 3 🔁 0 💬 1 📌 0
3 papers on hypernym acquisition in models (Hearst, 1992; Geffet and Dagan, 2005) and humans (Wilson et al., 2023) - see paper for details.

Taxonomic knowledge is interesting because of a number of hypotheses about the learnability of category knowledge from linguistic cues, for both computational models and humans. Evidence of cross-modal generalization would lend strong support to these hypotheses!

3/

10.03.2026 20:53 👍 2 🔁 0 💬 1 📌 0
Figure depicting an instance of our experiments. During training, the projector is deprived of explicit supervision on high-level categories (hypernyms, e.g., animal) at various amounts, and is trained to detect the presence (and absence) of lower-level categories (e.g., koala), keeping the image encoder and the LM backbone frozen. After training, the VLM is tested for generalization to hypernym categories, given previously unseen images.

We use a VLM-training paradigm (frozen vision encoder w/o language training mapped to frozen LM) where we partially supervise on lower-level categories during training, and then test whether the LM recovers hypernymy knowledge from what it has seen in language data.
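
A minimal sketch of that kind of setup, with numpy stand-ins for the frozen pieces (the real models, data, and loss all differ): only the projector's weights move during training.

```python
# Hedged sketch: frozen "vision encoder" + fixed targets standing in for the
# LM's input space; only the linear projector is trained. Shapes, the tanh
# encoder, and the squared-error objective are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d_raw, d_img, d_lm = 200, 64, 32, 16

W_enc = rng.normal(size=(d_raw, d_img)) / np.sqrt(d_raw)  # frozen encoder
images = rng.normal(size=(n, d_raw))
targets = rng.normal(size=(n, d_lm))    # stand-in "LM-space" supervision

feats = np.tanh(images @ W_enc)         # frozen encoder forward pass

W_proj = np.zeros((d_img, d_lm))        # the only trainable parameters
for _ in range(200):                    # plain gradient descent on the
    pred = feats @ W_proj               # squared error (up to a constant)
    W_proj -= 0.1 * feats.T @ (pred - targets) / n

mse = float(((feats @ W_proj - targets) ** 2).mean())
```

Because the encoder and the "LM side" never change, any test-time generalization has to come from how the projector aligns image features with structure already present in the frozen language representations.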

2/

10.03.2026 20:53 👍 2 🔁 0 💬 1 📌 0
Title section of the paper: “Cross-Modal Taxonomic Generalization in (Vision) Language Models” by Tianyang Xu, Marcelo Sandoval-Castañeda, Karen Livescu, Greg Shakhnarovich, Kanishka Misra.

What is the interplay between representations learned from (language) surface forms alone, and those learned from more grounded evidence (e.g., vision)?

Excited to share new work understanding “Cross-modal taxonomic generalization” in (V)LMs

arxiv.org/abs/2603.07474

1/

10.03.2026 20:53 👍 32 🔁 12 💬 1 📌 0

I want to unwatch this

10.03.2026 19:42 👍 0 🔁 0 💬 1 📌 0

@tylerachang.bsky.social and I will be presenting the Goldfish as an oral at #LREC2026 in Mallorca! 🌴

09.03.2026 16:35 👍 16 🔁 4 💬 1 📌 0

Short post on what I call the "no-magic approach to understanding intelligent systems" — the philosophy I think of as motivating our work on understanding intelligence without resorting to magical thinking about AI or humans!
infinitefaculty.substack.com/p/the-no-mag...

07.03.2026 20:58 👍 32 🔁 5 💬 1 📌 1

🚨New Paper!🚨 How do reasoning LLMs handle inferences that have no deterministic answer? We find that they diverge from humans in some significant ways, and fail to reflect human uncertainty… 🧡(1/10)

04.03.2026 16:13 👍 55 🔁 20 💬 3 📌 1

Check out our special theme: new missions for NLP research!

05.03.2026 22:39 👍 12 🔁 5 💬 1 📌 1

What’s a paper that made you think that way 👀

05.03.2026 22:32 👍 1 🔁 0 💬 1 📌 0

I wrote a short article on AI Model Evaluation for the Open Encyclopedia of Cognitive Science 📕👇

Hope this is helpful for anyone who wants a super broad, beginner-friendly intro to the topic!

Thanks @mcxfrank.bsky.social and @asifamajid.bsky.social for this amazing initiative!

12.02.2026 22:22 👍 52 🔁 22 💬 0 📌 1

Congratulations Andreas!!

03.03.2026 19:37 👍 1 🔁 0 💬 0 📌 0

Some days you finish 5 meta-reviews in ~one go, and some days you take 1.5 days to complete one meta-review. Such is the AC life!

03.03.2026 15:35 👍 3 🔁 0 💬 0 📌 0

Woohoo, will be in touch soon!

03.03.2026 05:17 👍 1 🔁 0 💬 0 📌 0

Wow!! Good luck with whatever it is you do next β€” so excited for you!!

03.03.2026 05:17 👍 1 🔁 0 💬 1 📌 0

Watch Slow Horses already!!

02.03.2026 17:20 👍 0 🔁 0 💬 0 📌 0

Japonaise and Jahunger mentioned in same thread 😍 my fav places in Boston!

02.03.2026 17:19 👍 0 🔁 0 💬 0 📌 0
South by Semantics Workshop: "New horizons in evaluating pragmatic competence in language models", Jennifer Hu (Johns Hopkins University), March 6, 2026.

I'm looking forward to @jennhu.bsky.social's South by Semantics talk next week at UT Austin! She'll discuss "micro-pragmatics" inferences and world modeling in language models 🤖

01.03.2026 20:36 👍 8 🔁 2 💬 1 📌 0
Assistant Teaching Professor in Computational Social Science and Cognitive Science | University of California, San Diego

Our department is hiring an Assistant Teaching Professor!! This is a joint-appointed position with Computational Social Sciences (css.ucsd.edu). It's 75+ degrees F and sunny today, just thought I'd mention. apol-recruit.ucsd.edu/JPF04461

27.02.2026 14:42 👍 43 🔁 28 💬 1 📌 4