The submission deadline for CMCL is coming up in less than a month! (Feb 25) CMCL will be co-located with LREC and take place on May 16!🌴https://sites.google.com/view/cmclworkshop/cfp
Paper accepted to #EACL2026 main conference 🎉
@taniseceron.bsky.social, Sebastian Padó and I test multilingual LLMs before and after English-only fine-tuning and find strong cross-lingual political opinion transfer across five Western languages.
www.arxiv.org/abs/2508.05553
Does it matter how you prompt an LLM with a persona? Do LLMs respond differently to natural conversation history compared to names and explicit mentions? Go check out our new preprint! 👀
Our paper has been accepted to EACL 2026!🎉 We systematically evaluate several vision-language models (VLMs) and language-only models, measuring their alignment with brain responses to concept words. Our results show that vision-language models offer a promising tool for modeling human concept processing.
The CfP for CMCL is out!🌴 We are looking forward to receiving many interesting submissions! ✨ (Deadline: February 25, 2026) sites.google.com/view/cmclwor...
Thank you so much for having me! @milanlp.bsky.social 😊
🚨 New main paper out at #EMNLP2025! 🚨
⚡ We show that personalization of content moderation models can be harmful and perpetuate hate speech, defeating the purpose of the system and hurting the community.
We argue that personalized moderation needs boundaries, and we show how to build them.
Thrilled to be heading to Suzhou with a big team of GroNLP'ers 🐮
Interested in Interpretable, Cognitively inspired, Low-resource LMs? Don't miss our posters & talks #EMNLP2025!
Next week, I'll be at #EMNLP presenting our work "Reading Between the Prompts: How Stereotypes Shape LLM's Implicit Personalization" 🎉
📍 Ethics, Bias, and Fairness (Poster Session 2)
📅 Wed, November 5, 11:00-12:30 - Hall C
📖 Check the paper! arxiv.org/abs/2505.16467
See you in Suzhou! 👋
Three-panel thing. In the left panel we use error bars. In the second, we take statistical significance to be the biggest number but still have error bars. In LLM science, we just take the biggest number
What if we did a single run and declared victory
🌍Introducing BabyBabelLM: A Multilingual Benchmark of Developmentally Plausible Training Data!
LLMs learn from vastly more data than humans ever experience. BabyLM challenges this paradigm by focusing on developmentally plausible data
We extend this effort to 45 new languages!
Hi all, there is a postdoc position open in the group I'm currently based in! ✨ Let me know if you are interested or have questions 🙂 Please share if you know someone who might be interested www.uu.nl/en/organisat...
📢 Are you interested in a PhD in #NLProc to study and improve how AI models model emotions and social signals?
🚨Exciting news:🚨 I’m hiring a PhD candidate at LIACS,
@unileiden.bsky.social.
📍 Leiden, The Netherlands
📅 Deadline: 17 Nov 2025
👉 Position details and application link: tinyurl.com/5x5v6zsa
🚨 Are you looking for a PhD in #NLProc dealing with #LLMs?
🎉 Good news: I am hiring! 🎉
The position is part of the "Contested Climate Futures" project. 🌱🌍 You will focus on developing next-generation AI methods🤖 to analyze climate-related concepts in content, including texts, images, and videos.
Q. Who aligns the aligners?
A. alignmentalignment.ai
Today I’m humbled to announce an epoch-defining event: the launch of the 𝗖𝗲𝗻𝘁𝗲𝗿 𝗳𝗼𝗿 𝘁𝗵𝗲 𝗔𝗹𝗶𝗴𝗻𝗺𝗲𝗻𝘁 𝗼𝗳 𝗔𝗜 𝗔𝗹𝗶𝗴𝗻𝗺𝗲𝗻𝘁 𝗖𝗲𝗻𝘁𝗲𝗿𝘀.
Interspeech paper title: What do self-supervised speech models know about Dutch? Analyzing advantages of language-specific pre-training
Authors: Marianne de Heer Kloots, Hosein Mohebbi, Charlotte Pouw, Gaofei Shen, Willem Zuidema, Martijn Bentum
✨ Do self-supervised speech models learn to encode language-specific linguistic features from their training data, or only more language-general acoustic correlates?
At #Interspeech2025 we presented our new Wav2Vec2-NL model and SSL-NL evaluation dataset to test this!
📄 arxiv.org/abs/2506.00981
⬇️
Delighted to share that our paper "Reading Between the Prompts: How Stereotypes Shape LLM's Implicit Personalization" (joint work with @arianna-bis.bsky.social and Raquel Fernández) got accepted to the main conference of #EMNLP
Can't wait to discuss our work at #EMNLP2025 in Suzhou this November!
Our paper on multilingual reasoning is accepted to Findings of #EMNLP2025! 🎉 (OA: 3/3/3.5/4)
We show SOTA LMs struggle with reasoning in non-English languages; prompt hacks & post-training improve alignment but trade off accuracy.
📄 arxiv.org/abs/2505.22888
See you in Suzhou! #EMNLP
What a privilege to have #CCN2025 in (an exceptionally warm and sunny) Amsterdam this year!
It was my first time attending the conference, and being surrounded by so many talented researchers whose interests are similar to mine has been a deeply enriching experience ✨
Some amazing @amsterdamnlp.bsky.social people in Vienna💫#acl2025 Raquel Fernández Sandro Pezzelle Katia Shutova @esamghaleb.bsky.social @veraneplenbroek.bsky.social @annabavaresco.bsky.social + @leobertolazzi.bsky.social
🧑🤝🧑 @ecekt.bsky.social, @alberto-testoni.bsky.social
📍 Monday, July 28, 11:00-12:30, Hall 4/5
See you in Vienna! ✨ @aclmeeting.bsky.social
🧑🤝🧑 @michaelwhanna.bsky.social, @akoller.bsky.social, @andre-t-martins.bsky.social, @pmondorf.bsky.social, Vera Neplenbroek, Sandro Pezzelle, @barbaraplank.bsky.social, @davidschlangen.bsky.social, Alessandro Suglia, @akskuchi.bsky.social
2️⃣ LLMs instead of Human Judges? A Large Scale Empirical Study across 20 NLP Evaluation Tasks (Main Conference)
🧑🤝🧑 @annabavaresco.bsky.social, @raffagbernardi.bsky.social, @leobertolazzi.bsky.social, @delliott.bsky.social, Raquel Fernández, Albert Gatt, @esamghaleb.bsky.social, Mario Giulianelli
🎉 Happy to share that I will be presenting two papers at ACL 2025.
1️⃣ Cross-Lingual Transfer of Debiasing and Detoxification in Multilingual LLMs: An Extensive Investigation (Findings)
🧑🤝🧑 Vera Neplenbroek, @arianna-bis.bsky.social, Raquel Fernández
📍 Monday, July 28, 18:00-19:30, Hall 4/5
[4/4] We hope to inspire future research into methods that counter the influence of stereotypical associations on the model’s latent representation of the user, particularly when the user’s demographic group is unknown.
Code and data:
github.com/Veranep/impl...
[3/4] Our findings reveal that LLMs infer demographic info based on stereotypical signals, sometimes even when the user explicitly identifies with a different demographic group. We mitigate this by intervening on the model’s internal representations using a trained linear probe.
[2/4] We systematically explore how LLMs respond to stereotypical cues using controlled synthetic conversations, by analyzing the models’ latent user representations through both model internals and generated answers to targeted user questions.
Do LLMs assume demographic information based on stereotypes?
We (@arianna-bis.bsky.social, Raquel Fernández and I) answered this question in our new paper: "Reading Between the Prompts: How Stereotypes Shape LLM's Implicit Personalization".
🧵
arxiv.org/abs/2505.16467