Vera Neplenbroek's posts

The submission deadline for CMCL is coming up in less than a month! (Feb 25) CMCL will be co-located with LREC and take place on May 16!🌴https://sites.google.com/view/cmclworkshop/cfp

1 month ago 3 2 0 1

Paper accepted to #EACL2026 main conference 🎉
@taniseceron.bsky.social, Sebastian Padó and I test multilingual LLMs before and after English-only fine-tuning and find strong cross-lingual political opinion transfer across five Western languages.

www.arxiv.org/abs/2508.05553

1 month ago 9 2 0 1

Does it matter how you prompt an LLM with a persona? Do LLMs respond differently to natural conversation history compared to names and explicit mentions? Go check out our new preprint! 👀

1 month ago 18 3 0 0

Our paper has been accepted to EACL 2026!🎉 We systematically evaluate several vision-language models (VLMs) and language-only models, measuring their alignment with brain responses to concept words. Our results show that vision-language models offer a promising tool to model human concept processing.

1 month ago 14 4 1 0

The CfP for CMCL is out!🌴 We are looking forward to receiving many interesting submissions! ✨ (Deadline: February 25, 2026) sites.google.com/view/cmclwor...

2 months ago 7 2 0 1

Thank you so much for having me! @milanlp.bsky.social 😊

3 months ago 5 0 0 0

🚨 New main paper out at #EMNLP2025! 🚨

⚡ We show that personalization of content moderation models can be harmful and perpetuate hate speech, defeating the purpose of the system and hurting the community.

We argue that personalized moderation needs boundaries, and we show how to build them.

4 months ago 11 3 1 1

Thrilled to be heading to Suzhou with a big team of GroNLP'ers 🐮

Interested in Interpretable, Cognitively inspired, Low-resource LMs? Don't miss our posters & talks #EMNLP2025!

4 months ago 14 3 1 0

Next week, I'll be at #EMNLP presenting our work "Reading Between the Prompts: How Stereotypes Shape LLM's Implicit Personalization" 🎉

📍 Ethics, Bias, and Fairness (Poster Session 2)
📅 Wed, November 5, 11:00-12:30 - Hall C
📖 Check the paper! arxiv.org/abs/2505.16467

See you in Suzhou! 👋

4 months ago 6 1 0 0
Three panel thing. In the left panel we use error bars. In the second, we take statistical significance as the biggest number but still have error bars. In LLM science, we just have the biggest number.

What if we did a single run and declared victory

4 months ago 340 70 13 9

🌍Introducing BabyBabelLM: A Multilingual Benchmark of Developmentally Plausible Training Data!

LLMs learn from vastly more data than humans ever experience. BabyLM challenges this paradigm by focusing on developmentally plausible data

We extend this effort to 45 new languages!

4 months ago 44 16 1 4
Postdoctoral Researcher in Memory Access in Language: Help uncover how memory shapes language use. As a postdoctoral researcher at the Institute for Language Sciences, you will join the ERC-funded MEMLANG project.

Hi all, there is a postdoc position open in the group I'm currently based in! ✨ Let me know if you are interested or have questions 🙂 Please share if you know someone who might be interested www.uu.nl/en/organisat...

5 months ago 3 3 0 0
PhD Candidate in Emotionally and Socially Aware Natural Language Processing: The Faculty of Science and the Leiden Institute of Advanced Computer Science (LIACS) are looking for a PhD Candidate in Emotionally and Socially Aware Natural Language Processing (1.0 fte). Project descr...

📢 Are you interested in a PhD in #NLProc to study and improve how AI models emotions and social signals?

🚨Exciting news:🚨 I’m hiring a PhD candidate at LIACS,
@unileiden.bsky.social.

📍 Leiden, The Netherlands
📅 Deadline: 17 Nov 2025

👉 Position details and application link: tinyurl.com/5x5v6zsa

5 months ago 9 8 0 1

🚨 Are you looking for a PhD in #NLProc dealing with #LLMs?
🎉 Good news: I am hiring! 🎉
The position is part of the “Contested Climate Futures" project. 🌱🌍 You will focus on developing next-generation AI methods🤖 to analyze climate-related concepts in content—including texts, images, and videos.

5 months ago 23 14 1 0
Center for the Alignment of AI Alignment Centers: We align the aligners

Q. Who aligns the aligners?
A. alignmentalignment.ai

Today I’m humbled to announce an epoch-defining event: the launch of the 𝗖𝗲𝗻𝘁𝗲𝗿 𝗳𝗼𝗿 𝘁𝗵𝗲 𝗔𝗹𝗶𝗴𝗻𝗺𝗲𝗻𝘁 𝗼𝗳 𝗔𝗜 𝗔𝗹𝗶𝗴𝗻𝗺𝗲𝗻𝘁 𝗖𝗲𝗻𝘁𝗲𝗿𝘀.

6 months ago 406 124 29 44
Interspeech paper title: What do self-supervised speech models know about Dutch? Analyzing advantages of language-specific pre-training

Authors: Marianne de Heer Kloots, Hosein Mohebbi, Charlotte Pouw, Gaofei Shen, Willem Zuidema, Martijn Bentum

✨ Do self-supervised speech models learn to encode language-specific linguistic features from their training data, or only more language-general acoustic correlates?

At #Interspeech2025 we presented our new Wav2Vec2-NL model and SSL-NL evaluation dataset to test this!

📄 arxiv.org/abs/2506.00981

⬇️

6 months ago 19 6 1 0

Delighted to share that our paper "Reading Between the Prompts: How Stereotypes Shape LLM's Implicit Personalization" (joint work with @arianna-bis.bsky.social and Raquel Fernández) got accepted to the main conference of #EMNLP

Can't wait to discuss our work at #EMNLP2025 in Suzhou this November!

6 months ago 14 2 0 0

Our paper on multilingual reasoning is accepted to Findings of #EMNLP2025! 🎉 (OA: 3/3/3.5/4)

We show SOTA LMs struggle with reasoning in non-English languages; prompt-hack & post-training improve alignment but trade off accuracy.

📄 arxiv.org/abs/2505.22888
See you in Suzhou! #EMNLP

6 months ago 7 3 0 0

What a privilege to have #CCN2025 in (an exceptionally warm and sunny) Amsterdam this year!

It was my first time attending the conference, and being surrounded by so many talented researchers whose interests are similar to mine has been a deeply enriching experience ✨

6 months ago 29 4 2 0

Some amazing @amsterdamnlp.bsky.social people in Vienna💫#acl2025 Raquel Fernández Sandro Pezzelle Katia Shutova @esamghaleb.bsky.social @veraneplenbroek.bsky.social @annabavaresco.bsky.social + @leobertolazzi.bsky.social

7 months ago 8 2 0 0

#ACL2025

7 months ago 0 0 0 0

🧑‍🤝‍🧑 @ecekt.bsky.social, @alberto-testoni.bsky.social
📍 Monday, July 28, 11:00-12:30, Hall 4/5

See you in Vienna! ✨ @aclmeeting.bsky.social

7 months ago 1 0 0 0

🧑‍🤝‍🧑 @michaelwhanna.bsky.social, @akoller.bsky.social, @andre-t-martins.bsky.social, @pmondorf.bsky.social, Vera Neplenbroek, Sandro Pezzelle, @barbaraplank.bsky.social, @davidschlangen.bsky.social, Alessandro Suglia, @akskuchi.bsky.social

7 months ago 1 0 1 0

2️⃣ LLMs instead of Human Judges? A Large Scale Empirical Study across 20 NLP Evaluation Tasks (Main Conference)
🧑‍🤝‍🧑 @annabavaresco.bsky.social, @raffagbernardi.bsky.social, @leobertolazzi.bsky.social, @delliott.bsky.social, Raquel Fernández, Albert Gatt, @esamghaleb.bsky.social, Mario Giulianelli

7 months ago 1 0 1 0

🎉 Happy to share that I will be presenting two papers at ACL 2025.
1️⃣ Cross-Lingual Transfer of Debiasing and Detoxification in Multilingual LLMs: An Extensive Investigation (Findings)
🧑‍🤝‍🧑 Vera Neplenbroek, @arianna-bis.bsky.social, Raquel Fernández
📍 Monday, July 28, 18:00-19:30, Hall 4/5

7 months ago 3 0 2 0
GitHub: Veranep/implicit-personalization-stereotypes

[4/4] We hope to inspire future research into methods that counter the influence of stereotypical associations on the model’s latent representation of the user, particularly when the user’s demographic group is unknown.

Code and data:
github.com/Veranep/impl...

9 months ago 0 0 0 0

[3/4] Our findings reveal that LLMs infer demographic info based on stereotypical signals, sometimes even when the user explicitly identifies with a different demographic group. We mitigate this by intervening on the model’s internal representations using a trained linear probe.

9 months ago 0 0 1 0

[2/4] We systematically explore how LLMs respond to stereotypical cues using controlled synthetic conversations, by analyzing the models’ latent user representations through both model internals and generated answers to targeted user questions.

9 months ago 0 0 1 0

Do LLMs assume demographic information based on stereotypes?

We (@arianna-bis.bsky.social, Raquel Fernández and I) answered this question in our new paper: "Reading Between the Prompts: How Stereotypes Shape LLM's Implicit Personalization".

🧵

arxiv.org/abs/2505.16467

9 months ago 5 0 1 2
Vera Neplenbroek
@veraneplenbroek
83 Followers 137 Following 18 Posts