Congrats!!
To enable this, we introduce MUREL, an 85.2M-token multicultural resource, and a comprehensive pipeline to separate language- vs. culture-related neurons and assess their roles via targeted ablations.
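The targeted-ablation step can be illustrated with a minimal sketch (not the paper's actual code; all module and index names are hypothetical, and a tiny stand-in MLP replaces a real multilingual LLM layer): selected hidden units are zeroed via a PyTorch forward hook, and the output shift measures how much those neurons matter.

```python
# Hypothetical sketch of targeted neuron ablation: zero out a chosen set of
# hidden units with a PyTorch forward hook, then compare outputs with and
# without the intervention.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for one transformer MLP block; in the paper's setting this would
# be a layer of a multilingual LLM.
mlp = nn.Sequential(nn.Linear(16, 64), nn.GELU(), nn.Linear(64, 16))

ablate_idx = [3, 7, 42]  # hypothetical "culture neuron" indices

def ablation_hook(module, inputs, output):
    # Returning a tensor from a forward hook replaces the module's output,
    # so the zeroed activations propagate to the rest of the block.
    output = output.clone()
    output[..., ablate_idx] = 0.0
    return output

x = torch.randn(2, 16)
baseline = mlp(x)

handle = mlp[1].register_forward_hook(ablation_hook)
ablated = mlp(x)
handle.remove()

# The gap between baseline and ablated outputs is one simple proxy for how
# much the ablated neurons contribute on this input.
effect = torch.norm(baseline - ablated).item()
```

Removing the hook restores the original behavior, so the same model can be probed with many different neuron subsets in turn.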
Grateful to @lukasgalke.bsky.social for his guidance and support!
We study how cultural information is represented inside multilingual LLMs by localizing and intervening on neuron subsets across four models and six cultures: English, German, Danish, Chinese, Russian, and Persian.
Excited to share our latest work, "Isolating Culture Neurons in Multilingual Large Language Models".
Data & code: github.com/namazifard/C...
Preprint: arxiv.org/abs/2508.02241
Danial Namazifard, Lukas Galke
Isolating Culture Neurons in Multilingual Large Language Models
https://arxiv.org/abs/2508.02241