Thanks!
Thanks!
Is the paper available anywhere? I can only find a dead link
We are opening an investigation into Grok because we believe that X may have breached the DSA.
We have seen antisemitic content, non-consensual deepfakes of women, and child sexual abuse material.
In Europe, no company will make money by violating our fundamental rights.
link.europa.eu/Fh8h84
I hereby throw myself into the interpretive battle around the definition, meaning, and consequences of "slop" (free link!)
www.zeit.de/2026/04/ki-i...
Kevin Glocker, Kätriin Kukk, Romina Oji, Marcel Bollmann, Marco Kuhlmann, Jenny Kunz
Grow Up and Merge: Scaling Strategies for Efficient Language Adaptation
https://arxiv.org/abs/2512.10772
Introducing Global PIQA, a new multilingual benchmark for 100+ languages. This benchmark is the outcome of this year's MRL shared task, in collaboration with 300+ researchers from 65 countries. This dataset evaluates physical commonsense reasoning in culturally relevant contexts.
Screenshot of the Viabundus website.
A neat tool I just came across: Viabundus, a digital road map of northern Europe 1350-1650, that lets you calculate contemporary travel routes/times. In 1500, going Amiens → Köln by horse took almost 7 days and 13 toll payments.
#medievalsky
www.landesgeschichte.uni-goettingen.de/handelsstras...
📢 Announcing the First Workshop on Multilingual and Multicultural Evaluation (MME) at #EACL2026 🇲🇦
MME focuses on resources, metrics & methodologies for evaluating multilingual systems! multilingual-multicultural-evaluation.github.io
Workshop Mar 24–29, 2026
Submit by Dec 19, 2025
"no deanonymized preprint may be posted in the month prior to submission." Is that a mistake?
how well do *you* understand how AI reasoning works? test yourself here:
do-you-understand-ai.com
Wild that the church move is being marketed as slow TV when it is literally the fastest church in the world right now
We have over 200 volunteers now for 90+ languages! We are hoping to expand the diversity of our language coverage and are still looking for participants who speak these languages. Check out how to get involved below, and please help us spread the word!
An abbreviation (ABB) in a journal article (JA) or Grant Application (GA) is rarely worth the words it saves. Every ABB requires cognitive resources (CR) and at my age by the time I'm halfway through a JA or GA I no longer have the CR to remember what your ABB stood for.
We all know AI models can now create realistic images and videos but how do they fare at identifying where a real image was taken? Bellingcat researchers have put Large Language Models to the test: www.bellingcat.com/resources/ho...
Pope Leo XIV has a degree in mathematics from Villanova University. To get where he is he has had to demonstrate a sound knowledge of original sin, but he will be the first pope to completely grasp original cos and original tan.
Are you tired of context-switching between coding models in @pytorch.org and paper writing on @overleaf.com?
Well, I've got the fix for you: Neuralatex! An ML library written in pure LaTeX!
neuralatex.com
To appear in Sigbovik (subject to rigorous review process)
Executive Summary

A pro-Russia content aggregation network, Pravda, appears to be set up to flood large-language models with pro-Kremlin content, The American Sunlight Project has found. Over the past several months, ASP researchers have investigated 108 new domains and subdomains belonging to the Pravda network, a previously established ecosystem of largely identical, automated web pages that has targeted many countries in Europe as well as Africa and Asia with pro-Russia narratives about the war in Ukraine. ASP's research, in combination with that of other organizations, brings the total number of associated domains and subdomains to 182.

The network's older targets largely consisted of states belonging to or aligned with the West. Notably, this latest expansion includes many countries in Africa, the Asia-Pacific, the Middle East, and North America. It also includes targets other than countries: non-sovereign nations, international organizations, audiences for specific languages, and prominent heads of state.

The top objective of the network appears to be duplicating as much pro-Russia content as widely as possible. With one click, a single article can be autotranslated and autoshared with dozens of other sites that appear to target hundreds of millions of people worldwide. ASP researchers also believe the network may have been custom-built to flood large language models (LLMs) with pro-Russia content. The network is unfriendly to human users; sites within it offer no search function, poor formatting, and unreliable scrolling, among other usability issues. This final finding has foundational implications for the intersection of disinformation and artificial intelligence (AI), threatening to turbocharge highly automated, global information operations in the future.
A pro-Russia content aggregation network is churning out at least 3 MILLION pieces of propaganda per year, all on sites that are virtually unusable by humans.
So what's the goal? We explore the idea that it might be to flood LLMs with pro-Russia content:
static1.squarespace.com/static/6612c... 1/
Suck on that, you damn Smålanders
Gravestone with a picture of a dog, on the Wiktionary page for "här ligger en hund begraven"
I appreciate how someone on Wiktionary found a great illustration of this Swedish idiom ("a dog lies buried here") in 2009, long before AI slop
📢 We're looking to hire a postdoc within the TrustLLM project!
Full-time position, two years, no teaching obligation. Research areas include language adaptation, modularisation, tokenization, and evaluation for multilingual LLMs.
Apply by 2025-02-05!
▶️ liu-nlp.ai/postdoc-trus...
This is a limited study, only considering one language and only evaluating summarization performance, but so far these findings seem to hold even in follow-up experiments in other setups. There'll be more on this coming up next year!
Even with the quite limited data I used, having more trainable parameters consistently led to better scores.
The best setup was LoRA in the feed-forward module, followed by bottleneck adapters. LoRA in the attention module was less stable & performed worse, especially considering the number of parameters added. Prefix tuning and (IA)^3 didn't really work in comparison.
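For concreteness, here is a minimal sketch (not the paper's actual code) of how the two LoRA placements compared above can be set up with Hugging Face's peft library. The target module names assume a Llama-style architecture, and the checkpoint name is just a placeholder.

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder base model; any Llama-style causal LM works the same way.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")

# LoRA in the feed-forward module (the best-performing setup above).
ffn_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

# LoRA in the attention module (less stable in these experiments) would
# instead target ["q_proj", "k_proj", "v_proj", "o_proj"].

model = get_peft_model(model, ffn_config)
model.print_trainable_parameters()  # only the adapter weights are trainable

Switching target_modules is all it takes to move the adaptation capacity between the feed-forward and attention blocks, which makes the two placements easy to compare at a similar parameter budget.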
New short paper, to appear at NoDaLiDa 2025! I've written up findings from ablation studies on language adaptation of small LLMs with limited data/compute using different PEFT methods and setups: arxiv.org/abs/2412.12674
Hello world!!!!
We are now on BlueSky as well.
#NLP #NLProc #nodalida #baltichlt
Great points! The example prompt on the first page is truly painful
I had a great couple of days at SLTC in Linköping. The Swedish NLP community is doing well. Thanks @jeku.bsky.social and others for organizing!
1) I want his job
2) I need to create a dataset that contains this sentence
Iโd like to be on there :)
bsky.app/profile/did:...