Oliver Eberle

@eberleoliver

Senior Researcher Machine Learning at BIFOLD | TU Berlin πŸ‡©πŸ‡ͺ Prev at IPAM | UCLA | BCCN Interpretability | XAI | NLP & Humanities | ML for Science

553 Followers · 499 Following · 13 Posts · Joined 20.11.2024

Latest posts by Oliver Eberle @eberleoliver

πŸš€ Visit our #NeurIPS posters at @neuripsconf.bsky.social!
Meet and interact with our authors at all locations β€” San Diego, Mexico City, and Copenhagen.
Details in the thread.
πŸ‘‡πŸ‘‡πŸ‘‡

28.11.2025 15:11 πŸ‘ 13 πŸ” 5 πŸ’¬ 1 πŸ“Œ 0
Pleased to share new work with @sflippl.bsky.social @eberleoliver.bsky.social @thomasmcgee.bsky.social & undergrad interns at Institute for Pure and Applied Mathematics, UCLA.

Algorithmic Primitives and Compositional Geometry of Reasoning in Language Models
www.arxiv.org/pdf/2510.15987

🧡1/n

27.10.2025 18:13 πŸ‘ 74 πŸ” 16 πŸ’¬ 1 πŸ“Œ 0

Very excited to receive this award and see this work at the intersection of AI and the Humanities recognized by the Heinz Billing Foundation of the @maxplanck.de!
Special thanks to @mpiwg.bsky.social, and M Valleriani and colleagues J BΓΌttner & H El-Hajj, as well as KR MΓΌller and G Montavon. πŸ“œπŸ€–

23.10.2025 13:54 πŸ‘ 6 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

πŸ₯³ Super happy to have our work on multi-concept feature descriptions accepted at #NeurIPS2025!

19.09.2025 12:43 πŸ‘ 16 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
Position: We Need An Algorithmic Understanding of Generative AI What algorithms do LLMs actually learn and use to solve problems? Studies addressing this question are sparse, as research priorities are focused on improving performance through scale, leaving a...

πŸ“― Come visit our #ICML25 Spotlight Poster and meet @taylorwwebb.bsky.social to discuss our work: "Toward an Algorithmic Evaluation and Understanding of Generative AI."

Paper: openreview.net/forum?id=eax...
Poster: icml.cc/media/Poster...

16.07.2025 07:07 πŸ‘ 8 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

Our position paper on algorithmic explanations is outβ€”excited to share it! πŸ™Œ

Proud of this collaborative effort toward a scientifically grounded understanding of generative AI.

@tuberlin.bsky.social @bifold.berlin @msftresearch.bsky.social @UCSD & @UCLA

20.06.2025 17:12 πŸ‘ 18 πŸ” 8 πŸ’¬ 1 πŸ“Œ 0

🚨 New preprint! Excited to share our work on extracting and evaluating the potentially many feature descriptions of language models

πŸ‘‰ arxiv.org/abs/2506.15538

19.06.2025 16:44 πŸ‘ 19 πŸ” 4 πŸ’¬ 0 πŸ“Œ 0
Large language models that power AI should be publicly owned | Letter Letters: The future of public knowledge rests on building open-access LLMs driven by ethics rather than profit, writes Prof Dr Matteo Valleriani

"Who owns the tools that shape our understanding of the past?"

27.05.2025 07:35 πŸ‘ 7 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0
THE NEVERENDING CURE – new exhibition by kennedy + swan
πŸ—“ May 28 | πŸ•• 6PM | πŸ“ UNI_VERSUM, TU Berlin

Art + AI + Snacks = A night not to miss!
With intros by BIFOLD scientists + artists.

www.bifold.berlin/news-events/...

#ArtOfEntanglement #ArtAndScience
@tuberlin.bsky.social
#ScheringStiftung

15.05.2025 11:38 πŸ‘ 5 πŸ” 3 πŸ’¬ 0 πŸ“Œ 1
Cat, Rat, Meow: On the Alignment of Language Model and Human Term-Similarity Judgments Small and mid-sized generative language models have gained increasing attention. Their size and availability make them amenable to being analyzed at a behavioral as well as a representational level, a...

πŸ–ΌοΈ At the Re-Align workshop, @tomneuhaeuser.bsky.social and I presented "Cat, Rat, Meow: On the Alignment of Language Model and Human Term-Similarity Judgments", joint work with Lenka TΔ›tkovΓ‘ and @eberleoliver.bsky.social .
πŸ“ƒ arxiv.org/abs/2504.07965

03.05.2025 09:21 πŸ‘ 4 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

πŸ“œ πŸŽ‰ Happy to announce our workshop "AI-based Methods for the Humanities"! Bringing together #ML and #Humanities researchers to discuss frontiers in #DigitalHumanities, #NLP, #XAI and more!

Hosted together with Matteo Valleriani and
@bifold.berlin @tuberlin.bsky.social @mpiwg.bsky.social

04.03.2025 16:28 πŸ‘ 10 πŸ” 3 πŸ’¬ 0 πŸ“Œ 0
Poster showing the MPIWG Institute’s Colloquium 2024–25 program.

Next up in our Institute’s Colloquium Series β€œHistory of Science in Public”: Kate Crawford @katecrawford.bsky.social will talk on "Mapping #AI: How to See Planetary-Scale #ArtificialIntelligence.”

πŸ—“οΈ Mar 25, 2025 (14:00 CET)
πŸ”— bitly.cx/m0KeS
πŸ“ relocated: MPIWG Main conference room

#HistSci #SciComm

26.02.2025 11:18 πŸ‘ 14 πŸ” 5 πŸ’¬ 1 πŸ“Œ 0

Danke fΓΌr den Austausch @tuberlin.bsky.social – es bleibt spannend! πŸ˜‰

03.02.2025 09:01 πŸ‘ 6 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

Work with Jochen BΓΌttner, Hassan El-Hajj, GrΓ©goire Montavon, Klaus-Robert MΓΌller, and Matteo Valleriani.

#ScienceAdvances
#DigitalHumanities
#HistSci

27.12.2024 09:23 πŸ‘ 3 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
πŸ“œ History repeats itself: We investigated how early modern communities have embraced scholarly advancements, reshaping scientific views and exploring scientific roots amidst a changing world.

www.science.org/doi/10.1126/...

@mpiwg.bsky.social @tuberlin.bsky.social @bifold.berlin @science.org

27.12.2024 09:20 πŸ‘ 15 πŸ” 3 πŸ’¬ 1 πŸ“Œ 2
Job posting IV-618/24: Research Assistant – salary grade E13 TV-L Berliner Hochschulen – for qualification – Job postings of Technische Universität Berlin

First #jobalert on Bluesky! Postdoc (or PhD position with strong ML background) for a joint project with Grégoire Montavon's lab at #BIFOLD #TU-Berlin.

Topic: explainable AI/ML for self-supervised LLMs in multi/spatial omics & gene regulation.

www.jobs.tu-berlin.de/stellenaussc...

19.11.2024 16:20 πŸ‘ 18 πŸ” 10 πŸ’¬ 1 πŸ“Œ 0

Here I gathered a complementary Explainable AI/Interpretability starter pack ;) Nice to see so many of us here now!

26.11.2024 20:31 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Welcome, you should be in it now :)

26.11.2024 15:21 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I am thrilled to share that our paper "Evaluating Webcam-based Gaze Data as an Alternative for Human Rationale Annotations" was accepted at LREC-COLING.
πŸ“œ arxiv.org/pdf/2402.19133…

We analyse low-cost eye-tracking data as an alternative to human rationale annotations when evaluating XAI methods.

01.03.2024 09:20 πŸ‘ 9 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0
24.11.2024 18:13 πŸ‘ 25 πŸ” 5 πŸ’¬ 17 πŸ“Œ 0