
Antonia Wüst @ NeurIPS

@toniwuest

PhD student at AIML Lab, TU Darmstadt. Interested in concept learning, neuro-symbolic AI, and program synthesis.

70 Followers · 57 Following · 26 Posts · Joined 18.02.2025

Latest posts by Antonia Wüst @ NeurIPS @toniwuest

Big thank you to my great co-authors @wolfstammer.bsky.social @hshindo.bsky.social Lukas Helff @devendradhami.bsky.social @kerstingaiml.bsky.social !

25.02.2026 21:07 👍 2 🔁 0 💬 0 📌 0

Excited to share that our paper "Synthesizing Visual Concepts as Vision-Language Programs" has been accepted to #CVPR2026! 🎉

We propose a novel method that combines VLMs with symbolic program synthesis to learn reliable programs of visual concepts.

🌐 ml-research.github.io/vision-langu...

25.02.2026 21:05 👍 3 🔁 2 💬 1 📌 0
RMU AI and Creativity Symposium

📣 Call for Contributions: Do you have interesting work to share? We invite you to submit your abstract for our poster session featuring innovative projects in this exciting field: eveeno.com/342278190

10.02.2026 15:35 👍 3 🔁 2 💬 0 📌 0

We are pleased to announce a one-day symposium on AI and Creativity in Darmstadt! Join us for an inspiring lineup of speakers and a full day dedicated to exploring creativity in modern machine learning models and the relationship between biological and artificial creation. 🎨🤖

10.02.2026 15:34 👍 7 🔁 1 💬 1 📌 0
Vision-Language Programs - Antonia Wüst (YouTube video by Ndea)

Super excited that our recent work got featured in the Abstract Synthesis podcast! 🚀
I joined Brian to discuss inductive reasoning in vision and how we can combine Vision-Language Models with Program Synthesis to enable more reliable and interpretable reasoning 💡

Podcast: youtu.be/uefqvsButp8?...

21.01.2026 16:50 👍 6 🔁 1 💬 0 📌 0

Thanks! The functions (like exist_object, get_objects) are predefined; however, the symbols, like "round" in this case, are discovered by the VLM. This way we get an expressive DSL that can still adapt to the task at hand.

01.12.2025 16:48 👍 1 🔁 0 💬 0 📌 0
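The split described in the reply above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's actual API: the function and attribute names are hypothetical, the operators are fixed in advance, and the symbol "round" stands in for an attribute a VLM would propose per task.

```python
def get_objects(image):
    # Placeholder perception step: in the real system a VLM/detector
    # would extract the objects from the image. Here an "image" is
    # just a dict with a precomputed object list.
    return image["objects"]

def exists_object(objects, predicate):
    # Predefined symbolic operator: true if any object satisfies the predicate.
    return any(predicate(o) for o in objects)

def has_attribute(symbol):
    # VLM-grounded predicate: `symbol` is a string the VLM discovered
    # for this task (here checked against precomputed attribute labels).
    return lambda obj: symbol in obj["attributes"]

# A candidate concept program: "there exists a round object".
program = lambda img: exists_object(get_objects(img), has_attribute("round"))

positive = {"objects": [{"attributes": {"round", "red"}}]}
negative = {"objects": [{"attributes": {"square", "blue"}}]}
print(program(positive), program(negative))  # True False
```

The point of the design is that the grammar of operators stays small and verifiable, while the open-ended vocabulary (the attribute symbols) comes from the VLM.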

Thank you, very happy to hear that! I think there are definite merits in combining the best of both worlds that should not be overshadowed by the current focus on LLMs. Happy to discuss the topic!

01.12.2025 16:43 👍 3 🔁 0 💬 1 📌 0

2 Dec
📌 Poster: Vision-Language Programs at WIML Workshop
🕕 6:00 pm to 9:00 pm

4 Dec
📌 Poster: Object-Centric Concept-Bottleneck with David Steinmann
🕟 4:30 pm to 7:30 pm

7 Dec
🎤 Oral: Vision-Language Programs at 01:30 pm
📌 Poster: 4:05 pm to 5:00 pm

01.12.2025 00:19 👍 1 🔁 0 💬 0 📌 0

After an amazing time in LA and Joshua Tree Park, I’m excited to head to NeurIPS next week. My colleagues and I will be presenting some of our recent work (see below).

Looking forward to connecting and starting new conversations. Feel free to reach out if you want to chat! 💬

01.12.2025 00:16 👍 2 🔁 0 💬 1 📌 0
Synthesizing Visual Concepts as Vision-Language Programs

Check out the work here: ml-research.github.io/vision-langu...

Work together with my great co-authors @wolfstammer.bsky.social, Hikaru Shindo, Lukas Helff, @devendradhami.bsky.social , @kerstingaiml.bsky.social 💫

30.11.2025 01:37 👍 6 🔁 1 💬 0 📌 1
Post image

With VLP, we introduce VLM functions as a perceptual interface and combine them with symbolic operators. This way, VLP can discover concise concepts in the form of functional programs that faithfully follow the few-shot image examples.

30.11.2025 01:33 👍 5 🔁 0 💬 2 📌 0
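The thread's core loop — VLM as perceptual interface, symbolic search on top — can be sketched as follows. This is my own simplified illustration, not the paper's implementation: the VLM is stubbed out by set membership, and the "synthesis" is a brute-force enumeration of (operator, symbol) programs that must fit all positive and reject all negative few-shot examples.

```python
from itertools import product

def vlm_has(image, symbol):
    # Stand-in for a VLM query like "does the image contain a <symbol>
    # object?"; a real system would call a model here. Images are
    # modeled as sets of attribute strings.
    return symbol in image

# Symbols the (stubbed) VLM proposed for this task.
symbols = ["round", "striped"]

# Predefined symbolic operators over a single VLM predicate.
operators = {
    "exists": lambda img, s: vlm_has(img, s),
    "not_exists": lambda img, s: not vlm_has(img, s),
}

def synthesize(pos, neg):
    # Enumerate candidate programs; keep only those consistent with
    # every positive and every negative example.
    fits = []
    for op_name, sym in product(operators, symbols):
        op = operators[op_name]
        if all(op(i, sym) for i in pos) and not any(op(i, sym) for i in neg):
            fits.append((op_name, sym))
    return fits

pos = [{"round"}, {"round", "striped"}]
neg = [{"striped"}, set()]
print(synthesize(pos, neg))  # [('exists', 'round')]
```

Because the program must be consistent with all examples by construction, it cannot "sound plausible but contradict the images" the way free-form natural-language rules can.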

Instead of letting the VLM do all the reasoning in natural language, what if we used it only for perception, and then let a symbolic program do the reasoning on top of that? 💡

30.11.2025 01:33 👍 3 🔁 0 💬 1 📌 0
Post image

Problem: Vision-language models are great at visual recognition, but often fail at faithful visual reasoning.
They can output rules that sound plausible but violate the task constraints or contradict the images.

30.11.2025 01:32 👍 3 🔁 0 💬 1 📌 0
Post image

🚨 New paper alert!
We introduce Vision-Language Programs (VLP), a neuro-symbolic framework that combines the perceptual power of VLMs with program synthesis for robust visual reasoning.

30.11.2025 01:32 👍 15 🔁 7 💬 1 📌 2
Post-hoc Probabilistic Vision-Language Models Vision-language models (VLMs), such as CLIP and SigLIP, have found remarkable success in classification, retrieval, and generative tasks. For this, VLMs deterministically map images and text descripti...

Unfortunately, our submission to #NeurIPS didn’t go through, despite scores of (5,4,4,3). But because I think it’s an excellent paper, I decided to share it anyway.

We show how to efficiently apply Bayesian learning in VLMs, improve calibration, and do active learning. Cool stuff!

📝 arxiv.org/abs/2412.06014

18.09.2025 20:34 👍 51 🔁 16 💬 2 📌 1
Post image

And last but not least: the spirals are still spinning, each in their own direction 🌀

20.08.2025 16:57 👍 1 🔁 0 💬 0 📌 0
bongard-in-wonderland/demo.ipynb at main · ml-research/bongard-in-wonderland

💻 We also added a demo of the evaluation to our GitHub repo! Check it out here: github.com/ml-research/...

20.08.2025 16:53 👍 0 🔁 0 💬 1 📌 0
Bongard in Wonderland

📊 Updated results are also on our webpage!
Link: ml-research.github.io/bongard-in-w...
Curious to hear - should we evaluate other models too? 🤖

20.08.2025 16:53 👍 0 🔁 0 💬 1 📌 0
Post image

🔎 Importantly, Task 2 continues to expose inconsistencies: of the 64 problems solved in Task 1, the model can correctly classify the individual images, given the ground-truth options (Task 2), for only 34.

20.08.2025 16:52 👍 0 🔁 0 💬 1 📌 0
Post image

🤔 Surprisingly, even some easy problems like BP8 remain unsolved…

20.08.2025 16:52 👍 0 🔁 0 💬 1 📌 0
Post image

Can the new GPT-5 model finally solve Bongard Problems? 👉Not quite yet!
Using our ICML Bongard in Wonderland setup, it solved 64/100 problems - the best score so far! 📈
However, some issues still persist ⬇️

20.08.2025 16:50 👍 6 🔁 0 💬 1 📌 0
Post image

Can concept-based models handle complex, object-rich images? We think so! Meet Object-Centric Concept Bottlenecks (OCB) — adding object-awareness to interpretable AI. Led by David Steinmann w/ @toniwuest.bsky.social & @kerstingaiml.bsky.social .
📄 arxiv.org/abs/2505.244...
#AI #XAI #NeSy #CBM #ML

07.07.2025 15:55 👍 10 🔁 4 💬 0 📌 0

I'll be at #ICML2025 next week presenting our recent work on VLMs and Bongard Problems! Feel free to reach out, happy to have a chat ☺️

12.07.2025 12:17 👍 3 🔁 0 💬 0 📌 0

Work together with my amazing co-authors @philosotim.bsky.social Lukas Helff @ingaibs.bsky.social @wolfstammer.bsky.social @devendradhami.bsky.social @c-rothkopf.bsky.social @kerstingaiml.bsky.social ! ✨

02.05.2025 08:00 👍 4 🔁 1 💬 0 📌 0
Post image

We also identified 10 particularly challenging Bongard Problems that none of the models could solve under any setting. The challenge remains wide open!
Three examples of these challenging BPs:

02.05.2025 07:57 👍 2 🔁 1 💬 1 📌 1
Post image

Interestingly, success in solving the BPs (Open Question) doesn't translate to correctly categorizing individual images 👉 the sets of BPs solved in each task are not the same!
This suggests that getting the right final answer doesn’t always mean genuine understanding 🤔

02.05.2025 07:55 👍 1 🔁 1 💬 1 📌 0
Post image

Our evaluation shows the top-performing model (o1) solved 43 out of 100 problems, with the others trailing far behind. There’s still a long way to go for current AI models!

02.05.2025 07:53 👍 0 🔁 1 💬 1 📌 0
Post image

Excited to share that our paper got accepted at #ICML2025!! 🎉

We challenge Vision-Language Models like OpenAI’s o1 with Bongard problems, classic visual reasoning challenges, and uncover surprising shortcomings.

Check out the paper: arxiv.org/abs/2410.19546
& read more below 👇

02.05.2025 07:47 👍 25 🔁 10 💬 1 📌 1