
Emile van Krieken

@emilevankrieken.com

Post-doc @ VU Amsterdam, prev University of Edinburgh. Neurosymbolic Machine Learning, Generative Models, commonsense reasoning https://www.emilevankrieken.com/

4,225 Followers · 1,072 Following · 294 Posts · Joined 31.12.2023

Latest posts by Emile van Krieken @emilevankrieken.com

LLMs are nothing more than models of the distribution of the word forms in their training data, with weights modified by post-training to produce somewhat different distributions.
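
(To make the claim concrete, here is a toy sketch of what "a model of the distribution of word forms in the training data" means at its most literal: a bigram count model. The corpus and code are mine, purely illustrative.)

```python
# A toy "distribution over word forms": bigram counts from training text.
# Post-training would amount to reweighting these estimated distributions.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()
counts = defaultdict(Counter)
for w, nxt in zip(corpus, corpus[1:]):
    counts[w][nxt] += 1

def p_next(w):
    # Conditional distribution over the next word form, given the current one.
    total = sum(counts[w].values())
    return {nxt: c / total for nxt, c in counts[w].items()}

print(p_next("cat"))  # {'sat': 0.5, 'ran': 0.5}
print(p_next("the"))  # {'cat': 0.667, 'mat': 0.333}
```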

07.03.2026 07:09 ๐Ÿ‘ 85 ๐Ÿ” 2 ๐Ÿ’ฌ 3 ๐Ÿ“Œ 0

The AI discourse sometimes seems to center on "Is AI good or is it bad?"

I find this framing unproductive. AI is not a fixed thing.

I would prefer to ask "How might we use this technology for good, and mitigate the bad?"

What a shame if the best use we can come up with is no use at all.

06.03.2026 05:29 ๐Ÿ‘ 38 ๐Ÿ” 5 ๐Ÿ’ฌ 4 ๐Ÿ“Œ 2
Preview
Graph Homomorphism Distortion: A Metric to Distinguish Them All and in the Latent Space Bind Them A large driver of the complexity of graph learning is the interplay between structure and features. When analyzing the expressivity of graph neural networks, however, existing approaches ignore featur...

To kick off the PhD journey with @pseudomanifold.topology.rocks:

What are the limitations of the WL metric, and what is an ๐˜ช๐˜ฏ๐˜ง๐˜ฐ๐˜ณ๐˜ฎ๐˜ข๐˜ต๐˜ช๐˜ท๐˜ฆ ๐˜ฎ๐˜ฆ๐˜ต๐˜ณ๐˜ช๐˜ค?

We answer these questions with our ๐—š๐—ฟ๐—ฎ๐—ฝ๐—ต ๐—›๐—ผ๐—บ๐—ผ๐—บ๐—ผ๐—ฟ๐—ฝ๐—ต๐—ถ๐˜€๐—บ ๐——๐—ถ๐˜€๐˜๐—ผ๐—ฟ๐˜๐—ถ๐—ผ๐—ป

arxiv.org/abs/2511.03068

@olgatticus.bsky.social, Kavir and @erikjbekkers.bsky.social
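
(Background for readers: the WL baseline the post refers to is 1-WL colour refinement. Below is a minimal sketch of that test on featureless graphs; this is the expressivity ceiling the paper's Graph Homomorphism Distortion goes beyond, not the new metric itself.)

```python
# A minimal 1-WL colour refinement sketch: repeatedly hash each node's colour
# together with the multiset of its neighbours' colours. Graphs it cannot
# distinguish get identical colour histograms.
from collections import Counter

def wl_colours(adj, rounds=3):
    # adj: dict node -> list of neighbours
    colour = {v: 0 for v in adj}  # uniform initial colouring (no features)
    for _ in range(rounds):
        colour = {
            v: hash((colour[v], tuple(sorted(colour[u] for u in adj[v]))))
            for v in adj
        }
    return Counter(colour.values())

# Classic failure case: a 6-cycle vs. two disjoint triangles.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(wl_colours(cycle6) == wl_colours(two_triangles))  # True: 1-WL cannot tell them apart
```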

04.03.2026 09:51 ๐Ÿ‘ 12 ๐Ÿ” 4 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 1
Post image

"He is from [MASK] [MASK]" โ†’ "San York"? dLLMs fail because they ignore token dependencies. This Factorization Barrier arises from a structural misspecification: models are restricted to fully factorized outputs. We break this barrier with CoDD, enabling coherent parallel generation. ๐Ÿš€

04.03.2026 06:25 ๐Ÿ‘ 18 ๐Ÿ” 5 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 4

Sam is a snake

28.02.2026 07:08 ๐Ÿ‘ 79 ๐Ÿ” 2 ๐Ÿ’ฌ 3 ๐Ÿ“Œ 0
Post image

time traveler from 12 months from now just sent me this

27.02.2026 21:25 ๐Ÿ‘ 1619 ๐Ÿ” 205 ๐Ÿ’ฌ 66 ๐Ÿ“Œ 34

In light of the current funding situation (worldwide), a modest proposal: instead of pouring billions of dollars into GenAI claiming "it *could* accelerate science and research," consider putting 1% of that amount into what *will* accelerate science and research. Namely, funding science and research.

26.02.2026 12:44 ๐Ÿ‘ 71 ๐Ÿ” 8 ๐Ÿ’ฌ 9 ๐Ÿ“Œ 2

why do science? it won't make the model Bigger

24.02.2026 22:19 ๐Ÿ‘ 47 ๐Ÿ” 4 ๐Ÿ’ฌ 4 ๐Ÿ“Œ 0
Preview
Storchastic: A Framework for General Stochastic Automatic Differentiation Modelers use automatic differentiation (AD) of computation graphs to implement complex Deep Learning models without defining gradient computations. Stochastic AD extends AD to stochastic computation g...

I spent way too long trying to understand stop gradients lol arxiv.org/abs/2104.00428 (see the first appendix).

I'd argue it is not about the loss; rather, you're defining a surrogate loss that should optimise the true loss you're interested in.
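
(A minimal PyTorch sketch of that point, using the standard score-function trick rather than Storchastic's full machinery: the surrogate's *value* is meaningless, but its *gradient* is an unbiased estimate of the gradient of the true expected loss.)

```python
# Stop gradients define a surrogate whose gradient, not value, is what matters:
# d/dtheta E_{x ~ p_theta}[f(x)] estimated via f(x) * d log p(x) / d theta.
import torch

theta = torch.tensor(0.0, requires_grad=True)
dist = torch.distributions.Normal(theta, 1.0)
x = dist.sample()                 # sampling is non-differentiable; no grad flows
f = (x - 2.0) ** 2                # some downstream cost we truly care about

# Surrogate: log p(x) * stop_grad(f). Its value is NOT the true loss.
surrogate = dist.log_prob(x) * f.detach()
surrogate.backward()
print(theta.grad)                 # single-sample REINFORCE gradient estimate
```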

23.02.2026 17:18 ๐Ÿ‘ 2 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0
Post image

X is hiring a creative writing specialist at $40 an hour to make Grok better at writing and a true LOL at the qualifications

30.01.2026 20:14 ๐Ÿ‘ 5744 ๐Ÿ” 1148 ๐Ÿ’ฌ 475 ๐Ÿ“Œ 817
Post image

New open source: cuthbert ๐Ÿ›

State space models with all the hotness: (temporally) parallelisable, JAX, Kalman, SMC
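
(The repo's own API isn't shown in the post; the following is a generic JAX sketch, under my own names, of the parallel-scan trick that makes linear state-space recurrences temporally parallelisable.)

```python
# A linear recurrence x_t = a_t * x_{t-1} + b_t computed in O(log T) depth
# with an associative scan, instead of a sequential O(T) loop.
import jax
import jax.numpy as jnp

def combine(left, right):
    # Compose x -> a_l * x + b_l followed by x -> a_r * x + b_r.
    a_l, b_l = left
    a_r, b_r = right
    return a_l * a_r, a_r * b_l + b_r

T = 8
a = jnp.full((T,), 0.9)                # decay coefficients
b = jnp.arange(T, dtype=jnp.float32)   # inputs
_, x = jax.lax.associative_scan(combine, (a, b))

# Reference: the same recurrence computed sequentially from state 0.
x_seq, state = [], 0.0
for t in range(T):
    state = 0.9 * state + float(b[t])
    x_seq.append(state)
print(jnp.allclose(x, jnp.array(x_seq)))  # True
```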

30.01.2026 16:26 ๐Ÿ‘ 35 ๐Ÿ” 9 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 1

Best conference with the best people and in the best place ๐Ÿ˜Ž ๐Ÿ˜œ

Also the submission deadline is conveniently one month later than #ICML2026, just in case you needed it ๐Ÿ˜…

27.01.2026 13:20 ๐Ÿ‘ 14 ๐Ÿ” 4 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 1
Call for Papers NeSy AI is the association for neurosymbolic Artificial Intelligence. It runs NeSy, the premier international conference on neural-symbolic learning and reasoning, yearly since 2005, with a focus on n...

๐Ÿฆ•The 20th conference on Neurosymbolic AI will be in Lisbon, Portugal, September 1-4, 2026!

The CFP is out: 2026.nesyconf.org/call-for-pap... with two phases:
๐Ÿšจ Deadline 1: Feb 24 (abstract), Mar 3 (full)
๐Ÿšจ Deadline 2: Jun 9 (abstract), Jun 16 (full)

#neurosymbolic #NeSy2026

20.01.2026 15:35 ๐Ÿ‘ 6 ๐Ÿ” 3 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 1
Post image

We introduce epiplexity, a new measure of information that provides a foundation for how to select, generate, or transform data for learning systems. We have been working on this for almost 2 years, and I cannot contain my excitement! arxiv.org/abs/2601.03220 1/7

07.01.2026 17:27 ๐Ÿ‘ 143 ๐Ÿ” 34 ๐Ÿ’ฌ 9 ๐Ÿ“Œ 9

Good call! I maintain a list of Neurosymbolic folks on Bsky, see here ๐Ÿฆ•
go.bsky.app/RMJ8q3i

13.01.2026 09:36 ๐Ÿ‘ 3 ๐Ÿ” 1 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

I am recruiting 1 PhD student (4-year position) and 2 postdocs (3-year positions) to work on logic and machine learning at the University of Helsinki:
- PhD 1: jobs.helsinki.fi/job/Helsinki...
- Postdoc 1: jobs.helsinki.fi/job/Helsinki...
- Postdoc 2: jobs.helsinki.fi/job/Helsinki...

10.01.2026 15:01 ๐Ÿ‘ 17 ๐Ÿ” 6 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

#XAI, #neurosymbolic methods (#nesy), and #causal #representation #learning (#CRL) all care about learning #interpretable #concepts, but in different ways.

We are organizing this #ICLR2026 workshop to bring these three communities together and learn from each other ๐Ÿฆพ๐Ÿ”ฅ๐Ÿ’ฅ

Submission deadline: 30 Jan 2026

22.12.2025 16:41 ๐Ÿ‘ 13 ๐Ÿ” 4 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 2

Thanks for the fantastic talk, and totally agree! (Writing this on the train from Copenhagen :-))

08.12.2025 13:37 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

Emile will present our work on Knowledge Graph Embeddings at EurIPS's Salon des Refusés on Friday!
We show how linearity prevents KGEs from scaling to larger graphs + propose a simple solution using a Mixture of Softmaxes (see the LLM literature) to break the limitations at a low parameter cost. 🔨
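
(A generic Mixture-of-Softmaxes head, sketched from the LLM literature the post cites; the names and shapes are illustrative assumptions, not necessarily the paper's exact formulation.)

```python
# K softmaxes over entities, mixed by learned weights: the mixture can
# represent output distributions a single rank-d linear+softmax layer cannot.
import torch
import torch.nn as nn

class MoSHead(nn.Module):
    def __init__(self, d, n_entities, k=4):
        super().__init__()
        self.k = k
        self.proj = nn.Linear(d, k * d)          # one query per mixture component
        self.mix = nn.Linear(d, k)               # mixture weights
        self.entities = nn.Embedding(n_entities, d)

    def forward(self, h):                        # h: (batch, d) query embedding
        q = self.proj(h).view(-1, self.k, h.shape[-1])   # (B, K, d)
        logits = q @ self.entities.weight.T              # (B, K, n_entities)
        pi = torch.softmax(self.mix(h), dim=-1)          # (B, K)
        # Mix the K softmax distributions into one distribution over entities.
        return torch.einsum("bk,bkn->bn", pi, torch.softmax(logits, dim=-1))

head = MoSHead(d=32, n_entities=1000)
print(head(torch.randn(8, 32)).shape)  # torch.Size([8, 1000])
```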

03.12.2025 16:12 ๐Ÿ‘ 3 ๐Ÿ” 1 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0
Preview
NeSy conference The NeSy conference studies the integration of deep learning and symbolic AI, combining neural network-based statistical machine learning with knowledge representation and reasoning from symbolic appr...

Recordings of the NeSy 2025 keynotes are now available! ๐ŸŽฅ

Check out insightful talks from @guyvdb.bsky.social, @tkipf.bsky.social and D McGuinness on our new Youtube channel www.youtube.com/@NeSyconfere...

Topics include using symbolic reasoning for LLMs, and object-centric representations!

29.11.2025 08:21 ๐Ÿ‘ 7 ๐Ÿ” 3 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0
Post image

๐Ÿšจ New paper alert!
We introduce Vision-Language Programs (VLP), a neuro-symbolic framework that combines the perceptual power of VLMs with program synthesis for robust visual reasoning.

30.11.2025 01:32 ๐Ÿ‘ 15 ๐Ÿ” 7 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 2

Interested in meeting up in Copenhagen? Do shoot me a message!

28.11.2025 17:30 ๐Ÿ‘ 4 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0
Post image

And finally #3

๐Ÿ”จ Rank bottlenecks in KGEs:

At Friday's "Salon des Refuses" I will present @sbadredd.bsky.social 's new work on how rank bottlenecks limit knowledge graph embeddings
arxiv.org/abs/2506.22271
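
(A tiny numpy illustration of the basic rank bottleneck, my construction rather than the paper's analysis: with d-dimensional embeddings, the full score matrix over all entity pairs has rank at most d, so it cannot realise arbitrary score patterns once entities outnumber dimensions.)

```python
# Bilinear KGE scores for one relation over all (head, tail) pairs:
# E @ R @ E.T has rank <= d, however many entities there are.
import numpy as np

n_entities, d = 500, 32
E = np.random.randn(n_entities, d)    # entity embeddings
R = np.random.randn(d, d)             # one relation's linear map
scores = E @ R @ E.T                  # (500, 500) score matrix
print(np.linalg.matrix_rank(scores))  # <= 32, far below 500
```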

28.11.2025 17:30 ๐Ÿ‘ 6 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 1
Post image

#2
๐Ÿ‡ GRAPES: At Tuesday's ELLIS Unconference poster session.
We study adaptive graph sampling for scaling GNNs!

Work with Taraneh Younesian, Daniel Daza, @thiviyan.bsky.social, @pbloem.sigmoid.social.ap.brid.gy

arxiv.org/abs/2310.03399
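
(For contrast, a bare-bones sketch of the uniform neighbour-sampling baseline that adaptive methods like GRAPES learn to improve on; the adjacency and fanout values here are toy assumptions.)

```python
# Uniform neighbour sampling for minibatch GNN training: grow a small
# subgraph around seed nodes, keeping at most `fanout` neighbours per hop.
# GRAPES replaces the uniform choice with a learned, adaptive sampler.
import random

def sample_subgraph(adj, seeds, fanout=2, hops=2):
    # adj: dict node -> list of neighbours; returns the sampled edge list
    frontier, edges = set(seeds), []
    for _ in range(hops):
        nxt = set()
        for v in frontier:
            for u in random.sample(adj[v], min(fanout, len(adj[v]))):
                edges.append((u, v))
                nxt.add(u)
        frontier = nxt
    return edges

adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
print(sample_subgraph(adj, seeds=[0]))
```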

28.11.2025 17:30 ๐Ÿ‘ 3 ๐Ÿ” 1 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0
Post image

Almost off to @euripsconf.bsky.social in Copenhagen ๐Ÿ‡ฉ๐Ÿ‡ฐ ๐Ÿ‡ช๐Ÿ‡บ! I'll present 3 posters:

๐Ÿง  Neurosymbolic Diffusion Models: Thursday's poster session.

Going to NeurIPS? @edoardo-ponti.bsky.social and @nolovedeeplearning.bsky.social will present the paper in San Diego Thu 13:00
arxiv.org/abs/2505.13138

28.11.2025 17:30 ๐Ÿ‘ 21 ๐Ÿ” 3 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0
Preview
Beyond Smoothed Analysis: Analyzing the Simplex Method by the Book Narrowing the gap between theory and practice is a longstanding goal of the algorithm analysis community. To further progress our understanding of how algorithms work in practice, we propose a new alg...

The simplex algorithm is super efficient. 80 years of experience says it runs in linear time. Nobody can explain _why_ it is so fast.

We invented a new algorithm analysis framework to find out.

27.10.2025 01:43 ๐Ÿ‘ 212 ๐Ÿ” 49 ๐Ÿ’ฌ 5 ๐Ÿ“Œ 13

Exactly the same here...

08.11.2025 22:26 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0
Post image

Want to use your favourite #NeSy model but afraid of the reasoning shortcuts?๐Ÿซฃ

Fear not💪🏻In our #NeurIPS2025 paper we show that you just need to equip your favourite NeSy model with prototypical networks and reasoning shortcuts will be a thing of the past!
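
(A minimal prototypical-network classifier in the generic Snell et al. style, not the paper's NeSy integration: concepts are classified by distance to class-mean prototypes.)

```python
# Prototypical networks in two functions: a prototype is the mean embedding
# of a concept's support examples; queries take the nearest prototype's label.
import torch

def prototypes(support, labels, n_classes):
    # support: (N, d) embeddings; labels: (N,) concept ids
    return torch.stack([support[labels == c].mean(0) for c in range(n_classes)])

def classify(queries, protos):
    # (Q, d) vs (C, d): negative squared distance acts as the logit
    d2 = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return (-d2).argmax(-1)

support = torch.randn(20, 8)
labels = torch.arange(20) % 4          # 5 support examples per concept
protos = prototypes(support, labels, n_classes=4)
print(classify(torch.randn(5, 8), protos))  # predicted concept per query
```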

06.11.2025 10:40 ๐Ÿ‘ 14 ๐Ÿ” 3 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 1
Post image

I'm in Suzhou to present our work on MultiBLiMP, Friday @ 11:45 in the Multilinguality session (A301)!

Come check it out if you're interested in multilingual linguistic evaluation of LLMs (there will be parse trees on the slides! There's still use for syntactic structure!)

arxiv.org/abs/2504.02768

06.11.2025 07:08 ๐Ÿ‘ 27 ๐Ÿ” 7 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0
Post image

๐ŸŒIntroducing BabyBabelLM: A Multilingual Benchmark of Developmentally Plausible Training Data!

LLMs learn from vastly more data than humans ever experience. BabyLM challenges this paradigm by focusing on developmentally plausible data

We extend this effort to 45 new languages!

15.10.2025 10:53 ๐Ÿ‘ 44 ๐Ÿ” 16 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 4