If you care about enforcing constraints over time without blowing your computational budget, then read our new blog post over at @aihub.org!
It focuses on showing how our neurosymbolic Markov models beat the SOTA in out-of-distribution generalisation and so much more.
24.02.2026 09:22
1/5 Tomorrow I'll talk about the probabilistic programming semantics of differentiable proving at #NeurIPS San Diego (poster #614, 11am).
Paper: openreview.net/pdf?id=rEUbD...
Video: www.youtube.com/watch?v=sOTX...
05.12.2025 00:16
Just under 10 days left to submit your latest endeavours in #tractable probabilistic models!
Join us at TPM @auai.org #UAI2025 and show how to build #neurosymbolic / #probabilistic AI that is both fast and trustworthy!
14.05.2025 17:48
We developed a library to make logical reasoning embarrassingly parallel on the GPU.
For those at ICLR: you can get the juicy details tomorrow (poster #414 at 15:00). Hope to see you there!
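For a flavour of what "embarrassingly parallel" means here: logical queries can be compiled into arithmetic circuits whose nodes are plain sums and products, and those map directly onto batched array operations. A minimal sketch, with numpy standing in for GPU tensors; the formula and function name are illustrative, not the library's API:

```python
import numpy as np

# Illustrative sketch: weighted model counting (WMC) for the formula
# (a OR b), compiled into a smooth, deterministic circuit:
#   WMC = w(a)*w(b) + w(a)*w(~b) + w(~a)*w(b)
# Each circuit node becomes one vectorized operation over a whole
# batch of weight vectors, so many queries are answered in parallel.

def wmc_a_or_b(wa, wb):
    """Batched WMC of (a OR b): arrays of shape (batch,) -> (batch,)."""
    na, nb = 1.0 - wa, 1.0 - wb
    return wa * wb + wa * nb + na * wb

wa = np.array([0.5, 1.0, 0.0])
wb = np.array([0.5, 0.0, 0.0])
print(wmc_a_or_b(wa, wb))  # 0.75, 1.0, 0.0 for the three weight vectors
```

On a GPU the same circuit would be evaluated with tensor ops instead of numpy, but the structure (one array op per circuit node) is the point.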
23.04.2025 08:12
If you're at #AAAI2025, come check out our demo on neurosymbolic reinforcement learning with probabilistic logic shields! Tomorrow (Sat, March 1) from 12:30-2:30 PM during the poster session.
28.02.2025 22:53
We all know backpropagation can calculate gradients, but it can do much more than that!
Come to my #AAAI2025 oral tomorrow (11:45, Room 119B) to learn more.
27.02.2025 23:45
Can AI reason over time while following logical rules in relational domains? We will present Relational Neurosymbolic Markov Models (NeSy-MMs) next week at #AAAI2025!
Paper: arxiv.org/pdf/2412.13023
Code: github.com/ML-KULeuven/...
25.02.2025 11:01
See you at #AAAI2025!
Site: dtai.cs.kuleuven.be/projects/nes...
Video: youtu.be/3uLVxwlcSQc?...
@daviddebot.bsky.social, @gabventurato.bsky.social, @giuseppemarra.bsky.social, @lucderaedt.bsky.social
#ReinforcementLearning #AI #MachineLearning #NeurosymbolicAI
(8/8)
24.02.2025 12:29
Open-source & easy to use!
• Code: github.com/ML-KULeuven/...
• Based on MiniHack & Stable Baselines3
• Define new shields in just a few lines of code!
Let's make RL safer & smarter, together!
(7/8)
24.02.2025 12:28
Want to try it yourself?
Use our interactive web demo!
• Modify environments (add lava, monsters!)
• Test shielded vs. non-shielded agents
Play with it here: dtai.cs.kuleuven.be/projects/nes...
(6/8)
24.02.2025 12:28
Why does this matter?
• Faster training
• Safer exploration
• Better generalization
(5/8)
24.02.2025 12:27
How does it work?
The shield:
• Exploits symbolic data from sensors
• Uses logical rules
• Prevents unsafe actions
• Still allows flexible learning
A perfect blend of symbolic reasoning & deep learning!
(4/8)
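The reweighting at the heart of a probabilistic logic shield can be sketched in a few lines: the policy's action probabilities are multiplied by the probability that each action is safe, as derived from the logical rules, and renormalised. The names and numbers below are illustrative, not the demo's actual API:

```python
import numpy as np

# Illustrative sketch of probabilistic logic shielding: the policy's
# action distribution pi(a) is reweighted by P(safe | a), the safety
# probability each action gets from logical rules over sensor readings.

def shield(policy_probs, safety_probs):
    """Return the shielded distribution pi'(a) proportional to pi(a) * P(safe | a)."""
    shielded = policy_probs * safety_probs
    total = shielded.sum()
    if total == 0.0:  # every action deemed unsafe: fall back to the raw policy
        return policy_probs
    return shielded / total

pi = np.array([0.6, 0.3, 0.1])      # policy: left, right, forward
p_safe = np.array([0.0, 0.9, 0.9])  # rules say "left" steps into lava
print(shield(pi, p_safe))           # probability mass shifts to the safe actions
```

Because the shield only rescales the policy's own distribution, the agent keeps exploring among safe actions while unsafe ones are suppressed.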
24.02.2025 12:27
Enter MiniHack, our demo's testing ground!
There, RL agents face:
• Lava cliffs & slippery floors
• Chasing monsters
• Locked doors needing keys
Findings:
• Standard RL struggles to find an optimal, safe policy.
• Shielded RL agents stay safe & learn faster!
(3/8)
24.02.2025 12:27
Deep RL is powerful, but...
• It can take dangerous actions
• It lacks safety guarantees
• It struggles with real-world constraints
Yang et al.'s probabilistic logic shields fix this, enforcing safety without sacrificing learning efficiency!
(2/8)
24.02.2025 12:26
Do you care about safe AI? Do you want RL agents that are both smart & trustworthy?
At #AAAI2025, we present our demo for neurosymbolic RL, combining deep learning with probabilistic logic shields for safer, interpretable AI in complex environments.
(1/8)
24.02.2025 12:26
A short overview video can be found on YouTube: youtu.be/CgSDhQKESD0?...
#NeurIPS2024
23.12.2024 10:23
Or check out our Medium post: medium.com/@pyc.devteam... (7/7)
04.12.2024 08:50
With CMR, we're reaching the sweet spot of accuracy and interpretability. Check it out at our poster at #NeurIPS2024! neurips.cc/virtual/2024... (6/7)
04.12.2024 08:49
During training, CMR learns embeddings as latent representations of logic rules, and a neural rule selector identifies the most relevant rule for each instance. Due to a clever factorization and rule selector, inference is linear in the number of concepts and rules. (5/7)
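That factorization amounts to a weighted mixture over rules, evaluated in one pass. A minimal sketch; the concepts, rules, and weights below are toy stand-ins for the learned components:

```python
import numpy as np

# Illustrative sketch of CMR-style factorized inference:
#   p(y = 1 | x) = sum_r p(r | x) * rule_r(c(x))
# With concept values precomputed, one pass over the rule memory
# is linear in the number of rules and concepts.

concepts = {"has_wings": 1.0, "lays_eggs": 0.5, "has_fur": 0.0}

rules = [
    lambda c: c["has_wings"] * c["lays_eggs"],  # has_wings AND lays_eggs
    lambda c: 1.0 - c["has_fur"],               # NOT has_fur
]
rule_weights = np.array([0.7, 0.3])             # p(r | x) from the rule selector

p_y = sum(w * r(concepts) for w, r in zip(rule_weights, rules))
print(p_y)  # 0.7 * 0.5 + 0.3 * 1.0 = 0.65
```

Each selected rule stays inspectable, so the mixture never hides the logic behind a prediction.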
04.12.2024 08:49
CMR makes a prediction in 3 steps:
1) Predict concepts from the input
2) Neurally select a rule from a memory of learned logic rules → accuracy
3) Evaluate the selected rule with the concepts to make a final prediction → interpretability (4/7)
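The three steps can be sketched as follows, with toy functions standing in for the neural concept predictor and rule selector:

```python
import numpy as np

# Illustrative sketch of the three-step CMR prediction; in practice the
# concept predictor and rule selector are neural networks, replaced here
# by fixed toy functions so the control flow is visible.

def predict_concepts(x):
    # Step 1: map the raw input to concept scores.
    return np.array(x)  # toy: the input already is concept scores

def select_rule(concepts, memory):
    # Step 2: pick the most relevant rule from the learned memory.
    scores = np.array([rule["score"] for rule in memory])
    return memory[int(scores.argmax())]

def evaluate(rule, concepts):
    # Step 3: evaluate the selected logic rule on the concepts,
    # so the final decision is inspectable.
    return float(np.all(concepts[rule["requires"]] > 0.5))

memory = [
    {"requires": np.array([0, 1]), "score": 0.9},  # concept 0 AND concept 1
    {"requires": np.array([2]),    "score": 0.1},  # concept 2
]
c = predict_concepts([0.8, 0.9, 0.2])
rule = select_rule(c, memory)
print(evaluate(rule, c))  # 1.0: concepts 0 and 1 are both active
```

Accuracy comes from learning which rule to select per input; interpretability comes from the final decision being a plain logic rule over named concepts.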
04.12.2024 08:48
CMR has:
• State-of-the-art accuracy that rivals black-box models
• Pure probabilistic semantics with linear-time exact inference
• Transparent decision-making so human users can interpret model behavior
• Pre-deployment verifiability of model properties (3/7)
04.12.2024 08:47
CMR is our latest neurosymbolic concept-based model. A proven universal binary classifier irrespective of the concept set, CMR achieves near-black-box accuracy by combining rule learning and neural rule selection! (2/7)
04.12.2024 08:47
Interpretable AI often means sacrificing accuracy, but what if we could have both? Most interpretable AI models, like Concept Bottleneck Models, force us to trade accuracy for interpretability.
Not anymore, thanks to the Concept-Based Memory Reasoner (CMR)! #NeurIPS2024 (1/7)
04.12.2024 08:45