Had an amazing time at NeSy 2025 @nesyconf.org in Santa Cruz! Very well-organized conference, great talks, inspiring discussions, and of course enjoying the beautiful beach and Bay Area vibes. 🏖️✨
#NeSy2025 #neurosymbolicAI #SantaCruz
Today, @yuqichengzhu.bsky.social opened the oral session with a beautiful talk on a neurosymbolic extension of Retrieval-Augmented Generation (RAG) with an argumentation framework.
Yuqicheng is standing in front of his poster. The poster is on a panel with the number 299.
Yuqicheng Zhu (@yuqichengzhu.bsky.social) presented our paper, Predicate-Conditional Conformalized Answer Sets for Knowledge Graph Embeddings, at #ACL2025. This paper studies a key question for the reliability of applications using knowledge graph embeddings […]
[Original post on mstdn.degu.cl]
We are honored to host Dr. Reinhard Stolle for his talk, “Engineering safe systems with AI”, on June [date] at [time] in Room [room], Universitätsstraße [no.]. Students, staff, and all interested guests are warmly invited to attend!
#AI #Safety
Institute researchers propose a new query embedding method to extend the expressiveness of existing query embeddings. The paper will be presented at the Web Conference 2025, in Sydney, Australia, from April 28 to May 2.
Query embedding methods are used to predict the answers to queries by […]
Several papers from our institute (AC group) have been accepted at top conferences!
Our colleagues are attendingโfeel free to reach out if youโd like to connect! #ICLR2025 #WWW2025 #NAACL2025 #AI #NLP #ML
We're on Bluesky!
We are the AI Institute at the University of Stuttgartโresearching fundamental questions about AI, reflecting its benefits for society, and promoting the transfer of AI applications to business and society.
Follow us for insights, updates, and discussions on the future of AI!
🤔 In our paper, we tackle this issue by addressing a crucial question:
"How many entities do we need to guarantee coverage of the true answer at a pre-defined confidence level (e.g. 90%)?"
(The more entities we need, the more uncertain the KGE model is about its predictions.)
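The question above can be answered with split conformal prediction. Here is a minimal, self-contained sketch of how such an answer set could be built from KGE plausibility scores; the scores here are synthetic and all numbers are hypothetical, not the paper's actual method or data:

```python
import numpy as np

np.random.seed(0)

# Toy setup: a "KGE model" assigns a plausibility score to every candidate
# entity for each query. Scores are synthetic for illustration.
n_cal, n_entities = 500, 100
scores = np.random.rand(n_cal, n_entities)            # score per entity
true_idx = np.random.randint(n_entities, size=n_cal)  # true answer per query
scores[np.arange(n_cal), true_idx] += 0.5             # true answers score higher

# Split conformal: nonconformity = negative score of the true answer
# on a held-out calibration set.
noncon = -scores[np.arange(n_cal), true_idx]

# Threshold = empirical quantile with the finite-sample correction,
# targeting 90% coverage (alpha = 0.1).
alpha = 0.1
q = np.quantile(noncon, np.ceil((n_cal + 1) * (1 - alpha)) / n_cal)

# Prediction set for a new query: every entity whose score clears the bar.
new_scores = np.random.rand(n_entities)
answer_set = np.where(-new_scores <= q)[0]

# The larger this set, the more uncertain the model is about this query.
print(len(answer_set))
```

The set size itself is then the uncertainty measure: a confident model needs few entities to cover the true answer at the target level, an uncertain one needs many.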
🤔 When can we trust the predictions of Knowledge Graph Embedding (KGE) methods?
🎲 Do the plausibility scores they return provide this uncertainty information?
Unfortunately, no: these scores are not calibrated and lack a probabilistic interpretation.
🚨 New Paper Accepted at #NAACL2025 🚨
"Conformalized Answer Set Prediction for Knowledge Graph Embedding"
@yuqichengzhu.bsky.social, Nico Potyka, Jiarong Pan, @boxiong.bsky.social, Yunjie He, Evgeny Kharlamov, @ststaab.bsky.social
✨ Check out our paper: arxiv.org/pdf/2408.082...
1/ Okay, one thing that has been revealed to me from the replies to this is that many people don't know (or refuse to recognize) the following fact:
The units in ANNs are actually not a terrible approximation of how real neurons work!
A tiny 🧵.
🧠 #NeuroAI #MLSky
The slides for my lectures on (Bayesian) Active Learning, Information Theory, and Uncertainty are online now 🥳 They cover quite a bit, from basic information theory to some recent papers:
blackhc.github.io/balitu/
and I'll try to add proper course notes over time.
NeurIPS acknowledges that the cultural generalization made by the keynote speaker today reinforces implicit biases by making generalizations about Chinese scholars. This is not what NeurIPS stands for. NeurIPS is dedicated to being a safe space for all of us. 1/3
The final version of our survey on conformal prediction for NLP has now been published in TACL: direct.mit.edu/tacl/article...
Excited to present "Self-Calibrating Conformal Prediction" at #NeurIPS2024 this afternoon! Join me at the poster session to learn how combining model calibration with predictive inference gives calibrated point predictions and conditionally valid prediction intervals.
A doctoral researcher position in Machine Learning – Runtime Monitoring of Autonomous Driving Functions is open at Mercedes, in collaboration with my team, Analytic Computing, at @unistuttgart.bsky.social: jobs.mercedes-benz.com/en/phd-docto...
Best Paper Award & Honorable Mention
Congratulations to our outstanding authors for their exceptional contributions to LoG 2024!
Your work inspires the entire graph and geometric ML community!
Tractability has a precise definition, taken from computer science! In probabilistic inference, it means exactly answering a certain query class of interest in polynomial time. In this case: computing the probability of any complete joint assignment.
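As a toy illustration of that query class being tractable (my example, with made-up parameters): for a fully factorized distribution over n binary variables, the probability of any complete joint assignment is an O(n) product, even though there are 2^n assignments in total.

```python
import itertools

# Fully factorized (product) distribution over 3 independent binary
# variables; parameters are made up for the example.
p = [0.9, 0.2, 0.7]  # p[i] = P(X_i = 1)

def joint_prob(assignment):
    """Exact probability of a complete joint assignment, in linear time."""
    prob = 1.0
    for pi, x in zip(p, assignment):
        prob *= pi if x == 1 else 1.0 - pi
    return prob

# Sanity check: the 2^n joint probabilities sum to 1.
total = sum(joint_prob(a) for a in itertools.product([0, 1], repeat=len(p)))
print(round(total, 10))  # → 1.0
```

For richer models (e.g. with latent structure), whether such queries stay polytime-computable is exactly what separates tractable probabilistic models from intractable ones.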
Shameless plug from a tutorial here 👇👇👇
Qiang Zhang, Zaiqiao Meng, and I will give a 1.5-hour tutorial on Integrating #KnowledgeGraphs and #LLMs for Advancing Scientific Research at @logconference.bsky.social at 2:30pm (UK time / GMT) on 26 Nov. #AI4Science. Looking forward to discussing with you!
Excited to share that our paper "Self-Calibrating Conformal Prediction" with Ahmed Alaa has been accepted at #NeurIPS2024!
We combine model calibration and prediction intervals by integrating Venn-Abers into conformal prediction. #conformal #calibration
arxiv.org/pdf/2402.07307
I made a starter pack with the people doing something related to Neurosymbolic AI that I could find.
Let me know if I missed you!
go.bsky.app/RMJ8q3i
You can create your own rule-based feed with @skyfeed.app, or run a completely self-hosted feed server if you want to go fully custom.
For instance, @serge.belongie.com and I just set up an ML Internship feed that collects posts matching a keyword regex and the hashtag #MLinternship
bsky.app/profile/did:...
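A minimal sketch of the kind of matching rule such a feed generator might apply; the patterns and function name here are hypothetical, not SkyFeed's actual configuration:

```python
import re

# Match posts that mention an ML internship, either via keywords
# or via the dedicated hashtag. Patterns are illustrative only.
KEYWORDS = re.compile(r"\b(ml|machine learning)\b.*\bintern(ship)?\b",
                      re.IGNORECASE)
HASHTAG = re.compile(r"#MLinternship\b", re.IGNORECASE)

def matches(post_text: str) -> bool:
    """Return True if a post belongs in the feed under these rules."""
    return bool(KEYWORDS.search(post_text) or HASHTAG.search(post_text))

print(matches("We have a machine learning internship open!"))  # → True
print(matches("Lovely weather today"))                         # → False
```

In SkyFeed itself you express such rules in its feed-builder UI; a self-hosted feed server would run the equivalent filter over the firehose of posts.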
Congratulations, Michael!
I created a starter pack on KGs! I found others' starter packs useful as a newcomer.
go.bsky.app/EQRCq9R
"LLMs can't reason, look at how their accuracy drops if you change the numbers in the problem!!!"
The accuracy drop (%):