#interpretableAI
Posts tagged #interpretableAI on Bluesky

Interpretable Thermodynamic Score-based Classification of Relaxation Excursions
Goyal, Y. et al.
Paper
Details
#Thermodynamics #MachineLearning #InterpretableAI

Variational Autoencoder for Interpretable Seizure Onset Phase Detection in Epilepsy Drug-resistant epilepsy often requires precise identification of seizure onset zones using SEEG recordings. This article presents a Variational Autoencoder–based deep learning framework that detects a...

How can #AI make seizure detection more transparent? 🧠
Explore how a Variational Autoencoder helps interpret #SEEG data for #Epilepsy care — blending precision and explainability.
👉 Read more: www.neuroelectrics.com/blog/variati...
#SeizureDetection #DeepLearning #InterpretableAI #Neurotech
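
For a sense of the moving parts, here is a minimal VAE sketch in PyTorch; it assumes fixed-length windowed SEEG segments flattened to a vector, and every layer size and name below is illustrative rather than the article's architecture.

```python
# Minimal VAE for windowed SEEG segments -- illustrative sizes, not the article's model.
import torch
import torch.nn as nn

class SEEGVAE(nn.Module):
    def __init__(self, input_dim=1024, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)       # mean of q(z|x)
        self.to_logvar = nn.Linear(256, latent_dim)   # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, input_dim)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.decoder(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    # Reconstruction error plus KL divergence to the standard normal prior.
    recon = nn.functional.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```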

Sum-of-Parts Framework Boosts Interpretable Neural Networks

The Sum-of-Parts (SOP) framework turns any differentiable model into a self-attributing neural network that learns feature groups, achieving state-of-the-art results on vision and language benchmarks. Read more: getnews.me/sum-of-parts-framework-b... #interpretableai #sann

Interpretable Basis Extraction for Visual AI Explanations

Scientists unveiled a technique that extracts a sparse basis from CNN feature spaces, improving interpretability without manual labels. Tests on ResNet and VGG matched probing performance. Read more: getnews.me/interpretable-basis-extr... #interpretableai #ai
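
The summary doesn't spell out the algorithm, but one classical way to extract a sparse basis from CNN activations, in the same spirit, is sparse dictionary learning; the scikit-learn sketch below uses a made-up activations array and is an illustration, not the paper's method.

```python
# Illustrative sparse-basis extraction from CNN activations via dictionary
# learning (the same spirit as the post, not the paper's algorithm).
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
activations = rng.random((200, 128))   # stand-in for pooled CNN feature vectors

dico = DictionaryLearning(n_components=32, alpha=1.0, random_state=0)
codes = dico.fit_transform(activations)   # sparse coefficients per sample
basis = dico.components_                  # 32 candidate directions in feature space

print(f"avg nonzero coefficients per sample: {(codes != 0).sum(axis=1).mean():.1f}")
```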

Study Deciphers Vision Transformers via Residual Replacement

Researchers mapped 6.6K ViT features via sparse autoencoders and proposed a residual replacement model that swaps residual-stream updates for interpretable linear combinations. Read more: getnews.me/study-deciphers-vision-t... #visiontransformers #interpretableai
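
The sparse-autoencoder step can be sketched compactly; below is a minimal PyTorch version with an L1 sparsity penalty trained on stand-in residual-stream activations (all dimensions and data are placeholders, not the study's setup).

```python
# Minimal sparse autoencoder over transformer activations: reconstruction loss
# plus an L1 penalty on feature activations. Placeholder sizes and data.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model=768, n_features=6600):
        super().__init__()
        self.enc = nn.Linear(d_model, n_features)
        self.dec = nn.Linear(n_features, d_model)

    def forward(self, x):
        f = torch.relu(self.enc(x))   # nonnegative, encouraged to be sparse
        return self.dec(f), f

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
acts = torch.randn(4096, 768)         # stand-in for ViT residual-stream activations
for batch in acts.split(256):
    x_hat, f = sae(batch)
    loss = ((x_hat - batch) ** 2).mean() + 1e-3 * f.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```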

Neural Logic Networks Boost Interpretable AI Classification

Neural Logic Networks now support NOT gates and bias terms, enabling transparent IF‑THEN rules for tabular data. The open‑source code was released on 11 Aug 2025. getnews.me/neural-logic-networks-bo... #neurallogicnetworks #interpretableai #opensource
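
As a rough illustration of the idea (a toy sketch of a differentiable rule with per-input NOT gates and a bias term, not the released code):

```python
# Toy differentiable logic rule: learned per-input negation (NOT gate) and a
# soft AND with a bias term. A sketch of the general idea only.
import torch
import torch.nn as nn

class SoftAndRule(nn.Module):
    def __init__(self, n_inputs: int):
        super().__init__()
        self.neg = nn.Parameter(torch.zeros(n_inputs))   # sigmoid -> NOT gate per input
        self.sel = nn.Parameter(torch.zeros(n_inputs))   # sigmoid -> which inputs the rule uses
        self.bias = nn.Parameter(torch.zeros(1))         # bias term shifting the rule

    def forward(self, x):  # x: batch of feature values in [0, 1]
        p_neg = torch.sigmoid(self.neg)
        literal = p_neg * (1.0 - x) + (1.0 - p_neg) * x      # soft NOT
        p_sel = torch.sigmoid(self.sel)
        softened = p_sel * literal + (1.0 - p_sel)           # unselected inputs act as 1
        return softened.prod(dim=-1) + self.bias             # soft AND, then bias

rule = SoftAndRule(n_inputs=4)
print(rule(torch.rand(8, 4)).shape)  # torch.Size([8])
```

After training, thresholding the sigmoid weights lets an IF-THEN rule be read off directly: selected inputs become the rule's conditions, negated ones appear as NOT.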


Dive deeper into the details here: buff.ly/W0SKVbn

#MedicalAI #InterpretableAI #HealthcareTech #ActuarialScience #TrustworthyAI

Deep Learning-Enabled Interpretable Down Syndrome Detection Model — Down syndrome (DS) is a genetic condition characterized by distinct facial features and...

'Deep Learning-Enabled Interpretable Down Syndrome Detection Model' - research sponsored by the King Salman Center for #DisabilityResearch - on #ScienceOpen:

🖇️ #DownSyndrome #AIinMedicine #InterpretableAI #MedicalDiagnostics


Paper: nature.com/articles/s4200…
Code: github.com/ohsu-cedar-com…

#InterpretableML #AIinCancer #SingleCell #MultiOmics #InterpretableAI #CancerGenomics #OpenScience #Bioinformatics #ComputationalBiology #MachineLearning
@commsbio.nature.com @ohsunews.bsky.social @ohsuknight.bsky.social

Mastering Modern Time Series Forecasting: The Complete Guide to Statistical, Machine Learning & Deep Learning Models in Python 📘 An early-access book by Valeriy Manokhin, PhD, MBA, CQF, used in more than 100 countries. It covers classical statistical models through deep learning and forecasting-specific transformers, with an emphasis on evaluation, validation, deployment, and failure modes, and with fully documented, production-ready Python code throughout. Readers get instant access, lifetime updates, and a private Discord community; a companion live course runs at maven.com/valeriy-manokhin/modern-forecasting-mastery. A Pro Edition ($65 early access) adds forecasting templates, extended case studies, cheat sheets and flashcards, annotated notebooks, and a model-selection toolkit: valeman.gumroad.com/l/MasteringModernTimeSeriesForecastingPro. Standard Edition: $45 (minimum $39), increasing to $80+ as content grows.

My book 'Mastering Modern Time Series Forecasting: The Complete Guide to Statistical, Machine Learning & Deep Learning Models in Python' -> valeman.gumroad.com/...

#timeseries #machinelearning #forecasting #shapelets #interpretableAI #predictivemaintenance #neuralnetworks


Multicenter Evaluation of Interpretable AI for Coronary Artery Disease Diagnosis from PET Biomarkers
Acampa, W., Barrett, L. et al.
Paper
Details
#InterpretableAI #CardiologyResearch #PETBiomarkers


3️⃣ Interpretability through explicit reasoning traces
🔎 Models produce detailed “thought processes” that explain why a continuation is stereotypical or not, enabling transparency and easier auditing of AI bias.
/5

#InterpretableAI #SocialBias

Improving AI Accuracy and Interpretability with ICE-T

ICE-T is a new prompting method that boosts AI accuracy and transparency, outperforming zero-shot learning—especially in regulated, high-stakes fields.
#interpretableai

ICE-T’s Challenges and Paths for Future Development

Explore ICE-T method limitations, future research directions, and reproducibility details for enhancing LLM binary classification accuracy and interpretability. #interpretableai

Key Questions in the ICE-T Method for Patient Assessment

Explore the ICE-T method’s key questions used for patient assessment across drug abuse, alcohol use, medical decisions, and other clinical tasks. #interpretableai

ICE-T Outperforms Zero-Shot in NLP Tasks Across Multiple Domains

ICE-T outperforms zero-shot methods, significantly boosting µF1 scores in GPT-3.5 and GPT-4 across diverse classification tasks and datasets. #interpretableai

Improving Binary Classification with LLM-Generated Questions

LLMs generate yes/no questions to improve binary classification. We test classifiers and analyze µF1 performance using GPT-4 and GPT-3.5 outputs. #interpretableai

Medical and Legal Text Datasets for Binary Classification Tasks

Explore 3 labeled NLP datasets for binary classification: medical advice, human rights violations, and unfair contract terms in online ToS. #interpretableai

Diverse NLP Datasets for Real-World Text Classification

Explore annotated datasets used for text classification across domains—medical records, climate reports, and political tweets on Catalan independence. #interpretableai

Using LLMs for Downstream Classification: Prompt, Verbalize, Train

Learn how to prompt LLMs, convert their outputs into feature vectors, and train a classifier using verbalized responses for predictive tasks.
#interpretableai

How ICE-T Trains LLMs with Yes/No Questions for Better Classification

Learn how the ICE-T system trains language models using yes/no questions, converting answers into feature vectors for classifier training. #interpretableai
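
Pieced together from these summaries, the pipeline is: ask the LLM a fixed set of yes/no questions per document, verbalize the answers into a binary feature vector, and fit a small traditional classifier on those vectors. A minimal sketch, in which ask_llm and the question list are hypothetical stand-ins for ICE-T's actual prompts:

```python
# Sketch of the prompt -> verbalize -> train loop described in these posts.
# ask_llm and QUESTIONS are hypothetical stand-ins, not ICE-T's actual prompts.
from sklearn.linear_model import LogisticRegression

QUESTIONS = [
    "Does the text mention a prescription drug?",
    "Does the text describe a dosage or treatment plan?",
    "Does the text advise consulting a clinician?",
]

def ask_llm(question: str, document: str) -> str:
    # Placeholder: wire a real LLM call in here. The keyword stub below only
    # exists so the sketch runs end to end.
    hits = ("drug", "dose", "doctor", "clinician")
    return "yes" if any(w in document.lower() for w in hits) else "no"

def verbalize(document: str) -> list:
    # Turn each yes/no answer into one binary feature.
    return [1 if ask_llm(q, document).strip().lower().startswith("yes") else 0
            for q in QUESTIONS]

docs = ["Ask your doctor before changing the dose.", "The weather was nice today."]
labels = [1, 0]
clf = LogisticRegression().fit([verbalize(d) for d in docs], labels)
print(clf.coef_)   # one weight per question: the classifier is directly readable
```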

Why Interpreting LLMs Is Still So Hard

LLMs struggle with interpretability, overconfidence, and flawed explanations—key hurdles to using them in high-stakes domains like medicine or science. #interpretableai

How Prompting and In-Context Learning Improve LLM Performance

Explore advanced prompting and in-context learning strategies that improve the reasoning and performance of large language models during inference.
#interpretableai

ICE-T Serves AI Truth Cold—with Multiple Prompts and a Side of Clarity

ICE-T uses multiple LLM prompts combined with traditional classifiers to improve AI classification performance with high interpretability in medicine and law.

#interpretableai


#KANs #iTFKAN #TimeSeriesForecasting #InterpretableAI #DeepLearning #Transformers #MachineLearning #Forecasting #MLInnovation #SOTA


6/ What about interpretable models? They are often more trustworthy but typically cannot be trained by federated learning. 🌲 FedCT doesn't discriminate against interpretable models. Works with decision trees, XGBoost, etc. Quality similar to centralized training. #InterpretableAI


How can we interpret what features LLMs use to perform a given task? 🤖💭 And how do we know if our interpretation is correct? 🤔🔬

Excited to be presenting 2 papers + oral on these questions in the #InterpretableAI workshop at #neurips2024 📢 -- come by our posters/talk to hear more!


12/ Let’s rethink the future of human-AI collaboration. 🤝

Herzog, S. M., & Franklin, M. (2024). Boosting human competences with interpretable and explainable artificial intelligence. Decision, 11(4), 493–510. doi.org/10.1037/dec0...

#AI #XAI #InterpretableAI #IAI #boosting #competences

Article information

Title: Boosting human competences with interpretable and explainable artificial intelligence.

Full citation: Herzog, S. M., & Franklin, M. (2024). Boosting human competences with interpretable and explainable artificial intelligence. Decision, 11(4), 493–510. https://doi.org/10.1037/dec0000250

Abstract: Artificial intelligence (AI) is becoming integral to many areas of life, yet many—if not most—AI systems are opaque black boxes. This lack of transparency is a major source of concern, especially in high-stakes settings (e.g., medicine or criminal justice). The field of explainable AI (XAI) addresses this issue by explaining the decisions of opaque AI systems. However, such post hoc explanations are troubling because they cannot be faithful to what the original model computes—otherwise, there would be no need to use that black box model. A promising alternative is simple, inherently interpretable models (e.g., simple decision trees), which can match the performance of opaque AI systems. Because interpretable models represent—by design—faithful explanations of themselves, they empower informed decisions about whether to trust them. We connect research on XAI and inherently interpretable AI with that on behavioral science and boosts for competences. This perspective suggests that both interpretable AI and XAI could boost people’s competences to critically evaluate AI systems and their ability to make accurate judgments (e.g., medical diagnoses) in the absence of any AI support. Furthermore, we propose how to empirically assess whether and how AI support fosters such competences. Our theoretical analysis suggests that interpretable AI models are particularly promising and—because of XAI’s drawbacks—preferable. Finally, we argue that explaining large language models (LLMs) faces similar challenges as XAI for supervised machine learning and that the gist of our conjectures also holds for LLMs.


🌟🤖📝 **Boosting human competences with interpretable and explainable artificial intelligence**

How can AI *boost* human decision-making instead of replacing it? We talk about this in our new paper.

doi.org/10.1037/dec0...

#AI #XAI #InterpretableAI #IAI #boosting #competences
🧵👇

The cloud giants can and should improve the transparency of their own AI foundation models (and/or of companies such as OpenAI, as major investors). It is not clear that they will do so without strong policy incentives. See Amazon; Google; Microsoft.
Greater emphasis on the distribution of IT resources can help to address the digital divide, with due consideration for the risks of adverse digital inclusion. Currently, about two-thirds of the world's population is online, and one-third does not use the internet. Internet use is growing: some estimates suggest that around a billion more users will be added in the next five years. See The Cloud vs. On-Prem vs. Hybrid; The Cloud in Context.


#explainableAI #transparentAI #interpretableAI
