This work was made possible through a great collaboration with Jingcheng (Frank) Niu, Subhabrata Dutta, Ahmed Elshabrawy, @harishtm.bsky.social, and @igurevych.bsky.social
#Interpretability #InContextLearning #TMLR #LLMs #MechanisticInterpretability #EmergentAbilities
AI In-Context Learning Applied to Visual and EEG Datasets
A new study applies AI in-context learning to visual and EEG datasets. Read more: getnews.me/ai-in-context-learning-a... #ai #incontextlearning #eeg
Transformers Prove In-Context Learning for Nonlinear Regression
Researchers show transformers can learn nonlinear regression through in‑context learning without weight updates, converging when the target’s Lipschitz constant is low. Read more: getnews.me/transformers-prove-in-co... #transformers #incontextlearning
How Pretraining Data Shapes In-Context Learning
A new study finds that heavier-tailed pretraining data improves accuracy on rare numerical tasks, while broader coverage cuts the demos needed for target performance. Read more: getnews.me/how-pretraining-data-sha... #incontextlearning #pretraining
Study Shows Mamba Model Learns In‑Context Despite Outliers
The paper, submitted Oct 1 2025, shows a one‑layer Mamba model can retain accuracy with higher outlier rates than linear Transformers, though it may need more training steps. getnews.me/study-shows-mamba-model-... #mambamodel #incontextlearning
In-Context Learning Boosts Knowledge Accuracy, Cuts Reasoning in LLMs
Six ICL variants of GPT‑OSS:20b were tested on 840 questions, raising general‑knowledge accuracy to 91‑99% while dropping logic‑riddle scores to 10‑43% (baseline 43%). getnews.me/in-context-learning-boos... #gptoss20b #incontextlearning
LLMs Demonstrate In-Context Bandit Reinforcement Learning
Researchers found LLMs from 500 million up to 70 billion parameters can learn via in‑context bandit reinforcement, adapting to binary reward signals in the prompt without any parameter updates. getnews.me/llms-demonstrate-in-cont... #incontextlearning #llms
Learned Task Vectors Enhance LLM Performance and Explain ICL
Directly trained Learned Task Vectors (LTVs) outperform extracted TVs, achieve higher accuracy on benchmarks, and can be placed at any transformer layer or token position. getnews.me/learned-task-vectors-enh... #learnedtaskvectors #incontextlearning
New Study Maps Task Recognition and Learning Inside Large Language Model Attention
A 45‑page study submitted 29 Sep 2025 introduces Task Subspace Logit Attribution (TSLA) to identify attention heads for task recognition and learning. Read more: getnews.me/new-study-maps-task-reco... #incontextlearning #transformers
Mamba Model Replicates Online Gradient Descent in Linear Regression ICL
Mamba state‑space model mimics online gradient descent for linear‑regression in‑context learning, reaching convergence similar to transformers. Submitted 28 Sep 2025. Read more: getnews.me/mamba-model-replicates-o... #mamba #incontextlearning
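A toy sketch of the online gradient descent the post describes (my own illustration, not the paper's code): each in-context (x, y) demonstration triggers one SGD step on a linear predictor, so the implicit estimate improves as demos accumulate.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
w_true = rng.normal(size=d)

# In-context "demonstrations": (x, y) pairs drawn from one linear task.
xs = rng.normal(size=(64, d))
ys = xs @ w_true

# Online gradient descent: one update per demonstration, the procedure
# the Mamba model is claimed to mimic in-context.
w = np.zeros(d)
lr = 0.1
for x, y in zip(xs, ys):
    grad = (x @ w - y) * x      # gradient of 0.5 * (x·w - y)^2
    w -= lr * grad

print(np.linalg.norm(w - w_true))  # error shrinks as demos accumulate
```

The dimension, learning rate, and number of demonstrations are arbitrary choices for the sketch.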
Fixed-Weight Transformers Emulate Algorithms Through Prompting
Researchers show a frozen-weight Transformer can emulate diverse algorithms via prompt tokens, proving algorithmic universality for fixed-weight models. Code on GitHub. Read more: getnews.me/fixed-weight-transformer... #transformers #incontextlearning
In-Context Learning Emerges in World Models via Diverse Environments
AI world models can adapt on‑the‑fly via in‑context environment learning (ICEL); longer context windows and diverse environments boost error‑bound performance. 26 Sep 2025. getnews.me/in-context-learning-emer... #incontextlearning #worldmodels
Bayesian Scaling Laws Explain In-Context Learning Performance
A study finds in‑context learning follows a Bayesian scaling law that predicts accuracy gains as examples increase; experiments confirmed the trend on GPT‑2 models. getnews.me/bayesian-scaling-laws-ex... #incontextlearning #gpt2
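A toy picture of the Bayesian view behind that scaling law (illustration only, not the paper's code): treat the model as holding a prior over candidate tasks and updating it with each in-context example; accuracy then tracks the posterior mass on the true task, which grows with the number of demonstrations.

```python
import numpy as np

# Candidate tasks: coins with different heads-rates. Demonstrations
# come from the 0.8 coin; a deterministic 4-in-5 heads pattern stands
# in for samples at that rate, for reproducibility.
tasks = [0.2, 0.5, 0.8]
true_task = 2

def posterior_after(n):
    flips = (np.arange(n) % 5) < 4                       # 80% heads
    log_post = np.full(len(tasks), -np.log(len(tasks)))  # uniform prior
    for f in flips:
        for i, p in enumerate(tasks):
            log_post[i] += np.log(p if f else 1.0 - p)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# Posterior on the true task rises steadily with in-context examples.
for n in (1, 4, 16, 64):
    print(n, round(posterior_after(n)[true_task], 3))
```

The coin tasks are a stand-in for whatever task family the model was pretrained on; the point is only the monotone accuracy gain with example count.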
Weak Supervision Method Improves Stability of In-Context Learning
A weak-supervision method adds a tiny adapter to capture demo effects, lowering inference time and stabilizing performance as prompts grow. Accepted at NeurIPS 2025. getnews.me/weak-supervision-method-... #weaksupervision #incontextlearning
KITE Method Boosts In-Context Learning with Kernelized Exemplars
KITE, a kernelized framework for selecting exemplars in in‑context learning, outperformed nearest‑neighbor retrieval in classification benchmarks without increasing prompt length. Read more: getnews.me/kite-method-boosts-in-co... #kite #incontextlearning
In-Context Learning: Assessing Its True Learning Capabilities
In‑context learning lets models handle tasks via prompts with no fine‑tuning, and experiments show it beats zero‑shot baselines but loses accuracy on tasks far from the training data. Read more: getnews.me/in-context-learning-asse... #incontextlearning #fewshot
MimicDroid: In‑Context Learning for Robots via Human Play Videos
MimicDroid trains humanoid robots from a few unlabeled human play videos, doubling success rates versus prior methods in real tests. Read more: getnews.me/mimicdroid-in-context-le... #mimicdroid #robotics #incontextlearning
The mystery revealed: how LLMs learn WITHOUT training.
🔹 Implicit fine-tuning
🔹 Low-rank weight updates
🔹 The context acts as real-time gradient descent
youtu.be/l0JGUWMHBOc
#IA #LLM #DeepLearning #InContextLearning #AIResearch #AI
Towards Theoretical Understanding of Transformer Test-Time Computing:
Investigation on In-Context Linear Regression
Beining Wu, Difan Zou et al.
Paper
Details
#TransformerModel #TestTimeComputing #InContextLearning
In-context learning has repeatedly been shown to outperform hand-crafted neural learning algorithms across the board.
But it's limited by the length of the context. Even architectures that let the context grow without bound come with high costs and scaling problems.
Is there a […]
5/5 Ready to level up your AI game?
Catch the new episode of The Data Guy Show for the full breakdown and actionable tips:
thedataguy.pro/s/7YzAL4Lj5GO
💡 How do you “teach” your AI? Drop your best tips or questions below!
#AICommunity #InContextLearning
Self-Generated In-Context Examples Improve LLM Agents for Sequential Decision-Making Tasks
#LLMAgents #InContextLearning #ReinforcementLearning #DecisionMaking
Teach your AI without retraining it.
In-Context Learning = showing, not telling. 💡
Full guide here:
promptwritersai.com/in-context-l...
#AI #InContextLearning #PromptEngineering
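The "showing, not telling" idea above is just few-shot prompting: demonstrations go into the context and the model infers the task, with no weight updates. A minimal sketch (the antonym task and examples are invented for illustration):

```python
# In-context learning = demonstrations in the prompt, no retraining.
demos = [
    ("cold", "hot"),
    ("up", "down"),
    ("fast", "slow"),
]
query = "light"

# Build a few-shot prompt: instruction, worked examples, then the query.
prompt = "Give the antonym of each word.\n"
for word, antonym in demos:
    prompt += f"Input: {word}\nOutput: {antonym}\n"
prompt += f"Input: {query}\nOutput:"

print(prompt)  # send this string to any chat/completion model
```

Swapping the demonstrations swaps the task: the model is "taught" purely by what it is shown.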
In-Context Learning meets Chaos
In-context learning is promising, but its limitations resemble chaotic dynamics.
A new Kind of AutoML:
memosisland.blogspot.com/2024/05/llm-...
#llm #reasoning #InContextLearning #chaos #ai
#ChainOfThought #LLMstability #SymbolicAutoML