
Taku Ito

@takuito

Computational Neuroscience + AI @ IBM Research | πŸ“NYC | https://ito-takuya.github.io

596
Followers
543
Following
5
Posts
21.09.2023
Joined

Latest posts by Taku Ito @takuito

Bullshit Bench V2

new: 100 questions across several domains

- Anthropic & Qwen still on top
- Reasoning seems to hurt
- New models are *not* better than old (except Claude)
- Seems to be independent of domain

github.com/petergpt/bul...

02.03.2026 16:23 πŸ‘ 102 πŸ” 11 πŸ’¬ 6 πŸ“Œ 3
Preview
Text-to-LoRA: Instant Transformer Adaption While Foundation Models provide a general tool for rapid content creation, they regularly require task-specific adaptation. Traditionally, this exercise involves careful curation of datasets and repea...

Sakana has developed a way to, if I understand correctly, instantly generate LoRAs on demand from long texts or documents.

arxiv.org/abs/2506.06105
arxiv.org/abs/2602.15902
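Roughly, a LoRA adapts a frozen weight W with a low-rank update B·A; a hypernetwork in the Text-to-LoRA spirit would emit A and B conditioned on a task description. A toy numpy sketch of the mechanics (sizes and names are illustrative, not Sakana's code):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 2                       # hidden size, LoRA rank (r << d)
W = rng.normal(size=(d, d))        # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.1  # in Text-to-LoRA, a hypernetwork would
B = rng.normal(size=(d, r)) * 0.1  #   emit A, B from a task description

x = rng.normal(size=(d,))
base = W @ x                       # frozen forward pass
adapted = (W + B @ A) @ x          # adapted pass; only 2*r*d new params
```

The appeal is that the low-rank update is tiny (2·r·d parameters vs. d² for a full weight), so generating one per task on the fly is cheap.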

27.02.2026 05:51 πŸ‘ 54 πŸ” 6 πŸ’¬ 3 πŸ“Œ 4
Preview
US science after a year of Trump: what has been lost and what remains A series of graphics reveals how the Trump administration has sought historic cuts to science and the research workforce.

Trump has been in office for one year. We at @nature.com did a deep dive into the administration's disruption of science, in numbers.

Take a lookβ€”the numbers are staggering. By me, @dangaristo.bsky.social, Jeff Tollefson, @kimay.bsky.social, & help from @noamross.net @scott-delaney.bsky.social

20.01.2026 18:08 πŸ‘ 505 πŸ” 318 πŸ’¬ 10 πŸ“Œ 30
This line graph illustrates the percentage change in agency staff levels from the previous year for nine major U.S. federal scientific and health organizations between the fiscal years 2016 and 2025. The agencies tracked include the CDC, Department of Energy, EPA, FDA, NASA, NIH, NIST, NOAA, and NSF. For the majority of the timeline between 2016 and 2023, the agencies show relatively stable fluctuations, generally staying within a range of +5% to -5% change per year. However, there is a dramatic and uniform plummet starting in the 2024–25 period. Every agency depicted shows a sharp downward trajectory, with staffing losses ranging from approximately -15% to over -25%. The Environmental Protection Agency (EPA) shows the most significant decline, dropping to roughly -26%, while the National Institute of Standards and Technology (NIST) shows the least severe but still substantial drop at approximately -15%.

This is the most astonishing graph of what the Trump regime has done to US science. They have destroyed the federal science workforce across the board. The negative impacts on Americans will be felt for generations, and the US might never be the same again.

www.nature.com/immersive/d4...

20.01.2026 22:53 πŸ‘ 14449 πŸ” 8316 πŸ’¬ 90 πŸ“Œ 765

One of my favorite findings: Positional embeddings are just training wheels. They help convergence but hurt long-context generalization.

We found that if you simply delete them after pretraining and recalibrate for <1% of the original budget, you unlock massive context windows. Smarter, not harder.
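The deletion is possible because absolute positional embeddings enter only additively at the input: removing them changes the activations, not the architecture, which is why a short recalibration pass suffices. A toy numpy sketch of that additivity (hypothetical names, not the paper's code):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(tok_emb, pos_emb=None):
    # Single-head self-attention. Positions enter only additively,
    # so "deleting" pos_emb is just dropping one input term.
    x = tok_emb if pos_emb is None else tok_emb + pos_emb[: len(tok_emb)]
    scores = x @ x.T / np.sqrt(x.shape[1])
    return softmax(scores, axis=-1) @ x

rng = np.random.default_rng(0)
tok = rng.normal(size=(4, 8))   # 4 tokens, d=8
pos = rng.normal(size=(4, 8))   # learned absolute positions (illustrative)

with_pos = attention(tok, pos)
no_pos = attention(tok)         # same layer, positions deleted post hoc
```

Note the output shapes and the layer itself are unchanged; only the input distribution shifts, which is what the brief recalibration corrects for.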

12.01.2026 04:12 πŸ‘ 220 πŸ” 32 πŸ’¬ 8 πŸ“Œ 1

Oh wow, DeepSeek is starting to make serious progress on LLMs that offload memory to external storage: github.com/deepseek-ai/...

12.01.2026 18:44 πŸ‘ 219 πŸ” 25 πŸ’¬ 6 πŸ“Œ 8
Schematic depicting cortical-subcortical interactions during multi-task learning

Excited to see our paper with @mwcole.bsky.social finally out in peer-reviewed form @natcomms.nature.com! We examine how the human brain learns new tasks and optimizes representations over practice…1/n

19.11.2025 18:03 πŸ‘ 24 πŸ” 7 πŸ’¬ 1 πŸ“Œ 0
Preview
AI discovers learning algorithm that outperforms those designed by humans An artificial-intelligence algorithm that discovers its own way to learn achieves state-of-the-art performance, including on some tasks it had never encountered before.

Did you know that AI can figure out its own way to learn, and that its way is better than one designed by humans? Read more in a @nature.com N&V (and the original paper is in the comment) πŸ§ͺ www.nature.com/articles/d41...

24.10.2025 13:18 πŸ‘ 6 πŸ” 2 πŸ’¬ 2 πŸ“Œ 0

Our work with @pawa-pawa.bsky.social is out in Nature Machine Intelligence! The choice of activation function affects the representations, dynamics, and circuit solutions that emerge in RNNs trained on cognitive tasks. Activation matters!
www.nature.com/articles/s42...

24.10.2025 19:18 πŸ‘ 42 πŸ” 11 πŸ’¬ 0 πŸ“Œ 0

(repost welcome) The Generative Model Alignment team at IBM Research is looking for next summer's interns! Two openings, two topics:

🍰Reinforcement Learning environments for LLMs

🐎Speculative and non-autoregressive generation for LLMs

Interested/curious? DM or email ramon.astudillo@ibm.com

07.10.2025 20:19 πŸ‘ 19 πŸ” 14 πŸ’¬ 1 πŸ“Œ 1
Preview
Why I left academia and neuroscience Don't worry, this isn't yet another story of rage-quitting.

Michael X Cohen on why he left academia/neuroscience.
mikexcohen.substack.com/p/why-i-left...

06.10.2025 17:05 πŸ‘ 95 πŸ” 36 πŸ’¬ 7 πŸ“Œ 14
Preview
Arousal as a universal embedding for spatiotemporal brain dynamics - Nature Reframing of arousal as a latent dynamical system can reconstruct multidimensional measurements of large-scale spatiotemporal brain dynamics on the timescale of seconds in mice.

Nature research paper: Arousal as a universal embedding for spatiotemporal brain dynamics

go.nature.com/4nMUgYz

26.09.2025 10:26 πŸ‘ 31 πŸ” 12 πŸ’¬ 0 πŸ“Œ 2
Post image

Lab’s latest is out in Imaging Neuroscience, led by Kirsten Peterson: β€œRegularized partial correlation provides reliable functional connectivity estimates while correcting for widespread confounding”, where we demonstrate a major improvement to standard fMRI functional connectivity (correlation) 1/n
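For intuition: partial correlation is read off the inverse covariance (precision) matrix, which removes indirect influences shared with other regions; regularizing the inversion keeps the estimate stable when timepoints are limited. A toy sketch of the general idea, with simple ridge regularization standing in for the paper's estimator (illustrative only, not the paper's code):

```python
import numpy as np

def regularized_partial_corr(X, alpha=0.1):
    """Partial correlation via a ridge-regularized precision matrix.
    X: (timepoints, regions) data matrix."""
    cov = np.cov(X, rowvar=False)
    prec = np.linalg.inv(cov + alpha * np.eye(cov.shape[0]))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)   # standard precision -> partial corr
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))        # e.g., 200 TRs, 5 regions
P = regularized_partial_corr(X)
```

Unlike plain correlation, entries of P are near zero for region pairs whose apparent coupling is entirely explained by the other regions.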

14.09.2025 21:34 πŸ‘ 75 πŸ” 30 πŸ’¬ 6 πŸ“Œ 0
Preview
Can AI generate truly novel algorithms? A decades-old approach to measuring algorithmic complexity could provide a window into better understanding how AI systems compute.

Formalizing AI computation in terms of algorithmic complexity offers a principled way to quantify what AI systems compute, and a foundation for building more algorithmically capable systems in the future.
Blog: research.ibm.com/blog/ai-algo...
arXiv: arxiv.org/abs/2411.05943

19.08.2025 22:43 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

While using AI models to generate code is commonplace these days, we still do not fully understand the limits of the complexity of the code these models can formulate.
3/n

19.08.2025 22:40 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Video thumbnail

Using circuits to formalize algorithmic problems for AI models (e.g., depth as time complexity, size as space complexity), we can quantify the complexity of circuit computations (algorithmic complexity) an AI model can perform.
2/n
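As a toy illustration of the depth/size framing (a hypothetical three-gate circuit, not one from the paper): depth is the longest input-to-output path (parallel time), size is the gate count (work).

```python
# A boolean circuit as a DAG: gate -> list of predecessor gates.
# Depth ~ parallel time complexity; size ~ amount of work.
circuit = {
    "x0": [], "x1": [], "x2": [],   # inputs
    "a": ["x0", "x1"],              # e.g., AND gate
    "b": ["x1", "x2"],              # e.g., OR gate
    "out": ["a", "b"],              # e.g., XOR gate
}

def depth(gate):
    preds = circuit[gate]
    return 0 if not preds else 1 + max(depth(p) for p in preds)

size = sum(1 for g, preds in circuit.items() if preds)  # non-input gates
# depth("out") == 2, size == 3
```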

19.08.2025 22:39 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Video thumbnail

What complexity of algorithms can AI compute? In a new paper with colleagues at IBM Research, we explore how circuit complexity theory can help quantify the degree of algorithmic generalization in AI systems. www.nature.com/articles/s42...
@natmachintell.nature.com
#ML #AI #MLSky
1/n

19.08.2025 22:38 πŸ‘ 17 πŸ” 5 πŸ’¬ 1 πŸ“Œ 1
Post image

Mental health research is at a turning pointβ€”breakthroughs can transform lives, but only with bold action, investment, and open collaboration. The time for action is now. Read our full statement here: childmind.org/blog/can-sci...

07.03.2025 20:17 πŸ‘ 15 πŸ” 7 πŸ’¬ 0 πŸ“Œ 0
Post image

Out today in Nature Machine Intelligence!

From childhood on, people can create novel, playful, and creative goals. Models have yet to capture this ability. We propose a new way to represent goals and report a model that can generate human-like goals in a playful setting... 1/N

21.02.2025 16:29 πŸ‘ 135 πŸ” 40 πŸ’¬ 5 πŸ“Œ 4

New preprint! Ziyan and I explore how task order impacts continual learning in neural networks and how to optimize it. Our analysis highlights two key principles for better task sequencing.
Check it out: arxiv.org/pdf/2502.03350

06.02.2025 23:14 πŸ‘ 7 πŸ” 3 πŸ’¬ 0 πŸ“Œ 0

The entire website for the NIH Office of Research on Women's Health (ORWH) is very nearly stripped bare. This is so, so devastating. orwh.od.nih.gov/research/fun...

31.01.2025 18:25 πŸ‘ 978 πŸ” 617 πŸ’¬ 44 πŸ“Œ 65
Preview
Discretized representations in V1 predict suboptimal orientation discrimination - Nature Communications How animals generate perceptual decisions remains poorly understood. Here, the authors show that during a discrimination task, the mouse visual cortex does not encode the orientations of the cues but ...

New paper out! 🚨 πŸ“° With @batuhanerkat.bsky.social, John McClure, @hussainyk1.bsky.social, @polacklab.bsky.social we reveal how discretized representations in V1 predict suboptimal orientation discrimination. πŸ§ͺ🧠🐭 This work reconciles neuro and psychometric curves
www.nature.com/articles/s41...

08.01.2025 21:43 πŸ‘ 26 πŸ” 8 πŸ’¬ 3 πŸ“Œ 0
Post image

New paper in @brain1878.bsky.social: Healthy people under S-ketamine, an NMDAR antagonist, and people living with schizophrenia, a disorder associated with NMDAR hypofunction, spend more time in an external mode of perception - where noisy sensory signals override knowledge about the world.

19.01.2025 21:18 πŸ‘ 26 πŸ” 8 πŸ’¬ 1 πŸ“Œ 2
Preview
The origin of color categories | PNAS To what extent does concept formation require language? Here, we exploit color to address this question and ask whether macaque monkeys have color ...

The origin of color categories | PNAS www.pnas.org/doi/10.1073/...

16.01.2025 15:59 πŸ‘ 52 πŸ” 13 πŸ’¬ 2 πŸ“Œ 6

Check out our latest, in which we leverage shape metrics to compare neural geometry across regions, sessions, or subjects, and show how their differences predict behavior.

w/ Nejatbakhsh, Duong, @sarah-harvey.bsky.social, Brincat, @siegellab.bsky.social, @earlkmiller.bsky.social & @itsneuronal.bsky.social

12.01.2025 15:19 πŸ‘ 103 πŸ” 37 πŸ’¬ 3 πŸ“Œ 1
Post image

Paper shows very small LLMs can match or beat larger ones through 'deep thinking' - evaluating different solution paths - and other tricks. Their 7B model beats o1-preview on complex math by exploring 64 different solutions & picking the best one.

Test-time compute paradigm seems really fruitful.
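The best-of-N idea underneath this is simple to sketch: sample many candidate solutions and keep the one a scorer likes best, instead of trusting a single greedy answer. Toy stand-ins below (the "model" guesses an integer, the verifier scores closeness to 42), not the paper's setup:

```python
import random

def best_of_n(generate, score, n=64, seed=0):
    """Test-time compute: draw n candidate solutions and return the
    highest-scoring one according to a verifier/reward function."""
    rng = random.Random(seed)
    candidates = [generate(rng) for _ in range(n)]
    return max(candidates, key=score)

# Hypothetical sampler and verifier standing in for an LLM + checker.
answer = best_of_n(
    generate=lambda rng: rng.randint(0, 100),
    score=lambda x: -abs(x - 42),
    n=64,
)
```

The extra compute buys accuracy only insofar as the scorer can tell good candidates from bad, which is why verifiable domains like math benefit first.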

11.01.2025 05:34 πŸ‘ 157 πŸ” 20 πŸ’¬ 3 πŸ“Œ 4
Preview
Linking neural population formatting to function Animals capable of complex behaviors tend to have more distinct brain areas than simpler organisms, and artificial networks that perform many tasks tend to self-organize into modules (1-3). This sugge...

New results for a new year! β€œLinking neural population formatting to function” describes our modern take on an old question: how can we understand the contribution of a brain area to behavior?
www.biorxiv.org/content/10.1...
πŸ§ πŸ‘©πŸ»β€πŸ”¬πŸ§ͺ🧡
#neuroskyence
1/

04.01.2025 16:25 πŸ‘ 232 πŸ” 82 πŸ’¬ 2 πŸ“Œ 7
Preview
AI and Stress 200Bn Weights of Responsibility The Stress of Working in Modern AI Felix Hill, Oct 2024 The field of AI has changed irrevocably in the last 2 years. ChatGPT is approaching 200m monthly users. Gemin...

And relatedly, Felix wrote a good piece on the stress and anxiety currently affecting many people who work in AI due to the current climate in the industry:

docs.google.com/document/d/1...

If only more folks in AI were gentle and introspective like this...

03.01.2025 20:05 πŸ‘ 17 πŸ” 5 πŸ’¬ 0 πŸ“Œ 0

What was the most important machine learning paper in 2024?

My Famous Deep Learning Papers list (that I use in teaching) does not include any new ideas from the last year.

papers.baulab.info

Which single new paper would you add?

31.12.2024 15:09 πŸ‘ 55 πŸ” 11 πŸ’¬ 10 πŸ“Œ 0
Preview
Did OpenAI Just Solve Abstract Reasoning? OpenAI’s o3 model aces the "Abstraction and Reasoning Corpus" β€” but what does it mean?

Some of my thoughts on OpenAI's o3 and the ARC-AGI benchmark

aiguide.substack.com/p/did-openai...

23.12.2024 14:38 πŸ‘ 339 πŸ” 98 πŸ’¬ 16 πŸ“Œ 26