While both robotics and language modeling can be cast as next-token prediction, the token distribution for computer-use agents looks more like abstract motor programs (robotics) than like language. That puts computer use on the trajectory of robotics, which is slower than that of LLMs. 2/2
30.05.2025 15:21
Intriguing prediction from Trenton Bricken & @sholto-douglas.bsky.social on @dwarkesh.bsky.social's podcast: computer use agents "solved" in ~10 months 🖱️⌨️. This feels highly optimistic. I think that computer use is closer to robotics than language modeling. 1/2
30.05.2025 15:21
Iβm presenting this work at 11a PT today in East Exhibit Hall at poster #4009. Come by and chat!
11.12.2024 18:01
📍 You can find me at the following presentations:
- Poster Session 1 East #4009 on Wednesday, December 11, from 11a-2p PST.
- System 2 Reasoning Workshop Spotlight Oral Talk on Sunday, December 15, from 9:30-10a PST.
- System 2 Reasoning Workshop poster sessions on Sunday, December 15.
09.12.2024 15:06
🌲 DTM and sDTM operate on trees, and we introduce a very simple, dataset-independent method to embed sequence inputs and outputs as trees. Across a variety of datasets and test-time distributional shifts, sDTM outperforms fully neural and hybrid neurosymbolic models.
09.12.2024 15:06
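One simple, dataset-independent way to embed a flat sequence as a tree is a right-branching (cons-list) encoding. The sketch below is illustrative only: the function and class names are my own, and the exact embedding scheme used by sDTM may differ.

```python
# Illustrative sketch: embed a token sequence as a right-branching
# binary tree, (t0 (t1 (t2 ...))), and read it back out. This is one
# simple dataset-independent option, not necessarily sDTM's scheme.
from dataclasses import dataclass
from typing import Optional, List


@dataclass
class Node:
    label: Optional[str] = None        # leaf token; None for internal nodes
    left: Optional["Node"] = None
    right: Optional["Node"] = None


def sequence_to_tree(tokens: List[str]) -> Optional[Node]:
    """Encode tokens as a right-branching tree; leaves hold the tokens."""
    if not tokens:
        return None
    head, *rest = tokens
    leaf = Node(label=head)
    if not rest:
        return leaf
    return Node(left=leaf, right=sequence_to_tree(rest))


def tree_to_sequence(root: Optional[Node]) -> List[str]:
    """Invert the embedding by collecting leaves left to right."""
    if root is None:
        return []
    if root.label is not None:
        return [root.label]
    return tree_to_sequence(root.left) + tree_to_sequence(root.right)
```

Because the encoding is invertible and never looks at token identities, the same scheme applies to any sequence dataset unchanged.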
🌳 We introduce the Sparse Differentiable Tree Machine (sDTM), an extension of the Differentiable Tree Machine (DTM) that introduces a new way to represent trees in vector space. Sparse Coordinate Trees (SCT) reduce parameter count and memory usage by an order of magnitude over the previous DTM and lead to a 30x speedup!
09.12.2024 15:06
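The intuition behind a sparse tree representation: instead of reserving a vector slot for every possible node position up to the maximum depth, store embeddings only at occupied positions, addressed by their root-to-node path. A minimal sketch of that idea follows; the names and the breadth-first indexing convention are assumptions for illustration, not the exact SCT formulation.

```python
# Illustrative sketch of sparse tree storage: keep a vector only for
# occupied node positions, keyed by the 0/1 path from the root
# (0 = left child, 1 = right child). A dense encoding would allocate
# (2**(d+1) - 1) * dim floats for max depth d; sparse needs k * dim
# for k occupied nodes. Indexing convention here is an assumption.
import numpy as np


def path_index(path: str) -> int:
    """Breadth-first index of a node from its 0/1 path ('' = root)."""
    idx = 0
    for bit in path:
        idx = 2 * idx + 1 + int(bit)
    return idx


def sparse_encode(nodes: dict, dim: int) -> dict:
    """Map {path: vector} to {breadth-first index: vector}."""
    return {
        path_index(p): np.asarray(v, dtype=np.float32).reshape(dim)
        for p, v in nodes.items()
    }
```

For a deep but sparsely populated tree, the savings compound with depth, which is where the order-of-magnitude memory reduction comes from.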
Our previous work introducing the Differentiable Tree Machine (DTM) is an example of a unified neurosymbolic system where trees are represented and operated over in vector space.
09.12.2024 15:06
Hybrid systems use neural networks to parameterize symbolic components and can struggle with the same pitfalls as fully symbolic systems. In unified neurosymbolic systems, operations can simultaneously be viewed as either neural or symbolic, which provides a fully neural path through the network.
09.12.2024 15:06
🧠 Neural networks struggle with compositionality, and symbolic methods struggle with flexibility and scalability. Neurosymbolic methods promise to combine the benefits of both, but there is a distinction between *hybrid* neurosymbolic methods and *unified* neurosymbolic methods.
09.12.2024 15:06
🚨 Thrilled to share that Compositional Generalization Across Distributional Shifts with Sparse Tree Operations received a spotlight award at #NeurIPS2024! 🎉 I'll present a poster on Tuesday and give an invited lightning talk at the System 2 Reasoning Workshop on Sunday. 🧵👇
09.12.2024 15:06
Applied AGI scientist is a wild job title, considering people have no idea how to even define AGI, let alone what we should apply to create it.
18.11.2024 11:27
Okay the people requested one so here is an attempt at a Computational Cognitive Science starter pack -- with apologies to everyone I've missed! LMK if there's anyone I should add!
go.bsky.app/KDTg6pv
11.11.2024 17:27
Researchers are split on HOW to achieve compositional behavior. Some propose data interventions, others argue we need entirely new model architectures, and some suggest we need to integrate symbolic paradigms.
11.11.2024 20:40
Key finding: ~75% of researchers agree that CURRENT neural models do NOT demonstrate true compositional behavior. Scale alone won't solve this - we need fundamental breakthroughs.
11.11.2024 20:40
We surveyed 79 top AI researchers about compositional behavior. Our goal? Map out the field's consensus and disagreements on how neural models process language to illuminate promising paths forward. Inspired by Dennett's logical geography, we cluster participants by responses 🗺️
11.11.2024 20:40
Compositionality is fundamental to language: the ability to understand complex expressions by combining simpler parts. But do current AI models REALLY understand this? Spoiler: Most researchers say NO.
11.11.2024 20:40
I'm excited to share our survey investigating the current challenges and debates around achieving compositional behavior in language models, to be presented at #EMNLP2024! What makes language understanding truly intelligent? A thread unpacking our latest research 🤔🧵👇
11.11.2024 20:40
Besides being ergonomically beneficial, a split keyboard can prevent this from happening!
19.09.2024 07:46