Computer-mediated carcinisation
This includes many of my papers, too. The point I am making is that the findings in careful academic research likely represent a lower bound of AI capabilities at this point.
I can't
i just …
i can't
www.404media.co/anthropic-cl...
I bet if someone *has* succeeded, it's via spinning up an elicitation-GPT that just drilled you for critical intel, wouldn't let you weasel out via under/overspecified output, then dumped it all back to you in standardized format so you could think faster - basically exporting your extraction algo.
Exactly. If we overheard Dario, Sam, and Demis chatting about certain well known AI critics, I'd be willing to bet they'd be expressing gratitude. Proving a grouch wrong is a real motivator.
Hi Everyone!
We're hosting our Wharton AI and the Future of Work Conference on 5/21-22. Last year was a great event with some of the top papers on AI and work.
Paper submission deadline is 3/3. Come join us! Submit papers here: forms.gle/ozJ5xEaktXDE...
Exciting new hobby project in the offing related to AI and skill. Involves a childhood passion, a wild leap into the unknown, made real via an order from Amazon just now. Will be 100% cool, I will be documenting things, sharing eventually. Feels like April 2023 again!
The Silo is so good. Just superb. This generation's answer to the BSG remake.
My hobby horse. You can simulate a rocket all you want, and use more energy on computation than the actual rocket would, but you won't get to orbit until you ignite rocket fuel. What if all the energy we are spending on simulating learning is not the juice we really need to make intelligence?
The GPT-4 barrier was comprehensively broken
Some of those GPT-4 models run on my laptop
LLM prices crashed, thanks to competition and increased efficiency
Multimodal vision is common, audio and video are starting to emerge
Voice and live camera mode are science fiction come to life
Prompt driven app generation is a commodity already
Universal access to the best models lasted for just a few short months
"Agents" still haven't really happened yet
Evals really matter
Apple Intelligence is bad, Apple's MLX library is excellent
The rise of inference-scaling "reasoning" models
Was the best currently available LLM trained in China for less than $6m?
The environmental impact got better
The environmental impact got much, much worse
The year of slop
Synthetic training data works great
LLMs somehow got even harder to use
Knowledge is incredibly unevenly distributed
LLMs need better criticism
Everything tagged "llms" on my blog in 2024
Here's my end-of-year review of things we learned about LLMs in 2024 - we learned a LOT of things simonwillison.net/2024/Dec/31/...
Table of contents:
In 2024 we learned a lot about how AI is impacting work. People report that they're saving 30 minutes a day using AI (aka.ms/nfw2024), and randomized controlled trials reveal they're creating 10% more documents, reading 11% fewer e-mails, and spending 4% less time on e-mail (aka.ms/productivity...).
Independent evaluations of OpenAI's o3 suggest that it passed math & reasoning benchmarks that were previously considered far out of reach for AI, including achieving a score on ARC-AGI that was associated with actually achieving AGI (though the creators of the benchmark don't think o3 is AGI).
Just *one* of the reasons that Blindsight was ahead of its time. Way ahead.
Massive congrats!! So excited to check it out.
Wow!
Join me by the fireside this Friday with Matt Beane as we dive into one of today's biggest workforce challenges: upskilling at scale.
Link below to hear the full discussion on Friday, December 13 at 11 am EST!
linktr.ee/RitaMcGrath
@mattbeane.bsky.social
I propose a workshop.
Most engineers/CS working on AI presume away well established, profound brakes on AI diffusion.
Most social scientists presume away how AI use could reshape those brakes.
Let's gather these groups, examine these brakes 1-by-1, make grounded predictions.
Models like o1 suggest that people won't generally notice AGI-ish systems that are better than humans at most intellectual tasks, but which are not autonomous or self-directed
Most folks don't regularly have a lot of tasks that bump up against the limits of human intelligence, so won't see it
Grateful for the opportunity to visit and learn from the professionals at the L&DI conference. And very glad to hear you found my talk so valuable, Garth! Means a lot.
I made an HRI Starter Pack!
If you are a Human-Robot Interaction or Social Robotics researcher and I missed you while scrolling through bsky's suggestions, just ping me and I'll add ya.
go.bsky.app/CsnNn3s
Wrote a little something on this in 2012, though I didn't anticipate the main reason for hiring such workers - training data.
www.technologyreview.com/2012/07/18/1...
Ohmydeargod.
David Meyer (v.) /ˈdeɪvɪd ˈmaɪ.ər/
To attribute complex, intentional design or deeper meaning to simple emergent behaviors of large language models, especially when such behaviors are more likely explained by straightforward technical constraints or training artifacts.
They did NOT. Wow. Sign of the times.
And I can verify on your rule! I was so flabbergasted and honored. Your feedback was rich and so helpful. Remain grateful.
I remember *treasuring* the previews. I'd fight to get there on time. Was part of the thrill.
But ads? F*ck that noise. Seriously, straight up evil.
Never occurred to me there'd be an algo under the hood that could reliably learn to provide content I'd value more than a straight read of my hand-curated list of people. My solution has been following people if they post high signal stuff all the time.
I have never used the feed page. What a horror, can't quite understand why folks would try.
Only/ever the "following" page. Even there things got pretty intolerable towards/around the election, now settled down.
My Thanksgiving post. A Kurt Vonnegut poem. He talks with Joe Heller (Catch 22 fame) about a billionaire. Key part:
Joe said, "I've got something he can never have"
And I said, "What on earth could that be, Joe?"
And Joe said, "The knowledge that I've got enough"
www.linkedin.com/pulse/kurt-v...
Oh my dear god this is an incredible study.
I think there's likely an effect there!