When we fit a sigmoid (S-curve) to the exact same dataset by @metr.org, we find it fits the data much better (in-sample) than their exponential, suggesting a compelling alternative. Importantly, the inflection point (June 2025) has already passed.
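The comparison above can be sketched on synthetic data. This is my own toy illustration (the data, function forms, and parameters are all assumptions, not METR's), showing how one might compare the in-sample fit of the two trends:

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy illustration on synthetic, S-shaped data (NOT METR's dataset):
# fit both an exponential and a logistic (sigmoid) curve, then compare
# in-sample sum of squared errors.

def exponential(t, a, b):
    return a * np.exp(b * t)

def logistic(t, L, k, t0):
    return L / (1.0 + np.exp(-k * (t - t0)))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 40)
# Generate sigmoid-shaped data with noise (inflection at t0 = 6).
y = logistic(t, 100.0, 1.0, 6.0) + rng.normal(0.0, 2.0, t.size)

p_exp, _ = curve_fit(exponential, t, y, p0=[1.0, 0.5],
                     bounds=([0.0, 0.0], [1e4, 2.0]))
p_log, _ = curve_fit(logistic, t, y, p0=[y.max(), 1.0, 5.0])

sse_exp = float(np.sum((y - exponential(t, *p_exp)) ** 2))
sse_log = float(np.sum((y - logistic(t, *p_log)) ** 2))
print(f"SSE exponential: {sse_exp:.1f}  SSE sigmoid: {sse_log:.1f}")
```

On S-shaped data the logistic fit should have a much lower in-sample error, which is the shape of the argument being made here.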
Great example! I agree autonomous driving suffers the exact same issue
Interesting and definitely related, thank you for sharing!
Barriers to AI adoption aren't just about technological trust; addressing the economics of attention will be key to achieving human-AI collaboration.
Read the full paper here: papers.ssrn.com/sol3/papers....
This creates a perverse incentive where employers will:
- Ban the AI entirely to avoid unmonitored risks,
- Fire the human, even when human oversight would have improved outcomes, or
- Adopt a less reliable AI tool, simply because it keeps the human engaged at a lower cost.
If the AI is right 99% of the time, the human-in-the-loop has almost zero incentive to inspect the outputs. To force them to stay vigilant against that rare mistake, the employer has to pay a massive wage premium, one that scales inversely with the error probability.
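A back-of-the-envelope model (my own illustration, not the paper's) makes the inverse scaling concrete: if inspecting a task costs the human c, an AI error occurs with probability p, and inspection earns a bonus b when it catches an error, then inspecting is worthwhile only when b * p >= c, i.e. b >= c / p:

```python
# Toy incentive model (my illustration, not the paper's formal contract):
# inspection cost c per task, AI error probability p, and a bonus b paid
# when the human catches an error. Expected bonus is b * p, so the human
# inspects only if b * p >= c, giving a required bonus of c / p.

def required_bonus(inspection_cost: float, error_prob: float) -> float:
    """Smallest per-error bonus that makes inspection worthwhile."""
    return inspection_cost / error_prob

for p in (0.10, 0.01, 0.001):
    b = required_bonus(1.0, p)
    print(f"error prob {p:>6}: bonus must be >= {b:>8.1f}x the inspection cost")
```

As the AI improves (p shrinks), the required payment explodes, which is the contracting paradox in miniature.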
We keep saying: "AI will handle the boring stuff, and humans will supervise." But the problem is that as AI reliability improves, it becomes really hard to motivate a human to conscientiously monitor it.
In a new WP with Gerard Cachon, we describe the "human-AI contracting paradox."
This is not to say AI can't improve education, but it requires careful measurement, experimentation, and evaluation - none of which is happening. The tech industry has put FAR more effort into A/B testing for ad optimization than into building for education.
Yet another example - adopting over-hyped AI/EdTech in classrooms without careful thought (especially replacing time with "trained, caring teachers") is likely to have many negative consequences for kids' learning, motivation, and well-being: https://wired.com/story/ai-teacher-inside-alpha-school/
Such a great article!! Well worth reading instead of using ChatGPT to summarize 🙂
www.nytimes.com/2025/07/18/o...
Thanks!! P.S. just so you know, you're the only reason I post on Bluesky 🙂
Joint work w/ amazing team @obastani.bsky.social, Alp Süngü, Haosen Ge, Özge Kabakcı, & Rei Mariman.
Grateful for thoughtful feedback from Eric Bradlow, Angela Duckworth, Stefan Feuerriegel, Benjamin Lira Luttges, Lilach M., Ananya Sen, Christian Terwiesch, Lyle Ungar, & many others
Out in @pnas.org today!! We ran a field experiment with ~1000 high school students & found:
✅ GenAI tutoring boosts practice performance
⚠️ But it hinders human learning, hurting performance when AI access is removed
🛡️ Safeguards like hint-based help can offset this
www.pnas.org/doi/10.1073/...
Cool paper using brain imaging to track cognitive load while writing essays! As expected, people under-use their brain when using LLM assistance, but interestingly, they also struggle more when switching back to writing on their own: AI assistance creates "cognitive debt"
arxiv.org/abs/2506.08872
Research increasingly shows that AI without appropriate guardrails can harm students' long-term learning and engagement. This is not something to be rushed into without careful experimentation and resources, especially when it comes to our young kids.
"Putting more screens in our classrooms is not going to automatically lead to a smarter, healthier or better-employed population. And parents of all backgrounds need to stand up and shout it now."
www.nytimes.com/2025/05/14/o...
Conformal prediction sets are a useful way to capture uncertainty for LLMs & deep learning models. But they're data-hungry! We propose a semi-bandit algo to learn these sets online. Check out our @icmlconf.bsky.social paper: arxiv.org/abs/2405.13268
Work led by Haosen Ge, w/ @obastani.bsky.social
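For readers new to the object being learned, here is a generic split-conformal sketch for classification, a textbook baseline rather than the semi-bandit algorithm from the paper; all data and names below are illustrative:

```python
import numpy as np

# Generic split-conformal prediction for classification (illustrative only,
# NOT the paper's online semi-bandit method). Calibrate a threshold on
# held-out scores so prediction sets cover the true label ~(1 - alpha).

def conformal_threshold(cal_scores, cal_labels, alpha=0.1):
    # Nonconformity score = 1 - model's probability on the true class.
    n = len(cal_labels)
    nonconf = 1.0 - cal_scores[np.arange(n), cal_labels]
    # Finite-sample corrected quantile level.
    q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(nonconf, q, method="higher")

def prediction_set(scores, threshold):
    # Include every class whose nonconformity falls below the threshold.
    return np.where(1.0 - scores <= threshold)[0]

# Fake calibration data: softmax-like scores with labels drawn from them.
rng = np.random.default_rng(0)
cal_scores = rng.dirichlet(np.ones(3), size=500)
cal_labels = np.array([rng.choice(3, p=s) for s in cal_scores])

thr = conformal_threshold(cal_scores, cal_labels, alpha=0.1)
print(prediction_set(np.array([0.7, 0.2, 0.1]), thr))
```

The "data-hungry" point is visible here: the threshold is only as good as the calibration set, which is what motivates learning these sets online.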
Great video summarizing our research! AI in education needs to be carefully designed to ensure we support critical thinking & learning
W/ @obastani.bsky.social, Alp, Haosen, Ozge & Rei
youtube.com/watch?v=n2W_...
Excited to once again co-organize the 4th Annual Workshop on AI & Analytics for Social Good at UMD on 5/2 with Margret Bjarnadottir, Jessica Clark, Jui Ramprasad, and John Silberholz! Early-career scholars, please submit your "AI for good" work by 2/14:
www.rhsmith.umd.edu/departments/...
Excited to see our paper "Generative AI Can Harm Learning" cited in Ch 7 of the 2025 Economic Report of the President: whitehouse.gov/cea/written-...
Paper: papers.ssrn.com/sol3/papers...., co-authored with @obastani.bsky.social, Alp, Haosen, Ozge & Rei
Hi folks, I'm new here! 👋