A PhD is an apprenticeship in research – we can’t let AI take that away
AI might have ‘PhD-level’ intelligence. But substituting it for a PhD student sacrifices a special part of the academic ecosystem.
A PhD is much more than cheap research labour. A PhD is an apprenticeship in research. Today’s students are tomorrow’s research leaders. Students learn how to ask the right questions, how to critique findings and, ultimately, how to take responsibility for the science they produce.
13.03.2026 11:53
👍 3
🔁 1
💬 0
📌 1
⚛️ Introducing CREATE, a benchmark for creative associative reasoning in LLMs.
Making novel, meaningful connections is key for scientific & creative works.
We objectively measure how well LLMs can do this. 🧵👇
13.03.2026 14:34
👍 20
🔁 8
💬 1
📌 3
Experiments show that current LLM-based coding agents can generate working code in the short term, but struggle to maintain code quality across many iterations and evolving requirements.
#aicoding #agenticengineering
📄 arxiv.org/abs/2603.03823
12.03.2026 16:26
👍 12
🔁 8
💬 1
📌 0
No, “AI” is not a Stochastic Parrot 🦜
I’ve recently come across a new flavor of AI denialism making the rounds.
"AI" is not a stochastic parrot.🦜
I wrote this piece a couple weeks ago, but it was hard for me to finish up given AI's role in society and war over the past few weeks. I should share it at some point though. Not perfect, but here it is.
medium.com/@margarmitch...
11.03.2026 17:56
👍 197
🔁 80
💬 18
📌 17
Fantastic essay: "Why the ATM didn’t kill bank teller jobs, but the iPhone did." davidoks.blog/p/why-the-at...
Key point: "it is paradigm replacement, not task automation, that actually displaces workers." In this sense, we're still in the early days of AI's effect on the economy.
11.03.2026 13:39
👍 110
🔁 33
💬 6
📌 1
www.percepta.ai/blog/can-llm...
As a research lark at Percepta, Christos embedded a computer into an LLM, showed that it could solve the hardest Sudokus, and then, as a side bonus, built an exponentially faster attention mechanism.
11.03.2026 21:44
👍 301
🔁 55
💬 19
📌 40
🥁🥁🥁 Newly out from us today in Science Advances: “Biased AI Writing Assistants Shift Users’ Attitudes on Societal Issues”.
Large Language Models are providing users with autocomplete writing suggestions on many platforms. Could these suggestions shift users’ own attitudes? (spoiler: YES) (1/7)
11.03.2026 19:02
👍 144
🔁 79
💬 4
📌 12
Guest Post — Societies 2030: The Community Advantage in an AI-First World - The Scholarly Kitchen
Today's guest bloggers call for society publishers to recognize their unique role in shaping the systems researchers use to discover and evaluate knowledge.
"Trust becomes more valuable, not less, when the information environment degrades. The more noise enters the system, the more researchers will retreat to sources with human accountability. The question shifts from “what does the evidence say?” to “who is telling me this, and why should I believe them?”"
10.03.2026 18:24
👍 4
🔁 4
💬 0
📌 0
How Researchers Won a Legal Fight to Access X's Data Under the DSA
A Berlin court has delivered a consequential ruling, ordering X to grant Democracy Reporting International access to its publicly available data.
A Berlin court has ordered X to grant researchers API access under the Digital Services Act. Daniela Alvarado Rincón, Simone Ruf and Jürgen Bering explain how they won the case, and why it’s a major step for researcher data access.
09.03.2026 08:54
👍 48
🔁 31
💬 0
📌 3
About a year ago I made some predictions about the effect of AI on programming jobs. Block laid off 40% of its staff claiming AI made them more efficient. Is that really true or did they just over-hire? Let's look at some data and see what's really happening.
Full post: seldo.com/posts/do-ai-...
08.03.2026 20:49
👍 53
🔁 10
💬 10
📌 1
When Using AI Leads to “Brain Fry”
As firms increasingly incentivize employees to build and oversee complex teams of agents—for example, by measuring and rewarding token consumption as a proxy for performance—people are finding themsel...
You get used to it, but it was a shock to the system at first.
“I end each day exhausted—not from the work itself, but from the managing of the work. Six worktrees open, four half-written features, two ‘quick fixes’ that spawned rabbit holes, and a growing sense that I’m losing the plot entirely.”
08.03.2026 22:55
👍 34
🔁 5
💬 2
📌 2
I hacked ChatGPT and Google's AI – and it only took 20 minutes
I found a way to make AI tell you lies – and I'm not the only one.
I got a tip that all over the world, people are using a dead-simple hack to manipulate AI behavior. It turns out changing what AI tells other people can be as easy as writing a blog post *on your own website*
I didn’t believe it, so I decided to test it myself www.bbc.com/future/artic...
18.02.2026 16:37
👍 1976
🔁 677
💬 31
📌 122
How does ChatGPT work? Or rather, how do language models work in general? Part 1 of an attempt at a lay explanation.
YouTube video by Casey Fiesler
I'm creating a series of short form videos about how language models work technically. The goal is to be something in between "you know it's next token prediction" and "now you've taken a machine learning class." I'd love your thoughts so here are the first few! 🧵
www.youtube.com/shorts/VZB8X...
08.03.2026 13:30
👍 145
🔁 38
💬 7
📌 2
Anthropomorphism Is Breaking Our Ability to Judge AI
Tech Policy Press fellow James Ball asks, how should we interact with a technology designed to ‘speak’ with us on what appear to be human terms?
Interactions with large language models blur the lines between what feels like human conversation and the more typical experience of using technology—which seems to be causing confusion even among users who should be experienced and sophisticated, writes Tech Policy Press fellow James Ball.
07.03.2026 09:54
👍 44
🔁 15
💬 2
📌 1
AI Doesn’t Reduce Work—It Intensifies It
One of the promises of AI is that it can reduce workloads so employees can focus more on higher-value and more engaging tasks. But according to new research, AI tools don’t reduce work, they consistently intensify it.
Interesting interview study about AI in the labor process. Best possible atmosphere (well-paid, autonomous) still results in more work, not less, because AI helps you multitask, work during nonwork time, extend your capability into other people's turf (experts then need to clean up vibejobs)
06.03.2026 12:47
👍 62
🔁 22
💬 1
📌 1
Let’s teach neuroscientists how to be thoughtful and fair reviewers
Blanco-Suárez revamped the traditional journal club by developing a course in which students peer review preprints alongside the published papers that evolved from them.
In the first essay of our new neuro-education series, @eblancosuarez.bsky.social shares how she reimagined the traditional journal club course by developing a class in which students review preprints alongside the published papers that evolved from them.
#neuroskyence
bit.ly/4reKhg8
06.03.2026 15:17
👍 13
🔁 5
💬 0
📌 1
Defuddle now has a website!
This means you can use Defuddle anywhere to get the main content of a page in Markdown format.
You can simply add "defuddle.md" before any URL, use it via curl, Skills, CLI, or add it to your app via NPM.
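The curl usage described above can be sketched like this. Note this is a hypothetical example: the exact URL scheme (prefixing the full target URL with `defuddle.md/`) and the example target page are assumptions based on the post, not verified against Defuddle's documentation.

```shell
# Hypothetical sketch: prepend "defuddle.md/" to a page URL to fetch
# that page's main content as Markdown (URL scheme is an assumption).
target="https://example.com/some-article"
defuddle_url="https://defuddle.md/${target}"
echo "$defuddle_url"

# Fetching would then be a plain curl call (commented out: network access):
# curl -s "$defuddle_url" -o article.md
```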
04.03.2026 16:02
👍 221
🔁 27
💬 10
📌 2
Text Shot: To build a useful AI model, you need to journey into the wild base model and stake out a region that is amenable to human interests: both ethically, in the sense that the model won’t abuse its users, and practically, in the sense that it will produce correct outputs more often than incorrect ones. What this means in practice is that you have to give the model a personality during post-training.
Human beings are capable of almost any action at any time. But we only take a tiny subset of those actions, because that’s the kind of people we are. I could throw my cup of coffee all over the wall right now, but I don’t, because I’m not the kind of person who needlessly makes a mess. AI systems are the same. Claude could respond to my question with incoherent racist abuse - the base model is more than capable of those outputs - but it doesn’t, because that’s not the kind of “person” it is.
In other words, human-like personalities are not imposed on AI tools as some kind of…
Giving LLMs a personality is just good engineering - Sean Goedecke www.seangoedecke.com/giving-llms-a-… #AI #training #personalities
03.03.2026 03:41
👍 6
🔁 1
💬 0
📌 0
AI Is Inventing Academic Papers That Don't Exist -- And They're Being Cited in Real Journals
Academic articles from authors using large language models are creating an ecosystem of fake research that threatens human knowledge itself.
Academics and technologists are sounding the alarm about a growing crisis in scholarship as we know it: AI-generated citations of nonexistent papers that have infested real journals. Despite being fake, the sources are widely assumed to be authentic the more they appear in published literature.
17.12.2025 19:45
👍 975
🔁 510
💬 36
📌 167
The USA And Israel Have Started Bombing Iran: A Primer
What Do We Know, What Should We Be Asking?, For What Should We Be Watching
From @phillipspobrien.bsky.social
phillipspobrien.substack.com/p/the-usa-an...
28.02.2026 15:24
👍 425
🔁 168
💬 27
📌 5
Are AI-generated summaries suitable for studying and research?
Despite didactic, ethical, and environmental concerns, the use of GenAI is on the rise in academia. For most applications, the jury is still out on whether and how they will benefit education and rese...
This is one of the most reasoned & persuasive arguments for not allowing LLMs anywhere near reading & writing intensive classrooms. We can 100% choose not to outsource our reading & writing labor to a bot, and model for our students why they should do the same. #EduSky
www.tue.nl/en/our-unive...
28.02.2026 11:47
👍 87
🔁 40
💬 3
📌 4
"Powerful AI can statically help human decision-makers, but can harm collective knowledge building... it can lead to what we call “knowledge collapse” whereby in the long-run all human knowledge is ultimately destroyed.”
economics.mit.edu/sites/defaul...
27.02.2026 21:39
👍 33
🔁 12
💬 2
📌 8