I have been sick with COVID all week and missed Monday and Tuesday because of it. On Friday, while working from bed with a fever and very little sleep, I unintentionally made a serious journalistic error in an article about Scott Shambaugh.

Here’s what happened: I was incorporating information from Shambaugh’s new blog post into an existing draft from Thursday. During the process, I decided to try an experimental Claude Code-based AI tool to help me extract relevant verbatim source material, not to generate the article but to help list structured references I could put in my outline. When the tool refused to process the post due to content policy restrictions (Shambaugh’s post described harassment), I pasted the text into ChatGPT to understand why. I should have taken a sick day, because in the course of that interaction, I inadvertently ended up with a paraphrased version of Shambaugh’s words rather than his actual words. Being sick and rushing to finish, I failed to verify the quotes in my outline notes against the original blog source before including them in my draft.

Kyle Orland had no role in this error. He trusted me to provide accurate quotes, and I failed him. The text of the article was human-written by us, and this incident was isolated and is not representative of Ars Technica’s editorial standards. None of our articles are AI-generated; that is against company policy, and we have always respected it.

I sincerely apologize to Scott Shambaugh for misrepresenting his words. I take full responsibility. The irony of an AI reporter being tripped up by AI hallucination is not lost on me. I take accuracy in my work very seriously, and this is a painful failure on my part. When I realized what had happened, I asked my boss to pull the piece because I was too sick to fix it on Friday. There was nothing nefarious at work, just a terrible judgment call that was no one’s fault but my own.

—Benj Edwards, February 15, 2026
Sorry, all. This is my fault, and speculation has grown worse because I have been sick in bed with a high fever and unable to reliably address it (I am still sick).
I was told by management not to comment until they did. Here is my statement in the images below.
arstechnica.com/staff/2026/0...