Why would anybody ever have thought for a tenth of a second that this would be acceptable behaviour?
Oh yes. I miss the days when one could have some harmless fun with early versions of Dall-E, before it became we will buy all the GPUs and double electricity prices and make all you plebs lose your jobs and then we will create a digital god by 2029.
Astonishing how many delivery services or craftsmen struggle to find the right address. We are playing this on hard mode because, and this will surely not apply to any other people on the planet /s, our unit number also exists in other apartment blocks in the neighbourhood.
Diagram showing that New Zealand is the island worst affected by invasive weeds in the world.
Ouch. #isbcw
The second they think they have to give it rights, they will stop arguing it has consciousness.
Ye gods, the ratio on this one.
Why Language Models Hallucinate. A paper released by OpenAI in September of 2025.
Back in September, OpenAI released a paper showing that ChatGPT will always make things up.
Not sometimes. Not until the next update. Always. It's how the system fundamentally works. Which means there is no "fix."
Challenge for those who are very confident that a candle isn't a star: Explain how the nuclear fusion inside stars works.
If it stays at 0.35 °C per decade (or increases even more), technological civilisation collapses by 2100 at the latest. I'd say that matters. I had so far assumed it would be a long, drawn-out process over the next two centuries.
Some of the transformative efficiency expert thought leaders quoted in the piece were extremely proud that they had pushed various non-coding staff in their orgs to become vibe-coders. Finance manager? Must code. Head of sales? MUST CODE.
Everything is cults today. What has happened to people?
Today I read an article about the AI transformation in an airline magazine, and the only use cases mentioned other than vague corporate jargon were stuff like having a chatbot tell employees how many leave days they have left. I can easily look that up in SAP. That's not an efficiency revolution.
Wow. That is a @nytpitchbot.bsky.social post come to life.
Price of Crude Oil WTI (USD/Bbl) over a five-year period, spanning from 2021 to early 2026. The chart shows a significant price peak in 2022 reaching over $120, followed by a general downward trend with various fluctuations, eventually hitting a low near $55 in late 2025 before a sharp vertical spike to the current price of 90.900. This recent surge represents an increase of +23.880 (+35.63%), highlighted in green text above the blue line graph.
The real insanity isn't how much oil prices have spiked, it's that we're still burning oil for energy.
I assume they were fired because their advice was "this is a bad idea"?
Every lengthy write-up on "lessons learned" after someone lets an AI process delete their database or all their emails reads like this to me
Just great when journals in which I have already published and for which I have already reviewed decide that I need yet another Editorial Manager account. More user names and more passwords to forget. Yay.
"Essentially, the employees most excited and inspired by 'visionary' corporate jargon may be the least equipped to make effective, practical business decisions for their companies."
Sorry I'm not more open-minded about LLMs, it's just some fucking maniacs shoveled out a bunch of useless bloatware featuring that technology, did not give me any chance to opt out, reorganized the entire economy around it, zeroed out gains made by green energy, and made it impossible to buy RAM
Not sure any larger opponent would have more competent leadership or less corrupt procurement procedures either, though.
No no no, his original bailey was much larger than just "write papers". It was "do social science research". He created a motte-and-bailey fallacy for the textbooks.
This isn't meant to imply that programming is only text generation. Still, writing bits of code is the low-hanging fruit here. If an LLM summarises a legal document for you or writes an article, you cannot just hit a "run" button, watch an error message pop up, and realise it made a mistake.
Really, coding is the ideal use case for LLMs. They are text generators, and coding is generating text, and you or an "agent" can easily test if the code runs and has the expected output and try again if it doesn't. None of that applies in any other use of LLMs, even other writing tasks.
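The "test if the code runs and has the expected output and try again" loop described above can be sketched as a few lines of Python. This is a minimal, hypothetical illustration (the snippet, the expected output, and the helper name are all made up for the example), not any particular agent framework:

```python
import subprocess
import sys
import tempfile

def run_candidate(code: str, expected: str) -> bool:
    """Run a generated snippet in a subprocess and check its stdout.

    Returns True only if the snippet exits cleanly AND prints the
    expected output -- the cheap, automatic check that coding has
    and most other LLM tasks lack.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        [sys.executable, path],
        capture_output=True, text=True, timeout=10,
    )
    return result.returncode == 0 and result.stdout.strip() == expected

# A trivial "generated" snippet and the output we expect from it.
snippet = "print(sum(range(10)))"
print(run_candidate(snippet, "45"))  # → True

# A broken snippet fails the check, so an agent would retry.
print(run_candidate("print('oops'", "oops"))  # → False
```

In a real agent loop, a `False` result (plus `result.stderr`) would be fed back to the model for another attempt; the point is that the pass/fail signal is mechanical, which is exactly what summaries and articles don't offer.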
Australian colleagues may find this of interest and want to provide their perspective on whether and, if so, how they would like to use more AI in their work.
That is so simple that a script from the 1990s could do it.
That is so generic a statement that nobody can disagree with it. "As if a friend asks for feedback" is different. Although we do hope that reviewers are friendly, that stance is not what a peer reviewer should adopt.
They will just say you should have used Claude instead, and if Claude also face-plants, you should have used some other model, and so on.
You may mean well, but it often reads as:
Please add my favourite analysis although it isn't needed (also, cut 1000 words).
You should have made this paper about what I find interesting, not what you found interesting.
Let me help you write the way I would have written, in my voice, not yours.
The Fyre Festival had a less cringe name, though
Maybe they get some benefit out of it in other use cases? I have colleagues who use genAI for relatively low-stakes coding assistance but know not to trust it for factual questions in research. Not sure if the top paid subscription is necessary for that, though.
Given the enormous volume of "it works so amazingly, you are a fool for not using it and will soon be out of a job" everywhere, I appreciate that some people are trying what I am unwilling to pay a fee to try and reporting that, actually, it doesn't work.