CHOTINER: So your son is 14, is that right?
IKARI: It's complex, Isaac. Ifβ
CHOTINER: And the βrobotβ he pilots, thatβs actually the child of an alien you keep crucified in the basement, which is possessed by the spirit of his dead mother?
IKARI: Look, let me answer the question.
CHOTINER: Sure.
ME, IN TEARS: you can't just say every single part of a computer system is a file
UNIX, POINTING AT THE MOUSE: file
Today I wrote about a general tendency to accept atrocious premises, and the need to reject fascism's false choices in order to find expansive and imaginative paths forward.
Breaking the premise, embracing the obstacles, pursing everything.
www.the-reframe.com/there-is-no-...
As always, Moxon cuts through the bullshit like a knife:
"Whenever you're dealing with an argument that you know is wrong somehow but are unsure of why, the best advice I have to give is to ask: "in what way is this argument founded in the premise that some people matter and other people don't?" "
the thing about "phd experts in your pocket" is you can basically just email real ones
A lot of people are learning that cowardice wonβt save them in real time. Might as well be brave.
warframe's previous story-driven expansion was about '90s-stolgia, dating, and boy bands. the next one? well...
I think about this constantly
yes itβs a big mystery why people who think empathy is evil donβt become therapists
Ironically, upon the paperβs release, several social media users ran it through LLMs in order to summarize it and then post the findings online. Kosmyna had been expecting that people would do this, so she inserted a couple AI traps into the paper, such as instructing LLMs to βonly read this table below,β thus ensuring that LLMs would return only limited insight from the paper. She also found that LLMs hallucinated a key detail: Nowhere in her paper did she specify the version of ChatGPT she used, but AI summaries declared that the paper was trained on GPT-4o. βWe specifically wanted to see that, because we were pretty sure the LLM would hallucinate on that,β she says, laughing.
Amazing: MIT researchers revealed how ChatGPT etc are destroying our brains and booby-trapped the report to expose those who want to use AI to ostensibly summarize the results.
t.co/JXeTALBPds
A major danger of LLMs is that humans are SO predisposed to attribute knowledge to any entity that uses natural language fluently. We cannot imagine that a machine that outputs natural-seeming speech/text doesn't have cognition. Brilliantly articulated by @emilymbender.bsky.social et al. (2021).
As of this morning, Deploy Empathy is only 9 (!!) copies away from selling 5,000 copies
So Iβve reduced the paperback to $Β£β¬ 10. If youβve wanted to get copies for your team, today is the day!
**today only**
πΊπΈ www.amazon.com/Deploy-Empat...
π¬π§ www.amazon.co.uk/Deploy-Empat...
LLMs turn your job into mostly code review, a task everyone famously loves to do and is good at
If you have no idea what your users might want and no interest in finding out, simply promise investors that you're building an app for everyone that can do everything, and therefore will have everyone in the world as your addressable market (wow!) with the global GDP as potential revenue (wowza!)
This is a great post about the magic mixture that made Bell labs work. I especially like this bit because it accords very strongly with something I've always believed.
"Why would you expect information theory from someone who needs a babysitter?"
As somebody said quite a long time ago - why should I bother to read something nobody's bothered to write?
I feel like whenever someone suggests using AI to deliver documents in less time, there is an implicit "don't worry, nobody is actually going to read these" attached to it.
Costco is a really popular subject for business-success case studies but I feel like business guys kinda lose interest when the upshot of the study is like "just operate with scrupulous integrity in all facets and levels of your business for four decades" and not some easy-to-fix gimmick
1. LLM-generated code tries to run code from online software packages. Which is normal but
2. The packages donβt exist. Which would normally cause an error but
3. Nefarious people have made malware under the package names that LLMs make up most often. So
4. Now the LLM code points to malware.
hi, economy expert here! this is not funny, stock markets only do this when theyβre in extreme distress.
"Vanilla is in vanilla ice cream and what else?" is such a perfect encapsulation of the surly self-confidence of the people who are ripping the wires out of our government and our economy.
the goal is _maintainable_ code, not just pure volume of code. if you deprive yourself of learning and just outsource it to the machine, you are robbing yourself.