we live in the stupidest fucking time in history
Looks like a fun way of finding research and talks on llvm: bwatsonllvm.github.io/library/inde...
The screenshot of the release notes reads:

A new version of TeX Live Utility is available! TeX Live Utility 1.55 is now available; you have 1.54. Would you like to download it now?

Release Notes: Changes Since 1.54
• Fix bug 137, homepage link in Help Book.
• First release in a long time, since I don't use TeX anymore and resent the very idea of paying Apple annually for the privilege of giving away free and open-source software. Also, I'm really lazy. Please accept my apologies for all the annoying issues you've encountered in this mission-critical software.
• Updated mirror list, which was three years out of date.
• Added missing legacy mirrors, which were even more out of date.
• Added an alert on startup when the user tries Homebrew's lobotomized MacTeX, because those lunatics left tlmgr but removed its database. Thanks for nothing, guys. Bugs 142 and 144.
• Use a custom user-agent to work around the Anubis bot trap on texlive.info. Can't wait to see what else breaks because of this, thanks to the profusion of degenerate Artificial Insemination fetishists scraping websites to feed their models.
• Lists of countries in Repository/Continent are now sorted. No idea how you people let me get away with that one for the last fifteen years.
There's an "is anyone even reading this?" sort of honesty you get in the software-update release notes of a project that's been around for a long time.
Something about LLM hype culture renders a man immune to the experience of embarrassment. If I couldn't tell the difference between PhD-level scholarship and grammatical gibberish I simply would not announce that to a global audience.
No joke: I got angry hate mail today for writing an obituary of a Black woman scientist, because the person felt she didn't deserve the recognition.
Which just makes me want to share it again: www.nature.com/articles/d41...
I'm in this picture and I don't like it. I've regularly got hundreds of tabs open. What I really need is a way to organize the information and bookmarks just ain't it.
CPUs are getting worse.

We've pushed the silicon so hard that silent data corruptions (SDCs) are no longer a theoretical problem.

Mercurial cores are terrifying because they don't hard-fail; they produce rare but *incorrect* computations!
Do you think big tech will put in policy restrictions for employees?
"You have to have [these] inhouse competencies before we let you use AI, and you'll have to take an annual exam to continue to qualify."
arxiv.org/abs/2601.20245
I'm convinced AI is our generation's radium: a discovery with genuinely useful applications in specific, controlled circumstances that we stupidly put in everything from kids' toys to toothpaste, only realising the harm far too late. Future generations will ask if we were out of our minds.
Wikipedia has signed major AI training deals with Meta, Amazon, and Microsoft. Now, regional-language Wikipedia editors are doing double duty: feeding LLMs with credible knowledge while fighting a flood of AI-generated misinformation.
github actions says: could not resolve github.com
how you doing there github
The simplex algorithm is super efficient. 80 years of experience says it runs in linear time. Nobody can explain _why_ it is so fast.
We invented a new algorithm analysis framework to find out.
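For anyone who hasn't seen it written down, this is the kind of algorithm the post is talking about: a minimal textbook tableau simplex for maximizing c·x subject to Ax ≤ b, x ≥ 0 (with b ≥ 0, so the slack variables give an immediate feasible start). The code and the example LP are illustrative sketches, not from the paper the thread announces.

```python
def simplex(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0 (requires b >= 0)."""
    m, n = len(A), len(c)
    # Tableau: original columns, slack columns (identity), then the RHS.
    T = [list(map(float, row)) +
         [1.0 if i == j else 0.0 for j in range(m)] + [float(b[i])]
         for i, row in enumerate(A)]
    T.append([-x for x in c] + [0.0] * (m + 1))  # objective row (negated c)
    basis = list(range(n, n + m))                # slacks are basic initially
    while True:
        # Entering variable: most negative reduced cost.
        col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][col] >= -1e-9:
            break  # no negative reduced cost left: optimal
        # Leaving variable: minimum ratio test over positive pivot entries.
        ratios = [(T[i][-1] / T[i][col], i) for i in range(m) if T[i][col] > 1e-9]
        if not ratios:
            raise ValueError("LP is unbounded")
        _, row = min(ratios)
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row and abs(T[i][col]) > 1e-12:
                f = T[i][col]
                T[i] = [a - f * p for a, p in zip(T[i], T[row])]
        basis[row] = col
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, T[-1][-1]

# Classic example: max 3x + 5y s.t. x <= 4, 2y <= 12, 3x + 2y <= 18.
x, z = simplex([3, 5], [[1, 0], [0, 2], [3, 2]], [4, 12, 18])
print(x, z)  # → [2.0, 6.0] 36.0
```

Each pivot moves between vertices of the feasible polytope; the puzzle the post refers to is that in practice the number of pivots grows only linearly, despite worst-case examples being exponential.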
Women are told we're too young for [], too old for []... because the only right age is when you're a man.
I've been wanting to do this for a while, so here we go. This is a thread of some of the reports coming out about the issues with genAI, specifically negative impacts on people.
I'll do my best to keep this updated while it's relevant.
1. The thing about science that these jokers don't understand is that science cannot be vibe-coded.
Whatever its flaws, the point with vibe coding is that you're trying to quickly make something that sorta works, where you can immediately sorta see if it sorta works and then sorta use it.
is there no better description of omarchy
Much of the media is complicit, especially in the anglophone world.
bsky.app/profile/carb...
The EU Gender Equality Index has been published eige.europa.eu/gender-equal...
At the current rate, full gender equality is still more than 50 years away, and women in the EU still need to work 15 and a half months to earn what men earn in a year. The good news: Spain ranks 4th.
I'll be running my 3090 for a loooong time
yep
Saturday.
Read, put on thick sweaters, and remember that you don't need absolutely ANYTHING from the sales. You already have everything.
Book review: These women helped to shape quantum mechanics. It's time to recognize them.
go.nature.com/3Ls2yYu
I've found a lot of weird things in TP over the years. To the point that I stopped paying for the premium subscription...
At intervals.icu, I use this to see the amount of total work done at certain intensity, or to see the differences between intervals of different length, e.g., avg pace/power/VAM/GAP +- std dev for 30"/2'/2'30" intervals, etc.
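The per-duration summary described above (average power plus or minus one standard deviation across repeats of the same interval length) is easy to sketch. This is just an illustrative computation with made-up watt numbers, not intervals.icu's actual code:

```python
from statistics import mean, stdev

# Made-up power samples (watts) for repeats of two interval durations.
intervals = {
    "30s": [320, 335, 310, 328],
    "2min": [285, 278, 290],
}

for duration, watts in intervals.items():
    # Average power across the repeats, +/- sample standard deviation.
    print(f"{duration}: {mean(watts):.0f} W +/- {stdev(watts):.0f} W")
```

The same grouping works for pace, VAM, or GAP; only the metric pulled per interval changes.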
My "No Graphics API" blog post is live! Please repost :)
www.sebastianaaltonen.com/blog/no-grap...
I spent 1.5 years doing this. Full rewrite last summer and another partial rewrite last month. As Hemingway said: "The first draft of everything is always shit."
Why Scaling Is Not Enough

I believe in scaling laws and I believe scaling will improve performance, and models like Gemini are clearly good models. The problem with scaling is this: for linear improvements, we previously had exponential growth in GPU performance, which canceled out the exponential resource requirements of scaling. This is no longer true. In other words, we previously invested roughly linear costs to get linear payoff, but now it has turned into exponential costs.

Frontier AI Versus Economic Diffusion

The US and China follow two different approaches to AI. The US follows the idea that there will be one winner who takes it all: the one that builds superintelligence wins. Even falling short of superintelligence or AGI, if you have the best model, almost all people will use your model and not the competition's model. The idea is: develop the biggest, baddest model and people will come. China's philosophy is different. They believe model capabilities do not matter as much...
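The "linear payoff, exponential cost" point can be made concrete with a back-of-the-envelope calculation. The power-law exponent and hardware growth rate below are illustrative assumptions of mine, not figures from the essay:

```python
# Toy scaling-law arithmetic: if loss ~ C^(-alpha) in compute C, then each
# fixed multiplicative improvement in loss multiplies the required compute
# by a constant factor.
alpha = 0.05                         # assumed power-law exponent
compute_factor = 2 ** (1 / alpha)    # compute multiplier to halve the loss

# While FLOP per dollar doubled every ~2 years, hardware absorbed part of
# that multiplier; whatever is left shows up as dollar cost.
years = 4
hardware_gain = 2 ** (years / 2)
dollar_cost_factor = compute_factor / hardware_gain

print(f"compute for one halving of loss: {compute_factor:,.0f}x")
print(f"hardware absorbs {hardware_gain:.0f}x; cost grows {dollar_cost_factor:,.0f}x")
```

With these made-up numbers, halving the loss needs about a million times the compute; once hardware stops improving exponentially, nearly all of that factor lands on the bill, which is the essay's point.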
"Why AGI Will Not Happen" by Tim Dettmers.
timdettmers.com/2025/12/10/w...
This essay is worth reading. Discusses diminishing returns (and risks) of scaling. The contrast between West and East: the "winner takes all" approach of building the biggest thing vs a long-term focus on practicality.
It's good to see papers start to address LLMs as structural plagiarism: provenance that is even more hidden than the original words or training data. www.nature.com/articles/s42...
This paper contains some good arguments about an issue that concerns me a lot when I hear my colleagues talking about LLM use in developing their research:
Whose ideas are you presenting as your own?
(Though the fatalist argument the authors make at the end of the paper is disappointing/bizarre.)
Man, Codeberg feels like pre-AI GitHub. So refreshing to use.