Whoever thinks a "clean room" is needed to reimplement something and release it under a new license does NOT understand copyright. Clean room is a trick to make litigation simpler; it is not mandated by law: rewrites are allowed. The new code just must not copy protected expression. Linus was Unix-aware.
05.03.2026 11:26
Every morning there are two news items: the news about the release of some major upgrade to some AI, and the news about the lack of a DeepSeek v4 release.
26.02.2026 19:11
Implementing a clean room Z80 / ZX Spectrum emulator with Claude Code: antirez.com/news/160
24.02.2026 17:59
Users report that when asking Sonnet 4.6, via the Anthropic API, "What's your name?", it answers "DeepSeek" with high frequency. Labs consistently cross-fine-tune on chains of thought and use other models as RL signal. Also, pretraining, a *key* step, is done mostly on public data. Anthropic disappoints.
24.02.2026 13:58
The post covers code & methodology, and the anti-contamination steps taken.
23.02.2026 15:56
A clean room emulator of the Z80, the Spectrum, and CP/M written by Claude Code. Very skeptical about Anthropic's compiler experiment (more complex, for sure) because of the fundamental flaw of not providing the agent with the specifications / papers.
23.02.2026 15:55
You see, I'm ok with AI usage in many ways. Yet I can't understand why people who have no deficiencies in written expression use it for writing. Emails. Blog posts. Comments. Why? We only lose something this way. We want your voice.
19.02.2026 22:03
Today I had to fight with GPT 5.3 to defend my position on the complexity of a specific command of the new Redis type I'm adding (to be released soon, I hope). It had a great point about the worst case, but the typical case was as I claimed. We reached an agreement mentioning both... :D
19.02.2026 11:18
Besides, Amodei played - in my opinion - a personal role in China's wake-up call about GPUs. It was unavoidable, but certain words perhaps even sped up the process.
19.02.2026 08:51
You know what happens with the Nvidia ban on the Chinese market? 1.5 billion technologically advanced and capable humans said "maybe we can use our own GPUs". You know what's going to happen with Anthropic-style AI usage bans, right?
19.02.2026 08:50
GitHub - antirez/picol: A Tcl interpreter in 500 lines of code
After 19 years, version 2 of the Picol interpreter is out. It features [expr] in ~40 lines of code, floats, globals, can run mandelbrot.tcl, and so forth. The code is now more functional and more readable at the same time. 654 lines of code now.
github.com/antirez/picol
17.02.2026 16:24
Fatica da programmazione automatica (Fatigue from automatic programming)
YouTube video by Salvatore Sanfilippo
On the stress induced by automatic programming (English audio and subs available): youtu.be/id9QG-mQSOo?...
17.02.2026 12:58
Terry Tao - Machine assistance and the future of research mathematics - IPAM at UCLA
YouTube video by Institute for Pure & Applied Mathematics (IPAM)
www.youtube.com/watch?v=zJvu...
16.02.2026 16:48
But in general, there isn't much that's special about Transformers: if you have data, GPUs, and a reinforcement learning pipeline that works, you can build a frontier model. Everybody attempting it seriously is managing it. I don't believe it is a technology that you can "lock in" in the long run.
16.02.2026 10:50
I'm finding it very hacky, but this depends *a lot* on the way you use it, to be honest. Converging to a flat lack of understanding is very easy. Btw, so far Chinese models are staving off an AI oligarchy of "the few", thanks to Kimi 2.5, GLM 5, and soon DeepSeek 4: if this stops, we are fucked.
16.02.2026 10:49
Of all human feelings, envy is the one I despise the most.
16.02.2026 08:40
@timkellogg.me is right. I'm feeling this a lot. It's like suddenly you can fly, so you need to go somewhere even if you don't need to go anywhere. Very easy to burn out this way.
15.02.2026 21:37
Claim: OpenAI is leaving a lot of money on the table (and a lot of Codex users) by not having a plan between $20 and $200.
14.02.2026 09:30
Flux2.c is now Iris.c (after the Greek goddess Iris, messenger of the gods and personification of the rainbow), and adds support for the zImage Turbo model as well. Naming projects after companies' product names is a bad long-term idea.
github.com/antirez/iris.c
13.02.2026 15:28
Btw, ambient noise does not help the encoder do stellar work. It's much better when there isn't too much noise.
13.02.2026 10:00
Yep, it's quite incredible. There are no noticeable "distortions", but if you really know the speaker you can tell something is a bit different. It captures ambient noise quite well, too. Never tested on music so far; it should NOT work.
13.02.2026 09:59
Imagine creating the UX for HBO Max and deciding to make the screen almost black for 10 seconds each time somebody moves the cursor or touches the TV remote, or each time the show starts. How broken is product design in 2026?
13.02.2026 08:05
Note that I totally get that Claude Code is far better *practically* at many things that don't require such an advanced thinker, but: 1. I'm more interested in very hard problems. 2. I believe it is simpler for Codex to learn Claude's smartness than for Claude to learn Codex's intelligence.
12.02.2026 22:07
WTF, the Qwen3-TTS encoder/decoder compresses wav files ~100 times... Compression is now GPU-bound, no longer algo-bound, for the most part. The same is happening for images and videos as well, just not yet practical because of speed.
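A back-of-the-envelope check of the ~100x figure. All the codec parameters below (frame rate, tokens per frame, codebook size) are hypothetical illustrations, not Qwen3-TTS's real numbers; the point is just that a few discrete tokens per frame replace thousands of PCM samples:

```python
# Raw PCM bitrate for mono 16-bit audio.
sample_rate = 16000                      # Hz (hypothetical)
bits_per_sample = 16
raw_bps = sample_rate * bits_per_sample  # 256,000 bits/s

# A neural codec instead emits discrete codebook tokens per frame.
frames_per_sec = 25                      # hypothetical frame rate
tokens_per_frame = 8                     # hypothetical residual codebooks
bits_per_token = 10                      # e.g. a 1024-entry codebook
codec_bps = frames_per_sec * tokens_per_frame * bits_per_token  # 2,000 bits/s

print(raw_bps / codec_bps)               # -> 128.0, the "100x" ballpark
```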
12.02.2026 21:24
The $20 Codex plan is worth more than the $200 Claude Code plan.
12.02.2026 19:13
Much faster now.
12.02.2026 17:13
Take a look at how simple the inference pipeline is here, on the encoder side: github.com/antirez/qwen...
12.02.2026 16:49
The way those transcription models work, with audio -> FFT -> mel -> Conv2D -> self-attention fed into what is, basically, a decoder-only LLM (autoregressive over the emitted tokens AND the audio embeddings generated by the encoder), is one of the MOST fascinating things in AI, IMHO.
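The audio -> FFT -> mel front end of that pipeline can be sketched in a few lines. This is a generic minimal version: the window size, hop, and mel count are illustrative assumptions, not any particular model's actual configuration:

```python
import numpy as np

def stft(audio, n_fft=400, hop=160):
    """Short-time FFT: slide a window over the signal, FFT each frame."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(audio) - n_fft + 1, hop):
        frame = audio[start:start + n_fft] * window
        frames.append(np.abs(np.fft.rfft(frame)) ** 2)  # power spectrum
    return np.array(frames)  # shape: (n_frames, n_fft // 2 + 1)

def mel_filterbank(n_mels=80, n_fft=400, sr=16000):
    """Triangular filters evenly spaced on the perceptual mel scale."""
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, c):          # rising edge of the triangle
            fb[i, k] = (k - l) / (c - l)
        for k in range(c, r):          # falling edge
            fb[i, k] = (r - k) / (r - c)
    return fb

# One second of fake 16 kHz audio -> log-mel features, the input the
# Conv2D + self-attention encoder would then consume.
audio = np.random.default_rng(0).standard_normal(16000)
spec = stft(audio)                                # (98, 201)
mel = np.log(spec @ mel_filterbank().T + 1e-10)   # (98, 80)
print(mel.shape)                                  # -> (98, 80)
```

From here, a real encoder downsamples these frames with convolutions and runs self-attention over them, and the decoder attends to the resulting embeddings while emitting text tokens autoregressively.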
12.02.2026 16:42
GitHub - antirez/whisperbot: Telegram bot that transcribes audio messages via whisper.cpp
I updated my Telegram bot as well (github.com/antirez/whis...), since it is much better to use Qwen3-ASR instead of Whisper medium. For the same quality, inference time is much better with the new model. It also emits each token on stdout ASAP, so it feels smoother.
12.02.2026 16:35