Want to build something crazy in 1 prompt?
Let's meet at Shift:
- ⏳ 48 hours of hacking
- ✨ unlimited Codex+Cursor credits, awesome coaches
- ⏰ 27-29 March, Nantes
shift-hackathon.com
Token Meter for VSCode: free and open source.
Works on Cursor too.
github.com/samber/vscode-token-meter
⭐ if it's useful.
Writing code isn't the bottleneck anymore.
Distribution is. Compliance is. Paperwork is.
We spent 10 years automating the build. Nobody automated the publish.
The "long" part:
- Creating an Azure Publisher account
- Filling in marketplace metadata
- Waiting for review
2 hours of bureaucracy for a tool that took 1 prompt to exist. 🤮
So I opened Claude Code and described what I wanted.
A VSCode extension. Token count in the status bar. Real-time. Switches model on click.
One prompt. It built it.
I was building Claude Code Skills: custom instruction sets injected into the AI context window.
Problem: no idea how many tokens each skill was consuming.
Was I burning half the context budget on docs? No clue.
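For budgeting purposes, a crude estimate is often enough. A minimal sketch in Go, using the common ~4-characters-per-token heuristic (an approximation of my own, not what a real tokenizer or the extension reports):

```go
package main

import "fmt"

// estimateTokens gives a rough token count using the widely cited
// ~4-characters-per-token heuristic for English text. Good enough for
// a context-budget sanity check, not a substitute for a real tokenizer.
func estimateTokens(text string) int {
	return len(text) / 4
}

func main() {
	skill := "Always run gofmt before committing. Prefer table-driven tests."
	fmt.Printf("~%d tokens\n", estimateTokens(skill))
}
```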
The longest part of my last side project was filling out a form.
1 prompt to build. 2 hours to publish.
We're optimizing the wrong thing.
🧵
Your Go code is leaving 90% of the CPU idle ...until now.
samuelberthe.substack.com/p/your-go-co...
samber/ro, the streaming alternative to samber/lo, also supports SIMD operations in its latest version!
Changelog: github.com/samber/ro/re...
Doc: ro.samber.dev/docs/plugins...
SIMD documentation is available here: lo.samber.dev/docs/experim...
(requires a recent amd64 CPU)
We just made the biggest release of samber/lo to date: github.com/samber/lo/re...
- 🧪 adding support for SIMD (Go >= 1.26)
- 🔥 adding **Err variant helpers
- lots of performance improvements
The library has just surpassed 1,000 helpers!
These GPU/LPUs are still niche, but they'll be game-changing for near-real-time use cases.
* Their direct competitor, Groq, has very little SRAM per chip. Running a 70B model requires assembling roughly 1k LPUs in a large cluster, which introduces additional latency.
* Memory capacity per chip is unknown, but it matters because batching relies on abundant memory.
Cerebras can achieve almost the same token bandwidth with batching (around 2k tok/s per user, up to 20k tok/s per chip).
* In the future, chips without HBM will probably be paired with traditional Nvidia GPUs: B200 for prefill, followed by Groq/Cerebras/Taalas for decoding.
17k tok/s is impressive. A few observations:
* This measures only the decoding stage. We have no information about the prefill (Time To First Token).
* Taalas HC1 likely has lower TFLOPS than the B200, so the performance gap during the prefill stage might not be significant.
🧵
"I Verified My LinkedIn Identity. Here's What I Actually Handed Over."
thelocalstack.eu/posts/linked...
🧠 Can someone explain what happened in just one weekend? Clawdbot has been around for 5 months but has only just exploded in popularity.
Probably the fastest product launch ever on GitHub. 💸
100 people coding in a single repository is a merge nightmare.
If you have 100 *AI-assisted* coders in a single repository, you'd better have a f****ng fast CI.
A picture is worth a thousand words.
The production ramp of the Blackwell GPU generation, starting in H2 2025, has caused DRAM prices to soar.
If this technology is real, this is a ChatGPT moment.
*They have zero factories; building industrial capacity takes time.
*They did not disclose the materials.
#CES #battery
www.youtube.com/watch?v=Y-aP...
China successfully reverse-engineered an old ASML machine. 😳
(it's a prototype - no working chip has been produced ...yet)
www.reuters.com/world/china/...
The tool & documentation are available here: strudel.cc
💾 🎧 If you like coding and music, look at that: www.youtube.com/watch?v=iu5r...
Seems like Cloudflare still uses their old "FL1" proxy (based on Openresty and Lua) for at least 25% of total traffic. 🤯
Even the leaders struggle to eliminate their technical debt.
blog.cloudflare.com/5-december-2...
How Meta keeps its AI hardware reliable
// Training a model on 10k-ish GPUs will toast a few hundred along the way. Better make sure you catch silent failures.
engineering.fb.com/2025/07/22/d...
🧠 Claude is becoming the default model for developers, for both AI-assisted coders and agent creators.
Look at that. #contextEngineering
www.anthropic.com/engineering/...
Deepseek, October 2025:
"Hey guys, look at our new deepseek-ocr model!"
Deepseek CTO in 6 months
1) ourworldindata.org/data-insight...
2) ourworldindata.org/reducing-fer...