
Marco D'Agostini

@madacol.com

madacol.com
I love to automate stuff, mostly using web pages nowadays

75
Followers
245
Following
171
Posts
13.12.2023
Joined

Latest posts by Marco D'Agostini @madacol.com

Image by Nano Banana Pro.

A cinematic, wide-angle shot of a sunlit, overgrown cemetery filled with rows of weathered gray tombstones. Every headstone is engraved with the name "AI WALL" in a classic serif font. Below the names, various short date ranges from the past few years are listed, such as "JAN 2024 - FEB 2024" and "FEB 2026 - MAR 2026." Some stones include epitaphs like "Here lies a bad prediction," "Didn't age well," and "Spoke too soon." The scene is a satirical take on frequent but failed predictions that AI development would stop progressing.


In memory of all the walls AI was about to hit.

13.02.2026 01:30 πŸ‘ 21 πŸ” 3 πŸ’¬ 0 πŸ“Œ 0

This is just like chavismo's tactic in Venezuela: make the opposition believe the government has absolute control, so the opposition loses faith and abstains from voting

05.02.2026 16:55 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Everyone wants to blame the content: misinformation, bots, polarisation. That mistakes catalysts and symptoms for the underlying causes. The crisis isn't bad information; it's that the very information infrastructure we relied on to create a shared understanding of the world has changed completely.

23.01.2026 09:11 πŸ‘ 867 πŸ” 154 πŸ’¬ 10 πŸ“Œ 20

Claude Code is a game changer for fixing Linux issues. It knows how to debug obscure problems automatically, and even when it doesn't fix the issue, it teaches you how it all works underneath. It's really cool

01.02.2026 08:25 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I wrote about Clawdbot/Moltbot/OpenClaw and Moltbook, the fascinating, weird and sometimes even useful social network for digital assistants to swap tips and gossip with each other simonwillison.net/2026/Jan/30/...

30.01.2026 16:45 πŸ‘ 250 πŸ” 46 πŸ’¬ 19 πŸ“Œ 27

Google Earth / Maps

31.01.2026 17:50 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Many European organizations are already on ActivityPub; it seems they run their own Mastodon instance at ...@europa.eu

30.01.2026 19:30 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I hope this also serves as a signal that international rules need big reforms toward recovering democracy

16.01.2026 19:23 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

If only it had been "politically correct" to do so years ago, we would've had a much better outcome under the Democratic Party than what we are currently getting.

But we have learned the hard way that this is a golden opportunity for us; no one even dreamed it was possible to become a US colony

08.01.2026 15:46 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Why do Venezuelans welcome Trump's actions with arms wide open?

              US colony
                  |
dystopia >----------------> utopia
    |                    |
chavismo              normal
                      country

08.01.2026 15:30 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Oh this is neat: Qwen's newest image model decomposes images into layers (the way Photoshop works), so you can edit just one layer and nothing else

qwen.ai/blog?id=qwen...

19.12.2025 21:02 πŸ‘ 39 πŸ” 7 πŸ’¬ 3 πŸ“Œ 0

And yet it still is a win-win scenario.

Most Venezuelans would gladly give up all the oil if it meant these criminals left power

17.12.2025 16:35 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I know, but I still can't see the difference

09.12.2025 18:22 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I fail to see the difference between rounding-to-false and oversimplified

09.12.2025 18:15 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

If you click the share button, there's a tick box to add the time to the URL

14.11.2025 17:27 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I find it strange that there was no number for how much safer vaping is than smoking.

Is it 10% safer?
100%?
10x safer?
100x safer?

What are the orders of magnitude we are dealing with?

04.11.2025 15:21 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Our ability to use AI will rely on our ability to verify.

And to verify, you need to know some stuff

02.11.2025 12:09 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I am increasingly starting to suspect that a good chunk of our social problems with social media is selection bias in disguise

27.10.2025 12:54 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Although I agree with that sentiment in general, I am not sure this is a representative case of that corruption.

Anti-money-laundering laws usually look like a medicine that is worse than the disease

23.10.2025 15:52 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Large language models are often used to answer queries grounded in large text corpora (e.g. codebases, legal documents, or chat histories) by placing the entire corpus in the context window and leveraging in-context learning (ICL). Although current models support contexts of 100K-1M tokens, this setup is costly to serve because the memory consumption of the KV cache scales with input length. We explore an alternative: training a smaller KV cache offline on each corpus. At inference time, we load this trained KV cache, which we call a Cartridge, and decode a response. Critically, the cost of training a Cartridge can be amortized across all the queries referencing the same corpus. However, we find that the naive approach of training the Cartridge with next-token prediction on the corpus is not competitive with ICL. Instead, we propose self-study, a training recipe in which we generate synthetic conversations about the corpus and train the Cartridge with a context-distillation objective. We find that Cartridges trained with self-study replicate the functionality of ICL, while being significantly cheaper to serve. On challenging long-context benchmarks, Cartridges trained with self-study match ICL performance while using 38.6x less memory and enabling 26.4x higher throughput. Self-study also extends the model's effective context length (e.g. from 128k to 484k tokens on MTOB) and surprisingly, leads to Cartridges that can be composed at inference time without retraining.


arxiv.org/abs/2506.06266

19.10.2025 12:46 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

It seems it just needs debouncing
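For anyone unfamiliar, debouncing means collapsing a rapid burst of events into a single call once the input settles. A minimal sketch (the helper name and the 10 ms wait are illustrative, not from any specific library):

```javascript
// Minimal debounce: delay calling `fn` until `waitMs` milliseconds have
// passed without another invocation; each new call resets the timer.
function debounce(fn, waitMs) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer); // cancel the pending call, if any
    timer = setTimeout(() => fn.apply(this, args), waitMs);
  };
}

// A burst of three calls collapses into a single execution.
let calls = 0;
const onInput = debounce(() => { calls += 1; }, 10);
onInput(); onInput(); onInput();
```

Attach the wrapped function to a noisy event source (keystrokes, resize, scroll) and it only fires after the burst ends.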

18.10.2025 17:56 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Simon, you only need one skill.

The skill to read your blog and extract skills from each one of your posts

17.10.2025 07:23 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Claude Skills are awesome, maybe a bigger deal than MCP
simonwillison.net/2025/Oct/16/...

16.10.2025 21:25 πŸ‘ 200 πŸ” 34 πŸ’¬ 22 πŸ“Œ 10

bsky.app/profile/kevi...

16.10.2025 11:38 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

that pelican looks disturbingly similar to the one generated by the horny model

"welcoming ass chick posture"

16.10.2025 11:33 πŸ‘ 2 πŸ” 0 πŸ’¬ 2 πŸ“Œ 0

Do you have anything written about that setup and how you talk to it?

Seems like something I want to replicate

12.10.2025 12:47 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

1/ 🚨 NEW NPM MALWARE CAMPAIGN. Yes, another.

North Korea's "Contagious Interview" campaign is escalating: 338 malicious npm packages, 50,000+ downloads -- 25 still live.

Aimed at Web3/crypto devs & job seekers via slick recruiter DMs → git clone → npm install → compromise.

10.10.2025 23:02 πŸ‘ 8 πŸ” 3 πŸ’¬ 1 πŸ“Œ 0

I'm having a hard time understanding what's going on. What is the purpose of SKILLS.md?
What is the tool doing?

It seems like a bunch of prompts to encourage Claude to follow a specific type of process, but is it doing more than that? Or am I failing to see the potential in that?

11.10.2025 16:53 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Just to be clear, this is not running Lean, the engine, in the browser.

It's forwarding the code to a server where Lean runs.

BTW, can Lean 4 run in the browser via WASM? Is there any plan for it?

10.10.2025 19:53 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

(The same thing extends to prompt injection. Human employees are often tricked, scammed or phished, and that’s the baseline that computers need to beat.)

09.10.2025 12:24 πŸ‘ 9 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0