Claude Code + Notion MCP + $DB_Vendor MCP = 🤖
Gave an outline of my idea. Claude took my outline, found the relevant data, and turned it into a polished proposal with hard data in record time.
100% agreed on having tooling (linting/types/tests) helping out, though I'll also have Claude add those in.
I've found it increasingly effective, when Claude's attempt doesn't work, to wipe out the changes and work on improving my prompt instead.
www.john-rush.com/posts/ai-202...
My favorite part of Claude Code: I can get an idea out over the weekend. Before: My laptop was a graveyard of personal projects that took all weekend just to get the structure in place. Now with Agentic Coding, I focus on shipping my idea.
If you're not using Claude Code, you are missing out. AI coding has taken a giant leap over the past few months. Great article with some tips: spiess.dev/blog/how-i-u...
4. I like ChatGPT Codex running on OpenAI's servers, as it allows me to parallelize my work. If Claude had this feature, I'd probably use it exclusively.
3. AI coding is good at those tasks you think take 10 minutes but end up spiraling into multiple hours: e.g., bumping versions and dealing with compatibility issues
A few takeaways:
1. Invest in your CLAUDE.md or AGENTS.md file to give details about the code, what you expect in general, and how to run testing, type checking, and linting.
2. The more type hints and tests you have, the better, as the agent will use these to verify its work
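To make takeaway 1 concrete, here is an illustrative sketch of what such a file might contain. The project details and commands below are hypothetical placeholders, not from the posts above; adapt them to your own repo.

```markdown
# CLAUDE.md (illustrative sketch -- adapt to your project)

## Project
Python 3.12 service; src/ layout; dependencies managed with uv.

## Commands
- Tests: `uv run pytest`
- Type check: `uv run mypy src/`
- Lint/format: `uv run ruff check . && uv run ruff format .`

## Expectations
- Add type hints to all new functions.
- Run tests, the type checker, and the linter before declaring a task done.
```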
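As a minimal illustration of takeaway 2, here is the kind of typed, tested code that gives an agent an automatic verification loop (the function and test are hypothetical examples, not from any of the projects mentioned):

```python
# A small typed function plus a test. Type hints let a type checker
# catch wiring mistakes; the test lets the agent confirm its change works.

def parse_version(tag: str) -> tuple[int, int, int]:
    """Parse a 'v1.2.3'-style tag into a (major, minor, patch) tuple."""
    major, minor, patch = tag.lstrip("v").split(".")
    return int(major), int(minor), int(patch)


def test_parse_version() -> None:
    assert parse_version("v1.2.3") == (1, 2, 3)
    assert parse_version("0.10.0") == (0, 10, 0)


if __name__ == "__main__":
    test_parse_version()
```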
After several hours, I upgraded to the lowest Claude Max tier as my Opus tokens were used up, and I did not want to stop progressing.
In 3-4 hours, I got done what would have taken me 3-4 full days -- most of this work is not rocket science or even the interesting parts of a project (wiring up whatever build system is currently popular, getting all the correct dependency versions).
🧠 Claude Code 🧠, which is now included in the $20/mo Pro plan, is like working with a good mid-level developer. Claude took my prompt, implemented multiple subsystems in parallel, and gave me a working prototype. I spent a few more rounds with Claude adding additional functionality.
I then spent all Sunday morning building a complex prompt describing a prototype. Codex got the basic project structure set, but instead of implementing most functionality, just stubbed it out. A few repeated attempts to build upon that base with smaller parts of my prompt were not getting far.
My Codex team completed 14 Pull Requests: from adding support for newer Python versions, converting to use uv+ruff, to finding a very subtle bug, to correcting longstanding typos.
🧠 ChatGPT Codex 🧠 runs OpenAI Codex in containers on OpenAI's servers and is now available with ChatGPT Plus ($20/mo). It is like having a team of junior developers. I give them tasks, they do the work, test and type check. I provide some feedback, and they open GitHub Pull Requests.
My background: I am a long-time GitHub Copilot and continue.dev user (with multiple LLMs, recently Claude).
AI Coding tools are evolving quickly -- if you have not tried them in the past month, you owe it to yourself to revisit. I spent the weekend with ChatGPT Codex and Claude Code, and wow 🤯.
I have no idea what the context is, but you're making me hungry.
Time and time again, starting with #postgresql would just make life easier. If at some point you need to scale beyond PostgreSQL, you are an enviable success story.
engineering.usemotion.com/migrating-to...
Just built a new EC2 VM instead of trying to attach the volume to a working VM and fixing it in a chroot.
Today is starting off strong! Can't even serial console in after an interrupted upgrade borked the kernel.
How is it 2025 and the receiver of Outlook invites cannot control the reminder time 🤯? It's not like we need 15 minutes to walk to a conference room. We're just bouncing between video calls. The number of meetings I've forgotten about because I've moved on to another task ...
@microsoft.com
DeepSeek built a reasoning LLM using reinforcement learning, minimal labeled data, and auto-verification. Key innovation: R1-Zero proves high-quality reasoning is possible without massive supervised datasets.
newsletter.languagemodels.co/p/the-illust...
Deep dive on NVDA: Despite AI boom, major threats emerge from innovative hardware (Cerebras, Groq), custom silicon (big tech), software alternatives (MLX, Triton), and DeepSeek's 45x efficiency gains. Risks may not be priced in @ 20x sales & 75% margins
youtubetranscriptoptimizer.com/blog/05_the_...
HuggingFace: "We are reproducing the full DeepSeek R1 data and training pipeline so everybody can use their recipe. Instead of doing it in secret we can do it together in the open!"
x.com/_lewtun/stat...
"R1 distillations are going to hit us every few days - because it's ridiculously easy (<$400, <48hrs) to improve any base model with these chains of thought eg with Sky-T1"
news.ycombinator.com/item?id=4282...
Well done!
The gamification of learning has both inspired and burned me.
I've often wondered whether putting similar incentives in healthcare tools would be a net positive or negative.
www.latent.space/p/reasoning-...
The DeepSeek R1 meltdown is impressive to watch. The democratization of top-tier models and the cratering of training costs have opened thousands of opportunities.
HOPPR signs imaging AI vets Boonn, Kim
🙏 Thank you for sharing this news, @auntminnie.bsky.social
@wboonn.bsky.social
@ksiddiqui.bsky.social
Phi-4 fine-tunable via unsloth: fixing bugs 🐛 and moving to llama 🦙 architecture.
unsloth.ai/blog/phi4