built a /deep-review command: fans out code review to 4 AI models in parallel. if 2+ flag the same issue, severity gets bumped. auto-fixes critical and high findings before the PR exists.
built on @aarondfrancis's counselors package.
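the consensus rule is simple enough to sketch. here's an illustrative Python version — the finding keys and severity labels are assumptions, not the actual implementation:

```python
# toy sketch of the consensus severity bump: findings from several reviewer
# models are merged, and any issue flagged by 2+ models gets bumped one level.
from collections import defaultdict

LEVELS = ["low", "medium", "high", "critical"]

def merge_findings(per_model_findings):
    """per_model_findings: one dict per model, mapping issue key -> severity."""
    seen = defaultdict(list)
    for findings in per_model_findings:
        for key, severity in findings.items():
            seen[key].append(severity)
    merged = {}
    for key, severities in seen.items():
        # start from the highest severity any single model assigned
        level = max(LEVELS.index(s) for s in severities)
        if len(severities) >= 2:                    # consensus: bump one level
            level = min(level + 1, len(LEVELS) - 1)
        merged[key] = LEVELS[level]
    return merged
```

in this sketch, anything that lands at "high" or "critical" after merging would go to the auto-fix step.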
what's the one thing in your AI coding setup that compounds the most?
for me it's context. CLAUDE.md files, auto-memory, project-specific configs. every session leaves behind knowledge that the next session builds on. the AI gets better at my codebase without me doing anything extra.
the more i automate my dev workflow, the more my job looks like product management.
define the problem. clarify requirements. approve the plan. review the output.
the engineering skills didn't become less valuable. they just moved from typing code to systems design.
claude code PostToolUse hooks are underrated IMO.
i added a 17-line bash script that detects gh pr create and automatically starts the code review loop. no manual step between "PR created" and "reviewer comments handled."
code.claude.com/docs/en/hooks
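a minimal sketch of what such a hook can look like — in Python rather than bash, and the event field names ("tool_name", "tool_input") follow the hooks docs but verify against your version. Claude Code pipes the tool event to the hook as JSON on stdin:

```python
# sketch of a PostToolUse hook: fire only when the Bash tool ran `gh pr create`
import json

def handle_event(raw):
    """Return a message if this event is a `gh pr create`, else None."""
    event = json.loads(raw)
    if event.get("tool_name") != "Bash":
        return None
    command = event.get("tool_input", {}).get("command", "")
    if "gh pr create" in command:
        # replace this with whatever kicks off your review loop
        return "PR created: starting code review loop"
    return None
```

a real hook script would call handle_event(sys.stdin.read()) and be registered under a PostToolUse entry in .claude/settings.json.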
do you want AI to cite your brand?
own the definition.
find the specific terms in your niche that don't have clean, authoritative explanations anywhere. write the best one. structure it so AI can extract it cleanly.
I call this the "dictionary definition heist."
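one way to make a definition cleanly extractable is schema.org DefinedTerm markup. a minimal sketch — the term, description, and URL are placeholders:

```json
{
  "@context": "https://schema.org",
  "@type": "DefinedTerm",
  "name": "answer engine optimization",
  "description": "One clean, canonical definition of the term goes here.",
  "inDefinedTermSet": "https://example.com/glossary"
}
```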
truths have a lifecycle. they can be superseded, contradicted, or archived.
three layers to my agentic memory system:
1. raw transcripts. every conversation stored as source of truth.
2. compressed observations. what the agent noticed and extracted.
3. normalized truths. facts with confidence scores and validity windows.
it actively detects conflicts and automatically supersedes contradicted memories.
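a rough sketch of what those three layers can look like as data — every name here is illustrative, not the actual schema:

```python
# illustrative data model for the three memory layers
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Transcript:            # layer 1: raw source of truth
    session_id: str
    text: str

@dataclass
class Observation:           # layer 2: what the agent noticed and extracted
    transcript_id: str
    summary: str

@dataclass
class Truth:                 # layer 3: normalized fact
    statement: str
    confidence: float                       # 0.0 .. 1.0
    valid_from: datetime
    valid_until: Optional[datetime] = None  # None = still current
    superseded_by: Optional[str] = None     # set when a newer truth replaces it

def supersede(old: Truth, new: Truth, at: datetime) -> None:
    """Resolve a contradiction by closing the old truth's validity window."""
    old.valid_until = at
    old.superseded_by = new.statement
```

the validity window is what lets a contradicted fact stay queryable as history instead of being deleted.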
i'm writing about it in the next issue of my newsletter, which I'll send tomorrow
most AI conversations start from zero. you open a new chat and the AI has no idea what you worked on yesterday.
I built a memory system so mine doesn't work that way anymore.
three layers: raw transcripts, compressed observations, normalized truths with confidence scores.
the real find was Apache AGE: a PostgreSQL extension that adds Cypher graph queries. graph traversal on your existing database. no new moving parts.
PostgreSQL can handle a lot more than people give it credit for
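for anyone curious what AGE looks like in practice, a minimal session sketch — the graph name, labels, and properties are made up:

```sql
-- load the extension and put its catalog on the search path
LOAD 'age';
SET search_path = ag_catalog, "$user", public;

SELECT create_graph('memory');

-- openCypher runs inside a plain SQL query via the cypher() function
SELECT * FROM cypher('memory', $$
    CREATE (:Person {name: 'Joey'})-[:WORKS_ON]->(:Project {name: 'Librarium'})
$$) AS (result agtype);

SELECT * FROM cypher('memory', $$
    MATCH (p:Person)-[:WORKS_ON]->(proj:Project)
    RETURN p.name, proj.name
$$) AS (person agtype, project agtype);
```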
will share more about what i'm building soon.
i did evaluate the alternatives. Neo4j has stale Laravel packages. FalkorDB has no PHP client. SurrealDB would need a full Eloquent driver rewrite.
instead i ended up building in Laravel with Eloquent and PostgreSQL.
pgvector handles embeddings. regular tables handle the entity graph. Laravel AI SDK handles extraction. everything runs inside the same database my app already uses. no new services. no new infrastructure to monitor.
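to give a rough idea of the shape — table and column names here are made up, not the real schema:

```sql
CREATE EXTENSION IF NOT EXISTS vector;

-- embeddings live next to the text they describe
CREATE TABLE observations (
    id        bigserial PRIMARY KEY,
    body      text NOT NULL,
    embedding vector(1536)      -- dimension depends on your embedding model
);

-- the entity graph is just regular relational tables
CREATE TABLE entities (
    id   bigserial PRIMARY KEY,
    name text NOT NULL
);

CREATE TABLE entity_links (
    from_id  bigint REFERENCES entities(id),
    to_id    bigint REFERENCES entities(id),
    relation text NOT NULL
);

-- nearest-neighbour retrieval by L2 distance (pgvector's <-> operator)
SELECT body
FROM observations
ORDER BY embedding <-> '[0.1, 0.2, 0.3]'::vector
LIMIT 5;
```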
i've been experimenting with different AI memory systems. i wanted something that can store observations, connect entities, and retrieve context across sessions.
my first instinct was to reach for a hosted vector database. that's what everyone recommends.
🧵
between IAPI for interactivity and the Abilities API for AI, WordPress in 2026 is not the WordPress people dismiss. it's evolving fast and the AI integration story is stronger than most people realize, myself included.
the other thing that caught my attention: the official WordPress/ai plugin. it adds an Abilities API that makes WordPress discoverable to AI agents. register capabilities, connect to OpenAI/Claude/Gemini, and let AI interact with your site programmatically. this is early but it's a foundation.
it's not replacing React in the editor. React still powers the block editing UI. but for the frontend (product filters, cart updates, interactive elements), IAPI is now the standard path forward.
what it is: server-rendered PHP with lightweight client-side directives. you add data-wp-* attributes to your markup and a small store in JS handles reactivity. full-page caching works natively. no hydration mismatches. it's WordPress-native interactivity without reaching for React.
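a toy example of the directive style — the store namespace and context here are invented, and a small JS store registered as "myPlugin" would define actions.toggle:

```html
<!-- server-rendered markup; directives wire it to a client-side store -->
<div data-wp-interactive="myPlugin" data-wp-context='{ "open": false }'>
  <button data-wp-on--click="actions.toggle">Details</button>
  <p data-wp-bind--hidden="!context.open">
    Server-rendered HTML, enhanced on the client. No hydration step.
  </p>
</div>
```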
caught up with the latest development happenings in WordPress over the weekend. some of my clients are heavily invested in the ecosystem and it's important to stay current.
the biggest shift: the WordPress Interactivity API (IAPI). 🧵
icymi: i published my first npm package this weekend: Librarium
it's an open source CLI tool that lets you fan out research queries to multiple search and deep-research APIs in parallel for human or agent consumption.
let me know what you think!
github.com/jkudish/lib...
Plume has everything you need:
- typed DTOs with active record methods
- automatic lazy token refresh
- rate-limit handling built in
- artisan commands with JSON output, great for AI agent use
- AI tools for the Laravel AI SDK
- test fakes with semantic assertions
- comprehensive docs and AI skills
🪶 announcing plume: a Laravel package for the X API v2.
if you're building anything with the X API in Laravel, i hope this becomes the canonical implementation.
give it a try and feel free to star/share
github.com/jkudish/plume
First time sharing my work on Hacker News.
I posted about my new CLI tool Librarium.
news.ycombinator.com/item?id=471...
github.com/jkudish/lib...
Appreciate an upvote if you can!
Introducing Librarium
Fan out research queries to multiple search and deep-research APIs in parallel.
Results are collected, normalized, and deduplicated into structured output that your agents can consume or write to markdown.
github.com/jkudish/librarium
the real unlock with AI agents isn't just speed. it's parallelism.
i work faster AND on more things at once.
what's one thing you automated this week that you used to do manually?
for me: new errors on most of my projects automatically create an AI-driven investigation and PR for me to review and approve.
want help improving your SEO/AEO/GEO strategy? hire me.
jkudish.com
76% of ChatGPT's top-cited pages were updated in the last 30 days.
stale content = invisible to AI. refresh your key pages quarterly at minimum.
structured data (JSON-LD) tells search engines what your pages are, not just what they say.
no schema = no rich results. FAQ schema alone boosts AI visibility by 40%.
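FAQ schema is just a JSON-LD block in the page. a minimal example with placeholder content:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is AEO?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Answer engine optimization: structuring content so AI assistants can extract and cite it."
      }
    }
  ]
}
```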