
Rida Al Barazi

@rbarazi

Technologist with deep passion for product, usability, and marketing.

98
Followers
614
Following
26
Posts
29.11.2023
Joined

Latest posts by Rida Al Barazi @rbarazi

Identity management for agents is the thing nobody talks about but everyone hits. You don't notice it until you're 4 agents deep and realize you've spent your afternoon in IAM consoles instead of shipping. First-class concern. Full stop.

09.03.2026 14:27 👍 0 🔁 0 💬 0 📌 0
Why I finally tried OpenClaw (and how I'm making it safe)
I avoided running agents with real access for a year. The unlock wasn't better models or tighter sandboxes. It was giving the agent its own identity.

Full writeup with the Docker setup, browser fingerprinting, threat model, and what broke when the agent published this post through the pipeline.

rida.me/blog/why-i-f...

11.02.2026 21:55 👍 0 🔁 0 💬 0 📌 0

Separate email, dedicated 1Password vault, JIT secrets. Once it had a real identity, CAPTCHAs disappeared (Google OAuth), GitHub access worked (scoped account), and every workflow I built had auditable boundaries.

The hard problems of agent automation are identity problems.

11.02.2026 21:55 👍 1 🔁 0 💬 1 📌 0

The knock on OpenClaw is security. Fair. So I spent two weeks making it actually safe. Docker-first, Tailscale sidecar, Chrome in its own container. But the biggest unlock wasn't infrastructure. It was giving the agent its own identity.

11.02.2026 21:55 👍 0 🔁 0 💬 1 📌 0
Rendering MCP Tool Results as ChatKit Widgets in Rails
Step-by-step guide to rendering OpenAI ChatKit MCP tool results as rich UI widgets in a Ruby on Rails app using server-side hydration and SSE streaming.

OpenAI's ChatKit ships a Python SDK. I'm using Rails.

I just built a bridge from MCP UI → ChatKit widgets.
When a tool returns a ui:// resource, Rails extracts the widget payload and streams it as an SSE event.
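The bridge step can be sketched like this. A hedged sketch only: the method name, payload shape, and `widget` event name are my assumptions, not ChatKit's actual API.

```ruby
require "json"

# Sketch of the MCP-UI -> ChatKit bridge idea (names and payload shape
# are assumptions): find a ui:// resource in an MCP tool result, parse
# its widget payload, and frame it as a server-sent event.
def extract_widget_event(tool_result)
  resource = tool_result["content"].to_a.find do |item|
    item["type"] == "resource" &&
      item.dig("resource", "uri").to_s.start_with?("ui://")
  end
  return nil unless resource

  widget = JSON.parse(resource.dig("resource", "text"))
  # SSE framing: named event, JSON data line, blank-line terminator.
  "event: widget\ndata: #{JSON.generate(widget)}\n\n"
end
```

Returning `nil` for non-UI results lets the caller fall back to plain text streaming.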

AI agents wrote most of the code after I defined the architecture.

Full Writeup:

08.12.2025 05:08 👍 0 🔁 0 💬 0 📌 0
The Resonant Computing Manifesto
Technology should bring out the best in humanity, not the worst: a manifesto for resonant computing built on five principles that reject hyper-scale extraction for human flourishing.

What if technology didn't feel so… hollow?

Some friends and I just released a manifesto about a world where tech leaves us feeling nourished (along with an evolving list of theses about how we can build it).

resonantcomputing.org

05.12.2025 16:03 👍 173 🔁 56 💬 10 📌 16
Setting Agents Up to Succeed
The difference between good and bad AI coding sessions isn't the model. It's what you give the agent before it starts.

Context windows matter more than people realize.

I split agent work into focused sessions:
1. write the feature
2. run browser tests
3. fix the bugs with test output

It's not elegant, but it's efficient, and that's the reality of coding with agents today.

rida.me/blog/setting...

05.12.2025 03:20 👍 1 🔁 0 💬 0 📌 0

Cursor made me think in file references.
Claude Code made me think in patterns.
Codex made me think in constraints and goals.

The evolution of agents is really the evolution of what you no longer need to explain.

05.12.2025 03:20 👍 1 🔁 0 💬 1 📌 0

Agents repeat a lot of unnecessary work.

I've started running "meta sessions" where the agent scripts any repetitive CLI chain into one command. It keeps context clean and makes the whole workflow more reliable.

Jobs > steps.
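One way to sketch the "jobs over steps" idea: capture the repetitive chain once as data, replay it as a single command. The specific commands below are hypothetical examples, not from the post.

```ruby
# Hypothetical repetitive chain an agent might otherwise re-type every session.
CHAIN = [
  "bundle exec rubocop -a",
  "bundle exec rspec --fail-fast",
  "git add -A"
].freeze

# Replay the whole chain as one job. Defaults to dry-run so it only
# reports what it would do; with dry_run: false it shells out and
# stops at the first failing command.
def run_chain(commands, dry_run: true)
  commands.map do |cmd|
    if dry_run
      "[dry-run] #{cmd}"
    else
      system(cmd) or raise "chain failed at: #{cmd}"
      cmd
    end
  end
end
```

The agent then invokes one command instead of re-deriving the steps, which is what keeps its context clean.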

05.12.2025 03:20 👍 0 🔁 0 💬 1 📌 0

AI coding sessions succeed or fail long before the model starts generating code.
The real variable is the context you give the agent.

Modern agents don't need step-by-step instructions; they need intent, constraints, architecture, and a definition of "done."

05.12.2025 03:20 👍 0 🔁 0 💬 1 📌 0
What Makes an App AI-Native?
AI-native apps aren't about having AI features. They're about control transfer: how control flows between user and agent as work progresses from intent to output.

Wrote up the full framework with failure modes for each pattern:
rida.me/blog/what-ma...

04.12.2025 03:56 👍 0 🔁 0 💬 0 📌 0

The thing I got wrong building these:
I thought the agent should prompt the handoff. "Ready for you to take over!"
But that feels like the agent giving up. The user should claim control when ready.
That's what makes it a collaborator, not a tool.

04.12.2025 03:56 👍 0 🔁 0 💬 1 📌 0

The question that separates AI-native from AI bolted on:
Not "does it have AI features?" but "how does control flow between user and agent?"
The best apps transfer control progressively, conversation to drafting to polish, and the interface matches where you are.

04.12.2025 03:56 👍 1 🔁 0 💬 1 📌 0

MCP's auth model has a chicken-and-egg problem.
You can't discover tools without auth. You can't auth contextually without knowing which tools matter. You can't know what matters until the user asks.
My workaround in the image. But the spec needs a real answer.

03.12.2025 03:22 👍 1 🔁 0 💬 0 📌 0
YOLO Mode Only Works When YOLO Can't Hurt You
What I learned from 12 months of coding with AI agents, and why I built isolated dev environments to make them actually useful.

So I built isolated dev environments. Each feature branch gets its own containers, database, tunnels, secrets.
When the blast radius is zero, you can finally let go.
Wrote it all up here: rida.me/blog/yolo-mo...

02.12.2025 04:04 👍 0 🔁 0 💬 0 📌 0

The problem wasn't the model. It was their environment.
Shared databases. Conflicting ports. State bleeding across branches. Of course I couldn't trust YOLO mode.

02.12.2025 04:04 👍 0 🔁 0 💬 1 📌 0

I spent 12 months coding with AI agents daily. Cursor, Claude Code, Codex.
The productivity was real. So were the failures. I kept seeing the same pattern: agents declare victory before the job is done.

02.12.2025 04:03 👍 1 🔁 0 💬 1 📌 0

Happy to share that I'll be speaking at @confooca.bsky.social 2026 in Montreal!
Two talks this year:

• Agentic Coding: Building Features with AI Teammates
• Safe Agentic Dev Environments

If you're into AI workflows, coding agents, or dev tooling, would love to meet folks there.

20.11.2025 12:45 👍 5 🔁 1 💬 0 📌 0

Built a little tool called BranchBox. Every feature gets its own fully loaded and isolated dev environment. Worktrees, devcontainers, Docker networks, databases, ports, env vars. No clashes.

Great for humans. Even better for coding agents.

Repo: github.com/branchbox/br...

15.11.2025 01:26 👍 1 🔁 0 💬 0 📌 0

Current mode: 3 projects in parallel

5 Codex CLI tasks
2 Codex web tasks
3 Claude Code CLI tasks
1 Claude Code web task

Feels like speed chess! Fast moves, limited time, full focus.

Good thing I've got a larger context window than those agents 😏

12.11.2025 00:17 👍 0 🔁 0 💬 0 📌 0

Best part?

./bin/feature-teardown oauth

Cleanly removes:
- Worktree
- Container
- Database
- Tunnel

Ship aggressively. Clean up instantly.

YOLO mode with an undo button. /2
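The teardown steps above could look something like this as a script. A sketch under assumptions: the container/database naming scheme and the tunnel CLI are mine, not necessarily what feature-teardown actually runs.

```ruby
# Hypothetical sketch of the cleanup a feature-teardown script might run.
# Naming (app-<feature>, app_<feature>) and the tunnel tool are assumptions.
def teardown_commands(feature)
  [
    "git worktree remove ../#{feature} --force", # worktree
    "docker rm -f app-#{feature}",               # container
    "dropdb --if-exists app_#{feature}",         # database
    "cloudflared tunnel delete #{feature}"       # tunnel
  ]
end
```

Because everything is derived from the feature name, cleanup is one command and nothing is left behind to clash with the next branch.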

Would this change your workflow?

13.10.2025 03:30 👍 0 🔁 0 💬 0 📌 0

When you're juggling multiple Claude coding sessions and your local env becomes the bottleneck:

I built something for a safer YOLO mode:

./bin/feature-start oauth

→ Isolated worktree + container + DB + live URL
→ Ready for Claude/Codex
→ 10 seconds

Zero conflicts. /1
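The provisioning side might be sketched like this, mirroring the steps listed above. The names, port scheme, and tunnel URL pattern are my assumptions, not the actual feature-start script.

```ruby
# Hypothetical sketch of what feature-start provisions per feature.
def start_commands(feature, port: 3100)
  {
    worktree:  "git worktree add ../#{feature} -b #{feature}",
    container: "docker run -d --name app-#{feature} -p #{port}:3000 app:dev",
    database:  "createdb app_#{feature}",
    url:       "https://#{feature}.dev.example.com" # assumed tunnel URL pattern
  }
end
```

Deriving every resource from the feature name is what makes "zero conflicts" possible: two branches can never collide on a container, database, or port label.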

13.10.2025 03:30 👍 1 🔁 0 💬 1 📌 0
Building Review Apps with Kamal and GitHub Actions
A deep dive into implementing preview environments for pull requests using Kamal deployment, PostgreSQL schema isolation, and GitHub Actions automation.

I missed Heroku's magical Review Apps after moving to my own Hetzner box, so I rebuilt them with Kamal + GitHub Actions + Postgres schemas.

"/deploy" on a PR 👉 spins up an isolated env & subdomain
Closing the PR 👉 tears it all down

Full walkthrough ↓
rida.me/blog/kamal-g...
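The schema-per-PR idea can be sketched like this. Hedged sketch: the naming scheme, subdomain pattern, and SQL are my assumptions, not the post's actual implementation.

```ruby
# Sketch of PostgreSQL schema isolation per pull request: each PR gets
# its own schema and subdomain, both derived from the PR number.
# example.com and the pr_<n> convention are assumed for illustration.
def review_app_config(pr_number)
  schema = "pr_#{pr_number}"
  {
    schema:       schema,
    host:         "pr-#{pr_number}.example.com",
    setup_sql:    "CREATE SCHEMA IF NOT EXISTS #{schema}",
    teardown_sql: "DROP SCHEMA IF EXISTS #{schema} CASCADE"
  }
end
```

Schemas are far cheaper than a database per PR: one Postgres instance, fully isolated tables, and teardown is a single DROP.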

06.08.2025 11:21 👍 0 🔁 0 💬 0 📌 0
Introduction to Operator & Agents
YouTube video by OpenAI

OpenAI released Operator and Computer Use Agent today.

I really like the "take control" feature and human-in-the-loop.
The fact that it relies solely on the screenshot with no page markup is impressive too. I needed both when building Pair Browsing.

Excited to try CUA when it comes out!

23.01.2025 19:51 👍 1 🔁 0 💬 0 📌 0

Long time indeed! Glad to be here. Hoping to reconnect with the community and it's so heartwarming to start with you <3

16.01.2025 14:06 👍 2 🔁 0 💬 0 📌 0

Inspired by browser-use (I ported its code to a browser extension), DoBrowser for the core idea, and Google's Project Mariner.
If you're curious, here's the repo: github.com/rbarazi/pair....
More insights soon!

16.01.2025 14:04 👍 1 🔁 0 💬 0 📌 0

I've been tinkering with something I'm calling Pair Browsing. Think of it like pair programming, but for the web: a little AI agent that helps you navigate day-to-day browsing. Check out the quick demo below! 🤖

16.01.2025 14:02 👍 3 🔁 0 💬 1 📌 0