If you enjoyed this thread:
Follow me, Michael 👨‍💻🔥, for more on AI Engineering & Web Development!
Read the complete build log (and get the code): https://dev.to/michaelsolati/i-built-an-ai-powered-ttrpg-adventure-generator-because-generic-hallucinations-are-boring-362m
⚡ TL;DR:
Generic AI prompts = Generic results.
Use a "Research-then-Generate" workflow.
Enforce JSON schemas for structured output.
Stream agent "thoughts" via SSE to improve UX.
Visualize citations to ground the content.
[Image: Citation mapping in action, visualized with D3.js]
The coolest part? Citation Mapping.
Because we track the research steps, we can link the output back to the source.
I used D3.js to visualize the "web of inspiration." You can see exactly which folklore blog post inspired your villain.
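Because each research step is logged, building the graph data is a simple transformation. Here's a minimal sketch (the `ResearchStep` shape and field names are my assumptions, not the app's actual types); the resulting `nodes`/`links` arrays are the standard input shape for a D3 force layout:

```typescript
// Hypothetical shapes — the real app's types will differ.
interface ResearchStep { url: string; title: string; inspired: string[]; }

interface GraphNode { id: string; kind: 'source' | 'element'; }
interface GraphLink { source: string; target: string; }

// Turn logged research steps into a node/link graph for D3.
function toCitationGraph(steps: ResearchStep[]) {
  const nodes = new Map<string, GraphNode>();
  const links: GraphLink[] = [];
  for (const step of steps) {
    nodes.set(step.url, { id: step.url, kind: 'source' });
    for (const element of step.inspired) {
      nodes.set(element, { id: element, kind: 'element' });
      links.push({ source: step.url, target: element });
    }
  }
  return { nodes: [...nodes.values()], links };
}
```

Feed `nodes` and `links` into `d3.forceSimulation(nodes).force('link', d3.forceLink(links).id(d => d.id))` to get the "web of inspiration" layout.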
UX Tip: Kill the loading spinner. 🛑
Research takes time, and waiting sucks.
I used Server-Sent Events (SSE) to stream the agent's actions.
The user sees: "Scanning wiki..." -> "Reading blog..." -> "Generating villain..."
It makes the wait feel like part of the experience.
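A minimal sketch of that SSE endpoint as a Next.js App Router route handler (the route path and step strings here are illustrative; the real app streams actual agent events):

```typescript
// app/api/research/route.ts (illustrative path) — stream agent progress as SSE.
export async function GET(): Promise<Response> {
  const encoder = new TextEncoder();
  const steps = ['Scanning wiki...', 'Reading blog...', 'Generating villain...'];

  const stream = new ReadableStream({
    async start(controller) {
      for (const step of steps) {
        // SSE frames are "data: <payload>\n\n"
        controller.enqueue(encoder.encode(`data: ${JSON.stringify({ step })}\n\n`));
        await new Promise((r) => setTimeout(r, 300)); // stand-in for real agent work
      }
      controller.close();
    },
  });

  return new Response(stream, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
    },
  });
}
```

On the client, `new EventSource('/api/research')` plus an `onmessage` handler is all it takes to render the "Scanning wiki..." ticker.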
[Image: A JSON schema definition in TypeScript requiring specific fields like adventure_title, summary, plot_hooks, and nested objects for NPCs and locations.]
To stop the AI from generating a "wall of text," I enforce a strict JSON schema.
This acts as a contract. The AI must return structured objects for NPCs, Locations, and Plot Hooks.
No more rambling. Just usable data.
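Here's what such a contract can look like (field names taken from the screenshot description above; the exact schema dialect depends on which LLM API you're calling, so treat this as a sketch):

```typescript
// TypeScript types for the structured output the model must return.
interface NPC { name: string; role: string; motivation: string; }
interface Location { name: string; description: string; }
interface Adventure {
  adventure_title: string;
  summary: string;
  plot_hooks: string[];
  npcs: NPC[];
  locations: Location[];
}

// The same contract as JSON Schema, passable to most structured-output APIs.
const adventureSchema = {
  type: 'object',
  required: ['adventure_title', 'summary', 'plot_hooks', 'npcs', 'locations'],
  properties: {
    adventure_title: { type: 'string' },
    summary: { type: 'string' },
    plot_hooks: { type: 'array', items: { type: 'string' } },
    npcs: {
      type: 'array',
      items: {
        type: 'object',
        required: ['name', 'role', 'motivation'],
        properties: {
          name: { type: 'string' },
          role: { type: 'string' },
          motivation: { type: 'string' },
        },
      },
    },
    locations: {
      type: 'array',
      items: {
        type: 'object',
        required: ['name', 'description'],
        properties: {
          name: { type: 'string' },
          description: { type: 'string' },
        },
      },
    },
  },
};
```

Marking every top-level field `required` is what stops the model from "forgetting" plot hooks when it gets chatty.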
[Image: A screenshot of a TypeScript code snippet for a Next.js API route. It uses the `exa-js` library to create a research task for generating TTRPG adventure ideas, returning a `taskId` in the JSON response.]
The "Secret Sauce" is Exa (Neural Search).
Google searches for keywords. Exa searches for concepts.
If I want "realistic dragon biology," Exa skips the movie reviews and finds niche biology forums.
We create a research task and return a taskId instantly.
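The screenshot isn't reproduced here, but the pattern is straightforward. In this sketch the `ResearchClient` interface and `createTask` name are stand-ins (check the Exa docs for exa-js's actual research-task API); only the shape of the flow — kick off a task, return its id immediately — is from the post:

```typescript
// Stand-in for the slice of the exa-js client this route needs (names assumed).
interface ResearchClient {
  createTask(instructions: string): Promise<{ id: string }>;
}

// Route-handler pattern: start the research task, respond with its id right away.
async function startResearch(client: ResearchClient, theme: string) {
  const task = await client.createTask(
    `Find niche wikis, blogs, and forums about: ${theme}`
  );
  return { taskId: task.id }; // the client polls (or listens via SSE) for results
}
```

Returning instantly keeps the HTTP request short; the long-running research happens on Exa's side.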
The Problem: Standard LLMs have no context. They guess based on probability.
The Solution: Don't ask it to write immediately. Ask it to Research.
My app, Adventure Weaver, dispatches an agent to crawl wikis, blogs, and forums for "vibes" before writing a single word of plot.
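At pseudocode level, the whole workflow collapses to two awaited phases. The `research` and `generate` stubs below are placeholders for the real Exa and LLM calls; only the two-phase structure reflects the app:

```typescript
interface Source { url: string; excerpt: string; }

// Placeholder phase 1: in the real app, an Exa research task crawls the web.
async function research(theme: string): Promise<Source[]> {
  return [{ url: 'https://example.com/folklore', excerpt: `Local legends about ${theme}` }];
}

// Placeholder phase 2: in the real app, an LLM call constrained by a JSON schema.
async function generate(theme: string, context: string): Promise<string> {
  return `An adventure about ${theme}, grounded in: ${context}`;
}

// Research-then-Generate: never ask the model to write from a cold start.
async function weaveAdventure(theme: string): Promise<string> {
  const sources = await research(theme);                 // 1. gather "vibes"
  const context = sources.map((s) => s.excerpt).join('\n');
  return generate(theme, context);                       // 2. write, grounded in sources
}
```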
I got tired of AI generating the same "twisted trees" and "whispering winds" for D&D campaigns.
Generic hallucinations are boring.
So I built an agent that turns the entire internet into a procedurally generated library.
Here is the "Research-then-Generate" workflow 🧵👇
Stop guessing the output of async code.
If you found this breakdown helpful:
1. Follow me, Michael 👨‍💻🔥, for more deep dives into JS internals.
2. Check out the full visual guide here: https://dev.to/michaelsolati/visualizing-the-event-loop-a-guide-to-microtasks-macros-and-timers-2l22
⚡ TL;DR Cheat Sheet
* Synchronous: Runs first.
* Microtasks (Promises): Run immediately after the stack clears; the queue is drained exhaustively (microtasks queued during processing also run before any macrotask).
* Rendering: In the browser, happens after Microtasks, before the next Macrotask.
* Macrotasks (Timers): Run only when everything else is quiet.
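The "exhaustive" point is the sneaky one: microtasks queued *while* the microtask queue is draining still run before any macrotask. A quick sketch:

```typescript
const order: string[] = [];

setTimeout(() => order.push('macrotask'), 0);

Promise.resolve()
  .then(() => order.push('microtask 1'))
  .then(() => order.push('microtask 2')); // queued mid-drain, still beats the timer

setTimeout(() => {
  // By now everything has run: both microtasks first, then the 0ms timer.
  console.log(order); // ['microtask 1', 'microtask 2', 'macrotask']
}, 10);
```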
Here is the execution flow:
1. Sync Code: 'Start' and 'End' run immediately on the Call Stack.
2. Stack Empties: The Event Loop wakes up.
3. Microtask Checkpoint: "Any VIPs?" Yes, the Promise. Run it NOW.
4. Macrotask: "Okay, now we can do the Timer."
🧠 The Mental Model: The VIP Lane
JS has a single thread, but many queues.
1. Macrotask Queue: `setTimeout`, `setInterval`. (General)
2. Microtask Queue: `Promise.then`, `MutationObserver`. (VIP)
The Event Loop checks the VIP lane immediately after the current code finishes.
✅ The Solution
The actual output is:
1. Start
2. End
3. Promise
4. Timeout
Wait, why does the Promise beat the Timeout, even though the Timeout was declared first with 0 delay?
It comes down to Microtasks vs. Macrotasks.
🚨 The Trap
Intuition says: "Code runs top-to-bottom. The timeout is 0ms, so it's instant. The Promise is async too. Maybe they race?"
If you guessed:
Start → End → Timeout → Promise ❌
You're wrong!
I love digging into the "weird" parts of JavaScript.
Here is the classic "Predict the Output" game that trips up even Senior Developers during interviews.
What prints first? The Timeout (0ms) or the Promise?
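Here's the snippet in question, reconstructed as the standard interview version (an `order` array is added so the result is checkable; it's not part of the classic puzzle):

```typescript
const order: string[] = [];
const log = (msg: string) => { order.push(msg); console.log(msg); };

log('Start');

setTimeout(() => log('Timeout'), 0); // macrotask: waits its turn

Promise.resolve().then(() => log('Promise')); // microtask: the VIP lane

log('End');

// Once the event loop settles, order is: Start, End, Promise, Timeout
```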
🧵 Let's visualize the Event Loop. 👇
Read the full, in-depth blog post here: https://dev.to/michaelsolati/leetcode-vs-vibe-coding-the-reality-of-interviewing-in-2025-2582
TL;DR: The 2025 Reality
* Enterprise: LeetCode is alive (Anti-Cheat Mode).
* Startups: Vibe Coding is here (Speed Mode).
* The Risk: "Bring Your Own AI" creates economic inequality.
* The Fix: Be bilingual. Audit AI output with strong fundamentals.
A "Pay-to-Win" Barrier 💸
Startups expect you to interview with your own tools. Can you afford the $200 Claude Code tier? If the free-tier model hallucinates and you miss it, did you fail? We are asking candidates to pay for the privilege of getting hired.
Startup Speedruns 🏃
Startups are the opposite. They hand you the keys to Copilot and say, "Go."
The constraint isn't memory; it's Speed. They don't want a coder; they want an "AI Editor." But this speed comes with a hidden price tag...
Enterprise Paranoia 🏢
Big Tech is terrified of "AI Impostors." 81% of interviewers suspect cheating.
Their solution? "Proof of Work." They know AI can solve it. They want you to have the raw cognitive bandwidth to invert a binary tree without a robot whispering in your ear.
[Image: A man sits at a desk in a bright home office with his back to the camera, clutching his head in a gesture of frustration or stress. He is facing a large computer monitor displaying a split screen: the left side shows a video call with another man in glasses, while the right side displays lines of computer code. The desk is cluttered with crumpled paper balls, a coffee mug, and a cookie.]
I interviewed with Big Tech and Startups in 2025.
The result? A widening gap that is breaking the hiring process.
We are moving from "Meritocracy" to "Pay-to-Win."
Here is the reality of LeetCode vs. Vibe Coding (and why it matters). 🧵
Follow me for more honest takes on the engineering industry.
And read my full breakdown here: https://dev.to/michaelsolati/im-getting-serious-deja-vu-but-this-time-its-different-17f4
TL;DR
- The market isn't just saturated; it's compressed.
- Layoffs are funding GPU purchases ($170B+ shift).
- The "How" (coding) is commoditized.
- The "Why" (Engineering & Verification) is the gold standard.
The "Prompt-Jockey" is a myth. The real winner is the AI-Assisted Engineer.
The new skill isn't writing code. It's:
✅ Debugging AI hallucinations.
✅ Architecting prompt systems.
✅ Owning the outcome when the black box fails.
CEOs blame AI for layoffs to boost stock prices. But the real story is Capital Reallocation.
Amazon & Meta cut 35k+ jobs while pouring $170 billion into AI hardware. They are swapping your salary for H100 GPUs.
2️⃣ Pipeline Elimination: AI is eating the bottom of the ladder. ⚠️ 37% of employers say they'd rather "hire" AI than a recent grad.
The entry-level tasks used to train juniors are now automated. The ramp is gone.
The 2025 "AI Era"
Today, the filter has shifted. It's no longer about access; it's about a squeeze from two directions.
1️⃣ Vertical Compression: 660k+ layoffs (2022-24) forced seniors into mid-level roles, and mid-levels into junior roles.
The 2015 "Bootcamp Era"
Back then, the challenge was "Horizontal Saturation." Bootcamp grads flooded the market (138% growth in 2015 alone).
But the pie was growing. If you could build a MERN stack app, you could get hired. The filter was simple: "Can you code?"
The 2025 tech market feels just like the 2015 bootcamp boom. But if you look at the data, it's actually much worse.
We aren't just facing saturation. We are facing "Vertical Compression."
Here is the reality of the market that no one is talking about.
(Thread below 🧵👇)