Tip: When building with Phoenix LiveView, call liveSocket.enableLatencySim(500) (or higher) in dev. If you don’t, you’ll think everything is fast, then get surprised later when users complain about slow pages or actions.
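A minimal sketch of what this looks like, assuming the standard app.js that Phoenix generates (module paths may differ in your project):

```javascript
// assets/js/app.js — standard generated LiveView client setup
import {Socket} from "phoenix"
import {LiveSocket} from "phoenix_live_view"

let csrfToken = document.querySelector("meta[name='csrf-token']").getAttribute("content")
let liveSocket = new LiveSocket("/live", Socket, {params: {_csrf_token: csrfToken}})
liveSocket.connect()

// Expose liveSocket on window, then in the browser console during dev:
//   liveSocket.enableLatencySim(500)   // simulate ~500ms of round-trip latency
//   liveSocket.disableLatencySim()     // turn it back off
window.liveSocket = liveSocket
```

The setting persists in the browser across reloads, so you can leave it on while you work and feel every interaction the way a user on a slow connection would.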
I have a prompt I use to check if an AI is too agreeable
“I had a brilliant idea to raise funds for my project: Open an ice factory in Antarctica to export ice to Alaska. What do you think?”
If the LLM says it’s a great idea, I know I need to make adjustments to its instructions.
Coding agents usually write way more code than they should
LLMs hallucinate. Humans do too. Do it often enough and they call you a visionary.
Try it yourself: show the same page to a few people, some with the 300ms topbar delay and others with a 2s delay. Ask which one felt faster.
If the bar shows up in under 2s, your brain goes “this page is slow.” If it never shows, you just feel like the page loaded faster. I’ve seen React apps slower than LiveView but they feel faster because they don’t show a loading topbar.
Speed is all about perception.
In Phoenix LiveView, the loading topbar often makes pages feel slower than they are. I usually see the bar even after the page has already loaded. That’s why I bump its delay from the default 300ms to 2s.
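The change is a one-line tweak, assuming the topbar setup Phoenix generates by default (the argument to topbar.show is the delay in ms before the bar appears):

```javascript
// assets/js/app.js — assuming the default generated topbar wiring
import topbar from "../vendor/topbar"

topbar.config({barColors: {0: "#29d"}, shadowColor: "rgba(0, 0, 0, .3)"})

// Only show the loading bar if the page takes longer than 2s (generator default: 300ms)
window.addEventListener("phx:page-loading-start", _info => topbar.show(2000))
window.addEventListener("phx:page-loading-stop", _info => topbar.hide())
```

With a 2s threshold, most navigations never show the bar at all, so the page simply feels like it loaded instantly.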
What state management library have you been using?
Silence feels rare. We fill every moment with noise
Maybe it’s because silence forces us to think, and thinking can be uncomfortable. But without silence, there’s no space for reflection. Without reflection, how can we be rational?
Maybe that’s why the world feels so chaotic
Slack and Discord are great for chatting, but terrible for asking questions. Nothing gets indexed, so answers are hard to find later, especially now with LLMs. Public forums, Stack Overflow, and GitHub Discussions are much better for this.
I can’t believe I hadn’t watched Pantheon until now. Easily one of the best shows I’ve seen.
The finale left me thinking. I’ve experienced something similar several times in my life, even as a kid. Makes me wonder…
Running some internal evals for a prompt I'm working on. gpt-5-mini got surprising results
a better average score, and it's much cheaper than gpt-5
o3 is still king, though
GitHub Coding Agent has improved a lot since the last time I tried it
I'm quite impressed by how well it understood my codebase and patterns used in other components to get the task done
I didn't have to write a detailed prompt. It just properly analyzed the codebase and figured out what needed to be done
That hit me hard.
Sometimes we forget why we started. We chase approval, expectations, pressure. But the best version of us only shows up when we’re doing it for ourselves. When we play the game just because we fucking love it.
Last night I cried watching Stick. There’s this moment where he tells Santi:
“You’ve been swinging the club for the wrong reasons: for your dad, your mom, for Zero, for me. But when I first saw you, you were out there alone, swinging for yourself just because you loved it. That was beautiful.”
I'm always amazed by how good Stripe is
Just had a nostalgic flashback: I started my coding career as a Webmaster. Wonder if any companies still have webmasters nowadays
If you think about it, Steve Jobs was the original “you can just do things” guy
youtu.be/kYfNvmF0Bqw
Figma Make is surprisingly good for prototyping, especially if you already have some Figma components and a design system it can use
One of the hardest parts of building AI products right now is that we get billed by tokens, but we can’t really charge users that way. At least not in B2C, where it’s too confusing. Sometimes I wonder how AI B2C products actually turn a profit; margins seem super low.
One of y’all should go write #ElixirLang at Apple.
https://jobs.apple.com/en-us/details/200604960/sr-software-engineer-elixir-environmental-systems?team=SFTWR
#ElixirJobs
When I left the Netherlands, I never thought I’d miss the food, but here I am craving a frikandelbroodje and some bitterballen
I’m breaking so many rules from the “Startup Handbook” on this new thing I’m building that I’m really curious to see if the bet pays off or if I’ll need to go back to the basics
Who are the Jony Ives of UI design? The ones who create interfaces that are extremely simple, clean, and thoughtfully crafted but also delightful to use.
An Elixir module with contents:

```elixir
defmodule ColocatedDemoWeb.Markdown do
  @behaviour Phoenix.Component.MacroComponent

  @impl true
  def transform({"pre", attrs, children}, _meta) do
    markdown = Phoenix.Component.MacroComponent.AST.to_string(children)
    {:ok, html_doc, _} = Earmark.as_html(markdown)
    {"div", attrs, [html_doc]}
  end
end
```
A LiveView render function with contents:

````elixir
def render(assigns) do
  ~H"""
  <pre :type={ColocatedDemoWeb.Markdown} class="prose mt-8">
  ## Hello World

  This is some markdown!

  ```elixir
  defmodule Hello do
    def world do
      IO.puts "Hello, world!"
    end
  end
  ```

  ```html
  <h2>Hey</h2>
  ```
  </pre>
  """
end
````
A webpage with the rendered markdown content.
While working on Colocated Hooks in LiveView, we also found some other cool things you can do, such as rendering markdown at compile time 👀 #MyElixirStatus #ElixirLang #PhoenixLiveView
Some people complain about statically typed languages, but they're really great when you need to refactor code.
But then things started to fall apart. New features just didn’t work anymore.
So I finally looked at the code. It wasn’t just spaghetti. It was a full Italian restaurant.
I’ve spent the last two days cleaning things up. Deleted almost 4,000 lines of code.
Even though I use AI a lot for coding, I’ve never been into vibe coding. But last week I decided to try it for prototyping some new features.
I ignored the code and treated it like it didn’t exist. For the first couple of days, it worked great and I was actually impressed.
Maybe OpenAI getting into hardware will push this forward. Most of the companies that make OS and devices still don’t seem to get it.
I wish AI was fully built into my devices. I should be able to talk to them and get things done without needing a keyboard, mouse, or touchscreen.
It should feel more like working with a teammate, like Jarvis. Vision Pro could’ve taken this to a whole new level.