This repo could be good -> github.com/wzhudev/reve...
"fully operational mobile McDonald’s unit" is a phrase that makes me feel like maybe I've passed away and all the news I read is just the final neurons in my brain racing around making haphazard connections in the moments before darkness
The Design Museum in Kensington is great and there’s a Dishoom nearby. Dishoom also has a great breakfast menu + chai.
g.co/kgs/TbGLnHR
Nice! How? SVG and some JS?
“The question that most participants asked after the workshop was about the playground, and how they might continue this workflow.” Almost a year later you still need to crack open a code editor to really try this stuff out.
Here are my reflections on a workshop I ran for designers on AI interfaces -> joshuacrowley.com/study/omnivo... I agree though, there’s not a tonne of thinking/writing about the opportunities for designers. But also we’re only just now getting models that are fast enough, cheap enough and good enough.
content-addressable filesystems! What a notion. I’m trying to reflect on how I use Cursor, as I consider files/folders, and how they work in the context of, well, context!
You should do it; it’s a simple way to start marketing whatever you’re building. Maybe a growing audience can shadow a project, if you’re vibe coding your own. Get insights and ideas.
100% write more throwaway code. I think we don’t appreciate how many more concerns even front-end devs have besides just getting something to appear on the screen.
Hi @jesseyuen.bsky.social added you to this handy starter pack, hope that's okay -> go.bsky.app/GXihHDK
Looks good! Yeah you can get quite a clip going at the start; refactoring is a little slower. I found feature flagging really useful, because you can build out features so quickly it’s easier to toggle combos of features than to refactor with Cursor.
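A minimal sketch of the kind of flag toggling I mean — the flag names and app features here are made up for illustration:

```typescript
// Hypothetical feature flags for a vibe-coded app: each generated
// feature lives behind a flag, so combos can be toggled on and off
// instead of refactoring the code that renders them.
type Flags = Record<string, boolean>;

const flags: Flags = {
  voiceInput: true,   // stable, ships with the build
  newListView: false, // generated yesterday, still rough
  cloudSync: false,   // waiting on auth work
};

// Resolve which features are active for this session.
function activeFeatures(flags: Flags): string[] {
  return Object.entries(flags)
    .filter(([, on]) => on)
    .map(([name]) => name);
}

console.log(activeFeatures(flags)); // -> ["voiceInput"]
```

Flipping a couple of booleans lets you demo different feature combos without touching the generated code underneath.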
Good podcast on the productivity paradox. Maybe AI is the last mile piece for the computer revolution, as opposed to a new exponential.
Yeah I guess I’m not aware of any library that handles end-to-end encryption for local/offline stored data AND remote data in any off-the-shelf package. Could be handy for the local-first community.
Yes I am! You can use the app locally without any auth. Or you can sync to a DO and I used Clerk to auth that. This repo by @codewithbeto.dev should show a pattern for that -> github.com/betomoedano/...
Nice, do you have a link for the speaker component?
So in my app you can store your recipes or your noughts-and-crosses games, all in a single data store thanks to @tinybase.bsky.social. Users can keep it in localStorage or move it to the cloud.
The UI component for each list is the source of truth for how the data works; it’s a contract between the user and the model. The workflow is: prompt for a component, which dictates the schema; the user verifies it meets their need; then generate a system prompt based on the component to codify how to CRUD that data.
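Roughly, the workflow looks like this — all names here are hypothetical, but the idea is that the component's field declarations double as the schema, and the system prompt is derived from them:

```typescript
// Sketch of the "component dictates the schema" workflow: the
// (user-verified) UI component declares its fields, and a system
// prompt is generated from that declaration so the model CRUDs the
// same data the component renders.
type Field = { name: string; type: "string" | "number" | "date" };

// Fields as declared by a hypothetical pantry-list component.
const pantryListFields: Field[] = [
  { name: "item", type: "string" },
  { name: "quantity", type: "number" },
  { name: "expires", type: "date" },
];

// Codify the contract as a system prompt for the assistant.
function systemPromptFor(listName: string, fields: Field[]): string {
  const cols = fields.map((f) => `${f.name} (${f.type})`).join(", ");
  return (
    `You manage the "${listName}" list. ` +
    `Each row has: ${cols}. ` +
    `Only create, read, update or delete rows using these fields.`
  );
}

console.log(systemPromptFor("Pantry", pantryListFields));
```

Because the prompt is generated from the component, the UI and the model can't silently drift apart on what a row is.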
I’ve been experimenting with this for tinytalkingtodos.com. You have a one-size-fits-all schema: a row could be about a pantry item or a birthdate. Each row belongs to a list that contains a system prompt for an AI to help unpack it, and a custom UI to house it. It’s a lossy approach, a broad church.
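To sketch what I mean (field and component names are invented for the example, not the real app's):

```typescript
// One-size-fits-all row: the same shape can hold a pantry item or a
// birthday. Each list carries the system prompt and UI hint used to
// interpret its rows.
type Row = { text: string; number?: number; date?: string; done?: boolean };

type List = {
  name: string;
  systemPrompt: string; // tells the AI how to unpack rows in this list
  ui: string;           // which custom component houses this list
  rows: Row[];
};

const pantry: List = {
  name: "Pantry",
  systemPrompt: "Rows are pantry items; number is quantity, date is expiry.",
  ui: "PantryCard",
  rows: [{ text: "Oat milk", number: 2, date: "2025-01-10" }],
};

const birthdays: List = {
  name: "Birthdays",
  systemPrompt: "Rows are people; date is their birthday.",
  ui: "BirthdayCard",
  rows: [{ text: "Sam", date: "1990-06-02" }],
};

// The lossy part: wildly different domains share one row shape, and
// only the list's prompt + UI give the fields their meaning.
console.log(pantry.rows[0].text, birthdays.rows[0].text);
```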
There’s just so much detail to work through when implementing these experiences, it’s fun but also exhausting because they just pop up!
You could discard the audio tokens each turn and run inference off just the text transcription. But you’d probably want to be constantly re-transcribing all the audio chunks to get the most accurate representation of the conversation; otherwise it’s very lossy.
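The loop I'm imagining looks something like this — `transcribe()` is a stand-in for a real STT call (e.g. Whisper), stubbed here because the shape of the loop is the point:

```typescript
// Sketch of the "discard audio tokens, keep text" approach: retain
// the raw audio log, but each turn re-transcribe ALL of it so later
// chunks can correct earlier guesses, then infer over text only.
type AudioChunk = { id: number; data: string };

// Stub for a speech-to-text call (a real system would call Whisper
// or similar here).
const transcribe = (chunks: AudioChunk[]): string =>
  chunks.map((c) => `[chunk ${c.id} text]`).join(" ");

const audioLog: AudioChunk[] = [];
let transcript = "";

function onUserTurn(chunk: AudioChunk): string {
  audioLog.push(chunk);
  transcript = transcribe(audioLog); // full re-transcription, not append
  return transcript;                 // this text is what the LLM sees
}

onUserTurn({ id: 1, data: "…" });
console.log(onUserTurn({ id: 2, data: "…" }));
// -> "[chunk 1 text] [chunk 2 text]"
```

The cost is obvious: transcription work grows with conversation length, which is the trade-off against the lossiness of appending chunk-by-chunk.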
How do you transcribe those tokens for the UX of chat? It’s likely a user needs a transcript to understand what’s been understood. Well, you have to use another model/call to transcribe the audio input, like Whisper. So you have a new problem: does the transcript/audio diverge?
Multimodal models like Flash 2.0 and GPT-4o can use audio tokens to natively reason over audio input. Neither can yet provide text and audio tokens that come from the same inference pass, and presumably won’t ever. So that presents a flaw in these models that no one seems to be discussing.
Would you consider doing the backend yourself? Using an IDE like Cursor + Claude Sonnet 3.5, services like Clerk, Stripe, even Airtable can simplify the requirements for the backend.
You could build an AI party planner agent, for example. Many steps like research, booking, RSVPs, catering, all executed on your behalf, with input. However, for many users the effort of planning is kind of the point; it’s an expression of an act of service for a loved one.
I don’t know what agentic AI really means for users. I can see how an immediate barrier for deeper LLM experiences is access to user data. So will agents just be for integrating your service into a user’s auth/cloud/filesystem? A crutch for interoperability?
TTT encourages you and your partner to sit down and work through a plan together. The Operator listens in, and makes notes adding todos to your lists as you discuss it in realtime. It cuts through the argy-bargy.
Collaboratively plan all your last minute Christmas todos with your significant other and tinytalkingtodos.com. It's a local-first voice assistant with a focus on privacy and families.