yes! quick but simple responses just have a different feel than long and thought out ones, and I think there's design space/need for both
if you're on nixos: github.com/microvm-nix/...
I wanted to, but can't make it, I hope this will be recorded?
this is intriguing; curious if there's a light interface to be built around this, but what even is the right metaphor? these aren't really "filters" or "components", but there's something about the repeatability of the actions taken that feels like it could be abstracted, maybe..?
it's just the Gemini Flash model, it doesn't do anything that gemini.google.com wouldn't do, you can just paste the image there and see I guess
the point is mostly the "vibe" of the interaction, not the model
totally! it's just an LLM after all, there's no extra magic here; if you were to draw an ASCII diagram in a chat asking about this, you'd most probably get a very similar result
I was supposed to be a doctor, you see
yea, sure could!
thank you for the explanation! I'll think about this :)
after the next model response, the previous live examples are frozen in time; only the latest one remains interactive (you can draw on top of them, so if the older ones stayed interactive, the drawing/state would drift and get confusing) -- this is more of a design choice
dynamicland.org/2024/FAQ/#Wh...
no, this generates js canvas code
I like your use-case though! can you give me some examples of the kinds of diagrams you're thinking about?
overall the tradeoff of "quality" vs "speed" seems to make a lot of sense here, I'm just sketching something I'm interested in, and getting responses back in seconds
I also really like the vibe of hand-drawn human input and "technical" model responses
... or doing some math visually
... helping with home renovation plans ...
... explaining electronic circuit behaviors ...
prototyping co-drawing with Gemini Flash 3 at Google
in these demos "thinking" is disabled, which makes the model return tokens very quickly (all videos are realtime), and I find these rapid responses pretty good for the use-cases I'm experimenting with, like:
executing simple diagrams ...
this is super exciting Adam, looking forward to following the progress!
Q4 2025 Newsletter – Independent Consulting, and Interfacing with LLMs
szymonkaliski.com/newsletter/2...
---
I'm an independent consultant as of November, currently exploring at Google Creative Lab
reach out if you're interested in working together! hi@szymonkaliski.com
yes! but other things that were hard are still hard (data interop, "substrate" where all my apps/dynamic docs live, history tracking not for nerds, "blast radius" of making modifications and preserving old versions/data, etc!)
it's so weird to see this essay still pop up after seven years! @inkandswitch.com is still at it with the recent www.inkandswitch.com/malleable-so... and related work, maybe there's a more LLM-related piece brewing (dunno!) -- feels like the context both has, and hasn't, changed
thanks! I am familiar with that work, yes :)
arrows!
we should both see the same thing
is that presentation online by any chance?
one of these days
oh yea I'd be reluctant too, I only use it to rename papers I download
this is using the Claude Code SDK with some custom stuff to parse the PDFs etc; I'll polish it up and push it online!