I think that if the intro to your video is “Thing – you know, [complete description of thing]”, as if you don’t know how the phrase “you know” works, but really you’re just too lazy to write a carefully pitched intro, you should have to post a no-excuses apology, sincerely reflect, and make amends.
Struggling with matplotlib’s fucking awful way of doing absolutely everything is what gives meaning to life, though. Otherwise we might as well be brains in jars.
Please make all necessary preparations to read my wonderful partner’s wonderful book.
Sending good vibes to the adult whose feelings I hurt as a teen in 1998 for suggesting that Deepak Chopra might in some way lack credibility or integrity.
(But also: hire me!)
(Mostly off social media right now due to The Horrors, which I do not optimally digest in this format. If I follow you, I would welcome an e-mail or other asynchronous communication from you. Otherwise, expect me when you see me.)
So in the actual implementation, Potato expects (1) a pan band, (2) the particular spectral sensitivities, and (3) the particular spatial artifacts of the WV-2/3 sensors. But on a conceptual level, almost everything in it should translate to … pretty much any visible-light sensor.
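If it helps to picture that, here's a toy sketch, not Potato's real API, of the kind of per-sensor spec I mean. Every name and number below is invented for illustration.

```python
# Toy sketch, not Potato's real API: all names here are invented to illustrate
# the three things the model is conditioned on for a given sensor.
from dataclasses import dataclass

import numpy as np


@dataclass
class SensorSpec:
    name: str
    pan_band: str                  # (1) which band supplies the high-res detail
    band_names: tuple              # the multispectral bands being sharpened
    spectral_response: np.ndarray  # (2) per-band sensitivity curves, (bands, wavelengths)
    psf: np.ndarray                # (3) a kernel standing in for spatial artifacts


# Placeholder values only; real curves and kernels would come from calibration data.
wv3_like = SensorSpec(
    name="WV-3-like",
    pan_band="PAN",
    band_names=("coastal", "blue", "green", "yellow", "red", "red_edge"),
    spectral_response=np.zeros((6, 61)),
    psf=np.ones((5, 5)) / 25.0,
)
```

Swapping in a different sensor would, conceptually, just mean filling in a different spec like this.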
checking my spam folder
this is obviously ai
You gotta hear this song, it'll change your life I swear. (plays the Shins cover of "Wonderful Christmastime")
(Some of it is in the source data. You can see Potato drawing some non-physical “shadows” around the boats, for example; it’s definitely not filtering as much as it should be here. But that can only be part of what’s going on.)
Yeah. I have some hunches about what’s going on, but I’ve tried not to spend time even informally reverse-engineering it. Knowing exactly what’s going on here wouldn’t help me do anything I want to do.
This one is less subtle. Look at the paddleboards’ colors and the ringing artifacts (dark halos) around the paddleboards, boats, etc. Also, those faint diagonals in the water in the standard image? They don’t diffract. They’re artifacts, not ripples.
A side by side image of part of a marina in two versions. The one on the left is slightly grainy and lacks color detail. The one on the right is imperfect but noticeably better.
Stand-up paddleboards and boats, Marina Del Rey, 2025-01-16 (CID 103001010C12B000). L: standard, R: Potato.
One of the big aims is to make images that look like photos, pictures, not just visualizations of data that happens to be visible light. (Nuance on this is in the essay in docs/personal.md.) So putting aside technical details, what I’m looking for here is a sense of seeing a real moment.
A side-by-side comparison of a beach, a breakwater, and colorful boats tied to the breakwater.
Boats at a breakwater, Manila. A subtler one, maybe – zoom in? 2025-11-13, latitude 14.5818, longitude 120.9576, CID 10400100770EF000. Commercial off-the-shelf pansharpening on the left, Potato on the right.
A screenshot of a CSV showing satellite image catalog IDs with various numbers and notes.
Secret Potato lore (it’s in the docs, but not the interesting part of the docs): I hand-rated more than 1,400 satellite images on several quality axes to filter the training data. I put a lot of city miles on QGIS. This was a terrible idea, but I chose to be guided by the sunk-cost fallacy.
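For a sense of how ratings like that get used, filtering is just thresholding each quality axis. The column names and path here are made up, not the real CSV schema.

```python
# Illustrative only: the column names below are invented, not the real CSV schema.
import pandas as pd

ratings = pd.read_csv("hand_ratings.csv")          # hypothetical path
quality_axes = ["sharpness", "haze", "artifacts"]  # assumed rating axes

# Keep only scenes that clear a minimum score on every axis.
keep = ratings[(ratings[quality_axes] >= 3).all(axis=1)]
keep["catalog_id"].to_csv("training_scenes.txt", index=False, header=False)
```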
Critics are raving about Potato!
(This is me gently reminding any satellite data execs who might be reading that if you want people to increase the value of your data, at some point you have to let them see your data. “They’ll pay us to improve our product” is not the 🌌🧠 strategy you seem to think. Release large sample datasets.)
Short version: As shipped, it’s narrowly adapted to the particular artifacts of the WV-2/3 sensors. But I expect it to be adaptable to other sensors with less work than starting from scratch would take. (If I’d had a good pool of Planet training data, I would have tried!)
Happy birthday!
Happy holidays, Bluesky! I got you a megathrust earthquake, soil liquefaction, spine-tingling papers about the way our networks confound knowledge, and a PDF in a pear tree. It's my wrap on a year of trying to make sense of how we make sense of what's happening to us.
www.wrecka.ge/landslide-a-...
Sure, by the Pan band.
A screenshot showing the violet roof in Google’s imagery contrasting with an on-the-ground photo of a blue roof, more like Potato’s rendering.
Look at the yellow bases of the lamp posts, the edge between road median and paved surface, and the details in the rails. Look at the vegetation: which looks more like real plants? But the one that gets me is the roof color. Google’s own user-submitted data shows who’s got it right.
A side-by-side comparison of two images of an industrial part of Durban.
Here’s a highway by a switch yard at the edge of the Port of Durban, South Africa. (I prefer mundane test images over landmarks.) CID 10400100770EF000; 2022-04-02. Latitude -29.8937, longitude 31.0134. Google Earth on the left, Potato on the right.
Overall color balance and contrast are adjustable, so they’re not part of this comparison; I’m lightly grading Potato to look more like the standard data. Compression artifacts are also out of scope. Look as much as possible at the details, and especially at hue.
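If you want to do a comparable light grade yourself, per-channel histogram matching toward the standard rendering is one generic way to take overall balance and contrast out of the comparison. This is a sketch of that generic technique, not necessarily the grade used in these posts.

```python
# Generic approach, not necessarily the grade used here: match each channel's
# histogram to the standard rendering so only detail and hue differences remain.
import numpy as np
from skimage.exposure import match_histograms

def grade_toward(standard: np.ndarray, potato: np.ndarray) -> np.ndarray:
    return match_histograms(potato, standard, channel_axis=-1)
```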
Here’s a thread of comparisons as I have time. The source data will all be © Vantor, and CC BY-NC where it comes from their Open Data Program. My goal here is not to deride standard pansharpening methods, only to show what’s different – what this is all about.
Oh jeez, it’s good to hear that from someone whose work I find so interesting.