Screenshot showing my credit score dropped this month because 1. I am not using my available credit, and 2. I'm using more of my available credit
God grant me the wisdom to understand the internal logic of the credit system
@maggieappleton.com
Design engineer playing with AI and hacky prototypes @githubnext.com. Adores digital gardening, end-user development, and embodied cognition. Makes visual essays about design, programming, and anthropology. London. maggieappleton.com
This looks exactly like a technical specification to me. Have you read the W3C specs?
The source code is far more than 2,000 lines long. So the spec is still an efficient, concise outline of the key architectural decisions behind the source code.
If you want to adapt software to fit your needs (run it in your preferred language, change the architectural decisions, etc.), you can adjust at a higher level rather than forking and editing source code.
Specs are a concise, directionally correct design reference with less complexity than the implementation code.
This is gathering chatter so I need to clarify: neither OpenAI nor I is suggesting everyone prompt their own version. Obviously wasteful. There's a canonical implementation!
The important line is "in your programming language of choice". Where lang could be any number of design prefs and variables.
I should have put more emphasis on the "in your programming language of choice" part. Where lang can be swapped out for many design prefs.
The canonical code is one specific implementation. The spec-as-prompt means people have a higher level ref to make customisations to before they implement.
Agreed! Spec writing becomes the core process to build better tools for, skill-build around. Good product and design taste has always been the essence of great software, but now we get to put more energy and effort there.
The canonical implementation gets patched. And then they update the spec with the patch too.
I should have emphasised the "implement in your programming language of choice" part more. Where "own lang" could be any variety of design preferences. So you only prompt bespoke versions when needed.
I would bet on this happening. Maybe models help develop programming languages that are much more efficient than human readable code. Or maybe we just use these existing langs? (Not my area of expertise)
Then I miscommunicated the general premise in my commentary. I very much meant to point to the highlighted part of the screenshot.
The "implement in programming language of your choice" part. Alongside a canonical implementation. Where programming language could be any number of design variables.
The spec is really good - detailed, comprehensive: github.com/openai/symph...
Implementations are now easy. Writing excellent specifications becomes the core skill.
A section of the OpenAI Symphony readme that says "tell your coding agent to build symphony in a programming language of your choice" with a link to a detailed spec
We have reached a moment where instead of releasing software you simply release the detailed spec for software and tell people to prompt their agent to build it themselves
From the README of OpenAI's new Symphony orchestrator: github.com/openai/symph...
Well, it's done for if the tools for OSS don't evolve.
GitHub is working on a ton of new features and controls that give maintainers ways to fight fire with fire. But it means contributing to OSS can no longer be a drive-by affair. Only high-trust, high-context people get let in to contribute. Trade-offs.
Photographer (family, wedding, events) is safe. Though I expect it to be a popular backup option, so maybe a competitive market.
Costuming is cool
Oh TIL! I did photography before too, but just small gigs throughout high school/uni to make extra cash. I figure it's pretty un-automateable. But also going to be one of the first markets to saturate since lots of people enjoy it
Gotta find more niche manual labour ideas
Interior design is more at risk, but family photography not at all.
bsky.app/profile/magg...
With interior design some parts can be AI collab, but it's still a physical job demanding an understanding of light, texture, space, and, most of all, meeting human needs for a home. What activities does this space enable and encourage, how does storage work, etc.
But family photography, not at all
I've seen levels.io's AI image products and I'm baffled by anyone paying for fake images of themselves, in places they've never been, making facial expressions they don't make, augmented to be weirdly sexual when that's not the vibe.
What is the point? To remember moments you never experienced?
How will an image generator capture my children's expressions at this particular moment in time? At this specific age? In their home, surrounded by the objects of their life?
Photography is about capturing real, specific moments in time; how people look and feel right now. AI can do none of that.
Whatβs everyoneβs physical labour backup career?
I'm thinking family photographer. Maybe interior designer?
I'm not good at either of those things… yet. But I reckon I could make it work if all white-collar jobs melt down into the GPUs.
Levelling up humanity by reading the room and understanding basic context
Truly a celebratory event. So far I've just marked it by eating a large bag of crisps on the sofa while the little one empties out all the drawers and cabinets within reach. Pretty good day.
You have my sympathy. I hope yours ends up a better sleeper than mine
First full, uninterrupted night of sleep in 10.5 months. The baby finally got with the programme. It's a brave new world.
More on GitHub Next's research and previous work here: githubnext.com
We'll demo a bunch of new projects in March
(And no, unfortunately/fortunately we don't have any power over or insight into GitHub's uptime or the speed of the PR page. Sorry. Not our work.)
We're a small R&D team exploring non-obvious futures: what does the post-PR, post-async-human-code-review, post-manual-git-management SDLC look like?
The whole GitHub Next team is coming to London in March! And we're hosting a meet-up for anyone interested in the next generation of software eng tools
Sign-up here β luma.com/v5eltkec?tk=...
March 3rd at 6pm
We'll show our recent work & research, but also have open spots for community demos
It initially feels weird but once you cross into constant voice mode you can't go back. So much faster and easier. And you can convey so much more detail about what you want, what your fuzzy idea is, how you want the model to approach the problem, etc.
Okay, sadly Hank Green deleted the original because internet people are crap
But the gist was "when I talk to software people about how it's dramatically changing everything for them, and what skills will be needed in the future, they say physical labour and something like a liberal arts degree"
I'm a software engineer / designer and AI agents are fantastic at writing code. I can be directing 3 or 4 agents at once, all implementing features in a few hours that would take me days++ to write by hand.