For some reason LinkedIn thinks I'm super into maritime content.
They tightly couple business logic to the database.
The future of infrastructure management is controller-driven. theyamlengineer.com/controller-d...
Stop Treating YAML Like a String theyamlengineer.com/stop-treatin...
Yes, it's a common ask (as well as going the opposite direction with exporting to TF). Historically, for GCP we've used Config Connector to do bulk imports of existing resources as well as exports to Terraform. cloud.google.com/config-conne...
I don't know if there is similar tooling for AWS/ACK?
For a bit more on the motivation and thinking behind Koreo: theyamlengineer.com
With Koreo, you can now viably implement your own internal developer platform in a way that is manageable, scalable, and, critically, actually testable.
Koreo enables platform teams to implement powerful workflows and resource orchestrations, automating everything from simple deployments to entire cloud environments.
A core part of Real Kinetic's business is helping organizations establish platform engineering as a practice, but the existing tooling available today is lacking. That's why we're excited to open source Koreo, the platform engineering toolkit for Kubernetes. koreo.dev
If you've ever built Kubernetes integrations, you know that "plural" might be the dumbest fucking thing in the entirety of Kubernetes.
Whispers: most companies don't run at Netflix scale and can very economically run their core business on serverless just fine. bsky.app/profile/theb...
All in all, what a wet fart of a product announcement. Fewer graphs, more use cases and *showing* the capabilities. Seems like they desperately need a Steve Jobs type who actually has a vision, because I'm not sure Altman does. This problem doesn't seem to be specific to OpenAI though.
But I'm also guessing $200/mo is the only way this is even economically viable until compute gets wildly cheaper, so I'm not sure about the long term prospects of this stuff.
Unless these AI companies are just content with trying to sell this stuff to engineers, idk how this will ever break into normal consumers until someone has an "iPhone moment" but for GPTs.
Is this what happens when you have a bunch of PhD researchers trying to build an economically viable business? Like wtf is this. At least show a side-by-side comparison telling me why this is so much better than the other versions, instead of an alphabet soup of model names.
Oh sweet for $200/mo I can get access to GPT-4o with o1-mini Advanced Voice which also includes o1 pro mode! And there are graphs that show how much gooder this is for me. openai.com/index/introd...
If you're working on something really new, I don't think you really know what your idea is until you've built a prototype.
There's a critical tangibility to seeing some part of the vision made real. Accept no substitutes, and be skeptical of big claims without concrete demos.
Prompt engineering but the prompt goes to an API instead of a text box.
Lambda is only 10 years old?? What even is time
1. It's super easy to quickly make a thing that does stuff OK (this is like 10% of the work)
2. 90% of the work is the "refinement"
3. I would be hesitant if it was the core of my business and not just a bells-and-whistles feature or productivity enhancement
Probably these are well-understood problems to someone who is not AI-ignorant like me, but my experience led me to a few conclusions...
It's like describing a problem to someone who is writing code to solve it, except you don't get to see the code, only the output of the program, which is a major PITA to debug. And worse, minor changes to the description are like having them start from scratch each time.
The other problem I saw was, again in very specific circumstances, I could not get the model to change a certain behavior no matter what I put in the prompt.
I'm curious how people are testing these things. For us, it's completely viable to have a battery of tests and then parse the output and check it because we're just generating YAML. But for anyone doing unstructured output, I'd imagine it would be difficult to test things reliably.
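The parse-and-check approach above can be sketched in a few lines. This is a hypothetical harness, not anyone's actual test suite: `generate_config` stubs in for the real model call, and the invariants checked are made up for illustration. The post's case emits YAML, where PyYAML's `yaml.safe_load` would play the role that `json.loads` plays here; JSON is used only to keep this sketch self-contained with the standard library.

```python
# Sketch: assert invariants on structured LLM output instead of exact strings.
import json

def generate_config(prompt: str) -> str:
    # Stub standing in for the one-shot model API call.
    return '{"replicas": 3, "image": "nginx:1.27"}'

def check_output(raw: str) -> dict:
    """Parse the model output and assert the properties we actually care about."""
    config = json.loads(raw)  # fails loudly on malformed output
    assert isinstance(config.get("replicas"), int), "replicas must be an integer"
    assert config["replicas"] > 0, "replicas must be positive"
    assert ":" in config.get("image", ""), "image must carry an explicit tag"
    return config

config = check_output(generate_config("deploy nginx with 3 replicas"))
print(config["replicas"])  # -> 3
```

Checking parsed structure rather than raw text keeps the tests stable across harmless variations in formatting, key order, or whitespace, which is exactly where unstructured-output testing gets hard.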
The biggest problem I see is idk how you handle model stability. Just switching from Gemini 1.5 Flash 001 to 002 resulted in completely different output in certain circumstances.
Which means the entire AI assistant we built is just a one-shot API call with no infrastructure whatsoever. HOWEVER our problem space is very neatly scoped which I think makes it a lot more tractable.