The good news is that you don't need to do much of it, but it has to make an appreciable difference to what you produce as a team.
I truly and really am sorry.
LLMs can and will automate your job if all you do is routine work that everyone else is also doing. You need to find a way to make yourself useful in this new world, where showing up will no longer be enough.
Your advantage is that you understand cause and effect. It doesn't. You can look at a set of rules and conditions, spend some time contemplating them and reach conclusions about where they "collide". It can only do so in places where it has seen enough to relate it back to its training.
What they're good at is reusing the steps around our concepts, i.e.: our routines.
However I feel about LLMs, it's important to give them a fair shake, if for nothing else then self-preservation. We know that writing code without giving it sufficient thought is a recipe for disaster. LLMs can only do that.
In other words: fast-thinking is our reward for having done the hard work of figuring things out in detail previously. We can train ourselves enough to turn some tools in our industry into routines, but LLMs can mostly catch up with us there if they have enough training material.
I read the tests to remind my later self why I considered this particular scenario some special case, and if they break, I have a way to slide back into these mental structures. But it doesn't become easy until it turns into routine, which it rarely does if I did my job well.
Once I'm finished with those, I usually write the conclusions down -mostly in code- after which I put some unit-tests in place to make sure that I got the formalisms right. Unit-tests are my safeguards that turn hard-won results from my slow-thinking into fast-thinking, i.e.: intuitive.
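As a toy illustration of that kind of safeguard (entirely my own hypothetical example, not from the thread): a test that pins down a non-obvious edge case, so my later self doesn't have to re-derive the slow-thinking behind it.

```python
import calendar

def billing_period_days(year: int, month: int) -> int:
    """Days in a billing month; February is the classic special case."""
    return calendar.monthrange(year, month)[1]

# The tests encode the hard-won conclusion: 2000 WAS a leap year
# (divisible by 400), even though most century years are not.
def test_february_2000_has_29_days():
    assert billing_period_days(2000, 2) == 29

def test_february_1900_has_28_days():
    assert billing_period_days(1900, 2) == 28
```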
This is something deep in our biology, not something we can just "snap out of".
"Slow thinking" is the thing LLMs can't and (as far as we can tell) won't ever be able to do. My job makes me do (sometimes weeks-) long "slow thinking marathons". I don't find them enjoyable in the slightest.
I don't need much introspection to find that I do everything I can (both consciously and unconsciously) to avoid doing it. There was a famous experiment (reproduced a few times) where they showed that people preferred being electroshocked to having to do cognitively strenuous tasks.
LLMs are me when I show up for work but are somewhere else mentally. This is your advantage.
Recently, I've been thinking about Kahneman's "fast and slow thinking" and how this relates to current AI. I often find myself asking how often I need to strain my "thinking muscles".
LLMs are us (me certainly) at our worst. I'm an LLM when I become annoyed with a problem, refuse to engage with it in earnest and just want to get it over with. I'm an LLM when I give the first answer to a non-obvious question that pops into my mind.
People say that AI-produced stuff is uninspired, generic and soulless, but this year's Grammys were a great illustration of how we don't need GenAI for this to happen: shallow consumerism is enough.
Is it possible to show that LLMs can't reason? It's easy to show that they can't reason the way we do, but hypothetically it might be possible to somehow couple their "semantics" coherently. Unless this is you, however, you really should shut up about LLM intelligence.
How to mitigate latency in a deterministic system. This is an almost entirely novel concept which has very little literature supporting it.
Gemini does a great job at summarising our paper
LLMs automated design patterns, that is: stochastic pattern-matching on signals without any consideration for semantics.
Neither of them ever really worked, but both look like they do at first glance, and since people don't like to think about semantics, they both took root.
I'm reading @markburgessosl.bsky.social's draft paper on attention in AI. Some 🤯:
- LLMs are slow because of the incredibly wide context when evaluating, so that's also a boundary problem
- Boundaries are the same as a rejection of a violation of an autonomy, so semantics are agentic
The US can't do anything very radical with the threat of EU/Japan dumping its debt hanging over its head. The GENIUS Act can't be implemented because we can't fully automate clearing between two consistent datastores. The whole thing is just silly.
andrasgerlits.medium.com/why-clearing...
If you want to issue stablecoins as defined by the GENIUS Act, you need to buy treasuries in equal amount. I wonder if its goal is to bring US debt back on-shore. It's amazing how the status quo is held up by the Two Generals' Problem and how that's mostly a misunderstanding.
Systems like these are called relative-time systems and are simply not subject to CAP's limitations.
Isolated clocks shouldn't be allowed to append to the sequences of nodes from which they are isolated, but should recover seamlessly once they become available again.
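One way to sketch that behaviour (my own illustration with hypothetical names, not the thread author's design): an isolated node keeps accepting local events into its own buffer, never touching the sequences it can't reach, and merges the buffer into the shared sequence once communication resumes.

```python
class IsolatedSafeNode:
    """Node that buffers events while isolated and recovers on reconnect."""
    def __init__(self):
        self.pending = []      # events accepted while isolated
        self.connected = True

    def record(self, event, shared_log):
        if self.connected:
            shared_log.append(event)    # normal path: append directly
        else:
            self.pending.append(event)  # isolated: never touch others' sequence

    def reconnect(self, shared_log):
        # seamless recovery: flush buffered events in local order
        self.connected = True
        shared_log.extend(self.pending)
        self.pending.clear()

log = []
n = IsolatedSafeNode()
n.record("e1", log)   # appended directly
n.connected = False
n.record("e2", log)   # buffered, shared log untouched
n.reconnect(log)      # "e2" now joins the shared sequence
```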
Ideally, event-order should be established with the speed of communication.
The alternative is to define the clock around communication. Nowhere does strong consistency say that you need an immediately evaluated global clock; in fact, that's an oxymoron: what does time mean without its associated information anyway?
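A minimal sketch of "defining the clock around communication" (my illustration, not from the thread) is a Lamport-style logical clock: the counter advances only on local events and message exchange, so order between nodes is established at exactly the speed of communication.

```python
class Node:
    """A node whose clock is defined by communication, not wall time."""
    def __init__(self, name):
        self.name = name
        self.clock = 0  # Lamport counter: no meaning without events/messages

    def local_event(self):
        self.clock += 1
        return self.clock

    def send(self):
        self.clock += 1
        return self.clock  # the timestamp travels with the message

    def receive(self, msg_ts):
        # order is established at the moment of communication
        self.clock = max(self.clock, msg_ts) + 1
        return self.clock

a, b = Node("a"), Node("b")
a.local_event()   # a's clock: 1
ts = a.send()     # a's clock: 2
b.receive(ts)     # b's clock: max(0, 2) + 1 = 3
```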
It really doesn't matter whether the local clocks are atomic clocks, vector-clocks or stable wormholes in space-time, the point is that the total order can't be progressed in case a single such node becomes isolated.
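A toy illustration of why the total order stalls (my own sketch, hypothetical names): to emit the global order you can only go up to the lowest timestamp every node is known to have passed, so a single isolated node freezes the whole sequence at its last-heard watermark.

```python
def merge_total_order(node_streams, heard_up_to):
    """Emit globally ordered (timestamp, event) pairs, but only up to
    the minimum timestamp every node is known to have passed.
    One isolated node (a stale watermark) stalls the whole order."""
    safe = min(heard_up_to.values())  # can't order past the quietest node
    events = [e for stream in node_streams for e in stream if e[0] <= safe]
    return sorted(events)

streams = [[(1, "a1"), (4, "a2")], [(2, "b1"), (9, "b2")]]
# Node "a" is isolated: we last heard from it at t=4; node "b" at t=9.
order = merge_total_order(streams, {"a": 4, "b": 9})
# Only events up to t=4 can be safely ordered; (9, "b2") must wait.
```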
This is what an absolute-time coordinate system is.
So, even in isolation, a node is allowed to process a new event and, by choosing a time locally, implicitly append that event to the global sequence of events at a point of its (well, its physical clock's) choosing.
CAP only applies to systems which are defined this way.
Second: that there's only a single, globally shared clock. So, what does absolute time mean? It means that the total order of all the events required by the system (to fulfill strong consistency) can be appended to by each node autonomously. Like you would with atomic clocks.
First, that we can lose "a node". One could argue that (regardless of Paxos, so never needing to have nodes with exclusive data) this is a reasonable thing to assume, as we'll always have a finite set of sync replicas for any of our computers, so let's go with this.
Define clocks as "the thing that orders change in a system". We have a multi-node system, where all write-events are strongly ordered.
CAP says that if we lose a node in this setup, we can't know on any of the nodes what the "correct" state is. What are the assumptions?
All clocks are wrong but some are useful.
🧵
Can someone explain to me why CAP was adopted into the canon based on a paper, but Paxos required Chubby to even be noticed?