
Andras Gerlits

@omniledger.io

I built the first async, consistent data platform: https://omniledger.io/. I build distributed systems @Citi. I also write about distributed systems: https://medium.com/@andrasgerlits

161
Followers
202
Following
422
Posts
28.10.2024
Joined

Latest posts by Andras Gerlits @omniledger.io

The good news is that you don't need to do much of it, but it has to make an appreciable difference to what you produce as a team.

I truly and really am sorry.

12.02.2026 05:49 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

LLMs can and will automate your job if all you do is routine work that everyone else is also doing. You need to find a way to make yourself useful in this new world, where merely showing up will no longer be enough.

12.02.2026 05:49 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Your advantage is that you understand cause and effect. It doesn't. You can look at a set of rules and conditions, spend some time contemplating them and reach conclusions about where they "collide". It can only do so in places where it has seen enough to relate it back to its training.

12.02.2026 05:49 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

What they're good at is reusing the steps around our concepts, i.e. our routines.

However I feel about LLMs, it's important to give them a fair shake, if for nothing else than self-preservation. We know writing code without giving it sufficient thought is a recipe for disaster. LLMs can only do that.

12.02.2026 05:49 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

In other words: fast thinking is our reward for having done the hard work of figuring things out in detail previously. We can train ourselves enough to turn some tools in our industry into routines, but LLMs can mostly catch up with us there if they have enough training material.

12.02.2026 05:49 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I read the tests to remind my later self why I considered this particular scenario a special case, and if they break, I have a way to slide back into those mental structures. But it doesn't become easy until it turns into routine, which it rarely does if I did my job well.

12.02.2026 05:49 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Once I'm finished with those, I usually write the conclusions down (mostly in code), after which I put some unit tests in place to make sure I got the formalisms right. Unit tests are my safeguards that turn hard-won results from my slow thinking into fast thinking, i.e. intuition.

12.02.2026 05:49 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
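A minimal sketch of that idea, with a hypothetical invariant and the unit test that pins it down (the function and its parameters are illustrative, not from any real project):

```python
# A hard-won conclusion formalised as code: the invariant itself.
def clamp_retry_delay(attempt: int, base_ms: int = 100, cap_ms: int = 5000) -> int:
    """Exponential backoff, capped - the result of the 'slow thinking'."""
    return min(base_ms * 2 ** attempt, cap_ms)

# The unit test that lets future-me trust the result intuitively.
def test_backoff_is_capped():
    assert clamp_retry_delay(0) == 100
    assert clamp_retry_delay(3) == 800
    assert clamp_retry_delay(10) == 5000  # never exceeds the cap

test_backoff_is_capped()
```

If the test ever breaks, the assertions are the breadcrumb trail back into the original reasoning.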

This is something deep in our biology, not something we can just "snap out of".

"Slow thinking" is the thing LLMs can't and (as far as we can tell) won't ever be able to do. My job makes me do (sometimes weeks-) long "slow thinking marathons". I don't find them enjoyable in the slightest.

12.02.2026 05:49 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I don't need much introspection to find that I do everything I can (both consciously and unconsciously) to avoid doing it. There was a famous experiment (reproduced a few times) where they showed that people preferred being electroshocked to having to do cognitively strenuous tasks.

12.02.2026 05:49 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

LLMs are me when I show up for work but are somewhere else mentally. This is your advantage.

Recently, I've been thinking about Kahneman's "fast and slow thinking" and how it relates to current AI. I often find myself asking how often I need to strain my "thinking muscles".

12.02.2026 05:49 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

LLMs are us (me certainly) at our worst. I'm an LLM when I become annoyed with a problem, refuse to engage with it in earnest and just want to get it over with. I'm an LLM when I give the first answer to a non-obvious question that pops into my mind.

12.02.2026 05:49 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

People say that AI-produced stuff is uninspired, generic and soulless, but this year's Grammys were a great illustration of how we don't need GenAI for this to happen: shallow consumerism is enough.

07.02.2026 05:12 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Is it possible to show that LLMs can't reason? It's easy to show that they can't reason the way we do, but hypothetically it might be possible to somehow couple their "semantics" coherently. Unless this is you, however, you really should shut up about LLM intelligence.

12.01.2026 09:20 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

How to mitigate latency in a deterministic system. This is an almost entirely novel concept which has very little literature supporting it.

29.12.2025 07:11 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Entire conversation
gemini.google.com/share/bfeb45...

29.12.2025 06:35 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Gemini does a great job at summarising our paper

29.12.2025 06:35 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

LLMs automated design patterns, that is, stochastic pattern-matching on signals without any consideration for semantics.

Neither of them ever really worked, but both look like they do at first glance, and since people don't like to think about semantics, both took root.

27.12.2025 04:52 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I'm reading @markburgessosl.bsky.social draft paper on attention in AI. Some 🀯:
- LLMs are slow because of the incredibly wide context when evaluating, so that's also a boundary problem
- Boundaries are the same as a rejection of a violation of an autonomy, so semantics are agentic

22.12.2025 06:03 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Why Clearing is a Distributed System Problem and Why That’s Bad News for Stablecoins That all financial transactions rely on trust is nothing new. Just how far we need to trust the other party however, makes all the…

The US can't do anything very radical with the threat of EU/Japan dumping its debt hanging over its head. The GENIUS Act can't be implemented because we can't fully automate clearing between two consistent datastores. The whole thing is just silly.
andrasgerlits.medium.com/why-clearing...

17.12.2025 04:49 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

If you want to issue stablecoins as defined by the GENIUS Act, you need to buy treasuries in equal amount. I wonder if its goal is to bring US debt back on-shore. It's amazing how the status quo is held up by the Two Generals' Problem and how that's mostly a misunderstanding.

17.12.2025 04:49 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Systems like these are called relative-time systems and are simply not subject to CAP's limitations.

11.12.2025 14:14 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Isolated clocks shouldn't be allowed to append to the sequences of nodes from which they are isolated, but should recover seamlessly once they become available again.

Ideally, event-order should be established with the speed of communication.

11.12.2025 14:14 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

The alternative is to define the clock around communication. Nowhere does strong consistency say that you need an immediately evaluated global clock; in fact, that's an oxymoron. What does time mean without its associated information anyway?

11.12.2025 14:14 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
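One standard way to "define the clock around communication" is a Lamport logical clock, where time only advances with events and messages. A minimal sketch of that textbook construction (not the author's system):

```python
class LamportClock:
    """A clock defined around communication: time advances only when
    events happen locally or messages are exchanged between nodes."""
    def __init__(self):
        self.t = 0

    def tick(self) -> int:
        # A local event advances local time by one.
        self.t += 1
        return self.t

    def send(self) -> int:
        # Stamp an outgoing message with the sender's time.
        return self.tick()

    def recv(self, msg_t: int) -> int:
        # On receipt, jump past the sender's time: the message itself
        # establishes the order between the two nodes' events.
        self.t = max(self.t, msg_t) + 1
        return self.t

a, b = LamportClock(), LamportClock()
m = a.send()   # a.t == 1
b.recv(m)      # b.t == 2: event order established by communication
```

Here "before" and "after" are only defined where information actually flowed, which is the relative-time intuition in the post above.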

It really doesn't matter whether the local clocks are atomic clocks, vector clocks or stable wormholes in space-time; the point is that the total order can't be progressed if a single such node becomes isolated.

This is what an absolute-time coordinate system is.

11.12.2025 14:14 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

So, even in isolation, a node is allowed to process a new event and, by choosing a time locally, implicitly append that event to the global sequence of events at a point of its (well, its physical clock's) choosing.

CAP only applies to systems which are defined this way.

11.12.2025 14:14 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
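A toy illustration of that absolute-time model (my own sketch, not from the thread): each node stamps events with its local physical clock, which implicitly claims a slot in one global, timestamp-ordered sequence.

```python
# Absolute-time model: every node stamps events with its own physical
# clock and thereby implicitly appends them to one global order.
def merged_order(*logs):
    """Merge per-node (timestamp, event) logs into the global sequence."""
    return [event for _, event in sorted(sum(logs, []))]

node_a = [(10, "a1"), (30, "a2")]   # node A, using its own clock
node_b = [(20, "b1")]               # node B, isolated during a partition

# A processed a2 at t=30 without hearing from B; only after the
# partition heals can anyone see that b1 slots in *before* a2.
assert merged_order(node_a, node_b) == ["a1", "b1", "a2"]
```

During the partition neither node can know the merged order its own timestamps commit it to, which is exactly the setup CAP's trade-off is stated for.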

Second: that there's only a single, globally shared clock. So, what does absolute time mean? It means that the total order of all the events required by the system (to fulfill strong consistency) can be appended to by each node autonomously. Like you would with atomic clocks.

11.12.2025 14:14 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

First: that we can lose "a node". One could argue that (regardless of Paxos, so never needing to have nodes with exclusive data) this is a reasonable thing to assume, as we'll always have a finite set of sync replicas for any of our computers, so let's go with it.

11.12.2025 14:14 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Define clocks as "the thing that orders change in a system". We have a multi-node system where all write-events are strongly ordered.

CAP says that if we lose a node in this setup, we can't know on any of the nodes what the "correct" state is. What are the assumptions?

11.12.2025 14:14 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

All clocks are wrong but some are useful.

🧡

11.12.2025 14:14 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Can someone explain to me why CAP was adopted into the canon based on a paper, but Paxos required Chubby to even be noticed?

10.12.2025 10:17 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0