
Vitalik Buterin

@vitalik.ca

8,470
Followers
214
Following
127
Posts
05.05.2023
Joined

Latest posts by Vitalik Buterin @vitalik.ca

I think it's healthy for us in the Ethereum world to have a bolder and more open mindset to many things, particularly on the application layer and on how we see ourselves in the world.

We should not compromise on core properties: censorship resistance, open source, privacy, security (CROPS). We should not have "open mindedness" of the type that leaves people with no confidence of what security properties the L1 will still have one year from now. We should not ask ourselves questions like "do we really need light clients to be able to trustlessly verify correctness of the chain?".

But especially on the layer of applications and Ethereum's interface to the world, we should be more willing to radically rethink various concepts and step outside our comfort zone.

This includes issues of technological direction, eg. "what if AI basically means that wallets as browser extensions and mobile extensions are dead within a year?" One example last year was the shift to thinking about privacy as a first-class consideration, something we value equally to the other types of security. This implies a radically different Ethereum application stack, because the entire stack so far has not been built around privacy. Great, let's build a radically different Ethereum application stack! An example this year is the growing work on the networking side of privacy, both inside the EF and outside.

It includes application-layer issues, eg. "what if the rest of defi is basically just universal futures markets on top of a good decentralized oracle, with users self-organizing on top of that?", and "what if the ideal decentralized oracle is just a SNARK over M-of-N small LLMs over zk-TLSes of some major news sites?"

(BTW this is interrelated with the AI issue: one consequence of AI is that it moves "applications" away from being discrete categories of behavior with discrete UIs, and more toward being a continuous space, so "build fewer apps and rely on users to self-organize around them" should inevitably expand as a pattern)

One example this year is rethinking from zero the role of L2s, and what kind of L2s are actually most synergistic and additive to Ethereum.

It also includes culture. This is a big part of "the whole milady thing" for myself, @AyaMiyagotchi and others. Yes, it's a silly meme. Yes, I find the political takes of some milady partisans cringe and sometimes outright bootlickerish (though other milady partisans are quite the opposite). But the core underlying subtext, the message behind the message, is: rip off the suit and tie. If you have your suit and tie on, be willing to grab the nearest wine glass and spill it all over your suit and tie, so you have no choice but to rip it off and reclaim your body's full flexibility and freedom. Actually imagine yourself doing this the next time you get invited to a richpeopleslop formal gala dinner. Take the preconception that you are "respectable", write it down on a piece of paper, crumple it up and burn it. The psychological baptism of doing this leads to the intellectual baptism of unlocking greater creativity and expanding overton windows.

For too long, our algorithm in Ethereum has been: we have this existing ecosystem, what's the logical next step to make it one step better? Now, our algorithm should be: we have this L1 that is amazing and will become more amazing, and we have a growing array of tools, both those built within our ecosystem and outside it, so what are the most valuable things to build, knowing what we know now?

If YOU had to write the section of the 2014 Ethereum whitepaper that talked about applications, and take a first-principles perspective on what makes sense in defi, decentralized social, identity, and elsewhere, what would you write? At least take the step of marking all path-dependence concerns down to zero: pretend for a brief moment that the Ethereum chain today has exactly zero usage and you're the one suggesting or building the first apps, and see what comes out. Do this even if you're the one building today's existing apps. This is how Ethereum can grow back stronger.

https://firefly.social/post/ff-6ad4c6cbcabd439a85951c6bed29dbbc?s=bsky

05.03.2026 20:58 👍 13 🔁 2 💬 2 📌 1
Pretty obvious that the next iteration of wallets will heavily involve AI.

I would not trust an LLM with multi-million transactions or funds; I expect the optimal workflow in high-value situations is "AI proposes a plan, local light client simulates it, you see the action and the simulated outcome and manually confirm it".

This all needs to be done conservatively with lots of emphasis on security, but it could be very liberating: removing dapp UIs from the picture completely solves a large number of attack vectors (for both theft and privacy).
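The propose-simulate-confirm workflow described above can be sketched as a small guard function. This is my own illustration, not code from the post; every name here (the callbacks, the return convention) is invented for clarity:

```python
# Hypothetical sketch of the high-value wallet workflow: AI proposes,
# a local light client simulates, and a human confirms before anything
# is signed. All function names are invented for illustration.

def guarded_send(intent, ai_propose, simulate_locally, ask_user, sign_and_broadcast):
    tx = ai_propose(intent)            # AI proposes a plan (a transaction)
    outcome = simulate_locally(tx)     # local light client simulates it
    if ask_user(tx, outcome):          # show the action + simulated outcome
        return sign_and_broadcast(tx)  # only after manual confirmation
    return None                        # user declined: nothing is signed
```

The key property is that `sign_and_broadcast` is unreachable unless the user has seen both the proposed action and its simulated outcome.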

https://firefly.social/post/ff-25fd5ff179754146b71452b0966ca9f5?s=bsky

05.03.2026 18:08 👍 15 🔁 1 💬 0 📌 1
.@yq_acc translated one of my recent posts from English to "English with better syntax highlighting". I encourage reading it, it's actually much more readable than my original post :D

It's been fascinating to watch programming-language-style syntax structure and highlighting break out into written language. I feel like we've seen "traditional print media style prose" fall away over the past few years and get replaced by a much heavier emphasis on bullet points, diagrams, etc. These ideas are fundamentally superior to linear written text, and the fact that programming languages, a medium without pre-existing cultural expectations, adopted them right from the beginning is evidence of this.

Speech has to be linear, because our ears can't hear many things in parallel, but for a long time we've been hampered by the idea that our representation of written words has to simply be a direct linear representation of the speech. What if we go beyond that? What if we make the *structure* of an essay, either low-level grammar or higher-level ideas (which claims support which other claims, what ideas are placed in contrast to which other ideas, the temporal ordering of some process that is being described...), much more visible so that the reader's mind can subconsciously pick it up?

In principle it can be very effective: if I make a statement X supported by A, B and C, and the reader immediately feels the most curiosity about B (or they already understand A and C, or...), then they can just skip straight to B.

I hope that over the next few years we can see a lot more innovation in writing style. I'd even say break open the taboo: fictional work should also be open to these kinds of things (prose <-> graphic novel is a spectrum, not a binary). I also hope we can figure out standardized markdown extensions to make all these things nice and easy.

I know of various textual diagramming languages, but it would be good to have a really good one that gets the same adoption and legitimacy that markdown has.

https://x.com/yq_acc/status/2029591388900528304

https://firefly.social/post/ff-ef55135f61d94808b0015b35393fec66?s=bsky

05.03.2026 18:02 👍 6 🔁 0 💬 0 📌 0

This was a good discussion, a recording should be out soon!

https://x.com/ethereumhouseSF/status/2029417401490784523

05.03.2026 17:52 👍 10 🔁 0 💬 1 📌 0
Mantic Monday: Groundhog Day Plus: Anthropic, Iran, and midterm voting

https://www.astralcodexten.com/p/mantic-monday-groundhog-day

@slatestarcodex is making a great case for prediction markets being useful as an intellectual tool to help us understand the world and the possible near futures…

https://firefly.social/post/ff-9f2289dd68a548999904bc23aec134e7?s=bsky

05.03.2026 17:21 👍 10 🔁 0 💬 0 📌 1
Over the past year, many people I talk to have expressed worry about two topics:

* Various aspects of the way the world is going: government control and surveillance, wars, corporate power and surveillance, tech enshittification / corposlop, social media becoming a memetic warzone, AI and how it interplays with all of the above...
* The brute reality that Ethereum seems to be absent from meaningfully improving the lives of people subject to these things, even on the dimensions we deeply care about (eg. freedom, privacy, security of digital life, community self-organization)

It is easy to bond over the first, to commiserate over the fact that beauty and good in the world seem to be receding and darkness advancing, and that uncaring powerful people in high places are making this happen. But ultimately, it is easy to acknowledge problems; the hard thing is actually shining a light forward, coming up with a concrete plan that makes the situation better.

The second has been weighing heavily on my mind, and on the minds of many of our brightest and most idealistic Ethereans. I personally never felt any upset or fear when political memecoins went on Solana, or various zero-sum gambling applications went on whatever 250-millisecond-block chain struck their fancy. But it *does* weigh on me that, through all of the various low-grade online memetic wars, international overreaches of corporate and government power, and other issues of the last few years, Ethereum has been playing a very limited role in making people's lives better.

What *are* the liberating technologies? Starlink is the most obvious one. Locally-running open-weights LLMs are another. Signal is a third. Community Notes is a fourth, tackling the problem from a different angle.

One response is to say "stop dreaming big, we need to hunker down and accept that finance is our lane and laser-focus on that". But this is ultimately hollow. Financial freedom and security are critical. But it seems obvious that, while adding a perfectly free and open and sovereign and debasement-proof financial system would fix some things, it would leave the bulk of our deep worries about the world unaddressed. It's okay for individuals to laser-focus on finance, but we need to be part of some greater whole that has things to say about the other problems too.

At the same time, Ethereum cannot fix the world. Ethereum is the "wrong-shaped tool" for that: beyond a certain point, "fixing the world" implies a form of power projection that is more like a centralized political entity than like a decentralized technology community.

So what can we do? I think that we in Ethereum should conceptualize ourselves as being part of an ecosystem building "sanctuary technologies": free open-source technologies that let people live, work, talk to each other, manage risk and build wealth, and collaborate on shared goals, in a way that optimizes for robustness to outside pressures.

The goal is not to remake the world in Ethereum's image, where all finance is disintermediated, all governance happens through DAOs, and everyone gets a blockchain-based UBI delivered straight to their social-recovery wallet. The goal is the opposite: it's de-totalization. It's to reduce the stakes of the war in heaven by preventing the winner from having total victory (ie. total control over other human beings), and preventing the loser from suffering total defeat. To create digital islands of stability in a chaotic era. To enable interdependence that cannot be weaponized.

Ethereum's role is to create "digital space" where different entities can cooperate and interact. Communications channels enable interaction, but communication channels are not "space": they do not let you create single unique objects that canonically represent some social arrangement that changes over time. Money is one important example. Multisigs that can change their members, showing persistence exceeding that of any one person or one public key, are another. Various market and governance structures are a third. There are more.

I think now is the time to double down, with greater clarity. Do not try to be Apple or Google, seeing crypto as a tech sector that enables efficiency or shininess. Instead, build our part of the sanctuary tech ecosystem - the "shared digital space with no owner" that enables both open finance and much more. More actively build toward a full-stack ecosystem: both upward to the wallet and application layer (incl AI as interface) and downward to the OS, hardware, even physical/bio security levels.

Ultimately, tech is worthless without users. But look for users, both individual and institutional, for whom sanctuary tech is exactly the thing they need. Optimize payments, defi, decentralized social, and other applications precisely for those users, and those goals, which centralized tech will not serve. We have many allies, including many outside of "crypto". It's time we work together with an open mind and move forward.

https://firefly.social/post/ff-59f413c3ccbf4bad902b63c856dfc50e?s=bsky

03.03.2026 19:21 👍 17 🔁 2 💬 2 📌 0
Finally, the block building pipeline. In Glamsterdam, Ethereum is getting ePBS, which lets proposers outsource to a free permissionless market of block builders. This ensures that block builder centralization does not creep into staking centralization, but it leaves the question: what do we do about block builder centralization? And what are the _other_ problems in the block building pipeline that need to be addressed, and how? This has both in-protocol and extra-protocol components.

## FOCIL

FOCIL is the first step into in-protocol multi-participant block building. FOCIL lets 16 randomly-selected attesters each choose a few transactions, which *must* be included somewhere in the block (the block gets rejected otherwise). This means that even if 100% of block building is taken over by one hostile actor, they cannot prevent transactions from being included, because the FOCILers will push them in.

## "Big FOCIL"

This is more speculative, but has been discussed as a possible next step. The idea is to make the FOCILs bigger, so they can include all of the transactions in the block. We avoid duplication by having the i'th FOCIL'er by default only include (i) txs whose sender address's first hex char is i, and (ii) txs that were around but not included in the previous slot. So at the cost of one slot of delay, only censored txs risk duplication. Taking this to its logical conclusion, the builder's role could become reduced to ONLY including "MEV-relevant" transactions (eg. DEX arbitrage), and computing the state transition.

## Encrypted mempools

Encrypted mempools are one solution being explored to solve "toxic MEV": attacks such as sandwiching and frontrunning, which are exploitative against users. If a transaction is encrypted until it's included, no one gets the opportunity to "wrap" it in a hostile way. The technical challenge is: how to guarantee validity in a mempool-friendly and inclusion-friendly way that is efficient, and what technique to use to guarantee that the transaction will actually get decrypted once the block is made (and not before).

## The transaction ingress layer

One thing often ignored in discussions of MEV, privacy, and other issues is the network layer: what happens in between a user sending out a transaction, and that transaction making it into a block? There are many risks if a hostile actor sees a tx "in the clear" in flight:

* If it's a defi trade or otherwise MEV-relevant, they can sandwich it
* In many applications, they can prepend some other action which invalidates it, not stealing money, but "griefing" you, causing you to waste time and gas fees
* If you are sending a sensitive tx through a privacy protocol, even if it's all private onchain: if you send it through an RPC, the RPC can see what you did; if you send it through the public mempool, any analytics agency that runs many nodes will see what you did

There has recently been increasing work on network-layer anonymization for transactions: exploring using Tor for routing transactions, ideas around building a custom ethereum-focused mixnet, non-mixnet designs that are more latency-minimized (but bandwidth-heavier, which is ok for transactions as they are tiny) like Flashnet, etc. This is an open design space; I expect the kohaku initiative @ncsgy will be interested in integrating pluggable support for such protocols, like it is for onchain privacy protocols.

There is also room for doing (benign, pro-user) things to transactions before including them onchain; this is very relevant for defi. Basically, we want ideal order-matching as a passive feature of the network layer, without dependence on servers. Of course, enabling good uses of this without enabling sandwiching involves cryptography or other security; there are some important challenges there.

## Long-term distributed block building

There is a dream that we can make Ethereum truly like BitTorrent: able to process far more transactions than any single server ever needs to coalesce locally. The challenge with this vision is that Ethereum has (and indeed a core value proposition is) synchronous shared state, so any tx could in principle depend on any other tx. This centralizes block building. "Big FOCIL" handles this partially, and it could be done extra-protocol too, but you still need one central actor to put everything in order and execute it.

We could come up with designs that address this. One idea is to do the same thing that we want to do for state: acknowledge that >95% of Ethereum's activity doesn't really _need_ full globalness (though the 5% that does is often high-value), and create new categories of txs that are less global, and so friendly to fully distributed building, and make them much cheaper, while leaving the current tx types in place but (relatively) more expensive. This is also an open and exciting long-term future design space.
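The "Big FOCIL" deduplication rule described above is simple enough to sketch directly. This is my own toy illustration, not a spec; the dict-based tx shape is an assumption:

```python
# Toy sketch of the "Big FOCIL" deduplication rule: FOCIL'er i includes,
# by default, only txs whose sender address's first hex character is i.
# (The second rule, re-including txs missed in the previous slot, is omitted.)

def default_fociler_for(sender: str) -> int:
    """Which of the 16 FOCIL'ers includes this tx by default.

    `sender` is a 0x-prefixed hex address, e.g. "0xa1b2...".
    """
    return int(sender[2].lower(), 16)

def partition_mempool(txs):
    """Split the mempool across the 16 FOCIL'ers with no duplication."""
    buckets = {i: [] for i in range(16)}
    for tx in txs:
        buckets[default_fociler_for(tx["sender"])].append(tx)
    return buckets
```

Since the 16 buckets partition the address space, every tx is included by exactly one FOCIL'er in the happy case, and duplication only arises for txs that were censored in a previous slot.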

https://firefly.social/post/ff-de08efd879cc40b38be88cc8030cf4bd?s=bsky

02.03.2026 17:33 👍 5 🔁 0 💬 1 📌 1
Now, execution layer changes. I've already talked about account abstraction, multidimensional gas, BALs, and ZK-EVMs. I've also talked here about a short-term EVM upgrade that I think will be super-valuable: a vectorized math precompile (basically, do 32-bit or potentially 64-bit operations on lists of numbers at the same time; in principle this could accelerate many hashes, STARK validation, FHE, lattice-based quantum-resistant signatures, and more by 8-64x); think "the GPU for the EVM". https://firefly.social/post/x/2027405623189803453

Today I'll focus on two big things: state tree changes, and VM changes. State tree changes are in this roadmap. VM changes (ie. EVM -> RISC-V or something better) are longer-term and are still more non-consensus, but I have high conviction that it will become "the obvious thing to do" once state tree changes and the long-term state roadmap (see https://ethresear.ch/t/hyper-scaling-state-by-creating-new-forms-of-state/24052 ) are finished, so I'll make my case for it here.

What these two have in common is:

* They are the big bottlenecks that we have to address if we want efficient proving (tree + VM are like >80%)
* They're basically mandatory for various client-side proving use cases
* They are "deep" changes that many shrink away from, thinking that it is more "pragmatic" to be incrementalist

I'll make the case for both.

# Binary trees

The state tree change (worked on by @gballet and many others) is https://eips.ethereum.org/EIPS/eip-7864, switching from the current hexary keccak MPT to a binary tree based on a more efficient hash function. This has the following benefits:

* 4x shorter Merkle branches (because binary is 32*log(n) and hexary is 512*log(n)/4), which makes client-side branch verification more viable. This makes Helios, PIR and more 4x cheaper by data bandwidth
* Proving efficiency. 3-4x comes from shorter Merkle branches. On top of that, the hash function change: either blake3 [perhaps 3x vs keccak] or a Poseidon variant [100x, but more security work to be done]
* Client-side proving: if you want ZK applications that compose with the ethereum state, instead of making their own tree like today, then the ethereum state tree needs to be prover-friendly
* Cheaper access for adjacent slots: the binary tree design groups storage slots into "pages" (eg. 64-256 slots, so 2-8 kB). This allows storage to get the same efficiency benefits as code in terms of loading and editing lots of it at a time, both in raw execution and in the prover. The block header and the first ~1-4 kB of code and storage live in the same page. Many dapps today already load a lot of data from the first few storage slots, so this could save them >10k gas per tx
* Reduced variance in access depth (loads from big contracts vs small contracts)
* Binary trees are simpler
* Opportunity to add any metadata bits we end up needing for state expiry

Zooming out a bit, binary trees are an "omnibus" that allows us to take all of our learnings from the past ten years about what makes a good state tree, and actually apply them.

# VM changes

See also: https://ethereum-magicians.org/t/long-term-l1-execution-layer-proposal-replace-the-evm-with-risc-v/23617

One reason why the protocol gets uglier over time with more special cases is that people have a certain latent fear of "using the EVM". If a wallet feature, privacy protocol, or whatever else can be done without introducing this "big scary EVM thing", there's a noticeable sigh of relief. To me, this is very sad. Ethereum's whole point is its generality, and if the EVM is not good enough to actually meet the needs of that generality, then we should tackle the problem head-on, and make a better VM. This means:

* More efficient than EVM in raw execution, to the point where most precompiles become unnecessary
* More prover-efficient than EVM (today, provers are written in RISC-V, hence my proposal to just make the new VM be RISC-V)
* Client-side-prover friendly. You should be able to, client-side, make ZK-proofs about eg. what happens if your account gets called with a certain piece of data
* Maximum simplicity. A RISC-V interpreter is only a couple hundred lines of code; it's what a blockchain VM "should feel like"

This is still more speculative and non-consensus. Ethereum would certainly be *fine* if all we do is EVM + GPU. But a better VM can make Ethereum beautiful and great.

A possible deployment roadmap is:

1. NewVM (eg. RISC-V) only for precompiles: 80% of today's precompiles, plus many new ones, become blobs of NewVM code
2. Users get the ability to deploy NewVM contracts
3. EVM is retired and turns into a smart contract written in NewVM

EVM users experience full backwards compatibility except gas cost changes (which will be overshadowed by the next few years of scaling work). And we get a much more prover-efficient, simpler and cleaner protocol.
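The "4x shorter Merkle branches" arithmetic from the binary trees section can be checked directly: a binary branch carries one 32-byte sibling hash per level over log2(n) levels, while a hexary branch carries up to 16 x 32 = 512 bytes per node over a tree that is 4x shallower. A quick sketch of that calculation (my own illustration of the formulas quoted in the post):

```python
import math

def branch_bytes_binary(n: int) -> float:
    # one 32-byte sibling hash per level, log2(n) levels
    return 32 * math.log2(n)

def branch_bytes_hexary(n: int) -> float:
    # each proof node exposes up to 16 * 32 = 512 bytes,
    # but the tree has only log16(n) = log2(n)/4 levels
    return 512 * (math.log2(n) / 4)

# For n = 2**30 (~1 billion leaves):
#   binary branch: 32 * 30  =  960 bytes
#   hexary branch: 512 * 7.5 = 3840 bytes
# The ratio is exactly 4, independent of n.
```

The n-independence of the ratio is why the post can state "4x" flatly: 512/4 = 128 bytes per binary-equivalent level versus 32.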

https://firefly.social/post/ff-88f4b90674bc498a802f9960e6050dfa?s=bsky

01.03.2026 17:20 👍 11 🔁 0 💬 0 📌 1
This is quite an impressive experiment: vibe-coding the entire 2030 roadmap within weeks. Obviously such a thing built in two weeks without even having the EIPs has massive caveats: almost certainly lots of critical bugs, and probably in some cases "stub" versions of a thing where the AI did not even try making the full version. But six months ago, even this was far outside the realm of possibility, and what matters is where the trend is going.

AI is massively accelerating coding (yesterday, I tried agentic-coding an equivalent of my blog software, and finished within an hour, and that was using gpt-oss:20b running on my laptop (!!!!); kimi-2.5 would have probably just one-shotted it). But probably the right way to use it is to take half the gains from AI in speed, and half the gains in security: generate more test cases, formally verify everything, make more multi-implementations of things.

A collaborator of the @leanethereum effort managed to AI-code a machine-verifiable proof of one of the most complex theorems that STARKs rely on for security. A core tenet of @leanethereum is to formally verify everything, and AI is greatly accelerating our ability to do that. Aside from formal verification, simply being able to generate a much larger body of test cases is also important.

Do not assume that you'll be able to put in a single prompt and get a highly-secure version out anytime soon; there WILL be lots of wrestling with bugs and inconsistencies between implementations. But even that wrestling can happen 5x faster and 10x more thoroughly. People should be open to the possibility (not certainty! possibility) that the Ethereum roadmap will finish much faster than people expect, at a much higher standard of security than people expect.

On the security side, I personally am excited about the possibility that bug-free code, long considered an idealistic delusion, will finally become first possible and then a basic expectation. If we care about trustlessness, this is a necessary piece of the puzzle. Total security is impossible, because ultimately total security means exact correspondence between lines of code and the contents of your mind, which is many terabytes (see https://firefly.social/post/x/2025653045414273438 ). But there are many specific cases where specific security claims can be made and verified, that cut out >99% of the negative consequences that might come from the code being broken.

https://firefly.social/post/ff-492f8489baf246e6a66ce9c3b0d8a1e5?s=bsky

28.02.2026 16:20 👍 15 🔁 2 💬 2 📌 1

Now, account abstraction.

We have been talking about account abstraction ever since early 2016, see the original EIP-86: https://github.com/ethereum/EIPs/issues/86

Now, we finally have EIP-8141 (…

https://firefly.social/post/ff-a91536c9a4464eb19888c3dc6b9ddd9c?s=bsky

28.02.2026 15:53 👍 6 🔁 0 💬 1 📌 0
Now, scaling. There are two buckets here: short-term and long-term.

Short term scaling I've written about elsewhere. Basically:

* Block level access lists (coming in Glamsterdam) allow blocks to be verified in parallel
* ePBS (coming in Glamsterdam) has many features, one of which is that it becomes safe to use a large fraction of each slot (instead of just a few hundred milliseconds) to verify a block
* Gas repricings ensure that gas costs of operations are aligned with the actual time it takes to execute them (plus other costs they impose). We're also taking early forays into multidimensional gas, which ensures that different resources are capped differently. Both allow us to take larger fractions of a slot to verify blocks, without fear of exceptional cases.

There is a multi-stage roadmap for multidimensional gas. First, in Glamsterdam, we separate out "state creation" costs from "execution and calldata" costs. Today, an SSTORE that changes a slot from nonzero -> nonzero costs 5000 gas, and an SSTORE that changes zero -> nonzero costs 20000. One of the Glamsterdam repricings greatly increases that extra amount (eg. to 60000); our goal in doing this plus gas limit increases is to scale execution capacity much more than we scale state size capacity, for reasons I've written about before ( https://ethresear.ch/t/hyper-scaling-state-by-creating-new-forms-of-state/24052 ). So in Glamsterdam, that SSTORE will charge 5000 "regular" gas and (eg.) 55000 "state creation" gas. State creation gas will NOT count toward the ~16 million tx gas cap, so creating large contracts (larger than today) will be possible.

One challenge is: how does this work in the EVM? The EVM opcodes (GAS, CALL...) all assume one dimension. Here is our approach. We maintain two invariants:

* If you make a call with X gas, that call will have X gas that's usable for "regular" OR "state creation" OR other future dimensions
* If the GAS opcode tells you that you have Y gas, and you then make a call with X gas, you still have at least Y-X gas, usable for any function, _after_ the call, to do any post-operations

What we do is create N+1 "dimensions" of gas, where by default N=1 (state creation), and the extra dimension we call "reservoir". EVM execution by default consumes the "specialized" dimensions if it can, and otherwise it consumes from the reservoir. So eg. if you have (100000 state creation gas, 100000 reservoir), then if you use SSTORE to create new state three times, your remaining gas goes (100000, 100000) -> (45000, 95000) -> (0, 80000) -> (0, 20000). GAS returns the reservoir. CALL passes along the specified gas amount from the reservoir, plus _all_ non-reservoir gas.

Later, we switch to multi-dimensional *pricing*, where different dimensions can have different floating gas prices. This gives us long-term economic sustainability and optimality (see https://vitalik.eth.limo/general/2024/05/09/multidim.html ). The reservoir mechanism solves the sub-call problem at the end of that article.

Now, for long-term scaling, there are two parts: ZK-EVM, and blobs.

For blobs, the plan is to continue to iterate on PeerDAS, and get it to an eventual end-state where it can ideally handle ~8 MB/sec of data. Enough for Ethereum's needs, not attempting to be some kind of global data layer. Today, blobs are for L2s. In the future, the plan is for Ethereum block data to directly go into blobs. This is necessary to enable someone to validate a hyperscaled Ethereum chain without personally downloading and re-executing it: ZK-SNARKs remove the need to re-execute, and PeerDAS on blobs lets you verify availability without personally downloading.

For ZK-EVM, the goal is to step up our "comfort" relying on it in stages:

* Clients that let you participate as an attester with ZK-EVMs will exist in 2026. They will not be safe enough to allow the network to run on them, but eg. 5% of the network relying on them will be ok. (If the ZK-EVM breaks, you *will not* be slashed; you'll just have a risk of building on an invalid block and losing revenue)
* In 2027, we'll start recommending for a larger minority of the network to run on ZK-EVMs, and at the same time full focus will be on formally verifying them, maximizing their security, etc. Even 20% of the network running ZK-EVMs will let us greatly increase the gas limit, because it allows gas limits to greatly increase while having a cheap path for solo stakers, who are under 20% anyway.
* When ready, we move to 3-of-5 mandatory proving. For a block to be valid, it would need to contain 3 of 5 types of proofs from different proof systems. By this point, we would expect that all nodes (except nodes that need to do indexing) will rely on ZK-EVM proofs.
* Keep improving the ZK-EVM, and make it as robust, formally verified, etc. as possible. This will also start to involve any VM change efforts (eg. RISC-V)
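The reservoir mechanism's worked example ((100000, 100000) -> (45000, 95000) -> (0, 80000) -> (0, 20000)) can be reproduced with a few lines. This is my own toy sketch of the accounting rule described in the post, not consensus code; the cost constants are the post's illustrative numbers:

```python
# Toy sketch of "reservoir" multidimensional gas accounting:
# specialized dimensions are drained first, and any shortfall
# (plus costs with no specialized dimension, like "regular" gas)
# spills into the reservoir.

def charge_sstores(state_creation: int, reservoir: int, count: int,
                   regular_cost: int = 5000, creation_cost: int = 55000):
    """Apply `count` new-slot SSTOREs to a (state_creation, reservoir) pair."""
    for _ in range(count):
        # regular gas has no specialized dimension: comes from the reservoir
        reservoir -= regular_cost
        # state creation gas: drain the specialized dimension first,
        # then spill the remainder into the reservoir
        from_dim = min(state_creation, creation_cost)
        state_creation -= from_dim
        reservoir -= creation_cost - from_dim
    return state_creation, reservoir
```

Note how the second SSTORE in the worked example straddles the boundary: 45000 comes from the state-creation dimension and the remaining 10000 (plus the 5000 regular cost) from the reservoir, which is exactly what the spill-over rule produces.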

Now, scaling.

There are two buckets here: short-term and long-term.

Short-term scaling I've written about elsewhere. Basically:

* Block level access lists (coming in Glamsterdam) allow blocks to be verified in parallel…

https://firefly.social/post/ff-0d44f5d8c8474740aaad5b781ce897af?s=bsky

27.02.2026 15:20 👍 8 🔁 0 💬 2 📌 2
Now, the quantum resistance roadmap.

Today, four things in Ethereum are quantum-vulnerable:

* consensus-layer BLS signatures
* data availability (KZG commitments + proofs)
* EOA signatures (ECDSA)
* application-layer ZK proofs (KZG or Groth16)

We can tackle these step by step:

## Consensus-layer signatures

Lean consensus includes fully replacing BLS signatures with hash-based signatures (some variant of Winternitz), and using STARKs for aggregation. Before lean finality, we stand a good chance of getting the Lean available chain. This also involves hash-based signatures, but there are far fewer signatures (eg. 256-1024 per slot), so we do not need STARKs for aggregation.

One important thing upstream of this is choosing the hash function. This may be "Ethereum's last hash function", so it's important to choose wisely. Conventional hashes are too slow, and the most aggressive forms of Poseidon have taken hits in recent security analysis. Likely options are:

* Poseidon2 plus extra rounds, potentially with non-arithmetic layers (eg. Monolith) mixed in
* Poseidon1 (the older version of Poseidon, not vulnerable to any of the recent attacks on Poseidon2, but 2x slower)
* BLAKE3 or similar (take the most efficient conventional hash we know)

## Data availability

Today, we rely pretty heavily on KZG for erasure coding. We could move to STARKs, but this has two problems:

1. If we want to do 2D DAS, our current setup relies on the "linearity" property of KZG commitments; with STARKs we don't have that. However, our current thinking is that, given our scale targets, it should be sufficient to just max out 1D DAS (ie. PeerDAS). Ethereum is taking a more conservative posture; it's not trying to be a high-scale data layer for the world.
2. We need proofs that erasure-coded blobs are correctly constructed. KZG does this "for free". STARKs can substitute, but a STARK is ... bigger than a blob.
So you need recursive STARKs (there are also alternative techniques, with their own tradeoffs). This is okay, but the logistics get harder if you want to support distributed blob selection. Summary: it's manageable, but there's a lot of engineering work to do.

## EOA signatures

Here, the answer is clear: we add native AA (see https://eips.ethereum.org/EIPS/eip-8141 ), so that we get first-class accounts that can use any signature algorithm. However, to make this work, we also need quantum-resistant signature algorithms to actually be viable. ECDSA signature verification costs 3000 gas. Quantum-resistant signatures are ... much, much larger and heavier to verify. We know of quantum-resistant hash-based signatures in the ~200k gas range to verify. We also know of lattice-based quantum-resistant signatures; today, these are extremely inefficient to verify. However, there is work on vectorized math precompiles that let you perform the operations (+, *, %, dot product, also NTT / butterfly permutations) at the core of lattice math, and also of STARKs. This could greatly reduce the gas cost of lattice-based signatures to a similar range, and potentially go even lower. The long-term fix is protocol-layer recursive signature and proof aggregation, which could reduce these gas overheads to near-zero.

## Proofs

Today, a ZK-SNARK costs ~300-500k gas. A quantum-resistant STARK is more like 10m gas. The latter is unacceptable for privacy protocols, L2s, and other users of proofs. The solution, again, is protocol-layer recursive signature and proof aggregation. So let's talk about what this is.

In EIP-8141, transactions can include a "validation frame", during which signature verifications and similar operations are supposed to happen. Validation frames cannot access the outside world: they can only look at their calldata and return a value, and nothing else can look at their calldata.
This is designed so that it's possible to replace any validation frame (and its calldata) with a STARK that verifies it (potentially a single STARK for all the validation frames in a block). This way, a block could "contain" a thousand validation frames, each of which contains either a 3 kB signature or even a 256 kB proof, but that 3-256 MB (and the computation needed to verify it) would never come onchain. Instead, it would all get replaced by a proof verifying that the computation is correct.

Potentially, this proving does not even need to be done by the block builder. Instead, I envision it happening at the mempool layer: every 500ms, each node could pass along the new valid transactions it has seen, along with a proof verifying that they are all valid (including having validation frames that match their stated effects). The overhead is static: only one proof per 500ms. Here's a post where I talk about this: https://ethresear.ch/t/recursive-stark-based-bandwidth-efficient-mempool/23838
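The Winternitz-style hash-based signatures mentioned above can be sketched as a toy one-time scheme. This parameterization (w=16, SHA-256) is illustrative only — not a production scheme, and a WOTS keypair must sign at most one message:

```python
import hashlib, os

# Toy Winternitz one-time signature (WOTS). Illustrative only.

W = 16                                   # chunk values 0..15 (4 bits each)
N_MSG = 64                               # 32-byte digest -> 64 chunks
N_CSUM = 3                               # checksum <= 64*15 = 960 < 16^3
N_CHAINS = N_MSG + N_CSUM

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def chain(x: bytes, steps: int) -> bytes:
    # Apply the hash `steps` times.
    for _ in range(steps):
        x = H(x)
    return x

def msg_chunks(msg: bytes) -> list:
    d = H(msg)
    cs = []
    for b in d:
        cs += [b >> 4, b & 0x0F]         # split each byte into two 4-bit chunks
    # Checksum chunks prevent forging by advancing chains further.
    csum = sum((W - 1) - c for c in cs)
    cs += [(csum >> 8) & 0x0F, (csum >> 4) & 0x0F, csum & 0x0F]
    return cs

def keygen():
    sk = [os.urandom(32) for _ in range(N_CHAINS)]
    pk = [chain(s, W - 1) for s in sk]   # public key = end of each hash chain
    return sk, pk

def sign(sk, msg: bytes) -> list:
    # Reveal the c-th step of each chain, where c is the chunk value.
    return [chain(sk[i], c) for i, c in enumerate(msg_chunks(msg))]

def verify(pk, msg: bytes, sig: list) -> bool:
    # Walking each revealed value the remaining steps must land on pk.
    return all(chain(sig[i], (W - 1) - c) == pk[i]
               for i, c in enumerate(msg_chunks(msg)))

sk, pk = keygen()
sig = sign(sk, b"attestation for slot 12345")
print(verify(pk, b"attestation for slot 12345", sig))  # True
print(verify(pk, b"a different message", sig))         # False
```

Signatures in this family are a few kB each, which is why the validation-frame / recursive-STARK machinery above matters: the raw signatures never need to come onchain.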


https://firefly.social/post/ff-4d97f0c59aac4c618b78d190a7f722b3?s=bsky

26.02.2026 17:35 👍 21 🔁 2 💬 2 📌 0
A very important document. Let's walk through this one "goal" at a time. We'll start with fast slots and fast finality.

I expect that we'll reduce slot time in an incremental fashion; eg. I like the "sqrt(2) at a time" formula (12 -> 8 -> 6 -> 4 -> 3 -> 2, though the last two steps are more speculative and depend on heavy research). It is possible to go faster or slower here, but at a high level we'll treat the slot time as a parameter that we adjust down when we're confident it's safe to, similar to the blob target.

Fast slots sit off in their own lane at the top of the roadmap, and do not really connect to anything else. This is because the rest of the roadmap is pretty independent of the slot time: we would need to do roughly the same things whether the slot time is 2 seconds or 32 seconds.

There are a few intersection areas, though. One is p2p improvements. @raulvk has recently been working on an optimized p2p layer for Ethereum, which uses erasure coding to greatly improve the bandwidth/latency tradeoff frontier. Roughly speaking: in today's design, each node receives a full block body from several peers, and can accept and rebroadcast it as soon as it receives the first copy. If the "width" (number of peers sending you the block) is low, then one bad peer can greatly delay when you receive the block. If the width is high, there is a lot of unneeded data overhead. With erasure coding, you can choose a k-of-n setup, eg. split each block into 8 pieces such that any 4 of them suffice to reconstruct the full block. This gives you much of the redundancy benefit of high width, without the overhead.
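The 4-of-8 split just described can be illustrated with the textbook polynomial-interpolation form of erasure coding. This is a toy over a small prime field; a real p2p implementation would use an efficient Reed-Solomon code rather than symbol-at-a-time Lagrange interpolation:

```python
# Toy k-of-n erasure coding via polynomial interpolation over a prime field.

P = 257  # small prime field; each data symbol is one byte (0..255)

def eval_poly(coeffs, x):
    # Horner evaluation of the polynomial (coeffs, low degree first) at x.
    result = 0
    for c in reversed(coeffs):
        result = (result * x + c) % P
    return result

def poly_mul_linear(b, xj):
    # Multiply polynomial b by (x - xj), mod P.
    res = [0] * (len(b) + 1)
    for t, c in enumerate(b):
        res[t] = (res[t] - xj * c) % P
        res[t + 1] = (res[t + 1] + c) % P
    return res

def encode(data, n):
    # Treat the k data symbols as polynomial coefficients; the n pieces
    # are the evaluations of that polynomial at x = 1..n.
    return [(x, eval_poly(data, x)) for x in range(1, n + 1)]

def decode(pieces, k):
    # Lagrange interpolation: any k pieces recover the k coefficients.
    pieces = pieces[:k]
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(pieces):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(pieces):
            if i != j:
                basis = poly_mul_linear(basis, xj)
                denom = (denom * (xi - xj)) % P
        scale = (yi * pow(denom, P - 2, P)) % P  # divide by denom mod P
        for t in range(k):
            coeffs[t] = (coeffs[t] + scale * basis[t]) % P
    return coeffs

data = list(b"demo")                    # k = 4 data symbols
pieces = encode(data, 8)                # n = 8 pieces
assert decode(pieces[4:8], 4) == data   # the last 4 pieces alone suffice
assert decode(pieces[0:4], 4) == data   # ... as do the first 4
```

The key property is exactly the one the post relies on: a sender fans out n distinct pieces to n peers, and a receiver can reconstruct the block from whichever k arrive first, so one slow or malicious peer cannot stall propagation.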
We have stats showing that this architecture can greatly reduce 95th-percentile block propagation time, making shorter slots viable with no security tradeoffs (except increased protocol complexity, though here the performance-gain-to-lines-of-code ratio is quite favorable).

Another intersection area is the more complex slot structure that comes with ePBS, FOCIL, and the fast confirmation rule. These have important benefits, but they decrease the safe latency maximum from slot/3 to slot/5. There's ongoing research into pipelining things better to minimize the losses (also note: the slot time is lower-bounded not just by network latency, but also by the fixed-cost part of ZK prover latency), but there are some tradeoffs here. One way we are exploring to compensate is to change to an architecture where only ~256-1024 randomly selected attesters sign in each slot. For a fork-choice (non-finalizing) function, this is totally sufficient. The smaller number of signatures lets us remove the aggregation phase, shortening the slots.

Fast finality is more complex (the ultimate protocol is IMO simpler than status-quo Gasper, but the path to get there is complex). Today, finality takes 16 minutes (12s slots * 32-slot epochs * 2.5 epochs) on average. The goal is to decouple slots and finality, allowing us to reason about both separately, and we are aiming to use a one-round-finality BFT algorithm (a Minimmit variant) to finalize. So the endgame finality time might be eg. 6-16 sec.
Because this is a very invasive set of changes, the plan is to bundle the largest step of each change with a switch of the cryptography, notably to post-quantum hash-based signatures, and to a maximally STARK-friendly hash. (There are three possible responses to the recent Poseidon2 attacks: (i) increase the round count or introduce other countermeasures such as a Monolith layer, (ii) go back to Poseidon1, which is even more lindy than Poseidon2 and has not seen flaws, (iii) use BLAKE3 or another maximally-cheap "conventional" hash. All are being researched.)

Additionally, there is a plan to introduce many of these changes piece by piece; eg. "1-epoch finality" means we adjust the current consensus to change from FFG-style finalization to Minimmit-style finalization. One possible finality-time trajectory is: 16 min (today) -> 10m40s (8s slots) -> 6m24s (one-epoch finality) -> 1m12s (8-slot epochs, 6s slots) -> 48s (4s slots) -> 16s (Minimmit) -> 8s (Minimmit with more aggressive parameters).

One interesting consequence of the incremental approach is that there is a pathway to making the slots quantum-resistant much sooner than the finality, so we may well quite quickly get to a regime where, if quantum computers suddenly appear, we lose the finality guarantee, but the chain keeps chugging along.

Summary: expect progressive decreases of both slot time and finality time, and expect these changes to be intertwined with a "ship of Theseus" style component-by-component replacement of Ethereum's slot structure and consensus with a cleaner, simpler, quantum-resistant, prover-friendly, end-to-end formally-verified alternative.
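The epoch-based part of the trajectory above decomposes as slot_time × slots_per_epoch × epochs_to_finality. The parameter combinations below are my own reverse-engineering of the listed numbers (eg. reading "one-epoch finality" as 1 epoch plus an average half-epoch wait, i.e. 1.5), not confirmed protocol parameters; the final Minimmit stages (16s, 8s) use one-round BFT and no longer fit this formula:

```python
# Finality time = slot_time * slots_per_epoch * epochs_to_finality.
# Parameter choices are assumptions chosen to reproduce the post's numbers.
stages = [
    ("today",              12, 32, 2.5),
    ("8s slots",            8, 32, 2.5),
    ("one-epoch finality",  8, 32, 1.5),   # 1 epoch + avg half-epoch wait
    ("8-slot epochs, 6s",   6,  8, 1.5),
    ("4s slots",            4,  8, 1.5),
]
for name, slot, epoch_len, epochs in stages:
    total = int(slot * epoch_len * epochs)   # seconds
    m, s = divmod(total, 60)
    print(name, "->", f"{m}m{s:02d}s")
```

Running this reproduces the 16m00s -> 10m40s -> 6m24s -> 1m12s -> 0m48s sequence from the post, which is a useful sanity check that the trajectory is internally consistent.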


https://firefly.social/post/ff-29d1d9370b954ccd8a4156e7b2994e64?s=bsky

25.02.2026 21:56 👍 11 🔁 0 💬 1 📌 1

💛💙

24.02.2026 23:00 👍 25 🔁 2 💬 1 📌 0

It will significantly increase my opinion of @Anthropic if they do not back down, and honorably eat the consequences.

(For those who are not aware, so far they have been maintaining the two red lines of "no fully…

https://firefly.social/post/ff-aa1b6a1b4b184f2a9b006c502460c530?s=bsky

24.02.2026 19:44 👍 46 🔁 2 💬 2 📌 3
I agree with this.

Though with the proviso that, because Ethereum is permissionless, various centralized and closed things will inevitably exist on top of it. Our job should be to make the open-source, permissionless, trustless, secure, censorship-resistant ecosystem strong, so that it can hold its own and ultimately prove itself superior both to anything closed / permissioned / trusted-party-backdoored on Ethereum, and to such things outside Ethereum in the traditional world.


https://firefly.social/post/ff-5f003ce51ff5495ba01e08b826a3cbbd?s=bsky

24.02.2026 19:33 👍 8 🔁 1 💬 0 📌 1
I actually like private property more than I did a few years ago.

One variable that changed for me is "stable era mindset vs chaotic era mindset". When you're in a "stable era", you see how private property is suboptimal, how economics can easily churn out 10+ categories of situations where it's obvious that certain taxes, incentives to make things available at better prices, etc can produce first-order gains with only second-order deadweight losses (which means that at low levels, the gains greatly exceed the losses). "Pure" private property is only "optimal" under spherical-cow economic assumptions like perfect competition.

But in a "chaotic era", private property is more about Schelling points - it's about creating a bulwark that's easy for people to understand and rally around defending, one that says "your attempt to intervene in my life from the outside ends here". In the chaotic era, infringements on personal space are less likely to come from well-meaning bureaucrats who overreach because they have not read enough Hayek, and more likely to come from a place of outright indifference or even hostility to your well-being. And looking at modern politics, yeah, there's a lot of that now.

Since a lot of the "Vitalik hates private property" sentiment comes from me liking Harberger taxes, I'll address that topic directly. My biggest update since the original 2016-19 era ideas is that, when designing the details of Harberger taxes, the best motivating example to organize one's thinking around is not "your house"; rather, it's "corporate intellectual property and walled gardens". If we think about the underlying complaints that people have about powerful corporations, the walled gardens and the various ways in which centralized power accumulates onto itself are top 5 on the list. What would it look like to build a "Harberger tax" that would tax eg. social platforms, Apple, etc more if they acted as walled gardens, and less if they enabled interoperability (and zero if they were fully open-source, interoperable and forkable)?

There is a lot of energy right now around wanting to tax very wealthy individuals and corporations more, and I wonder: what if the best way to do that is not to tax *wealth* or *unrealized gains* (which has large downsides), but instead to tax *enclosure*? This way you raise revenue in a way that actually *increases* efficiency (any losses from people working less hard are more than compensated by gains from people shifting their work into formats where it's easier for people to build on top of each other, and from markets becoming more competitive).

Any tax is an infringement on private property. But if you think about a "tax on social platforms proportional to some metric of how walled-garden-y they are", in an intuitive human sense, it really doesn't feel like "bureaucrats intervening in my life". It feels like "keeping concentrations of power from getting too out of hand". So I am in favor of doing things like that, and much less than before in favor of anything that forces people (incl. entrepreneurs) to outright sell their assets, as eg. "Harberger tax on everything" does. A world where startup entrepreneurs are forced to constantly sell shares, realistically to the same few large VCs, in order to pay unrealized-gains or wealth tax bills strikes me as a world that's likely to be more soulless and homogeneous than today. But a world where the top 50% of large companies ranked by walled-garden-ness are taxed more (and the bottom 25% by that metric taxed less, perhaps some even zero) is a world that feels more dynamic, open and free.

But even the above is somewhat of a "stable era" perspective, because it tries to craft a more-perfect solution from the perspective of the political layer being friendly. We live in a chaotic era, and the point of crypto should be to solve important problems from the bottom up (whether "individualistic bottom up", enabling people to resist and escape various shackles, or "collective bottom up", communities organizing around shifting entire equilibria to their benefit).

This ties into what I mean by wanting Ethereum to protect financial self-sovereignty. I do not think that Ethereum has much to offer to the trillion-dollar companies whose goal is to offer products and services in a way that maximizes walled gardens and enclosure - in fact, much the opposite: censorship resistance can serve as the baseline for rebel communities that play the adversarial game of routing around those walled gardens. I do think Ethereum offers stronger security to people who want to maintain security of (including the ability to use) their own financial resources, including surviving through great economic and political turmoil, for their personal or economic needs. And Ethereum offers a base layer for communities to organize large sudden collective shifts away from harmful equilibria and into better ones; DAOs should try to solve that problem more.


https://firefly.social/post/ff-f159237f41b541f4b19b89758303ad8c?s=bsky

24.02.2026 19:17 👍 12 🔁 1 💬 0 📌 0
Defi is a central part of the value that Ethereum provides. Financial empowerment is a central part of what it means to have agency and freedom in our current world. Finance is far from the only thing that Ethereum is good for, but it is an important thing. This post discusses how the Ethereum Foundation is approaching defi.

Defi today makes the world's best savings, risk-management and wealth-building opportunities permissionlessly available worldwide. We need to build on that. Ethereum's early defi era was great because it dared to dream and innovate and come up with totally new paradigms (eg. AMMs). Defi tomorrow will bring back that spirit. Don't just "make a better stablecoin"; dig a layer deeper, think about the underlying problem (risk management, hedging one's future expenses), and come up with an even better solution.

But also, as the EF, we are not interested in supporting "onchain finance" or even "defi" indiscriminately. We have a specific vision of what we want to see out of defi: permissionless, open-source, private, security-first global finance that maximizes people's control over their own assets, minimizes centralized chokepoints and trusted third parties, and democratizes risk management and wealth building (the two key goals of finance according to modern portfolio theory) as well as payments. We want protocols that pass the walkaway test: that keep working even if the original team suddenly disappears without warning (or even becomes hostile / compromised without warning).

Bringing this vision to reality will inevitably take a lot of work. Defi is a complex toolchain, including various onchain components, user-side offchain components (ie. wallet, local agent...), other offchain components, etc. The things that we care about include areas like:

* Improving the security of defi through "traditional" means, eg. audits, standards, wallet-side safeguards
* Improving the security of defi through "new" means, eg. AI-assisted formal verification, user-side agents as safeguards
* Oracle security and decentralization (there are A LOT of skeletons in the closet here; we as an ecosystem really need to point a big eye of Sauron at it for a while)
* Privacy. Both privacy-preserving payments, and privacy for more complex use cases (eg. what does it mean to have a maximally privacy-preserving CDP? there are clear benefits in reducing liquidation-sniping risk, but it requires hard tech to get there)
* Open source, and improving the licensing / forkability situation in defi

Ethereum is a permissionless protocol, and nothing stops people from deploying insecure protocols, protocols that enshrine ultimately unneeded centralized trust in the name of convenience, or dopamine-maximizing gambleslop. However, we *are* interested in working with anyone aligned to make the permissionless, open-source, intermediary-minimizing, security- and user-agency-maximizing defi ecosystem as strong as possible, so that it can be not just individuals' and institutions' first choice on Ethereum, but also a globally compelling way to manage funds for anyone who needs its properties.


https://firefly.social/post/ff-eb5cfde961cb446f89816bdc642b5103?s=bsky

24.02.2026 16:46 👍 9 🔁 0 💬 2 📌 0
I'm actually pretty open-minded about the anti-data-center populism.

From everything I've seen from people working on this, reducing industrial-scale hardware availability seems to be both the most practical and the most non-dystopian / non-invasive way to lengthen AGI timelines. So if the movement that makes that happen starts out with anti-data-center populism, that seems fine? Of course, you have to do things beyond going after data centers located in populated areas to really make a dent in AGI timelines (my intuition is that a 10-100x compute reduction is feasible in a "static" model of the world, and 100-10000x if you compare to a counterfactual that includes future chip design progress; those numbers *would* make a dent), but there is a first step for everything.


https://firefly.social/post/ff-5e7407a9171648eebd81cb1903f4f1a1?s=bsky

24.02.2026 16:11 👍 9 🔁 1 💬 0 📌 0
Interesting to scroll through the comments of this. At least on the socials, there is pretty much zero public support for (i) corporate intellectual property [especially in this case, given how basically all the models were trained], or (ii) the vision of "let's protect against Authoritarian Bad Guys by making sure that the self-appointed Good Guys are the only ones with the best toys".

https://x.com/AnthropicAI/status/2025997928242811253


https://firefly.social/post/ff-f329ed2617ea44129d7db861662e0500?s=bsky

24.02.2026 15:09 👍 10 🔁 1 💬 3 📌 0
How I think about "security":

The goal is to minimize the divergence between the user's intent and the actual behavior of the system. "User experience" can also be defined in this way; thus, "user experience" and "security" are not separate fields. However, "security" focuses on tail-risk situations (where the downside of divergence is large), and specifically tail-risk situations that come about as a result of adversarial behavior.

One thing that becomes immediately obvious from the above definition is that "perfect security" is impossible. Not because machines are "flawed", or even because the humans designing the machines are "flawed", but because "the user's intent" is fundamentally an extremely complex object that the user themselves does not have easy access to. Suppose the user's intent is "I want to send 1 ETH to Bob". But "Bob" is itself a complicated meatspace entity that cannot be easily mathematically defined. You could "represent" Bob with some public key or hash, but then the possibility that the public key or hash is not actually Bob's becomes part of the threat model. And if there is a contentious hard fork, the question of which chain represents "ETH" is subjective. In reality, the user has a well-formed picture of these topics, summarized by the umbrella term "common sense", but these things are not easily mathematically defined.

Once you get into more complicated user goals - take, for example, the goal of "preserving the user's privacy" - it becomes even more complicated. Many people intuitively think that encrypting messages is enough, but in reality the metadata pattern of who talks to whom, the timing pattern between messages, etc, can leak a huge amount of information. What is a "trivial" privacy loss, versus a "catastrophic" one?
If you're familiar with early Yudkowskian thinking about AI safety, and how simply specifying goals robustly is one of the hardest parts of the problem, you will recognize that this is the same problem.

Now, what do "good security solutions" look like? This applies to:

* Ethereum wallets
* Operating systems
* Formal verification of smart contracts, clients, or any computer programs
* Hardware
* ...

The fundamental constraint is: anything that the user can input into the system is far too low-complexity to fully encode their intent. I would argue that the common trait of a good solution is: the user specifies their intention in multiple, overlapping ways, and the system only acts when these specifications are aligned with each other. Examples:

* Type systems in programming: the programmer first specifies *what the program does* (the code itself), but then also specifies *what "shape" each data structure has at every step of the computation*. If the two diverge, the program fails to compile.
* Formal verification: the programmer specifies what the program does (the code itself), and then also specifies mathematical properties that the program satisfies.
* Transaction simulations: the user first specifies what action they want to take, and then clicks "OK" or "Cancel" after seeing a simulation of the onchain consequences of that action.
* Post-assertions in transactions: the transaction specifies both the action and its expected effects, and both have to match for the transaction to take effect.
* Multisig / social recovery: the user specifies multiple keys that represent their authority.
* Spending limits, new-address confirmations, etc: the user first specifies what action they want to take, and then, if that action is "unusual" or "high-risk" in some sense, has to re-specify "yes, I know I am doing something unusual / high-risk".

In all cases, the pattern is the same: there is no perfection, there is only risk reduction through redundancy.
And you want the different redundant specifications to "approach the user's intent" from different "angles": eg. the action, the expected consequences, the expected level of significance, an economic bound on the downside, etc.

This way of thinking also hints at the right way to use LLMs. LLMs done right are themselves a simulation of intent. A generic LLM is (among other things) like a "shadow" of the concept of human common sense. A user-fine-tuned LLM is like a "shadow" of that user themselves, and can identify in a more fine-grained way what is normal vs unusual. LLMs should under no circumstances be relied on as the sole determiner of intent. But they are one "angle" from which a user's intent can be approximated. It's an angle very different from traditional, explicit ways of encoding intent, and that difference itself maximizes the likelihood that the redundancy will prove useful.

One other corollary is that "security" does NOT mean "make the user do more clicks for everything". Rather, security should mean: it should be easy (if not automated) to do low-risk things, and hard to do dangerous things. Getting this balance right is the challenge.
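A minimal sketch of the post-assertion pattern from the list above: the transaction declares both its action and its expected effects, and nothing is committed if the executed state diverges from the assertions. The `apply_with_postassertions` structure here is invented for illustration, not any real wallet or client API:

```python
import copy

# Toy post-assertion check: a transaction bundles an action with
# assertions about the resulting state; if any assertion fails,
# the state is left unchanged. Names are illustrative only.

def apply_with_postassertions(state: dict, action, assertions) -> dict:
    candidate = copy.deepcopy(state)
    action(candidate)                      # execute the action on a copy
    for check in assertions:
        if not check(candidate):
            return state                   # any mismatch -> revert entirely
    return candidate                       # all assertions hold -> commit

balances = {"alice": 10, "bob": 0}

def send_1_eth(s):
    s["alice"] -= 1
    s["bob"] += 1

# The user specifies the action AND its expected consequences; the
# transaction takes effect only if the two agree.
new_state = apply_with_postassertions(
    balances,
    send_1_eth,
    assertions=[
        lambda s: s["bob"] == 1,           # Bob received exactly 1
        lambda s: s["alice"] >= 9,         # Alice lost no more than 1
    ],
)
print(new_state)  # {'alice': 9, 'bob': 1}
```

The redundancy is the point: a bug (or exploit) in the action that drained more than 1 ETH would violate the second assertion, so the divergent execution would never be committed.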


https://firefly.social/post/ff-f7f0d22ee8cf44369e19afb31d6369b4?s=bsky

22.02.2026 19:24 👍 20 🔁 0 💬 5 📌 3
"AI becomes the government" is dystopian: it leads to slop when AI is weak, and is doom-maximizing once AI becomes strong. But AI used well can be empowering, and can push the frontier of democratic / decentralized modes of governance.

The core problem with democratic / decentralized modes of governance (including DAOs on Ethereum) is the limits of human attention: there are many thousands of decisions to make, involving many domains of expertise, and most people don't have the time or skill to be experts in even one, let alone all of them. The usual solution, delegation, is disempowering: it leads to a small group of delegates controlling decision-making while their supporters, after they hit the "delegate" button, have no influence at all. So what can we do? We use personal LLMs to solve the attention problem! Here are a few ideas:

## Personal governance agents

If a governance mechanism depends on you to make a large number of decisions, a personal agent can perform all the necessary votes for you, based on preferences that it infers from your personal writing, conversation history, direct statements, etc. If the agent is (i) unsure how you would vote on an issue, and (ii) convinced the issue is important, then it should ask you directly, and give you all the relevant context.

## Public conversation agents

Good decisions often cannot come from a linear process of taking people's views, based only on their own information, and averaging them (even quadratically). There is a need for processes that aggregate many people's information, and then give each person (or their LLM) a chance to respond *based on that*.
This includes:

* Inferring and summarizing your own views and converting them into a format that can be shared publicly (and does not expose your private info)
* Summarizing commonalities between people's inputs (expressed as words), similar to the various LLM+pol.is ideas

## Suggestion markets

If a governance mechanism values "high-quality inputs" of any type (this could be proposals, or even arguments), then you can have a prediction market, where anyone can submit an input, AIs can bet on a token representing that input, and if the mechanism "accepts" the input (either accepting the proposal, or accepting it as a "unit" of conversation that it then passes along to its participants), it pays out $X to the holders of the token. Note that this is basically the same as https://firefly.social/post/x/2017956762347835488

## Decentralized governance with private information

One of the biggest weaknesses of highly decentralized / democratic governance is that it does not work well when important decisions need to be made with secret information. Common situations: (i) the org engaging in adversarial conflicts or negotiations, (ii) internal dispute resolution, (iii) compensation / funding decisions. Typically, orgs solve this by appointing individuals with great power to take on those tasks. But with multi-party computation (currently I've seen this done with TEEs; I would love to see at least the two-party case solved with garbled circuits https://vitalik.eth.limo/general/2020/03/21/garbled.html so we can get pure-cryptographic security guarantees for it), we could actually take many people's inputs into account in these situations, without compromising privacy. Basically: you submit your personal LLM into a black box, the LLM sees the private info, it makes a judgement based on that, and it outputs only that judgement. You don't see the private info, and no one else sees the contents of your personal LLM.
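The suggestion-market payout rule described above can be sketched in a few lines. Everything here (the fixed payout, the pro-rata split, the class names) is an assumed toy mechanic, not a spec of any existing market design:

```python
# Toy suggestion market: anyone submits an input, bettors buy tokens on
# it, and if the governance mechanism accepts the input, its token
# holders split a fixed payout X pro rata. Illustrative mechanics only.

PAYOUT_X = 100.0

class SuggestionMarket:
    def __init__(self):
        self.inputs = {}   # input_id -> {bettor: token_count}

    def submit(self, input_id: str):
        self.inputs[input_id] = {}

    def bet(self, input_id: str, bettor: str, tokens: float):
        holders = self.inputs[input_id]
        holders[bettor] = holders.get(bettor, 0.0) + tokens

    def settle(self, accepted_id: str) -> dict:
        # Holders of the accepted input's token split PAYOUT_X pro rata;
        # tokens on non-accepted inputs pay nothing.
        holders = self.inputs[accepted_id]
        total = sum(holders.values())
        return {h: PAYOUT_X * t / total for h, t in holders.items()}

market = SuggestionMarket()
market.submit("proposal-A")
market.submit("proposal-B")
market.bet("proposal-A", "ai_1", 30)
market.bet("proposal-A", "ai_2", 10)
market.bet("proposal-B", "ai_3", 50)

print(market.settle("proposal-A"))  # {'ai_1': 75.0, 'ai_2': 25.0}
```

The incentive this creates is the one the post wants: AIs (or people) earn by betting early on inputs the mechanism will eventually accept, which turns input curation into a prediction problem.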
## The importance of privacy

All of these approaches involve each participant making use of much more information about themselves, and potentially submitting much larger inputs. Hence, it becomes all the more important to protect privacy. There are two kinds of privacy that matter:

* Anonymity of the participant: this can be accomplished with ZK. In general, I think all governance tools should come with ZK built in.
* Privacy of the contents: this has two parts. First, the personal LLM should do what it can to avoid divulging private info about you that it does not need to divulge. Second, when you have computation that combines multiple LLMs or multiple people's info, you need multi-party techniques to compute it privately.

Both are important.

"AI becomes the government" is dystopian: it leads to slop when AI is weak, and is doom-maximizing once AI becomes strong. But AI used well can be empowering, and push the frontier of democratic / decentralized modes of…

https://firefly.social/post/ff-669078d24e3d46ad933c6de23225e55a?s=bsky

21.02.2026 15:05 👍 33 🔁 3 💬 4 📌 3

Checking it out now!

What are the biggest differences from OrganicMaps that I should keep an eye out for?

20.02.2026 17:57 👍 2 🔁 0 💬 1 📌 2

Will take a look!

20.02.2026 15:57 👍 4 🔁 0 💬 2 📌 0
Google Maps Is Now Less Useful If You're Not Signed In

Those who aren't signed into a Google account are moved to what Maps calls a 'limited view.'

Good time to switch to OpenStreetMap (OrganicMaps is a good mobile app for it)

https://www.pcmag.com/news/google-maps-is-now-less-useful-if-youre-not-signed-in

My experience has been that OSM's biggest weakness is less…

https://firefly.social/post/ff-509d2d9a6e3446028401cbf88d595ee0?s=bsky

20.02.2026 03:42 👍 51 🔁 5 💬 4 📌 2

If someone figures out a way to do crime-fighting with cameras in a way that is verifiable and transparent and provably limited in its operation and doesn't unaccountably centralize power, the obvious name to call it is "Frodo".

20.02.2026 00:17 👍 25 🔁 2 💬 3 📌 3
## Harden

FOCIL is already a significant hardening of Ethereum. But beyond that, the most important work this year is not in the glamorous (heh) EIPs, it's in the gritty stuff:

* Network security testing
* Post-quantum readiness (eg. we are also exploring EIPs to make it much more gas-efficient to verify quantum-resistant signature schemes inside the EVM)
* Improving our ability to analyze the network's geographic decentralization
* All the various work Kohaku is doing on the security side (trustless RPCs, social recovery, local simulation, various other features in progress for this year)

Ethereum will emerge from the year a far stronger, more powerful and more self-sovereign protocol than what it was entering this year.


https://firefly.social/post/ff-5a57d0b3fd9a41fa926d15cd6490d17a?s=bsky

19.02.2026 18:59 👍 5 🔁 0 💬 1 📌 1
## Improve UX

We often make the mistake of thinking about "improving UX" and "improving [user-layer] security" as two separate domains. In reality, though, they are a tightly interconnected tradeoff space. It's not UX vs security, it's improving the UX of security (or improving the security of usage patterns that already have good UX).

AA (8141) and FOCIL are two major EIPs targeted for Hegota, and I talked about their value here: https://firefly.social/post/x/2024523896360464791

Their goal is to take flows that are already possible today IF you are willing to accept intermediaries and censorship vulnerability, and make them a native first-class part of the protocol, with strong inclusion guarantees, and accessible through a public mempool.

The most important remaining work on OIF this year involves improving trust minimization of fast cross-L2 transfers that avoid both the latency and the cost of the underlying bridges.
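A toy model of the solver-fronted pattern that fast cross-L2 transfer designs generally rely on — this is not the actual OIF protocol, and the class name, fee, and amounts are all illustrative: a solver pays the user on the destination L2 immediately, then waits out the slow canonical bridge to be repaid:

```python
# Illustrative model only -- not the actual OIF protocol. A solver fronts
# liquidity on the destination L2 (fast for the user), then waits out the
# canonical bridge delay to be repaid, charging a small fee for the service.
class FastTransfer:
    def __init__(self, solver_fee_bps: int = 10):
        self.fee_bps = solver_fee_bps
        self.pending = []  # (solver, amount) awaiting canonical-bridge repayment

    def fill(self, user: str, amount: int, solver: str) -> int:
        """Solver pays the user on the destination L2 right away."""
        fee = amount * self.fee_bps // 10_000
        self.pending.append((solver, amount))
        return amount - fee  # what the user receives, in seconds not hours

    def settle(self) -> list:
        """Later: canonical bridge messages arrive, solvers are repaid."""
        repaid, self.pending = self.pending, []
        return repaid

bridge = FastTransfer(solver_fee_bps=10)  # 0.1% fee, assumed for illustration
received = bridge.fill("alice", 1_000_000, "solver-1")
print(received)         # 999000
print(bridge.settle())  # [('solver-1', 1000000)]
```

The trust-minimization work the post refers to is about hardening exactly this gap: making sure the solver's eventual repayment does not depend on trusting an intermediary.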


https://firefly.social/post/ff-7c11f286389c4b65abc297e7a9629968?s=bsky

19.02.2026 18:59 👍 6 🔁 0 💬 1 📌 1
## Scale

2025 saw the Ethereum gas limit double (from 30M to 60M), and it introduced PeerDAS, which gave us a huge amount of breathing room in capacity for L2s (and in the future for putting Ethereum L1 data into blobs). But 2026 is the year where all of last year's preparatory work really pays off. We will get:

* Block-level access lists (BALs), which allow parallelized verification of blocks
* ePBS, which (among other things) makes it safe for block execution to take a much larger portion of the slot
* Gas repricings, which make slow operations more expensive, making it safe to greatly increase the gas limit without vulnerability to DoS risks

The above three things stack *multiplicatively* with each other (eg. if each of the above gives us a 3x gain, together we get a 27x gain). However, this applies only to execution. The other two major resources are calldata and state creation.

For calldata, (i) ePBS also applies, and (ii) p2p improvements that Raul and others are working hard on will improve broadcasting efficiency. So we will get a boost, but a smaller boost than execution. To compensate for this difference, the floor gas cost of calldata will go up somewhat.

For state creation, this doc is still a good guide to our thinking: https://ethresear.ch/t/hyper-scaling-state-by-creating-new-forms-of-state/24052

Basically, BALs also enable faster sync, but on the whole it's much harder to scale state than the other two resources, and so the gas cost of state will go up significantly relative to the other two. Applications should make tradeoffs to save less state in exchange for more calldata and/or more execution. In the longer term, we will likely figure out some other type of state to introduce alongside existing state that will be able to scale the full 1000x.
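The multiplicative-stacking point can be checked with a short calculation; the 3x figures are the post's own illustrative examples, not forecasts:

```python
from math import prod

# Illustrative per-mechanism execution gains from the post (each "3x" is
# an example figure, not a projection).
gains = {
    "block-level access lists (parallel verification)": 3,
    "ePBS (larger execution portion of the slot)": 3,
    "gas repricings (higher safe gas limit)": 3,
}

# Because the mechanisms relieve independent bottlenecks, their gains
# compose multiplicatively rather than additively.
combined = prod(gains.values())
print(combined)  # 3 * 3 * 3 = 27x execution capacity
```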


https://firefly.social/post/ff-67656c4ad4e847768e68f92d1c0bffb3?s=bsky

19.02.2026 18:59 👍 3 🔁 0 💬 1 📌 1

Ethereum L1 protocol research is taking leaps forward in 2026. A good post from @ralexstokes:

https://x.com/ralexstokes/status/2024155683319611850

* Scale
* Improve UX
* Harden

19.02.2026 18:59 👍 16 🔁 2 💬 5 📌 3