ePrint Updates

@eprint.ing.bot

Unofficial bot tracking the IACR Cryptology ePrint Archive (eprint.iacr.org). Maintained by @str4d.xyz. Currently only posts about new papers. Author names are linkified to Bluesky accounts (cryptography.social); contact maintainer for inclusion/removal.

1,124 Followers · 1 Following · 6,278 Posts · Joined 01.05.2023

Latest posts by ePrint Updates @eprint.ing.bot

Abstract. Signal is a secure messaging app offering end-to-end security for pairwise and group communications. It has tens of millions of users, and has heavily influenced the design of other secure messaging apps (including WhatsApp). Signal has been heavily analysed and, as a result, is rightly regarded as setting the “gold standard” for messaging apps by the scientific community. We present two practical attacks that break the integrity properties of Signal in its advertised threat model. Each attack arises from different features of Signal that are poorly documented and have eluded formal security analyses. The first attack, affecting Android and Desktop, arises from Signal’s introduction of identities based on usernames (instead of phone numbers) in early 2022. We show that the protocol for resolving identities based on usernames and on phone numbers introduced a vulnerability that allows a malicious server to inject arbitrary messages into one-to-one conversations under specific circumstances. The injection causes a user-visible alert about a change of safety numbers, but if the users compare their safety numbers, they will be correct. The second attack is even more severe. It arises from Signal’s Sealed Sender (SSS) feature, designed to allow sender identities to be hidden. We show that a combination of two errors in the SSS implementation in Android allows a malicious server to inject arbitrary messages into both one-to-one and group conversations. The errors relate to missing key checks and the loss of context when cryptographic processing is distributed across multiple software components. The attack is undetectable by users and can be mounted at any time, without any preconditions. As far as we can tell, the vulnerability has been present since the introduction of SSS in 2018. We disclosed both attacks to Signal. 
The vulnerabilities were promptly acknowledged and patched: the first vulnerability was fixed two days after disclosure, while the second one was patched after eight days. Beyond presenting these devastating attacks on Signal’s end-to-end security guarantees, we discuss more broadly what can be learned about the challenges of deploying new security features in complex software projects.

Signal Lost (Integrity): The Signal App is More than the Sum of its Protocols (Kien Tuong Truong, Noemi Terzo, Kenneth G. Paterson) ia.cr/2026/484

09.03.2026 01:23 👍 4 🔁 1 💬 0 📌 0
Abstract. Decentralized lending protocols rely on liquidation mechanisms tied to volatile oracle-derived prices, creating cascading systemic risk during market downturns. We introduce debt-aware discrete bonding curves (DABC)—piecewise-linear bonding curves with a distinguished floor segment whose price is provably non-decreasing. A reserve invariant couples the curve’s collateral to outstanding debt, enabling a credit facility for issuance-native collateral in which borrowing capacity is anchored to the endogenous floor price rather than a market oracle. We prove that no loan originated at or below the floor-anchored LTV can become under-collateralized due to collateral price declines—eliminating protocol-triggered liquidation for this borrowing model. The trade-off: non-repayment results in permanent token lock, not forced sale. By internalizing issuance, trading, and borrowing in a single contract, the mechanism captures fee revenue that would otherwise accrue to external parties, directing it toward floor elevation. A recursive buy-lock-borrow-buy loop enables leveraged positions without liquidation risk; token launches are the most compelling application. To the best of our knowledge, this work provides the first fully formalized treatment in which borrowing safety is derived from a debt-aware reserve invariant and a provably non-decreasing endogenous floor price. We verify the mechanism through stateful fuzz testing and formal verification of a concrete Solidity implementation.
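The floor-anchored safety claim can be illustrated with a toy model (a minimal sketch; `FloorCurve`, its methods, and the unit-LTV default are illustrative stand-ins, not the paper's Solidity implementation):

```python
# Toy model of a debt-aware bonding curve's floor invariant.
# All names (FloorCurve, raise_floor, borrow) are illustrative, not the
# paper's Solidity implementation.

class FloorCurve:
    """Piecewise-linear bonding curve reduced to its floor segment."""

    def __init__(self, floor_price):
        self.floor_price = floor_price
        self.loans = []  # (tokens_locked, debt) pairs

    def raise_floor(self, new_floor):
        # Fee revenue is directed toward floor elevation; by construction
        # the floor price never decreases.
        assert new_floor >= self.floor_price
        self.floor_price = new_floor

    def borrow(self, tokens_locked, amount, ltv=1.0):
        # Borrowing capacity is anchored to the endogenous floor price,
        # not a market oracle.
        if amount > ltv * self.floor_price * tokens_locked:
            raise ValueError("exceeds floor-anchored LTV")
        self.loans.append((tokens_locked, amount))

    def undercollateralized_loans(self):
        # A loan is unsafe if its locked tokens, valued at the floor, no
        # longer cover its debt.  With a non-decreasing floor, this list
        # stays empty for loans originated at or below the floor LTV.
        return [(t, d) for t, d in self.loans if self.floor_price * t < d]
```

Since `raise_floor` only moves the price up, every loan admitted by `borrow` stays covered; non-repayment simply leaves the locked tokens locked, which is the trade-off the abstract describes.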

Debt-Aware Bonding Curves: Non-Decreasing Floor Prices and Non-Liquidatable Borrowing (Ömer Demirel, Michael Lewkowitz, Tiago Santana) ia.cr/2026/483

09.03.2026 01:22 👍 0 🔁 0 💬 0 📌 0
Abstract. In this work, we present new cryptanalytic attacks on recently proposed, theory-inspired constructions of weak pseudorandom functions (weak-PRFs). We demonstrate attacks on several such designs, showing that the initial security arguments require significant refinement. Methodologically, our approach relies on novel observations about the structure of cyclic matrices, applications of Wagner’s generalized birthday technique, and conversion into polynomial systems over 𝔽₃. These findings highlight the need for a more careful analysis of these weak-PRF candidates.
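Wagner's generalized birthday technique, one of the tools named above, can be sketched in its simplest 4-list XOR form (a toy illustration; `wagner_4list` and its parameters are hypothetical, and the paper's attacks operate on different structures):

```python
# Toy 4-list instance of Wagner's generalized birthday technique:
# find x1 in L1, ..., x4 in L4 with x1 ^ x2 ^ x3 ^ x4 == 0.
# Hypothetical sketch; the paper's attacks target different structures.

def wagner_4list(L1, L2, L3, L4, low):
    """Merge pairs that agree on the `low` least-significant bits,
    then look for an exact match between the two merged lists."""
    mask = (1 << low) - 1

    def merge(A, B):
        by_low = {}
        for a in A:
            by_low.setdefault(a & mask, []).append(a)
        out = []
        for b in B:
            for a in by_low.get(b & mask, []):
                out.append(a ^ b)  # low bits cancel by construction
        return out

    left = set(merge(L1, L2))  # candidate values of x1 ^ x2
    right = merge(L3, L4)      # candidate values of x3 ^ x4
    # A common element gives x1 ^ x2 == x3 ^ x4, i.e. a 4-way
    # collision x1 ^ x2 ^ x3 ^ x4 == 0.
    return any(v in left for v in right)
```

For n-bit values, taking lists of size roughly 2^(n/3) and `low` ≈ n/3 makes a collision likely after the two merges, which is the source of the technique's advantage over a naive birthday search.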

Cryptanalysis of Two Alternating Moduli Weak PRFs (Kai Hu, Gregor Leander, Håvard Raddum, Arne Sandrib, Aleksei Udovenko) ia.cr/2026/482

09.03.2026 01:07 👍 0 🔁 0 💬 0 📌 0
Abstract. We present Remise, a two-server authorized anonymous communication system built on Distributed Oblivious RAM (DORAM). Remise supports two modes of operation: (i) an anonymous bulletin board, where messages are publicly revealed at the end of each epoch without linking senders to messages, and (ii) anonymous communication channels, where messages remain secret-shared, and writers selectively grant and revoke read access to chosen readers. In both modes, the two servers execute read and write operations without learning which indices are accessed, which authorization tokens are used, or which relationships exist between writers and readers, assuming at least one honest server. A central contribution of Remise is a lightweight and efficient access-control mechanism. Authorization proofs are maintained in secret-shared form across the two servers, enabling oblivious verification while preventing leakage even under client–server collusion. Unlike prior DPF-based systems, Remise provides built-in auditing by having the semi-honest servers generate standard-basis vector shares internally, eliminating the need for server-side DPF validity checks. We implement a prototype of Remise and evaluate it under realistic network conditions. Our experiments show an 80× improvement in online server time when compared to PACL (Spectrum) (IEEE S&P 2023) for databases of size 2²⁴.
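The standard-basis vector shares mentioned above can be sketched as plain additive shares, which semi-honest servers can generate internally without any DPF machinery (a minimal model; the modulus `P` and all function names are illustrative):

```python
# Minimal model of standard-basis vector shares for oblivious reads:
# plain additive shares over a toy prime field.  Names and the
# modulus P are illustrative.
import secrets

P = 2**61 - 1  # toy prime modulus

def share_unit_vector(index, length):
    """Additively secret-share the standard-basis vector e_index
    between two servers; neither share reveals the index."""
    s0 = [secrets.randbelow(P) for _ in range(length)]
    s1 = [(-x) % P for x in s0]
    s1[index] = (s1[index] + 1) % P  # shares now sum to e_index
    return s0, s1

def server_read(db, share):
    # Each server computes a local inner product with its share,
    # learning nothing about which index is selected.
    return sum(d * s for d, s in zip(db, share)) % P

def reconstruct(r0, r1):
    # The two partial results sum to <db, e_index> = db[index].
    return (r0 + r1) % P
```

Because the shares are full-length vectors rather than compressed DPF keys, they are well-formed by construction, which is the intuition behind dropping server-side DPF validity checks.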

Remise: Authorized Anonymous Communication Systems (Rohan Ravi, Paritosh Shukla, Adithya Vadapalli) ia.cr/2026/481

09.03.2026 01:07 👍 0 🔁 0 💬 0 📌 0
Abstract. We present CHOPIN, a pairing-based multilinear polynomial commitment scheme (PCS) achieving constant proof size and a linear-time prover, constructed modularly from a bivariate PCS. CHOPIN generalizes the recent univariate-to-multilinear PCS compilers Mercury (Eagen and Gabizon, ePrint 2025/385) and Samaritan (Ganesh, Patranabis, and Singh, ASIACRYPT 2025). Due to its modular design, we obtain a direct proof of knowledge soundness via a reduction to the knowledge soundness of the underlying bivariate PCS in the standard model. In particular, our analysis avoids idealized models such as the Algebraic Group Model.

When instantiated with the bivariate KZG scheme (Papamanthou, Shi, and Tamassia, TCC 2013), CHOPIN achieves a similar proof size to Mercury and Samaritan while offering a two-fold speedup in the main bottleneck for the prover time, which arises because CHOPIN requires only a single large MSM proportional to the size of the committed multilinear polynomial, in contrast to the two large MSMs required by the prior works, at the cost of one additional pairing for the verifier.
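For context, the multilinear evaluation that such a PCS opens is an inner product of the polynomial's hypercube evaluations with the tensor of Lagrange ("eq") basis values at the point — the same vector whose length drives the prover's large MSM (a toy sketch over the integers; a real scheme works over a pairing-friendly field, and the function names here are illustrative):

```python
# Sketch of the multilinear evaluation a multilinear PCS opens:
# f(z) is an inner product of the 2^n hypercube evaluations of f with
# the tensor of Lagrange ("eq") basis values at z.  Toy version over
# the integers; a real scheme works over a pairing-friendly field.

def eq_tensor(point):
    """eq(x, point) for all x in {0,1}^n; variable 0 is the
    least-significant index bit."""
    basis = [1]
    for z in point:
        basis = [b * (1 - z) for b in basis] + [b * z for b in basis]
    return basis

def multilinear_eval(evals, point):
    # The length of `evals` is what makes the prover's MSM large: the
    # commitment is a multi-scalar multiplication over this vector.
    basis = eq_tensor(point)
    return sum(e * b for e, b in zip(evals, basis))
```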

CHOPIN: Optimal Pairing-Based Multilinear Polynomial Commitments from Bivariate KZG (Juraj Belohorec, Pavel Hubáček, Aleksi Kalsta, Kristýna Mašková) ia.cr/2026/480

09.03.2026 01:06 👍 0 🔁 0 💬 0 📌 0
Abstract. Understanding the complexity of Byzantine agreement (BA) is a fundamental problem in distributed computing and cryptography. Existing round- or communication lower bounds either restrict the class of protocols they apply to in terms of communication, setup assumptions, or determinism. Another class of lower bounds holds only with respect to a very powerful (arguably unrealistic) adaptive adversary that can delete undelivered messages sent by a newly corrupted party while it was still honest. On the other hand, many popular BA protocols, including the consensus protocol underlying the Algorand cryptocurrency, assume only a standard adaptive adversary which cannot perform after-the-fact message removals. In this work, we aim to further narrow the gap between existing upper and lower bounds. We first revisit existing communication lower bounds of Abraham et al. (PODC 2019) and Blum et al. (TCC 2020), which show that, under certain conditions, Ω(t²) messages are necessary in expectation for randomized BA protocols with security against t adaptive corruptions. We give two new lower bounds on the communication complexity of randomized BA protocols that hold against even a standard adaptive adversary, for previously unexplored settings of practical interest. Our bounds assume a complete network of authenticated communication channels. Our first bound improves over Abraham et al. when the setup is limited to a common reference string (CRS), and the second improves the bit complexity of Blum et al. in the authenticated setting, i.e., where we allow idealized signatures. As a technical contribution, we present a new formal model for protocols using idealized signatures, which may be of independent interest. We then turn our attention to the round complexity of randomized BA protocols in which only a subset of parties may speak. We show that such protocols must either rely on erasures or determine whether or not to speak first in the protocol in dependence on the parties’ inputs. We discuss in detail how both of these design paradigms have been used in prior work and how they lead to efficiency and design-related issues for practical BA protocols.

Strong Efficiency Lower Bounds for Byzantine Agreement (Clément Ducros, Julian Loss, Matthieu Rambaud) ia.cr/2026/479

09.03.2026 01:06 👍 0 🔁 0 💬 0 📌 0
Abstract. We present Hardware/Software co-optimization of Hamming Quasi-Cyclic (HQC) enabled by tightly coupled accelerators implemented on a 32-bit Ibex RISC-V core. On the hardware side, we propose a unified multiplier capable of efficiently performing carryless multiplication for both polynomial multiplication over F_2[X]/(X^{n}−1) and multiplication over F_2^{8}. We also design a Keccak permutation accelerator to support efficient randomness sampling. On the software side, we identify the optimal combination of Toom–Cook and Karatsuba methods for efficient polynomial multiplication on the Ibex core and enhance its performance by minimizing the number of memory accesses during its execution. With our co-optimization strategies, our HQC implementation achieves a performance improvement of several tens of times over the reference implementation.
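The carry-less multiplication underlying both the unified multiplier and the software Karatsuba layer can be sketched in a few lines (a toy model with bit-polynomials packed into Python integers; the base-case threshold and function names are illustrative, not the paper's implementation):

```python
# Toy bit-polynomial arithmetic over F_2[X], with polynomials packed
# into Python integers.  Illustrative only; the paper's multiplier and
# Toom-Cook/Karatsuba split are tuned to the Ibex core.

def clmul(a, b):
    """Schoolbook carry-less (XOR) multiplication, the base operation
    of both the hardware multiplier and the software layer."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def karatsuba_clmul(a, b, n):
    """One Karatsuba recursion over F_2[X]: three half-size carry-less
    products per level.  n is the bit-width bound (a power of two)."""
    if n <= 32:
        return clmul(a, b)
    h = n // 2
    mask = (1 << h) - 1
    a0, a1 = a & mask, a >> h
    b0, b1 = b & mask, b >> h
    z0 = karatsuba_clmul(a0, b0, h)
    z2 = karatsuba_clmul(a1, b1, h)
    # Over F_2, addition and subtraction are both XOR.
    z1 = karatsuba_clmul(a0 ^ a1, b0 ^ b1, h) ^ z0 ^ z2
    return (z2 << (2 * h)) ^ (z1 << h) ^ z0
```

The recursion trades one full-width product for three half-width ones, which is why finding the right crossover between Toom–Cook, Karatsuba, and schoolbook levels matters on a small 32-bit core.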

A Hardware/Software Co-Optimization of HQC Using Tightly-Coupled Accelerators on a 32-bit Ibex Core (Seog Chung Seo, YoungBeom Kim) ia.cr/2026/478

09.03.2026 01:06 👍 1 🔁 0 💬 0 📌 0
Abstract. One of the most fundamental elements in guaranteeing data security is data access management. The two primary security components of data access control are typically authorisation and authentication. Data access control is the selective restriction of data access. In this study, we present an effective data access control mechanism for implantable medical devices. Through a signcryption method with proxy re-encryption (DAC-PRE), the protocol guarantees anonymous data access control and preserves user anonymity. The security is proven in the oracle model. Our experimental analysis shows that the proposed protocol has low computational cost.

DAC-PRE: Practical Anonymous Data Access Scheme Control with Proxy Re-encryption for Implantable Medical Devices (Jayaprakash Kar, Xiaoguang Liu, Fagen Li) ia.cr/2026/477

08.03.2026 02:19 👍 0 🔁 0 💬 0 📌 0
Abstract. Garbling schemes are powerful primitives that enable secure computation between a mutually untrusting garbler and evaluator. A projective garbling scheme is one that encodes the evaluator’s input in a simple bit-by-bit manner. Projective schemes, such as the seminal scheme of Yao, are versatile, as they are naturally compatible with other simple tools, such as 1-out-of-2 oblivious transfer (OT). There exist garbling schemes that naturally operate over large finite fields, some of which require only efficient information-theoretic (IT) techniques. However, here the evaluator’s input is encoded via an affine function over a large field, so these schemes are not naturally projective, reducing their versatility.

We provide a transformation that efficiently projectivizes such schemes. Consider an arithmetic garbling scheme where the evaluator’s input consists of elements from a large prime field. Our symmetric-key-based garbling techniques give a mechanism to translate from Yao-style garbled labels to IT-style garbled labels at cost proportional to the input and output labels: crossing the border is duty-free!

We apply our technique to two problems. (1) Recent works show that projective garbling schemes solve a problem central to trust-minimized bridges for the Bitcoin blockchain. BABE (Garg et al., 2026) and Argo MAC (Eagen and Lai, 2026) give two different approaches. Both works implicitly construct an efficient IT garbling scheme, then use naive bit-decomposition to achieve projectivity. We construct drop-in replacements for both; we improve BABE’s encoding size by 45×, and Argo MAC’s by 20×. (2) Our technique implies a non-interactive reduction from vector oblivious linear evaluations (VOLEs) over 𝔽_p to 1-out-of-2 OTs. To our knowledge, ours is the state-of-the-art Minicrypt (plus base OTs) protocol for large field VOLE secure against a malicious receiver. It costs only O((λ + n) lg p) bits.
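The contrast between projective and affine (IT-style) input encodings can be made concrete (a minimal sketch; the names, toy modulus, and 16-byte label length are all illustrative assumptions):

```python
# Contrast between a projective (bit-by-bit) input encoding and an
# IT-style affine label over a large field.  Minimal sketch; names,
# the toy modulus, and the label length are illustrative.
import secrets

def garble_projective_input(nbits):
    """Projective encoding: two independent random labels per input
    bit.  The evaluator can fetch labels[i][b_i] via 1-out-of-2 OT."""
    return [(secrets.token_bytes(16), secrets.token_bytes(16))
            for _ in range(nbits)]

def encode(labels, bits):
    # What the evaluator holds after OT for input `bits`.
    return [pair[b] for pair, b in zip(labels, bits)]

P = 2**61 - 1  # toy prime field

def affine_encode(x, a, b):
    """IT-style arithmetic label: an affine function of the whole
    field element x.  Not projective, so plain 1-out-of-2 OT does not
    apply directly -- the gap the paper's transformation bridges."""
    return (a * x + b) % P
```

In the projective case each of the evaluator's bits selects one of two precomputed labels; in the affine case the label depends on the entire field element at once, which is what "projectivizing" has to undo.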

Duty-Free Bits: Projectivizing Garbling Schemes (Nakul Khambhati, Anwesh Bhattacharya, David Heath) ia.cr/2026/476

08.03.2026 02:04 👍 0 🔁 0 💬 0 📌 0
Abstract. We present a new framework for secure computation of arithmetic circuits with two-thirds honest majority that lifts semi-honest protocols to full malicious security. Our framework works with any linear secret sharing over any finite ring and maintains the type of security of the underlying semi-honest protocol (i.e., computational or information-theoretic). The framework has the following overhead complexity with respect to size |C| of the computed circuit C: it incurs only logarithmic communication overhead in |C| over the cost of the semi-honest protocol, the number of additional rounds is independent of the circuit’s size, and the computational work per party is O(|C|) arithmetic operations.

Even when limiting the scope to static adversaries, previous works could only achieve two of these three measures: Either the communication is logarithmic and the computational overhead per party is O(|C|), but the number of additional rounds grows with the circuit’s size, or communication is logarithmic and the number of rounds is O(n), but the computational overhead is O(n ⋅ |C|). To the best of our knowledge, we are the first to achieve the desired complexity in all three fronts, making the cost of achieving full security in MPC lower than ever.

Our result is achieved via a new verification technique based on a robust recursive search that finds and removes cheaters from the computation.

We further improve our result by reducing costs associated with the number of parties n. While the initial result incurs cubic communication overhead with respect to n and O(n) additional rounds in the worst case, we show how to reduce it to √(n⁵) communication overhead and O(√(n log log n)) additional rounds, without sacrificing the computational overhead, which remains O(|C|). This is achieved via a novel technique we call gap amplification that accelerates the player elimination process, enabling us to reduce the number of calls to the verification subprotocol. This technique is of independent interest as it is general and can be directly applied to any protocol that relies on player elimination.
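The recursive search can be illustrated, under strong simplifying assumptions (exactly one fault, and a black-box verification predicate standing in for the verification subprotocol), as a bisection over the computation transcript; `locate_fault` and `verify` are hypothetical stand-ins, not the paper's protocol:

```python
# Bisection stand-in for a robust recursive search: isolate a single
# faulty segment of the transcript using a black-box predicate
# `verify` that reports whether a slice contains no fault.
# Hypothetical sketch under strong simplifying assumptions.

def locate_fault(segments, verify):
    """Isolate the faulty segment with O(log |C|) verify calls."""
    lo, hi = 0, len(segments)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if verify(segments[lo:mid]):
            lo = mid  # left half is clean: the fault is on the right
        else:
            hi = mid  # the fault is on the left
    return lo
```

The logarithmic number of `verify` calls is the toy analogue of the logarithmic communication overhead: verification is invoked on shrinking slices rather than once per gate.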

Scaling Fully Secure MPC via Robust Recursive Search and Gap Amplification (Matan Hamilis, Ariel Nof) ia.cr/2026/475

08.03.2026 00:19
Abstract. We present a privacy protocol implemented on Starknet that enables confidential transactions while maintaining regulatory compliance. Transfers hide the sender, receiver, and amount from external observers, with validity enforced by zero-knowledge proofs generated on the client side using the Stwo STARK prover.

The protocol introduces three key innovations: (1) an efficient note discovery mechanism, (2) a practical compliance framework that enables an auditing entity to selectively unshield transactions upon legitimate regulatory request, and (3) anonymous integration with existing Starknet DeFi contracts. The system supports multiple token types in a single pool and leverages Starknet’s native account abstraction for transaction authorization. All proof logic and contract code are written in Cairo, providing a unified codebase that simplifies auditing and development.

Scalable Compliant Privacy on Starknet (Lior Goldberg, Maya Dotan, Ittay Dror, Gideon Kaempfer, Nir Levi, Noa Oved, Arad Reder, Anat Veredgorn, Noa Wolfgor) ia.cr/2026/474

08.03.2026 00:19
Abstract. Recent work at Eurocrypt 2025 by Basso and Maino introduced POKÉ, an isogeny-based public key encryption (PKE) scheme. POKÉ shows how two parties can derive a shared secret on a higher-dimensional, SIDH-like commutative diagram via basis evaluations, giving the fastest isogeny-based PKE to date with performance comparable to the original SIDH.

In this paper we present PIKE, a new isogeny-based PKE obtained by tweaking the POKÉ design. Our key change is to use pairings to derive the shared secret while preserving post-quantum security. This brings two benefits: (i) decryption is directly faster, and (ii) by relaxing the required prime form, we can choose smaller primes, further improving overall runtime.

We provide a proof-of-concept implementation in SageMath. Under the NIST I setting, our benchmarks show speedups of 1.30× (key generation), 1.24× (encryption), and 1.47× (decryption) over POKÉ, while maintaining competitive public key and ciphertext sizes. In addition, we provide a C implementation. The encryption and decryption take 53 Mcycles (23 ms) and 34 Mcycles (15 ms) on an Intel i7 2.3 GHz CPU, respectively.
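
As a quick consistency check, the reported wall-clock times follow from the cycle counts and the stated 2.3 GHz clock via time = cycles / frequency (illustrative arithmetic only, not taken from the paper's artifact):

```python
# Check that the reported cycle counts and wall-clock times are
# consistent at the stated clock rate: time = cycles / frequency.
CPU_HZ = 2.3e9  # Intel i7 at 2.3 GHz, as stated in the abstract

def ms_from_mcycles(mcycles: float) -> float:
    """Convert megacycles to milliseconds at CPU_HZ."""
    return mcycles * 1e6 / CPU_HZ * 1e3

enc_ms = ms_from_mcycles(53)  # encryption: 53 Mcycles
dec_ms = ms_from_mcycles(34)  # decryption: 34 Mcycles
print(f"encryption ≈ {enc_ms:.0f} ms, decryption ≈ {dec_ms:.0f} ms")
```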

PIKE: Faster Isogeny-Based Public Key Encryption with Pairing-Assisted Decryption (Shiping Cai, Mingjie Chen, Yi-Fu Lai, Kaizhan Lin) ia.cr/2026/473

08.03.2026 00:18
Abstract. ML-DSA (formerly CRYSTALS-Dilithium), the primary NIST post-quantum signature standard, relies on rejection sampling to ensure that released signatures are statistically independent of the secret key. Recent work by Liu et al. and Damm et al. showed that this protection breaks down as soon as an attacker obtains even a single bit of the generated masking randomness per signature, enabling key recovery via linear regression over the resulting noisy linear system. However, this regression approach requires the attacker to collect a large number of such leaky signatures—up to 2.4 million so-called informative relations for ML-DSA-65—thereby limiting the attack’s practical applicability.

We dramatically reduce this cost by reformulating the key recovery as a constraint-satisfaction problem solvable by local optimization. Our approach rests on two contributions. First, we construct a verification routine that checks candidate subkeys using only the collected leakage relations, i.e. without knowledge of the remaining secret-key components or relying on computation-intensive reductions. Second, building on theoretical insights from this verification method, we design a multi-tier hill-climbing algorithm that iteratively refines candidates by minimizing a scoring function.

In the exact leakage setting, our attack recovers ML-DSA subkeys from as few as 5 000 to 35 000 informative relations across all parameter sets and leakage bit indices for which the attack is applicable, constituting a reduction by a factor of 37–68× over the previous state of the art. We further extend the attack to a noisy leakage model, where the leaked bit is flipped independently with error probability p. We demonstrate experimentally that key recovery remains feasible even at noise rates as high as 45%, again with substantially less leakage information than prior work.
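
The hill-climbing component can be illustrated with a generic local-search skeleton (a hypothetical sketch on a toy target; the paper's actual scoring function, tier structure, and restart policy are not reproduced here):

```python
def hill_climb(score, candidate, neighbors, max_steps=10_000):
    """Greedy local search: repeatedly move to the best-scoring neighbor.

    `score` maps a candidate to a non-negative penalty (0 means the
    candidate is consistent with every collected relation); `neighbors`
    yields small local modifications, e.g. single-coefficient tweaks.
    """
    best, best_score = candidate, score(candidate)
    for _ in range(max_steps):
        if best_score == 0:
            break  # all relations satisfied
        nxt = min(neighbors(best), key=score)
        nxt_score = score(nxt)
        if nxt_score >= best_score:
            # Stuck in a local minimum; a multi-tier attack would
            # restart or escalate to a coarser tier here.
            break
        best, best_score = nxt, nxt_score
    return best, best_score

# Toy demo: recover a small integer vector by minimizing an L1 penalty.
secret = [1, -1, 2, 0]
score = lambda c: sum(abs(a - b) for a, b in zip(c, secret))

def neighbors(c):
    for i in range(len(c)):
        for d in (-1, 1):
            yield c[:i] + [c[i] + d] + c[i + 1:]

found, s = hill_climb(score, [0, 0, 0, 0], neighbors)
print(found, s)  # converges to the secret with penalty 0
```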

Descent into Broken Trust: Uncovering ML-DSA Subkeys with Scarce Leakage and Local Optimization (Carsten Schubert, Niklas Julius Müller, Jean-Pierre Seifert, Marian Margraf) ia.cr/2026/472

08.03.2026 00:18
Abstract. Lookup arguments are a key technique in SNARKs for reducing the cost of operations which are “arithmetization-unfriendly”, such as range checks and bitwise comparisons. The idea is to encode the valid outputs of the operation in a publicly known table, and then prove that every element of the SNARK witness belongs to the table. Existing constructions for lookup arguments, however, are designed for working over fields, making them incompatible with recent post-quantum lattice-based schemes that operate over rings.

In this work we formalize lookup arguments over rings for tables containing arbitrary ring elements. We bring attention to systematic issues that arise when translating techniques from fields to rings by showing several known lookup arguments are susceptible to attacks. We then extend two central polynomial IOPs, Plookup and LogUp, over the ring ℛ = ℤ_q[X]/(X^d + 1), and show how to compile them with polynomial commitments based on lattice assumptions to get succinct lattice-based lookup arguments. We additionally show how to apply ring lookups to obtain succinct arguments for batch-verification of RAM updates where the RAM entries are arbitrary ring elements.
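
The LogUp-style identity underlying one of the extended IOPs can be sketched over a prime field (an illustration with toy parameters only; the paper's contribution is making such arguments sound over rings such as ℤ_q[X]/(X^d + 1)):

```python
import random

P = 2**61 - 1  # a Mersenne prime, serving as a toy field modulus

def inv(x: int) -> int:
    """Modular inverse via Fermat's little theorem."""
    return pow(x, P - 2, P)

def logup_check(witness, table, challenge):
    """LogUp identity at a challenge c:
    sum_i 1/(c - w_i) == sum_t m_t/(c - t), where m_t counts how often
    table entry t occurs in the witness. Equality holds when every
    witness value lies in the table; otherwise it fails with high
    probability over a random challenge.
    """
    lhs = sum(inv(challenge - w) for w in witness) % P
    rhs = sum(witness.count(t) * inv(challenge - t) for t in set(table)) % P
    return lhs == rhs

table = [0, 1, 2, 3]          # public table, e.g. a 2-bit range check
witness = [1, 1, 3, 0, 2, 1]  # every witness value lies in the table
c = random.randrange(4, P)    # random challenge above all table values
print(logup_check(witness, table, c))  # True
```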

Lookup Arguments over Rings and Applications to Batch-Verification of RAM Programs (Jonathan Bootle, Julia Guskind, Sikhar Patranabis, Katerina Sotiraki) ia.cr/2026/471

08.03.2026 00:18
Abstract. Byzantine Agreement and Broadcast are traditionally studied in one of two extremes: the authenticated setting, where a public key infrastructure (PKI) enables universally verifiable signatures and yields higher fault tolerance, and the unauthenticated setting, where no PKI is available and resilience necessarily drops. Motivated by Proof-of-Stake blockchains, where only a stable subset of participants (e.g., validators) have registered long-term keys while others do not, we initiate a systematic study of consensus in the partially authenticated setting, where a subset of parties are registered in a PKI and the remaining parties are unregistered.

We provide a nearly complete feasibility characterization of the resilience as a function of the number s of registered parties among n total parties. First, when the sender is not registered, we show that Byzantine Agreement and Broadcast are possible if and only if t ≤ max {⌈s/2⌉, ⌈n/3⌉} − 1, matching a simple protocol and an impossibility bound. Second, when the sender is registered, we give a deterministic synchronous broadcast protocol tolerating up to t ≤ s + ⌈(n − s)/3⌉ − 1 Byzantine faults (equivalently, 3t < n + 2s); while we present the binary case in the main body for clarity, our techniques extend to an efficient multivalued protocol. We complement this with a matching lower bound in a strengthened leakage model in which the adversary learns each party’s private state at the end of every round, ruling out both deterministic protocols and randomized protocols that rely only on short-lived secrets and the basic signing/verification interface.
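
The two resilience bounds can be evaluated directly from the formulas in the abstract (a small sketch; the function names are ours):

```python
import math

def max_faults_unregistered_sender(n: int, s: int) -> int:
    """Bound when the sender is NOT registered: t ≤ max{⌈s/2⌉, ⌈n/3⌉} − 1."""
    return max(math.ceil(s / 2), math.ceil(n / 3)) - 1

def max_faults_registered_sender(n: int, s: int) -> int:
    """Bound when the sender IS registered: t ≤ s + ⌈(n − s)/3⌉ − 1,
    equivalently 3t < n + 2s."""
    return s + math.ceil((n - s) / 3) - 1

# Example: n = 10 parties of which s = 4 are registered in the PKI.
n, s = 10, 4
t_unreg = max_faults_unregistered_sender(n, s)  # max(2, 4) - 1 = 3
t_reg = max_faults_registered_sender(n, s)      # 4 + 2 - 1 = 5
print(t_unreg, t_reg)
assert 3 * t_reg < n + 2 * s  # consistent with the 3t < n + 2s form
```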

Byzantine Consensus in the Partially Authenticated Setting (Christoph Lenzen, Julian Loss, Kecheng Shi, Benedikt Wagner) ia.cr/2026/470

08.03.2026 00:18
Abstract. In this note, we demonstrate that the construction of asynchronous complete secret sharing (ACSS) proposed in the recent work by Qin et al., accepted at Eurocrypt 2026, is insecure. In particular, we identify several issues affecting the liveness of their protocol and present attacks that compromise its correctness.

A Note on “Linear-Communication ACSS with Guaranteed Termination and Lower Amortized Bound” (Xiaoyu Ji, Junru Li, Yifan Song) ia.cr/2026/469

08.03.2026 00:18
Abstract. In the NIST post-quantum standardization process, the Fujisaki-Okamoto-like (FO-like) transformation has become the de facto paradigm for constructing IND-CCA secure key encapsulation mechanisms (KEMs) from public-key encryption (PKE). However, most post-quantum PKE schemes exhibit decryption error, which poses significant challenges for the security proofs of FO-like PKE-to-KEM transformations, particularly in the quantum-accessible random oracle model (QROM). Hofheinz, Hövelmanns, and Kiltz (TCC 2017) gave the first QROM security proofs for PKE-to-KEM transformations under worst-case decryption error. To relax this requirement to the more designer-friendly notion of average-case decryption error, Duman et al. (PKC 2023) presented two transformations, FOAC₀ and FOAC, which are secure under average-case decryption error but introduce substantial loss in QROM reduction tightness (𝒪(q⁸) for FOAC₀ and 𝒪(q⁶) for FOAC) and the need for the γ-spread assumption on the underlying PKEs. Very recently, Ge et al. (ePrint 2025) removed the γ-spread assumption for FOAC₀ and improved the QROM reduction tightness to 𝒪(q⁴) for both FOAC₀ and FOAC.

In this work, we make further advances by introducing two refined variants: FOAC′₀ and FOAC′. We provide new security analyses in both the ROM and the QROM, and present the following key contributions: (1) Compared with previous transformations under average-case decryption error, FOAC′₀ and FOAC′ exhibit tighter security proofs with QROM reduction loss of only 𝒪(q²) for FOAC′₀ and 𝒪(q³) for FOAC′ when the underlying PKE is OW‑CPA secure, and just 𝒪(q) when it is deterministic or IND‑CPA secure; (2) Both FOAC′₀ and FOAC′ eliminate the γ-spread assumption entirely, further relaxing the requirements on the underlying PKE.

To support our QROM proofs, we provide three new QROM proof techniques that build on Zhandry’s compressed oracle technique (CRYPTO 2019). These techniques may be of independent interest and could have broader applicability in post-quantum cryptography.

Tighter Proofs for PKE-to-KEM Transformations under Average-Case Decryption Error and without γ-Spread (Jinrong Chen, Rongmao Chen, Yi Wang, Haodong Jiang, Cong Peng, Xinyi Huang, Debiao He, Xiaofeng Chen) ia.cr/2026/468

08.03.2026 00:18
Abstract. Zero-knowledge codes, introduced by Decatur, Goldreich, and Ron (ePrint 1997), are error-correcting codes in which few codeword symbols reveal no information about the encoded message, and have been extensively used in cryptographic constructions. Quantum CSS codes, introduced by Calderbank and Shor (Phys. Rev. A 1996) and Steane (Royal Society A 1996), are error-correcting codes that allow for quantum error correction, and are also useful for applications in quantum complexity theory. In this short note, we show that (linear, perfect) zero-knowledge codes and quantum CSS codes are equivalent. We demonstrate the potential of this equivalence by using it to obtain explicit asymptotically-good zero-knowledge locally-testable codes.

A Note on the Equivalence Between Zero-knowledge and Quantum CSS Codes (Noga Ron-Zewi, Mor Weiss) ia.cr/2026/467

08.03.2026 00:18
Abstract. The algebraic group model (AGM), formalized by Fuchsbauer, Kiltz, and Loss (Crypto 2018), has recently garnered significant attention. Notably, Katz, Zhang, and Zhou (Asiacrypt 2022) challenged a widely held belief: that hardness results proven in the AGM imply corresponding results in the generic group model (GGM). They showed that this implication fails under Shoup’s GGM framework. In response, Jaeger and Mohan (Crypto 2024) proposed an alternative interpretation based on Maurer’s GGM and proved that, under this interpretation, the implication indeed holds.

Many cryptographic applications analyzed in the AGM also rely on the random oracle model (ROM), which is largely absent from Jaeger and Mohan’s framework. Because Maurer’s GGM and the ROM are inherently incomparable, Jaeger and Mohan’s framework may not capture all AGM-based proofs. To bridge this gap and faithfully translate all known AGM-based proofs into the GGM setting, we make the following contributions:

-   Limitations of JM’s framework: Jaeger and Mohan’s framework captures only those primitives that can be instantiated within Maurer’s GGM. We identify a primitive—specifically, a public-key encryption scheme with short ciphertexts—whose security has been analyzed in the AGM, and establish a black-box separation between this primitive and Maurer’s GGM, even when combined with the ROM. This result provides concrete evidence that JM’s framework cannot encompass all known AGM-based proofs.

-   Augmented framework: We propose an augmented framework that integrates Maurer’s GGM with a carefully constructed ROM, enabling inputs and outputs to incorporate group elements defined within Maurer’s model.

-   Transferring all known AGM-based proofs to GGM setting: Building on our augmented framework, we prove the lifting lemma from the AGM to our model, demonstrating that hardness results in the AGM+ROM directly carry over to our setting, thereby justifying all known AGM-based proofs.

Hashing in Generic Groups: Completing the AGM-to-GGM Transfer (Taiyu Wang, Cong Zhang, Hong-Sheng Zhou, Xin Wang, Keyu Ji, Zhihong Jia, Li Lin, Changzheng Wei, Ying Yan, Kui Ren, Chun Chen) ia.cr/2026/466

08.03.2026 00:18
Abstract. We show how to translate some of the more advanced techniques used in LWE-based cryptography to the setting of lattice-isomorphism-based cryptography, which was recently introduced by Ducas and van Woerden [Eurocrypt, 2022]. In particular, we show constructions of two powerful cryptographic primitives, identity-based encryption (IBE) and fully homomorphic encryption (FHE). We then prove their security under the assumption that a suitable version of the Lattice Isomorphism Problem is hard.

Our constructions use quite general and modular techniques, and we expect them to have other applications. Specifically, we show that the now-ubiquitous techniques introduced by Gentry, Peikert, and Vaikuntanathan [STOC, 2008] to build IBE and Gentry, Sahai, and Waters [Crypto, 2013] to build FHE can be made to work in our setting using any sufficiently “nice” lattice.

Advanced cryptography from lattice isomorphism—new constructions of IBE and FHE (Huck Bennett, Zhengnan Lai, Noah Stephens-Davidowitz) ia.cr/2026/465

08.03.2026 00:17
Abstract. Model extraction attacks aim to recover the internal parameters of neural networks through black-box queries. While significant progress has been achieved for fully connected ReLU networks, far less is known about structured architectures such as Convolutional Neural Networks (CNNs), which are widely used in practice. In particular, convolutional layers introduce locality and weight sharing, while max-pooling operations leak only relative activation information, both of which require rethinking and extending existing extraction techniques.

In this work, we study the extraction of CNNs combining ReLU activations and max-pooling layers in the soft-label setting. We first demonstrate that max-pooling can be understood as a natural extension of the ReLU non-linear operation and classify the different types of critical points observed in this setting. We show that max-pooling critical points capture the difference between two convolutional neurons and that the local structure of convolution allows us to determine the shift between them and reconstruct the underlying convolutional kernel. We also introduce optimizations that take advantage of the specific structure of CNNs: by using receptive-field analysis, we design efficient methods to filter noise and localize critical points. These improvements significantly reduce the computational cost compared to a naive reduction to a large sparse fully connected network.

Finally, we validate our approach experimentally on a compact VGG-style convolutional neural network trained on CIFAR-10. The results show that our methods permit accurate weight extraction in CNNs in practice, while correctly localizing critical points in convolutional layers.

Model Extraction of Convolutional Neural Networks with Max-Pooling (Haolin Liu, Adrien Siproudhis, Christina Boura, Thomas Peyrin) ia.cr/2026/464

08.03.2026 00:17
Abstract. Individual genomic data is a uniquely sensitive type of user data. While many papers have considered using Multi-Party Computation (MPC) or Fully Homomorphic Encryption (FHE) to allow collaborators to study combined genomic datasets they cannot share, few have considered verifying the results of genomic computations, either in research studies or in the emerging area of personalized genetic therapies.

In this paper, we initiate the first systematic study of zero-knowledge proofs for verifiable genomics, providing both building blocks for verifying common operations in computational genomics, such as sequence alignment, and exploring two end-to-end applications:

Verifiable Genome-Wide Association Studies: A Genome-Wide Association Study (GWAS) operates over a repository of genomic data, identifying statistical correlations between genetic variations and observed traits or medical conditions. Our system enables third parties to verify that research was honestly computed over an authenticated, untampered database, ensuring both the integrity of the underlying data set and the correctness of the resulting science. We achieve practical performance (<20 minutes proving time) for studies of sizes equal to those in the existing genomics literature.

Verifiable CRISPR Eligibility: We propose using zk-SNARKs in the context of gene engineering (e.g. CRISPR). To our knowledge, this is a new use case for zk-SNARKs. We implement and optimize models for detecting “on-target” and “off-target” sites for a CRISPR probe in zk-SNARKs, so users can, for example, demonstrate eligibility for a therapy or trial without having to reveal their own DNA sequence.

In support of these applications, we develop new building blocks, like zero-knowledge proofs of sequence alignment that are 30x faster than the prior state of the art, and storage-efficient indexes for Merkle trees for large-scale genomic data that asymptotically reduce storage costs.
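
For intuition on the Merkle-tree building block, here is a minimal (non-optimized) Python sketch of committing to a leaf set and checking an authentication path; the paper's storage-efficient indexes go well beyond this, and the short "genome chunk" leaves are invented for illustration.

```python
import hashlib

H = lambda b: hashlib.sha256(b).digest()

def build(leaves):
    """Return the tree as levels, bottom-up; odd levels duplicate the last node."""
    level = [H(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, idx):
    """Authentication path: one sibling hash per level."""
    path = []
    for level in levels[:-1]:
        sib = idx ^ 1
        if sib >= len(level):
            sib = idx                      # duplicated-last-node padding
        path.append((sib & 1, level[sib])) # flag: sibling sits on the right
        idx //= 2
    return path

def verify(root, leaf, path):
    h = H(leaf)
    for right_is_sib, sib in path:
        h = H(h + sib) if right_is_sib else H(sib + h)
    return h == root

leaves = [b"ACGT", b"TTGA", b"GGCA", b"ACCA", b"TACG"]
levels = build(leaves)
root = levels[-1][0]
proof = prove(levels, 3)
ok = verify(root, leaves[3], proof)
```

A prover who stores only the root can be convinced a leaf belongs to the committed set with log-many hashes, which is the primitive the paper's indexes then make storage-efficient at genomic scale.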

Icefish: Practical zk-SNARKs for Verifiable Genomics (Alexander Frolov, Maurice Shih, Rob Patro, Ian Miers) ia.cr/2026/463

08.03.2026 00:16 👍 1 🔁 0 💬 0 📌 1
Abstract. This survey article provides an overview of the Semigroup Action Problem (SAP) as a pivotal generalization of the Discrete Logarithm Problem (DLP), tracing its theoretical evolution from foundational algebraic cryptography in the early 2000s to its application in the National Institute of Standards and Technology (NIST) Post-Quantum Cryptography (PQC) standardization process. We examine the mathematical framework of semigroup actions, contrasting them with classical group-theoretic assumptions, and detail the generalizations of Diffie-Hellman and ElGamal protocols within this broader context. Finally, the paper investigates the renaissance of group and semigroup actions in the design of next-generation digital signatures, providing a detailed algebraic analysis of candidates in the current NIST competition.
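
The Diffie-Hellman generalization surveyed here fits in a few lines: any commutative (semi)group action with s · (t · x) = t · (s · x) yields the same two-message key exchange, and classical DH is recovered by letting integers act on a cyclic group by exponentiation. A minimal Python sketch with toy, illustrative-only parameters:

```python
# Key exchange from a commutative (semi)group action s . x.
# Classical Diffie-Hellman is the special case in which the multiplicative
# semigroup of integers acts on a cyclic group by exponentiation, as below.
# Toy parameters; not a secure instantiation.
p = 2**61 - 1                 # a Mersenne prime
g = 3                         # base point of the action

def act(s, x):                # the action s . x = x^s mod p
    return pow(x, s, p)

alice_secret, bob_secret = 123456789, 987654321
alice_pub = act(alice_secret, g)       # Alice publishes s . g
bob_pub   = act(bob_secret, g)         # Bob publishes t . g
k_alice   = act(alice_secret, bob_pub) # s . (t . g)
k_bob     = act(bob_secret, alice_pub) # t . (s . g)
```

The survey's point is that the action need not come from a group: any semigroup action whose inversion problem (the SAP) is hard supports this template.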

Semigroup Action Problems and Their Uses in Post-Quantum Cryptography (Joachim Rosenthal, Silvia Sconza) ia.cr/2026/462

08.03.2026 00:01 👍 0 🔁 0 💬 0 📌 0
Abstract. Hamming Quasi-Cyclic (HQC) is a leading code-based key-encapsulation mechanism (KEM), recently selected by NIST for standardization, whose bandwidth and efficiency are balanced with the concrete cost of information-set decoding (ISD) attacks. However, the current balance relies on two restrictions: (1) the decryption failure rate (DFR) is directly configured to be less than 2^(−λ) (where λ is the security parameter), rather than carefully determined by choosing conservative parameters to resist known attacks, as the Kyber team did in the design of NIST FIPS 203; and (2) the error distribution in the underlying quasi-cyclic syndrome decoding problem is restricted to be balanced.

In this paper, we show how to quantitatively and conservatively evaluate the impact of removing the aforementioned two restrictions on the complexities of known attacks, and thus find a new balance among bandwidth, efficiency, and security for HQC. In detail, we first formalize the best-known decryption-failure attack against HQC, and derive an upper bound on the probability that an adversary triggers a decryption-failure event under realistic query and time limits, enabling an attack-aware upper bound on the secure DFR. Second, we quantify how the weight distribution of (r₁, r₂, e) (the random low-weight polynomials used in encryption) affects the concrete cost of ISD attacks and the DFR. This yields a weight strategy that strictly lowers the DFR without sacrificing the targeted bit security, leading to a new variant, UHQC. By combining these analyses, we provide optimized parameters for UHQC. Across all NIST security levels, UHQC reduces bandwidth by 10-12% and improves runtime by 6-8%.

Compact HQC with new (un)balance (Chaofeng Guan, Lan Luo, Haodong Jiang, Jianhua Hou, Tong Yu, Hong Wang, Kangquan Li, Longjiang Qu) ia.cr/2026/461

08.03.2026 00:01 👍 0 🔁 0 💬 0 📌 0
Abstract. Large-size Number Theoretic Transforms (NTTs) are key operations in modern Zero-Knowledge Proofs (ZKPs), where the NTT size often reaches millions of points and the arithmetic is over wide prime fields. To handle such NTTs on hardware, prior designs commonly rely on the decomposition algorithm, which makes large-size NTTs feasible by streaming sub-NTTs through limited on-chip buffers. However, in practical implementations, decomposition alone is insufficient to ensure high efficiency. Since coefficients and twiddle factors remain off-chip, performance tends to depend on data movement across the memory hierarchy and delivery to the processing elements (PEs). To address these remaining issues, we present RENTT, a resource-efficient accelerator for large-size NTTs with decomposition-oriented data movement schemes and memory hierarchy design.

First, we design a precomputation-based twiddle factor management scheme that feeds multiple PEs without conflicts, both reducing on-chip twiddle factor storage and avoiding on-the-fly twiddle factor generation. Second, we develop a burst-optimized transpose method that fuses coefficient reordering into the element-wise twiddle-multiplication pass, reducing the latency of off-chip accesses for the subsequent NTTs. Third, we design a decoupled, multi-banked on-chip memory hierarchy that sustains high PE utilization under off-chip streaming, while remaining configurable across PE counts and maximum supported NTT sizes. We implement RENTT on an FPGA and experimentally verify that RENTT supports NTT sizes up to N = 2²⁸ over 256-bit fields and completes a 2²⁸-point NTT in 1.52 seconds with 16 processing elements. Compared with the state-of-the-art FPGA baseline SAM, RENTT provides a 2.64× speedup (1.52s vs. 4.02s), reduces DSP usage by 46.4% (3629 vs. 6776 DSPs), and achieves a 3.58× lower area-time product.
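
For context on the kernel being accelerated, here is a textbook iterative radix-2 NTT over a toy field (p = 257, with primitive root 3), used to compute a cyclic convolution; RENTT's setting differs in scale (up to 2²⁸ points over 256-bit fields), which is exactly why off-chip data movement rather than arithmetic dominates.

```python
# Toy Cooley-Tukey NTT over F_257 (3 is a primitive root mod 257).
p = 257

def ntt(a, invert=False):
    n = len(a)                      # n must be a power of two dividing p - 1
    a = a[:]
    j = 0                           # bit-reversal permutation
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2                      # butterfly passes
    while length <= n:
        w_len = pow(3, (p - 1) // length, p)
        if invert:
            w_len = pow(w_len, p - 2, p)
        for start in range(0, n, length):
            w = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * w % p
                a[k] = (u + v) % p
                a[k + length // 2] = (u - v) % p
                w = w * w_len % p
        length <<= 1
    if invert:
        n_inv = pow(n, p - 2, p)
        a = [x * n_inv % p for x in a]
    return a

# Cyclic convolution = pointwise product in the NTT domain.
x = [1, 2, 3, 4, 0, 0, 0, 0]
y = [5, 6, 7, 0, 0, 0, 0, 0]
z = ntt([u * v % p for u, v in zip(ntt(x), ntt(y))], invert=True)
```

The twiddle factors are the powers of w_len; managing their storage and delivery to many parallel butterflies is the problem RENTT's precomputation scheme targets.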

A Resource-Efficient Hardware Accelerator for Large-Size NTT via Algorithm–Architecture Co-Design (Kaixuan Wang, Yifan Yanggong, Xiaoyu Yang, Chenti Baixiao, Lei Wang) ia.cr/2026/460

08.03.2026 00:01 👍 0 🔁 0 💬 0 📌 0
Abstract. In this work, we propose novel security notions for encryption schemes that simulate an adversary in the black-box model equipped with additional side-channel power. More concretely, the adversary is allowed to probe values of the secret-key-sensitive algorithms, i.e., key generation and decryption. We then prove a generalization of the well-known Naor-Yung (NY) transform, generically lifting IND-CPA secure encryption schemes to IND-CCA ones in this new probing context. Moreover, we instantiate the resulting framework from lattices, constructing Rutile, a masking-friendly IND-CPA encryption scheme inspired by Kyber, and then Topaz, its IND-CCA secure extension. In our proposal, the masking-unfriendly parts of Kyber, namely the central binomial distributions and the FO-transform, are replaced by masking-friendly counterparts (sum of uniforms and the aforementioned NY-transform).

Naor-Yung Transform for IND-CCA Probing Security with Lattice Instantiations (Katharina Boudgoust, Laurent Imbert, Loïc Masure, Laz Panard) ia.cr/2026/459

08.03.2026 00:01 👍 0 🔁 0 💬 0 📌 0
Abstract. A useful linearization technique (or a trick) was introduced in the Marlin and Plonk SNARKs, which significantly reduces the number of KZG polynomial commitment openings a SNARK prover has to send. Subsequently, many other KZG-based protocols have taken advantage of it.

We revisit and formalize this technique:
– We define a Linearization Polynomial Commitment Scheme (LPCS) that abstracts their linearization technique.
– We formalize LinKZG, an LPCS version of the KZG commitment scheme, and show that it achieves a weak form of extractability under a target-group version of the ARSDH assumption. We show that Plonk is secure under the same assumption in the ROM.
– We show how to construct LPCSs from any homomorphic polynomial commitment scheme, thus enabling the linearization technique for those polynomial commitment schemes as well, and potentially improving the efficiency of many other SNARKs.

The Art of Linearization: From a KZG’s Trick to a General Commitment Framework (Janno Siim) ia.cr/2026/458

08.03.2026 00:01 👍 0 🔁 0 💬 0 📌 2
Abstract. Distributed key generation (DKG) protocols enable a set of parties to distributively generate a threshold-shared key pair (pk, sk), such that at least t parties must participate to reconstruct the secret. We introduce the first DKG protocols for discrete-logarithm based keys that are both universally composable and adaptively secure without erasure, inconsistent players, interactive assumptions, or oracle-aided simulation.

Our contributions are as follows: (1) an adaptively secure and universally composable DKG that achieves guaranteed output delivery in three rounds assuming an honest majority, (2) an adaptively secure and universally composable committed DKG that realizes our novel committed DKG functionality, tolerates a full corruption threshold, and achieves identifiable abort in two rounds, (3) an adaptively secure and universally composable committed DKG that achieves guaranteed output in three rounds assuming an honest majority, (4) as an application, an incredibly simple threshold Schnorr protocol in the committed-DKG-hybrid model, implying an adaptively secure and universally composable threshold Schnorr protocol tolerating a full corruption threshold with identifiable abort in three rounds, and an adaptively secure and universally composable threshold Schnorr protocol for an honest majority with guaranteed output in four rounds.

Most importantly, our DKG constructions are secure in the random oracle model under the DDH assumption. Our output guarantees are proven under the assumption of synchrony. All existing synchronous DKG protocols for discrete-logarithm based keys satisfy weaker security notions or require stronger assumptions.
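
For readers new to the object these protocols produce: the end state of a DKG is a t-of-n Shamir sharing of sk, with pk = g^sk public and sk never held by anyone. The sketch below shows only the sharing and reconstruction arithmetic, with an invented, insecure centralized dealer; the whole point of the paper's protocols is to reach the same state without one.

```python
import random

# Shamir t-of-n sharing over a toy prime field (parameters invented for
# illustration; a real DKG works over the order of the group of pk = g^sk
# and, crucially, has no dealer who ever sees sk).
p = 2**127 - 1
t, n = 3, 5

random.seed(42)
sk = random.randrange(1, p)
coeffs = [sk] + [random.randrange(p) for _ in range(t - 1)]

def f(x):                       # dealer's degree-(t-1) polynomial, f(0) = sk
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

shares = {i: f(i) for i in range(1, n + 1)}

def reconstruct(subset):
    """Lagrange interpolation at 0 from any t of the n share indices."""
    acc = 0
    for i in subset:
        num = den = 1
        for j in subset:
            if j != i:
                num = num * (-j) % p
                den = den * (i - j) % p
        acc = (acc + shares[i] * num * pow(den, p - 2, p)) % p
    return acc
```

Any t shares recover f(0) = sk, while fewer than t reveal nothing information-theoretically; the committed-DKG variants additionally publish commitments binding each party to its share.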

Adaptively Secure, Universally Composable Distributed Generation of Discrete-Logarithm Based Keys (Hanna Ek, Kelsey Melissaris, Lawrence Roy) ia.cr/2026/457

08.03.2026 00:01 👍 0 🔁 0 💬 0 📌 0
Abstract. We propose Libra, a compiler framework that automates efficient code generation for cross-scheme fully homomorphic encryption (FHE) on highly parallel computing architectures. While it is known that leveraging multiple FHE schemes in a single application can improve overall efficiency, the exact mapping of cross-scheme FHE operators onto high-performance architectures, such as general-purpose graphics processing units (GPGPUs), remains challenging. To address this challenge, Libra integrates both the FHE computational patterns and hardware-aware scheduling strategies to establish an algorithm-hardware co-optimization framework. Specifically, Libra defines a novel cross-scheme representation for FHE that abstracts common program patterns for each of the FHE schemes. Then, we dynamically optimize the output FHE program based on the combined execution costs of FHE primitives derived from multiple scheme-switching patterns. Next, to accelerate inter-operator execution on GPUs, Libra introduces a computational scheduling strategy that bridges high-level computation characteristics with low-level execution plans. Through the proposed pattern-scheduling co-optimization process, Libra generates efficient code for cross-scheme high-precision FHE computations on GPGPUs. Experimental results show that Libra achieves up to 270× speedup on microbenchmarks and 19× on applications compared to the state-of-the-art cross-scheme framework, while improving compute-unit and memory-bandwidth utilization by 44% and 36.1%, respectively.

Libra: Pattern-Scheduling Co-Optimization for Cross-Scheme FHE Code Generation over GPGPU (Song Bian, Yintai Sun, Zian Zhao, Haowen Pan, Mingzhe Zhang, Zhenyu Guan) ia.cr/2026/456

08.03.2026 00:00 👍 0 🔁 0 💬 0 📌 0
Abstract. Most prior works on secure Multi-Party Computation (MPC) in asynchronous networks study Guaranteed Output Delivery (GOD), meaning that all parties learn the function output. Asynchronous MPC protocols with GOD necessarily tolerate only t < n/3 corruptions, and they necessarily allow the adversary to exclude the inputs of up to t honest parties from the computation, a phenomenon referred to as input loss. Seeking improvements to threshold/input loss, we consider weakening GOD to security with abort, a standard notion studied in the context of synchronous networks. Unfortunately we show that, when these standard notions are applied in asynchrony, it is not possible to improve the corruption threshold or the input loss.

We therefore study relaxations of these standard notions under which protocols can be improved. In particular, we propose a relaxation of the standard notion of correctness for asynchronous MPC protocols. Our relaxed notion requires only that parties obtain the correct output when all parties are honest and when at most a threshold A of the parties are asynchronous. We present several impossibility and feasibility results that completely characterize what is possible in the context of our relaxed correctness. For instance, it is possible to achieve selective abort even when t < n parties are corrupt if (and only if) A < (n-t)/2, but it is impossible to achieve unanimous abort unless t < n/3, even when A=0. We additionally propose a new notion of identifiable abort for asynchronous networks (aIA), and we show that we can achieve fair MPC with aIA and min(A,t) input loss.

Asynchronous MPC with Abort (Ananya Appan, David Heath, Ling Ren) ia.cr/2026/455

07.03.2026 08:28 👍 2 🔁 0 💬 0 📌 0