Incomplete Musings on Applied Cryptography in 2025
This short summary condenses all of the formal and informal discussions I had with teams and colleagues over the second half of 2024 - thanks to all of them!
It sometimes touches on areas I am not very familiar with, so I might have made mistakes. It is also a bit opinionated and far from exhaustive: for instance, I will not be covering all of the current GKR work and will not go too much into zkVMs. The order of this note goes from what I feel are low-hanging-fruit areas to the least practical/feasible research space (warning: Kuiper belt maths at the end).
zk-S(N/T)ARKs
One area I'm looking forward to is client-side zk proving, where the state of the art remains somewhat underwhelming. Still, teams at privacy-focused projects such as Aztec or zk-email are pushing to advance it. I think folding schemes, with their constant RAM usage per folding step, are an interesting direction, since STARKs may be less suitable here - mainly due to FFTs. In that vein, Nebula showed that folding approaches make it possible to build space-efficient zkEVMs able to run on low-end devices.
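To make the constant-memory point concrete, here is a toy Python sketch - not Nova itself, which handles nonlinear R1CS via relaxed instances and cross terms - folding a stream of instances of a linear relation A·w = b into one running instance via random linear combinations, so the prover only ever keeps one instance in memory:

```python
# Toy folding: compress a stream of satisfying instances of a *linear*
# relation A·w = b into a single running instance via random linear
# combinations. Real folding schemes (e.g. Nova) do much more; this only
# conveys why memory stays constant per folding step.
import random

P = 2**61 - 1                           # toy prime field modulus
N = 4                                   # toy witness length
A = [[random.randrange(P) for _ in range(N)] for _ in range(N)]

def matvec(A, w):
    return [sum(a * x for a, x in zip(row, w)) % P for row in A]

def fold(acc, new, r):
    """Random linear combination of two (witness, target) pairs."""
    (w1, b1), (w2, b2) = acc, new
    w = [(x + r * y) % P for x, y in zip(w1, w2)]
    b = [(x + r * y) % P for x, y in zip(b1, b2)]
    return (w, b)

# Fold 100 instances while only ever storing the accumulator `acc`.
w0 = [random.randrange(P) for _ in range(N)]
acc = (w0, matvec(A, w0))
for _ in range(100):
    w = [random.randrange(P) for _ in range(N)]
    acc = fold(acc, (w, matvec(A, w)), random.randrange(P))

w, b = acc
print(matvec(A, w) == b)   # True: the folded instance still satisfies A·w = b
```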
On a more researchy topic, sumcheck has been (re)gaining attention. Interestingly, this surge in popularity holds regardless of the type of proving scheme, be it elliptic-curve or hash based. This isn't too surprising given "sumcheck's unreasonable power". For instance, in hash-based land, recent research on polynomial commitment schemes shows that we can achieve convincing improvements in the number of verifier queries (translating into an efficient EVM verifier) when leveraging sumcheck combined with a particular kind of Reed-Solomon (RS) codes - i.e. WHIR's "constrained RS codes".
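For readers who have not met the protocol, here is a minimal sumcheck sketch in Python, assuming the multilinear polynomial is handed over as its evaluation table on the boolean hypercube; in a real system the final evaluation would come from a polynomial commitment opening, not from the prover's table:

```python
# Toy sumcheck over F_p for a multilinear polynomial given by its
# evaluations on {0,1}^n. Each round the prover sends the (linear)
# univariate g_i via g_i(0) and g_i(1); the verifier checks consistency
# and responds with a random challenge.
import random

P = 2**61 - 1  # toy field modulus

def fold(table, r):
    """Fix the first variable to r, halving the evaluation table."""
    half = len(table) // 2
    return [(table[i] + r * (table[half + i] - table[i])) % P for i in range(half)]

def sumcheck(table):
    claim = sum(table) % P
    while len(table) > 1:
        half = len(table) // 2
        g0, g1 = sum(table[:half]) % P, sum(table[half:]) % P
        assert (g0 + g1) % P == claim        # verifier's round check
        r = random.randrange(P)              # verifier's challenge
        claim = (g0 + r * (g1 - g0)) % P     # reduced claim g(r)
        table = fold(table, r)
    # Final check: in practice this value comes from a commitment opening.
    return table[0] == claim

evals = [random.randrange(P) for _ in range(8)]  # 3 variables
print(sumcheck(evals))                           # True
```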
Interesting research areas:
- Crossovers between STARKs and sumcheck-based provers.
- Client-side zk using folding schemes.
Post Quantum Cryptography (PQC)
For obvious reasons, PQC is very relevant to Ethereum's consensus, execution and data availability layers. Let's adapt Mosca's theorem to understand the timeline we face: the time consensus must remain secure (X) plus the time it will take to upgrade Ethereum (Y) should not exceed the time until a quantum computer powerful enough to break it appears (Z), i.e. X + Y ≤ Z.
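In code, with loudly made-up numbers (every value below is hypothetical):

```python
# Mosca's inequality with purely hypothetical numbers - plug in your own.
X = 10  # years today's consensus secrets/signatures must remain secure
Y = 5   # years a coordinated Ethereum migration might take
Z = 12  # years until a cryptographically relevant quantum computer (a guess)

if X + Y > Z:
    print("Start migrating now: what we protect today outlives the scheme.")
```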
While Google recently announced some (unverifiable1) progress on a quantum supremacy experiment, the PQC standards have now been decided by NIST, so it is a good time to explore them more seriously.
Still, we also need to keep in mind that (1) recent announcements are... well, announcements, and (2) not all of PQC is relevant to Ethereum right now. Take PQ proof systems: in the zk-rollup case, quantum computers able to forge a Groth16 proof for a batch of accounts between two blocks should still be quite far off - assuming the rollup TVL would even be large enough to attract such an amount of compute.
This does not mean we should not work on such EVM-verifiable PQ proof systems, as it will take time to bring them to production-level readiness. In fact, this research is crucial, and we already have some pretty good, productionized versions of them2. Rather, it means that PQ proof systems might not really be where today's hypothesized urgency lies, compared to things like efficient PQ signature aggregation schemes.
Interesting research areas:
- PQ signature aggregation techniques - relates to ETH consensus layer.
- PQ signature schemes - relates to ETH execution layer.
- PQ proof systems - how small can we make the PQ overhead? (i.e. the overhead induced by using PQ-secure primitives)
MPC
Most projects today use a delegated security model - encrypting all secret states with a single master public key - which requires trust assumptions. These assumptions are less relevant in 2PC but become critical in large-scale MPC apps. Some progress has been made toward reducing them, with concepts like accountability (using traitor tracing in threshold decryption) and alternatives like laconic function evaluation (LFE).
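To make that trust assumption concrete, here is a toy additive sharing of a master decryption key in Python - not a real threshold decryption scheme, just an illustration of why full collusion of the committee is the failure mode these accountability ideas target:

```python
# Toy additive secret sharing of a master key among an n-party committee.
# No strict subset learns anything, but a full collusion recovers the key -
# the trust assumption that accountability mechanisms try to mitigate.
import random

P = 2**61 - 1

def share(secret, n):
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

master_sk = random.randrange(P)
committee = share(master_sk, n=5)

print(sum(committee[:4]) % P == master_sk)  # False (w.h.p.): 4 of 5 learn nothing
print(sum(committee) % P == master_sk)      # True: full collusion recovers the key
```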
Another barrier to MPC adoption is UX. Current protocols require significant hardware resources and have strict liveness requirements, which limits usability. That means we shouldn't expect low-end phones to run demanding MPC protocols anytime soon.
There is also an interesting crossover between MPC and zk, which has driven some nice devex innovations. To avoid stack duplication, teams like TACEO have been working on lowering entry costs for developers by reusing existing work (e.g. using circom and Noir for coSNARKs).
Interesting research areas:
- MPC protocols with baked-in accountability mechanisms.
- Evaluating new cryptographic primitives like LFE for trust reduction.
- Improving accessibility for low-end devices and relaxing liveness requirements.
FHE
FHE's UX initially appears more promising than MPC's, given its low hardware and liveness demands and the ability to delegate computation to untrusted servers. However, challenges remain: while we're now at the third generation of bootstrapping procedures, noise growth remains a persistent issue, and the performance of these schemes hasn't advanced significantly since 2016.
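To see the noise issue in action, here is a toy LWE-style scheme in Python with deliberately tiny, insecure parameters: homomorphic addition adds the ciphertexts' noise terms too, and once the accumulated noise passes half the plaintext scale, decryption silently returns the wrong value (bootstrapping exists precisely to reset this noise):

```python
# Toy LWE-style symmetric encryption illustrating noise growth under
# homomorphic addition. Parameters are tiny and insecure on purpose.
import random

Q, N, DELTA = 2**16, 16, 2**8            # modulus, dimension, plaintext scale
s = [random.randrange(Q) for _ in range(N)]

def enc(m, noise=20):
    a = [random.randrange(Q) for _ in range(N)]
    e = random.randrange(noise)           # fresh ciphertexts carry small noise
    b = (sum(x * y for x, y in zip(a, s)) + DELTA * m + e) % Q
    return (a, b)

def add(c1, c2):                          # homomorphic addition: noises add too
    return ([(x + y) % Q for x, y in zip(c1[0], c2[0])], (c1[1] + c2[1]) % Q)

def dec(c):
    a, b = c
    phase = (b - sum(x * y for x, y in zip(a, s))) % Q
    return round(phase / DELTA) % (Q // DELTA)

c, k = enc(1), 1
while dec(c) == k:                        # keep adding Enc(1) until the noise wins
    c, k = add(c, enc(1)), k + 1
print(f"decryption broke after {k} additions")
```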

In that respect, a breakthrough in noiseless FHE could be transformative. However, it's unclear whether this direction is even feasible - it might require novel algebraic structures beyond lattices. Another interesting direction is hardware: accelerating NTTs could offer notable performance gains, and here we should be able to pool the progress already being made on accelerating SNARKs.
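For reference, the kernel that such acceleration targets is small - essentially a radix-2 butterfly loop over a prime field. A minimal NTT (here over the common NTT-friendly prime 998244353 with primitive root 3) looks like this:

```python
# Minimal iterative radix-2 NTT over F_p, p = 998244353 (primitive root 3).
# The inner butterfly loop is the hot spot hardware acceleration goes after.
P, G = 998244353, 3

def ntt(a, invert=False):
    a, n = a[:], len(a)                   # n must be a power of two
    j = 0                                 # bit-reversal permutation
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:
        w_len = pow(G, (P - 1) // length, P)
        if invert:
            w_len = pow(w_len, P - 2, P)  # inverse root of unity
        for start in range(0, n, length):
            w = 1
            for i in range(start, start + length // 2):
                u, v = a[i], a[i + length // 2] * w % P
                a[i], a[i + length // 2] = (u + v) % P, (u - v) % P
                w = w * w_len % P
        length <<= 1
    if invert:
        inv_n = pow(n, P - 2, P)
        a = [x * inv_n % P for x in a]
    return a

xs = [5, 1, 4, 1, 5, 9, 2, 6]
print(ntt(ntt(xs), invert=True) == xs)    # True: inverse NTT recovers the input
```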
There is another interesting connection between FHE and SNARKs. In the lattice-based case, generating encrypted SNARKs should have a much lower overhead when those SNARKs use the same underlying primitive as the FHE protocol they run inside. This would help us build much more efficient versions of verifiable FHE than what we have today. Dan Boneh gave a pretty good overview of these points in episode 345 of the zk podcast.
However, the latest patent discussions3 make the field somewhat uncertain. Patents are known to slow down the adoption of cryptography, which notoriously thrives on openness and free software. I don't have details, though people I look up to seem concerned. I'm curious to see what happens on the FHE legal side of things this year.
Interesting research areas:
- Hardware acceleration for FHE.
- Efficient verifiable FHE implementations - how small can we make the overhead of a SNARK running inside FHE versus a native SNARK?
- Accelerating bootstrapping. What improvements can we expect from hardware acceleration?
- Is noiseless FHE possible?
iO/WE/FE
MPC often struggles with the need for extensive communication within a committee, and FHE typically requires the holder of a secret key to decrypt results. iO, functional encryption, and witness encryption mitigate, if not solve, those challenges. The field is pretty dynamic and might get a lot more interesting, with cool developments on the horizon.

Our understanding of iO has been advancing. Recently, concrete security definitions based on well-founded assumptions (including things like LWE) have been laid out. Building on that line of work, researchers have been looking to develop iO from recursive functional encryption. According to some, this path could in fact be completed today, at least for small functions, because the approach leverages things like MSMs and matrix multiplications, which are amenable to GPU parallelization.
Others have sought to reach iO starting from more relaxed security requirements, using exotic assumptions. This is a riskier bet, since similar previous attempts have been broken multiple times. In that vein, local mixing and reversible circuits have received some attention; in fact, there is an ongoing effort to gauge this approach's viability, with a 10k bounty to break it announced on November 14th at Devcon.
Interesting research areas:
- From my understanding, numerous potential research breakthroughs are in play, from benchmarking today's schemes' practicality (when possible) to developing foundational mathematical tools or breaking newly designed schemes.
Footnotes
1. There is a good explanation of why at point 6 of this Scott Aaronson writeup ↩
2. Shameless plug to our WHIR gas costs benchmarks ↩