Ethereum co-founder Vitalik Buterin and Ethereum Foundation AI lead Davide Crapis have published a research post proposing a zero-knowledge framework for anonymizing how people and agents interact with AI APIs. Their core claim is that privacy can be achieved without breaking payment guarantees for providers.
Instead of identity-based accounts, the model uses a stake-based ZK payment design that aims to prevent linkability across API calls while still ensuring settlement. The practical objective is to enable repeated API usage without creating an identity trail and without forcing per-call on-chain payments.
The internet made access identity-based.
Every API call tied to your email, card, or wallet.
AI will make it worse.
Our proposal:
Replace identity with stake. Deposit once.
Make thousands of API calls.
Stay unlinkable.
Abuse gets slashed. New post with @VitalikButerin 👇
— Davide Crapis (@DavideCrapis) February 11, 2026
How the ZK credit model works
The flow starts with a one-time deposit, such as USDC, into a smart contract that mints anonymized usage credits. A single deposit is intended to replace repeated on-chain payments and lower publishing costs and latency.
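A minimal sketch of that deposit step, assuming a hash-based commitment as a stand-in for a real hiding commitment scheme (the post does not specify a contract interface, so all names here are hypothetical):

```python
# Toy sketch of the deposit step: a one-time deposit mints a credit note
# recorded on-chain only as a hiding commitment, so the ledger never stores
# an identity. Illustrative only; not the authors' actual design.
import hashlib
import secrets

def commit(value: int, blinding: bytes) -> str:
    """Hash commitment standing in for a real hiding commitment scheme."""
    return hashlib.sha256(str(value).encode() + blinding).hexdigest()

class CreditContract:
    def __init__(self):
        self.notes = set()  # commitments to unspent credit balances

    def deposit(self, amount: int) -> tuple[str, bytes]:
        """Depositor locks `amount` (e.g. USDC) and keeps a secret note."""
        blinding = secrets.token_bytes(32)
        note = commit(amount, blinding)
        self.notes.add(note)   # only the commitment is published on-chain
        return note, blinding  # held client-side; never linked to API calls

contract = CreditContract()
note, blinding = contract.deposit(100)
assert note in contract.notes
```

The point of the commitment is that the chain records proof a deposit exists without recording who made it or tying it to later usage.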
Each API call is paired with a zero-knowledge proof attesting that the caller has enough credit to pay for it, without revealing who they are or allowing requests to be correlated. The proof acts as the payment-capacity check, protecting the caller's privacy while still giving the provider settlement assurance.
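One common way to get unlinkability with double-spend protection is a per-call nullifier derived from the client's secret. The toy model below stubs out the actual zk-SNARK with a boolean; the structure and names are assumptions, not the post's specification:

```python
# Toy model of the per-call check: each request carries (a) a fresh nullifier
# preventing the same credit from being spent twice and (b) a proof object,
# stubbed here as a boolean where a real system verifies a zk-SNARK.
import hashlib
import secrets

def nullifier(note_secret: bytes, call_index: int) -> str:
    # Derived from the client's secret, so two calls from the same deposit
    # look unrelated to anyone who does not know the secret.
    return hashlib.sha256(note_secret + call_index.to_bytes(8, "big")).hexdigest()

class Provider:
    def __init__(self):
        self.seen_nullifiers = set()

    def serve(self, proof_ok: bool, nf: str) -> bool:
        """Serve only if the (stubbed) proof verifies and nf is unused."""
        if not proof_ok or nf in self.seen_nullifiers:
            return False
        self.seen_nullifiers.add(nf)
        return True

secret = secrets.token_bytes(32)
provider = Provider()
# Two calls from the same deposit yield unrelated-looking nullifiers:
assert provider.serve(True, nullifier(secret, 0))
assert provider.serve(True, nullifier(secret, 1))
# Replaying a nullifier (a double-spend attempt) is rejected:
assert not provider.serve(True, nullifier(secret, 0))
```

The provider learns only that some valid deposit backs the call, never which one, which is the unlinkability property the proposal is after.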
To deter abuse, the framework leans on “stake-based identity,” where economic stake substitutes for traditional identifiers and makes misconduct expensive. The proposal treats economic stake as the control surface for access, rather than names, accounts, or persistent identity.
Enforcement mechanics are built into the deposit: double-spend claims allow claimants to seize deposits, and policy violations route stakes to a burn address, with events recorded immutably on-chain for auditability. The design deliberately makes punishment visible and enforceable while keeping ordinary usage unlinkable.
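The two enforcement paths the post describes can be sketched as a small registry with an append-only event log; the burn address, method names, and event shapes below are illustrative assumptions:

```python
# Toy enforcement sketch: a double-spend claim lets the claimant seize the
# offender's stake, while a policy violation routes it to a burn address,
# with every event appended to a log (immutable on-chain). Hypothetical names.
BURN_ADDRESS = "0x000000000000000000000000000000000000dEaD"

class StakeRegistry:
    def __init__(self):
        self.stakes = {}   # depositor commitment -> staked amount
        self.events = []   # append-only audit log

    def slash_double_spend(self, offender: str, claimant: str) -> int:
        """Proven double-spend: the claimant seizes the offender's deposit."""
        amount = self.stakes.pop(offender, 0)
        self.events.append(("double_spend_slash", offender, claimant, amount))
        return amount

    def burn_for_violation(self, offender: str) -> None:
        """Policy violation: the stake is routed to the burn address."""
        amount = self.stakes.pop(offender, 0)
        self.events.append(("policy_burn", offender, BURN_ADDRESS, amount))

reg = StakeRegistry()
reg.stakes["note_A"] = 50
reg.stakes["note_B"] = 75
assert reg.slash_double_spend("note_A", "claimant_1") == 50
reg.burn_for_violation("note_B")
assert reg.stakes == {} and len(reg.events) == 2
```

Making the seizure payout go to the claimant turns double-spend detection into a bounty, which is how the stake rather than an identity ends up policing behavior.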
Constraints, scaling, and what adoption could look like
Buterin and Crapis also draw a hard boundary around what ZK can realistically prove today when AI is involved. They explicitly acknowledge that proving full AI model execution inside ZK circuits remains impractical given current prover costs.
The limiting factor is the sheer size of modern models: translating their computation into verifiable circuit steps drives extreme demands on compute, memory, and time. In the near term, they argue, ZK methods are better suited to verifying outputs or discrete components than the full internals of large neural networks.
They frame the approach as useful but initially niche, with early deployments expected to operate at modest scale compared with today’s dominant on-chain activity. The post’s implied market posture is that this is a privacy capability for specific AI use cases, not an immediate rewrite of Ethereum’s primary throughput or fee dynamics.
Operationally, the trade-off shifts workload toward proving: proof generation introduces prover load and latency, while verification remains comparatively simple and scalable for the service side. The authors emphasize that real-world integration will depend on engineering progress in proof compression, batching, and prover performance to hit practical throughput and cost targets.
If implemented as described, the framework would position Ethereum as a privacy-preserving economic layer for agentic AI, where autonomous agents can pay for and consume services without leaving identity trails. The adoption path ultimately hinges on prover efficiency improvements and ecosystem integration by AI providers and marketplaces, which will determine whether this remains niche or becomes broadly useful.
