AI x Blockchain: our 2026 research agenda

Most "AI + crypto" projects are tokens with AI branding. But there are real intersections where blockchain infrastructure solves genuine AI problems — and where AI transforms what's possible on-chain. Here's what we're researching.

[Figure: AI x Blockchain intersection map. Where AI and blockchain genuinely solve each other's problems.]

Cutting through the noise

"AI x Blockchain" has become one of the most hyped narratives in crypto. Most of it is noise: tokens that fund GPU clusters, chatbots with wallets, or "decentralised AI" projects that are neither decentralised nor doing meaningful AI work.

But underneath the hype, there are real technical intersections where blockchain solves problems that centralised AI infrastructure can't, and where AI enables capabilities that blockchain systems currently lack. That's what we're focused on.

Research area 1: Verifiable inference

The core problem: when an AI model produces an output, how do you know which model produced it, whether it ran correctly, and whether the input was tampered with?

In centralised systems, you trust the provider (OpenAI, Google, etc.). In decentralised systems, trust must be cryptographic. This is where ZK proofs meet machine learning:

  • ZK proofs of inference: Projects like Modulus Labs and Giza are building systems that generate cryptographic proofs that a specific model produced a specific output. This enables trustless AI on-chain.
  • Optimistic verification: Rather than proving every inference, assume correctness and challenge disputed outputs. Similar to optimistic rollups but for AI computation.
  • Trusted execution environments (TEEs): Hardware-based isolation (Intel SGX, ARM TrustZone) for AI workloads. Weaker guarantees than ZK but much cheaper and faster.
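To make the optimistic pattern concrete, here is a minimal Python sketch (all names hypothetical, not any specific project's API): a prover posts hash commitments to the model weights, input, and claimed output, and a challenger disputes by re-running the model and checking the commitments.

```python
import hashlib
from dataclasses import dataclass


def h(data: bytes) -> str:
    """SHA-256 hex digest, standing in for an on-chain commitment."""
    return hashlib.sha256(data).hexdigest()


@dataclass(frozen=True)
class InferenceAttestation:
    """Hypothetical record an optimistic scheme might post on-chain:
    commitments to the model weights, the input, and the claimed output."""
    model_hash: str
    input_hash: str
    output_hash: str


def attest(model_weights: bytes, prompt: bytes, output: bytes) -> InferenceAttestation:
    """Prover side: commit to everything needed to reproduce the inference."""
    return InferenceAttestation(h(model_weights), h(prompt), h(output))


def challenge(att: InferenceAttestation, model_weights: bytes, prompt: bytes,
              recomputed_output: bytes) -> bool:
    """Challenger side: re-run the model off-chain and compare commitments.
    Returns True if the attestation is fraudulent (the dispute succeeds)."""
    return (att.model_hash != h(model_weights)
            or att.input_hash != h(prompt)
            or att.output_hash != h(recomputed_output))
```

In a real deployment the dispute would be resolved by an on-chain verification game with bonded stakes; the sketch only shows the commitment structure that makes such a game possible.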

Our focus: We're benchmarking the cost/latency/security trade-offs between these approaches. The question isn't "which is best" but "which is appropriate for which use case?"

Research area 2: Autonomous on-chain agents

AI agents that can hold assets, execute transactions, and interact with smart contracts autonomously. This is one of a16z's top predictions for 2026, and we agree it's significant.

The open questions:

  • Agent wallets: How should an AI agent manage private keys? Multi-sig with human oversight? Account abstraction with spending limits? Hardware security modules?
  • Intent-based transactions: Agents expressing "what" they want rather than "how" to execute it. Solver networks that fulfil agent intents optimally.
  • Agent-to-agent protocols: Standardised interfaces for AI agents to negotiate, transact, and collaborate on-chain. Think of it as APIs but for autonomous economic actors.
  • Liability and control: When an AI agent makes a bad trade or interacts with a malicious contract, who is responsible? How do you build safety rails that are enforceable on-chain?
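The "spending limits" idea under agent wallets can be sketched simply. The following is a hypothetical account-abstraction-style policy check (all names are ours, not a real wallet API): per-transaction cap, rolling daily cap, and a contract allowlist, evaluated before the agent's transaction is signed.

```python
from dataclasses import dataclass


@dataclass
class SpendingPolicy:
    """Hypothetical safety rails for an agent wallet: a per-transaction cap,
    a daily cap, and an allowlist of contract addresses the agent may call."""
    per_tx_limit: int            # value cap per transaction, in smallest unit
    daily_limit: int             # total value cap per day
    allowed_contracts: set[str]  # addresses the agent is permitted to call
    spent_today: int = 0         # running total, reset daily by the wallet

    def authorise(self, to: str, value: int) -> bool:
        """Return True only if the proposed call passes every rail;
        on success, record the spend against the daily budget."""
        if to not in self.allowed_contracts:
            return False
        if value > self.per_tx_limit:
            return False
        if self.spent_today + value > self.daily_limit:
            return False
        self.spent_today += value
        return True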

Research area 3: Decentralised compute markets

AI training and inference require enormous compute resources. Centralised cloud providers (AWS, GCP, Azure) dominate this market. Decentralised alternatives are emerging:

  • Render Network: Decentralised GPU marketplace with utility-driven token economics. Real demand for compute, not speculative token activity.
  • Bittensor: Decentralised network for AI model training and inference. Miners contribute compute, validators verify quality.
  • Ritual: Infrastructure for integrating AI models directly into smart contracts. Making AI a native primitive of blockchain applications.

The hard problem: Decentralised compute is currently more expensive and less reliable than centralised alternatives. The value proposition must come from something other than cost: censorship resistance, data sovereignty, or incentive alignment that centralised providers can't offer.

Research area 4: AI-enhanced protocol security

This is where our security research background meets AI capabilities:

  • Automated vulnerability detection: Using LLMs and formal methods together to find smart contract vulnerabilities. Current tools (Slither, Mythril) use static analysis. Adding ML-based pattern recognition could catch higher-level logic bugs.
  • MEV detection and prevention: ML models that detect MEV extraction patterns in real-time and automatically route transactions through protected channels.
  • Anomaly detection: Real-time monitoring of on-chain activity to detect exploits, governance attacks, and unusual fund movements before they cause damage.
  • Automated incident response: AI systems that can pause protocols, alert stakeholders, and even execute emergency governance actions when an exploit is detected.
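As a deliberately simple stand-in for the ML-based monitors described above, here is a z-score anomaly flagger over a window of transfer values (function name and threshold are our own assumptions; production systems would use richer features and learned models):

```python
from statistics import mean, stdev


def flag_anomalies(transfer_values: list[float], threshold: float = 3.0) -> list[int]:
    """Flag indices whose value deviates more than `threshold` sample
    standard deviations from the window mean. A toy baseline: real
    monitors would combine many on-chain features, not raw value alone."""
    if len(transfer_values) < 2:
        return []
    mu, sigma = mean(transfer_values), stdev(transfer_values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(transfer_values)
            if abs(v - mu) / sigma > threshold]
```

Note that in short windows a single large outlier inflates the standard deviation, capping achievable z-scores, so the threshold must be tuned to the window length.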

Research area 5: Data provenance and model transparency

As AI systems become more powerful and more consequential, proving the provenance of training data and the transparency of model behaviour becomes critical. Blockchain is uniquely suited to this:

  • Immutable records of training data sources and licensing.
  • On-chain model registries with version history and audit trails.
  • Cryptographic attestations of model behaviour and safety evaluations.
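One standard building block for immutable training-data records is a Merkle commitment: hash each record, then publish a single root on-chain. A minimal sketch (odd levels duplicate the last node; real schemes vary in padding and domain separation):

```python
import hashlib


def _h(b: bytes) -> bytes:
    """SHA-256 digest used for both leaves and internal nodes."""
    return hashlib.sha256(b).digest()


def merkle_root(records: list[bytes]) -> str:
    """Merkle root over training-data records. Publishing this one hash
    on-chain commits immutably to the full dataset listing; any record
    can later be proven included with a logarithmic-size proof."""
    if not records:
        raise ValueError("empty dataset")
    level = [_h(r) for r in records]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd-sized levels
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0].hex()
```

Changing any single record (or its licensing metadata, if hashed into the leaf) changes the root, which is what makes the on-chain registry tamper-evident.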

What we're shipping

Our AI x Blockchain research will produce:

  1. Benchmark reports — Comparing verifiable inference approaches (ZK, optimistic, TEE) on cost, latency, and security.
  2. Open-source tooling — Reference implementations for AI agent wallets, intent-based transaction routing, and automated security monitoring.
  3. Research notes — Regular analysis of new projects, protocols, and developments at the AI x blockchain intersection.
  4. Consulting engagements — Working directly with teams building at this intersection.

Researching AI x Blockchain?

We're looking for research partners, grant collaborators, and teams building at this intersection. Whether you're a protocol team, a foundation, or an independent researcher — let's talk.