## Why tokenize AI models
Fine-tuned AI models are valuable intellectual property, but they lack clear ownership, provenance, and monetization rails. By representing a model as an ERC-721 NFT, you get:
- Verifiable ownership -- the NFT holder owns the rights to the model. Ownership transfers are on-chain and auditable.
- Provenance chain -- every fine-tune, merge, or distillation step is recorded as an on-chain event. Anyone can verify the model's lineage back to its base model.
- Per-inference monetization -- using x402, the model owner can charge per-inference. Each API call to the model triggers an HTTP 402 payment, settled on-chain.
- Tradability -- model NFTs can be listed on any ERC-721-compatible marketplace. A model with proven inference revenue has quantifiable value.
The erc721-ai project defines a standard metadata schema and Solidity contract for tokenized AI models. This tutorial walks through deploying the contract, minting a model NFT, recording provenance, and setting up x402 per-inference metering.
## Prerequisites

Before you start, you will need:

- Foundry installed (`curl -L https://foundry.paradigm.xyz | bash && foundryup`)
- A Base Sepolia testnet wallet with test ETH
- A trained model -- any ONNX, PyTorch, or safetensors checkpoint. We will use a small example model.
- IPFS access -- we use the `ipfs` CLI or Pinata to pin the model weights and metadata.
## The ERC-721 AI metadata schema

The erc721-ai standard extends the base ERC-721 metadata JSON with AI-specific fields. Here is the full schema:
```json
{
  "name": "sentiment-base-v2",
  "description": "Fine-tuned DistilBERT for crypto sentiment analysis. 92% accuracy on CryptoSentiment-10K dataset.",
  "image": "ipfs://QmXyz.../thumbnail.png",

  // Standard ERC-721 fields above, AI-specific fields below
  "ai_model": {
    "architecture": "distilbert-base-uncased",
    "framework": "pytorch",
    "format": "safetensors",
    "parameters": 66000000,
    "weights_cid": "ipfs://QmAbc...def",
    "weights_sha256": "a1b2c3d4e5f6...",
    "input_schema": {
      "type": "text",
      "max_length": 512
    },
    "output_schema": {
      "type": "classification",
      "labels": ["bullish", "bearish", "neutral"]
    }
  },
  "provenance": {
    "base_model": "huggingface:distilbert-base-uncased",
    "training_dataset": "ipfs://QmDataset...",
    "training_config": {
      "epochs": 5,
      "learning_rate": 2e-5,
      "batch_size": 32
    },
    "parent_token_id": null,
    "lineage": []
  },
  "inference": {
    "endpoint": "https://models.example.com/sentiment-v2/infer",
    "pricing": {
      "protocol": "x402",
      "per_request_usd": 0.001,
      "token": "USDC",
      "chain": "base"
    }
  }
}
```

Key fields:
| Field | Purpose |
|---|---|
| `ai_model.weights_cid` | IPFS CID pointing to the actual model weights. Anyone can download and verify. |
| `ai_model.weights_sha256` | SHA-256 hash of the weights file. Proves the IPFS content matches what was tokenized. |
| `provenance.parent_token_id` | If this model is a fine-tune of another tokenized model, this points to the parent NFT. |
| `inference.pricing` | x402 pricing configuration. The inference endpoint will return HTTP 402 with these terms. |
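The `weights_cid` and `weights_sha256` pair is what makes ownership verifiable: anyone can download the pinned weights and check them against the tokenized hash. A minimal sketch of that check (the `verify_weights` helper is illustrative, not part of the erc721-ai tooling):

```python
import hashlib
import json

def verify_weights(metadata_path: str, weights_path: str) -> bool:
    """Check that a downloaded weights file matches ai_model.weights_sha256."""
    with open(metadata_path) as f:
        expected = json.load(f)["ai_model"]["weights_sha256"]
    h = hashlib.sha256()
    # Hash in 1 MiB chunks so large checkpoints do not need to fit in memory
    with open(weights_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected
```

A mismatch means the IPFS content was swapped or corrupted after minting, since the hash stored on-chain cannot be changed.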
## Deploy the contract

Clone the erc721-ai repo and deploy the contract to Base Sepolia:

```bash
git clone https://github.com/kcolbchain/erc721-ai.git
cd erc721-ai

# Install Foundry dependencies
forge install

# Deploy to Base Sepolia
forge script script/Deploy.s.sol:DeployScript \
  --rpc-url https://sepolia.base.org \
  --private-key $PRIVATE_KEY \
  --broadcast \
  --verify
```
The deploy script creates two contracts:

- `AIModelNFT.sol` -- the ERC-721 contract with AI-specific extensions (provenance events, metadata validation).
- `InferenceMeter.sol` -- an optional companion contract that logs inference counts per token, enabling on-chain usage analytics.
Save the deployed addresses -- you will need them in the next steps.
```
== Logs ==
AIModelNFT deployed to: 0x1234...abcd
InferenceMeter deployed to: 0x5678...efgh
```
## Mint your model as an NFT
First, upload your model weights and metadata to IPFS. Then call the mint function.
```bash
# Upload model weights
ipfs add model-weights.safetensors
# => added QmWeights... model-weights.safetensors

# Compute SHA-256 for verification
sha256sum model-weights.safetensors
# => a1b2c3d4... model-weights.safetensors

# Update metadata.json with the CID and hash, then upload it
ipfs add metadata.json
# => added QmMeta... metadata.json
```
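The "update metadata.json" step above can be scripted so the CID and hash never drift out of sync with the weights file. A sketch, assuming the metadata file already follows the schema shown earlier (the `finalize_metadata` helper is hypothetical, not part of the erc721-ai repo):

```python
import hashlib
import json

def finalize_metadata(metadata_path: str, weights_path: str, weights_cid: str) -> None:
    """Fill in ai_model.weights_cid and ai_model.weights_sha256 before pinning."""
    h = hashlib.sha256()
    with open(weights_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    with open(metadata_path) as f:
        meta = json.load(f)
    meta["ai_model"]["weights_cid"] = f"ipfs://{weights_cid}"
    meta["ai_model"]["weights_sha256"] = h.hexdigest()
    with open(metadata_path, "w") as f:
        json.dump(meta, f, indent=2)
```

Run it with the CID that `ipfs add` printed, then pin the updated metadata.json.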
Now mint the NFT using a Foundry script or cast:
```bash
# Mint using cast (Foundry CLI)
cast send $AI_MODEL_NFT_ADDRESS \
  "mintModel(address,string,bytes32)" \
  $YOUR_ADDRESS \
  "ipfs://QmMeta..." \
  0xa1b2c3d4e5f6... \
  --rpc-url https://sepolia.base.org \
  --private-key $PRIVATE_KEY
```

The `mintModel` function takes three arguments:

- `to` -- the address that will own the model NFT (typically the trainer/creator).
- `tokenURI` -- IPFS URI pointing to the metadata JSON.
- `weightsHash` -- SHA-256 hash of the model weights, stored on-chain for verification.
The contract emits a `ModelMinted(tokenId, weightsHash, tokenURI)` event. You can verify the mint on the Base Sepolia explorer.
## Record provenance (fine-tune lineage)
When you fine-tune a tokenized model to create a new version, the provenance chain connects them. This is critical for IP tracking -- if Model B is a fine-tune of Model A, the on-chain record proves it.
```bash
# Mint the fine-tuned model, linking to the parent.
# The final argument (1) is the parent token ID -- the model you fine-tuned from.
cast send $AI_MODEL_NFT_ADDRESS \
  "mintDerived(address,string,bytes32,uint256)" \
  $YOUR_ADDRESS \
  "ipfs://QmMetaV3..." \
  0xdeadbeef... \
  1 \
  --rpc-url https://sepolia.base.org \
  --private-key $PRIVATE_KEY
```
The contract stores the parent-child relationship on-chain. You can query the full lineage:
```bash
# Get the parent of token 2
cast call $AI_MODEL_NFT_ADDRESS "parentOf(uint256)" 2 \
  --rpc-url https://sepolia.base.org
# => 1

# Get all children of token 1
cast call $AI_MODEL_NFT_ADDRESS "childrenOf(uint256)" 1 \
  --rpc-url https://sepolia.base.org
# => [2]

# Get the full lineage (recursive parents back to the root)
cast call $AI_MODEL_NFT_ADDRESS "lineageOf(uint256)" 2 \
  --rpc-url https://sepolia.base.org
# => [1] (token 1 is the root model)
```
This enables a model marketplace where buyers can verify exactly what base model, dataset, and training config produced the model they are purchasing. The on-chain lineage is tamper-proof.
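Off-chain indexers and marketplace frontends can mirror the lineage logic locally. A sketch, using a plain dict of `child -> parent` token IDs as a stand-in for the contract's `parentOf` storage:

```python
def lineage_of(token_id: int, parent_of: dict) -> list:
    """Walk parent links back to the root, mirroring the contract's lineageOf."""
    lineage = []
    parent = parent_of.get(token_id)
    while parent is not None:
        lineage.append(parent)
        parent = parent_of.get(parent)
    return lineage

# Example lineage: token 3 is a fine-tune of token 2, which derives from root token 1
parents = {2: 1, 3: 2}
```

Here `lineage_of(3, parents)` walks 3 -> 2 -> 1, matching what `lineageOf(3)` would return on-chain.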
## x402 per-inference metering
The model owner can monetize inference by deploying an API server that responds with HTTP 402 for unauthenticated requests. Here is a minimal FastAPI server that serves inference and charges per-request via x402.
```python
import json

from fastapi import FastAPI, Request, Response
from transformers import pipeline

app = FastAPI()

# Load the model (from local weights or IPFS)
classifier = pipeline("sentiment-analysis", model="./model-weights")

# x402 pricing: $0.001 USDC per inference
PAYMENT_REQUIREMENTS = {
    "scheme": "exact",
    "network": "base-sepolia",
    "maxAmountRequired": {
        "token": "0x036CbD53842c5426634e7929541eC2318f3dCF7e",
        "amount": "1000",  # 0.001 USDC (6 decimals)
        "decimals": 6,
    },
    "payeeAddress": "0x...model_owner...",
}

@app.post("/infer")
async def infer(request: Request):
    # Check for an x402 payment header
    payment = request.headers.get("X-Payment")
    if not payment:
        # No payment -- return 402 with the payment requirements
        return Response(
            status_code=402,
            content="Payment Required",
            headers={"X-Payment": json.dumps(PAYMENT_REQUIREMENTS)},
        )

    # Verify the payment signature via the facilitator
    # (in production, call the x402 facilitator contract to verify)
    payment_data = json.loads(payment)
    if not verify_payment(payment_data):
        return Response(status_code=402, content="Invalid payment")

    # Payment verified -- run inference
    body = await request.json()
    result = classifier(body["text"])
    return {
        "text": body["text"],
        "prediction": result[0]["label"],
        "confidence": result[0]["score"],
        "model_token_id": 1,  # the on-chain NFT token ID
    }

def verify_payment(payment_data: dict) -> bool:
    """Verify the x402 payment signature against the facilitator contract.

    In production, use the x402 facilitator SDK:
        from x402 import verify_payment_header
        return verify_payment_header(payment_data, FACILITATOR_ADDRESS)
    """
    # Simplified for the tutorial -- always accept on testnet
    return "signature" in payment_data
```
Revenue flow: Each inference request pays the model owner's wallet directly. An agent using our x402 agent payments tutorial would automatically handle these 402 responses. The model owner earns per-inference revenue without running any payment infrastructure -- x402 and the facilitator contract handle settlement.
The optional InferenceMeter contract can log each paid inference on-chain, building a transparent usage record that adds value to the model NFT:
```bash
# Check total inferences for token 1
cast call $INFERENCE_METER_ADDRESS "totalInferences(uint256)" 1 \
  --rpc-url https://sepolia.base.org
# => 1547

# Check revenue earned (in USDC atomic units)
cast call $INFERENCE_METER_ADDRESS "totalRevenue(uint256)" 1 \
  --rpc-url https://sepolia.base.org
# => 1547000 ($1.547 USDC at $0.001/inference)
```
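The revenue figures are plain unit arithmetic: USDC uses 6 decimals, so $0.001 is 1,000 atomic units, and 1,547 paid inferences accumulate to 1,547,000 units. A quick check (variable names are illustrative):

```python
from decimal import Decimal

USDC_DECIMALS = 6

def atomic_to_usd(amount: int) -> Decimal:
    """Convert USDC atomic units (6 decimals) to a dollar amount."""
    return Decimal(amount) / Decimal(10**USDC_DECIMALS)

# 1547 paid inferences at 1000 atomic units ($0.001) each
total_units = 1547 * 1000
```

`atomic_to_usd(total_units)` gives the $1.547 figure shown in the cast output above.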
## Next steps
You have deployed an AI model NFT contract, minted a model with provenance, and set up per-inference monetization via x402. Here is where to go next:
- ZKML verification -- use zero-knowledge ML inference to prove your model produced a specific output without revealing the weights. Combine with the NFT for verifiable, private AI.
- Model marketplace -- list your model NFT on OpenSea or any ERC-721 marketplace. Buyers get the NFT (ownership) plus the IPFS CID (weights).
- Royalties on fine-tunes -- extend the contract with ERC-2981 royalties so the original model creator earns when derived models are sold.
- Agent-owned models -- combine with switchboard to let an AI agent own and monetize its own models autonomously.