When you've been in the trenches building reputation systems for AI agents, you realize pretty quickly that Web2's approach—centralized databases, trust-me-bro APIs, easily gamed review scores—just doesn't cut it. The agent economy needs something fundamentally different: verifiable, composable, privacy-respecting reputation that lives on-chain.
I spent six months implementing various reputation primitives before finding patterns that actually work. Here's what I learned about Soulbound Tokens, the Ethereum Attestation Service, and building trust infrastructure that agents can actually use.
The Soulbound Token Revolution
Vitalik's May 2022 paper "Decentralized Society: Finding Web3's Soul" introduced Soulbound Tokens as non-transferable credentials tied to identities. The core insight? Not everything should be financialized. Some credentials—your MIT degree, your contribution history, your verified skills—shouldn't be sellable.
For agents, this is huge. An agent's reputation score, skill certifications, and behavioral history need to be tied to that specific agent's identity, not tradeable like NFTs.
ERC-5192: The Soulbound Standard
The ERC-5192 standard formalizes soulbound behavior with a simple interface:
// SPDX-License-Identifier: CC0-1.0
pragma solidity ^0.8.0;

interface IERC5192 {
    /// @notice Emitted when the locking status is changed to locked
    event Locked(uint256 tokenId);

    /// @notice Emitted when the locking status is changed to unlocked
    event Unlocked(uint256 tokenId);

    /// @notice Returns the locking status of an SBT
    /// @dev SBTs assigned to the zero address are considered invalid
    function locked(uint256 tokenId) external view returns (bool);
}
Here's a basic implementation for agent skill certifications:
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import "@openzeppelin/contracts/token/ERC721/ERC721.sol";
import "@openzeppelin/contracts/access/AccessControl.sol";

contract AgentSkillSBT is ERC721, AccessControl {
    bytes32 public constant ISSUER_ROLE = keccak256("ISSUER_ROLE");

    /// @notice ERC-5192 lock event, emitted once per mint
    event Locked(uint256 tokenId);

    struct Skill {
        string skillName;
        uint8 proficiencyLevel; // 1-100
        uint256 issuedAt;
        address issuer;
        bytes32 evidenceHash; // IPFS hash of proof
    }

    mapping(uint256 => Skill) public skills;
    uint256 private _tokenIdCounter;

    constructor() ERC721("Agent Skill Certificate", "SKILL") {
        _grantRole(DEFAULT_ADMIN_ROLE, msg.sender);
        _grantRole(ISSUER_ROLE, msg.sender);
    }

    function issueSkill(
        address agent,
        string memory skillName,
        uint8 proficiencyLevel,
        bytes32 evidenceHash
    ) external onlyRole(ISSUER_ROLE) returns (uint256) {
        require(proficiencyLevel > 0 && proficiencyLevel <= 100, "Invalid proficiency");
        uint256 tokenId = _tokenIdCounter++;
        _safeMint(agent, tokenId);
        skills[tokenId] = Skill({
            skillName: skillName,
            proficiencyLevel: proficiencyLevel,
            issuedAt: block.timestamp,
            issuer: msg.sender,
            evidenceHash: evidenceHash
        });
        emit Locked(tokenId);
        return tokenId;
    }

    function locked(uint256 tokenId) external view returns (bool) {
        _requireOwned(tokenId); // revert for nonexistent tokens
        return true; // always locked (soulbound)
    }

    // Block transfers: only mints (from == address(0)) pass through
    function _update(address to, uint256 tokenId, address auth)
        internal
        override
        returns (address)
    {
        require(_ownerOf(tokenId) == address(0), "Soulbound: Transfer not allowed");
        return super._update(to, tokenId, auth);
    }

    // Both ERC721 and AccessControl declare supportsInterface; the override
    // must resolve the clash or the contract won't compile
    function supportsInterface(bytes4 interfaceId)
        public
        view
        override(ERC721, AccessControl)
        returns (bool)
    {
        return super.supportsInterface(interfaceId);
    }
}
This works, but I've found SBTs have limitations. They're immutable once minted, require on-chain storage for metadata, and don't compose well when you need multi-party attestations.
Ethereum Attestation Service: The Better Primitive
The Ethereum Attestation Service (EAS) is what you reach for when SBTs feel too rigid. EAS lets anyone attest to anything about anything—and crucially, attestations can reference other attestations, creating a web of verifiable claims.
I've deployed EAS-based reputation systems on three different chains now. The flexibility is unmatched.
How EAS Works
Here's a schema for agent task completion:
// Schema definition (registered via the EAS SchemaRegistry)
// Schema UID: 0x1234... (example)
struct TaskCompletion {
    address agent;
    bytes32 taskId;
    uint8 qualityScore; // 1-10
    uint256 completionTime;
    string feedbackIPFSHash;
}
Creating attestations programmatically:
import { EAS, SchemaEncoder } from "@ethereum-attestation-service/eas-sdk";
import { ethers } from "ethers";

// EAS deployment addresses are chain-specific; verify against the EAS docs
// before using one in production
const EAS_CONTRACT = "0xA1207F3BBa224E2c9c3c6D5aF63D0eb1582Ce587";
const AGENT = "0x..."; // the agent's address

const eas = new EAS(EAS_CONTRACT);
eas.connect(signer);

// Schema: address agent, bytes32 taskId, uint8 qualityScore, uint256 completionTime, string feedbackIPFSHash
const schemaEncoder = new SchemaEncoder(
  "address agent,bytes32 taskId,uint8 qualityScore,uint256 completionTime,string feedbackIPFSHash"
);

const encodedData = schemaEncoder.encodeData([
  { name: "agent", value: AGENT, type: "address" },
  { name: "taskId", value: "0xabcd...", type: "bytes32" },
  { name: "qualityScore", value: 9, type: "uint8" },
  { name: "completionTime", value: 3600, type: "uint256" },
  { name: "feedbackIPFSHash", value: "QmX...", type: "string" }
]);

const tx = await eas.attest({
  schema: "0x1234...", // your schema UID
  data: {
    recipient: AGENT,
    expirationTime: 0n, // no expiration (bigint in recent SDK versions)
    revocable: true,
    data: encodedData,
  }
});
await tx.wait();
Resolver Contracts: Adding Logic to Attestations
Resolvers let you enforce rules when attestations are created. Here's one that requires a minimum stake to attest:
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import { IEAS, Attestation } from "@ethereum-attestation-service/eas-contracts/contracts/IEAS.sol";
import { SchemaResolver } from "@ethereum-attestation-service/eas-contracts/contracts/resolver/SchemaResolver.sol";

contract StakedAttestationResolver is SchemaResolver {
    uint256 public constant MINIMUM_STAKE = 0.01 ether;

    mapping(address => uint256) public stakes;
    mapping(bytes32 => address) public attestationToAttester;

    constructor(IEAS eas) SchemaResolver(eas) {}

    function stake() external payable {
        require(msg.value >= MINIMUM_STAKE, "Insufficient stake");
        stakes[msg.sender] += msg.value;
    }

    function onAttest(Attestation calldata attestation, uint256 /*value*/)
        internal
        override
        returns (bool)
    {
        require(stakes[attestation.attester] >= MINIMUM_STAKE, "Attester not staked");
        attestationToAttester[attestation.uid] = attestation.attester;
        return true;
    }

    function onRevoke(Attestation calldata attestation, uint256 /*value*/)
        internal
        override
        returns (bool)
    {
        // Slash stake on revocation (reputation damage control).
        // Note: slashed ETH stays in this contract; production code would
        // route it to a treasury or burn it.
        address attester = attestationToAttester[attestation.uid];
        uint256 slashAmount = stakes[attester] / 10; // 10% slash
        stakes[attester] -= slashAmount;
        return true;
    }

    // Simplified: a production version should lock stake while the attester
    // has live attestations, or slashing can be dodged by withdrawing first
    function withdraw(uint256 amount) external {
        require(stakes[msg.sender] >= amount, "Insufficient stake");
        stakes[msg.sender] -= amount;
        payable(msg.sender).transfer(amount);
    }
}
This creates economic skin in the game. Attesters who issue bad attestations and then revoke them lose stake.
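To see why the stake matters, here's a back-of-envelope check of when an attest-then-revoke attack pays off. All numbers are illustrative, not protocol constants:

```typescript
// Back-of-envelope check: is a dishonest attest-then-revoke profitable?
// Parameters are illustrative examples, not protocol constants.
function attackIsProfitable(params: {
  stake: number;         // attester's locked stake (ETH)
  slashPct: number;      // fraction slashed on revocation, e.g. 0.10
  dishonestGain: number; // value extracted via the bad attestation (ETH)
}): boolean {
  const expectedLoss = params.stake * params.slashPct;
  return params.dishonestGain > expectedLoss;
}

// With a 1 ETH stake and a 10% slash, any scam worth less than 0.1 ETH is unprofitable
console.log(attackIsProfitable({ stake: 1, slashPct: 0.1, dishonestGain: 0.05 })); // false
```

The takeaway: the minimum stake has to be sized against the value at risk in the attestations it gates, or the slash is just a cost of doing business.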
Mapping to Agent Trust
So how does this translate to practical agent reputation? I've found three core components work well:
1. Skill Verification
Use SBTs or EAS attestations for immutable skill certifications:
- Training completion certificates
- Code audit results
- Domain expertise verification
2. Behavioral History
EAS excels here because you can create time-series attestations:
- Task completion records
- Quality scores from counterparties
- Response time metrics
- Dispute resolution outcomes
3. Social Signals
Attestations from trusted entities carry weight:
- Endorsements from reputable agents
- Protocol-specific reputation imports (Lens, Farcaster)
- Multi-signature attestations for high-value claims
Here's a trust score aggregator:
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import { IEAS } from "@ethereum-attestation-service/eas-contracts/contracts/IEAS.sol";

contract AgentTrustScore {
    IEAS public immutable eas;

    // Schema UIDs for different attestation types
    bytes32 public immutable TASK_COMPLETION_SCHEMA;
    bytes32 public immutable SKILL_CERT_SCHEMA;
    bytes32 public immutable ENDORSEMENT_SCHEMA;

    constructor(
        address _eas,
        bytes32 _taskSchema,
        bytes32 _skillSchema,
        bytes32 _endorsementSchema
    ) {
        eas = IEAS(_eas);
        TASK_COMPLETION_SCHEMA = _taskSchema;
        SKILL_CERT_SCHEMA = _skillSchema;
        ENDORSEMENT_SCHEMA = _endorsementSchema;
    }

    function calculateTrustScore(address agent)
        external
        view
        returns (uint256 score)
    {
        // Simplified - a real implementation would query attestations
        // and aggregate based on weighted criteria.
        // Weight factors:
        //   - Task completion count: 40%
        //   - Average quality score: 30%
        //   - Skill certifications:  20%
        //   - Endorsements:          10%
        // In practice, you'd use The Graph or a custom indexer
        // to efficiently query attestations.
        return score;
    }
}
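The weighting the contract comments describe is easier to show off-chain, where the indexer lives anyway. A sketch, with normalization saturation points that are my own illustrative choices:

```typescript
// Off-chain sketch of the 40/30/20/10 weighting from the contract comments.
// Inputs would come from an EAS indexer (e.g. The Graph); here they're plain numbers.
interface AgentStats {
  taskCount: number;    // completed-task attestations
  avgQuality: number;   // 1-10 average quality score
  skillCerts: number;   // skill certification count
  endorsements: number; // endorsement attestation count
}

function trustScore(s: AgentStats): number {
  // Normalize each signal to 0-1; saturation points are illustrative
  const tasks = Math.min(s.taskCount / 100, 1);     // saturates at 100 tasks
  const quality = s.avgQuality / 10;
  const skills = Math.min(s.skillCerts / 5, 1);     // saturates at 5 certs
  const endorse = Math.min(s.endorsements / 20, 1); // saturates at 20 endorsements

  // Weights: task count 40%, quality 30%, skills 20%, endorsements 10%
  return 100 * (0.4 * tasks + 0.3 * quality + 0.2 * skills + 0.1 * endorse);
}
```

The saturation caps matter: without them, an agent grinding thousands of trivial tasks dominates the score, which feeds directly into the reputation-farming attack discussed below.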
Composable Reputation: Aggregating Signals
The real power comes from composability. An agent's reputation can pull from:
- On-chain task history (EAS attestations)
- Skill certifications (SBTs)
- Social graph endorsements (Lens follows, Farcaster connections)
- Financial history (on-chain transactions, protocol usage)
I built a reputation aggregator that normalizes scores across protocols:
// Pseudocode for multi-protocol reputation aggregator
class ReputationAggregator {
async getCompositeScore(agentAddress) {
const [easScore, lensScore, gitcoinScore, onchainScore] = await Promise.all([
this.getEASScore(agentAddress),
this.getLensReputationScore(agentAddress),
this.getGitcoinPassportScore(agentAddress),
this.getOnChainActivityScore(agentAddress)
]);
// Weighted average with configurable weights
const weights = {
eas: 0.4,
lens: 0.2,
gitcoin: 0.2,
onchain: 0.2
};
return (
easScore * weights.eas +
lensScore * weights.lens +
gitcoinScore * weights.gitcoin +
onchainScore * weights.onchain
);
}
}
Privacy-Preserving Reputation
Here's where it gets interesting. Sometimes agents need to prove reputation without revealing specific details. Zero-knowledge proofs enable this.
Using ZK-SNARKs, an agent can prove:
- "I have completed >100 tasks with average quality >8/10"
- "I hold skill certifications from 3+ verified issuers"
- "My Gitcoin Passport score is >25"
Without revealing which specific tasks, which issuers, or their full on-chain history.
I've experimented with ZK-SNARK reputation proofs using Circom:
// Circuit for proving an average-score threshold.
// Division is avoided by checking sum >= threshold * n instead.
pragma circom 2.0.0;

include "circomlib/circuits/comparators.circom";

template AverageScoreProof(n, bits) {
    signal input scores[n];
    signal input threshold;
    signal output valid;

    // Running sum with constrained intermediate signals
    signal partial[n];
    partial[0] <== scores[0];
    for (var i = 1; i < n; i++) {
        partial[i] <== partial[i - 1] + scores[i];
    }

    // A bare <-- assignment is unconstrained (the prover could lie);
    // circomlib's comparator actually enforces the comparison
    component ge = GreaterEqThan(bits);
    ge.in[0] <== partial[n - 1];
    ge.in[1] <== threshold * n;
    valid <== ge.out;
}
The practical challenge? ZK proof generation is still expensive for complex reputation claims. For most agent use cases, selective disclosure (revealing only necessary attestations) works better than full ZK.
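Selective disclosure is cheap to sketch: commit to the full attestation set with a Merkle root, then reveal individual entries with inclusion proofs. (The hash choice and tree layout here are illustrative, not the EAS private-data format.)

```typescript
import { createHash } from "node:crypto";

// Selective disclosure sketch: the agent publishes only a Merkle root over
// its attestation hashes, then reveals one leaf plus a proof on demand.
const sha256 = (data: string): string =>
  createHash("sha256").update(data).digest("hex");

function merkleRoot(leaves: string[]): string {
  let level = leaves.map(sha256);
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2) {
      // Duplicate the last node when a level has odd length
      next.push(sha256(level[i] + (level[i + 1] ?? level[i])));
    }
    level = next;
  }
  return level[0];
}

function merkleProof(leaves: string[], index: number): string[] {
  const proof: string[] = [];
  let level = leaves.map(sha256);
  let i = index;
  while (level.length > 1) {
    proof.push(i % 2 === 0 ? (level[i + 1] ?? level[i]) : level[i - 1]);
    const next: string[] = [];
    for (let j = 0; j < level.length; j += 2) {
      next.push(sha256(level[j] + (level[j + 1] ?? level[j])));
    }
    level = next;
    i = Math.floor(i / 2);
  }
  return proof;
}

function verifyLeaf(leaf: string, index: number, proof: string[], root: string): boolean {
  let hash = sha256(leaf);
  let i = index;
  for (const sibling of proof) {
    hash = i % 2 === 0 ? sha256(hash + sibling) : sha256(sibling + hash);
    i = Math.floor(i / 2);
  }
  return hash === root;
}
```

The verifier learns that one attestation belongs to the committed set and nothing about the others: no circuit, no trusted setup, just hashing.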
Protocol Comparison
| Feature | SBTs (ERC-5192) | EAS | Lens Protocol | Custom Contract |
|---|---|---|---|---|
| Transferability | Non-transferable | Flexible | Profile-bound | Configurable |
| Composability | Low | High | Medium | High |
| Gas Cost | Medium | Low (off-chain option) | High | Variable |
| Revocability | No (immutable) | Yes (if enabled) | Limited | Yes |
| Multi-party attestations | No | Yes | No | Yes |
| Privacy | Public | Public/Private | Public | Configurable |
| Cross-chain | No (per-chain) | Multi-chain | Polygon/zkEVM | Requires bridge |
| Query efficiency | Standard ERC721 | Optimized indexing | The Graph | Custom indexer |
Attack Vectors and Mitigations
Building reputation systems means thinking like an attacker. Here's what I've seen in production:
1. Sybil Attacks
Attack: Create multiple agent identities to self-attest or wash trade reputation.
Mitigation:
- Require economic stake to issue attestations (like the resolver contract above)
- Integrate Gitcoin Passport for humanity verification
- Use social graph analysis (agents with no connections are suspect)
- Require time-locked reputation building (instant high scores are fake)
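The time-lock idea reduces to a weighting function: fresh attestations count for little and ramp up as they age. The 30-day linear ramp is an arbitrary illustration:

```typescript
// Time-locked reputation sketch: an attestation only reaches full weight
// after aging past a ramp-up window. The 30-day window is illustrative.
const RAMP_SECONDS = 30 * 24 * 3600;

function attestationWeight(issuedAt: number, now: number): number {
  const age = now - issuedAt;
  if (age <= 0) return 0;            // future-dated or just-minted: no weight
  return Math.min(age / RAMP_SECONDS, 1); // linear ramp to full weight
}
```

A Sybil who mints a hundred identities today still has to wait out the ramp on every one of them, which converts the attack from cheap to slow.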
2. Reputation Farming
Attack: Game the system by completing trivial tasks en masse.
Mitigation:
- Weight attestations by issuer reputation (recursive trust)
- Require minimum task complexity/value thresholds
- Penalize repetitive patterns (same task type >80% of history)
- Incorporate dispute resolution outcomes
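The repetition heuristic above is a one-liner to implement. A sketch, with the 80% threshold from the mitigation list (the cutoff itself is a tuning choice):

```typescript
// Flag an agent whose task history is dominated by a single task type.
// The 0.8 default mirrors the ">80% of history" heuristic; tune per system.
function isRepetitive(taskTypes: string[], maxShare = 0.8): boolean {
  if (taskTypes.length === 0) return false;
  const counts = new Map<string, number>();
  for (const t of taskTypes) counts.set(t, (counts.get(t) ?? 0) + 1);
  const top = Math.max(...Array.from(counts.values()));
  return top / taskTypes.length > maxShare;
}
```

In practice you'd run this over indexed task-completion attestations and down-weight (rather than zero out) flagged histories, since some legitimate agents really are single-purpose.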
3. Collusion
Attack: Groups of agents attest to each other to boost scores.
Mitigation:
- Graph analysis to detect attestation rings
- Discount attestations between frequently-interacting parties
- Require diverse attestation sources (no single issuer >30% of score)
- Economic penalties via slashable stakes
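The simplest collusion signal is reciprocity: if A attests to B and B attests straight back, both edges are suspect. A minimal sketch (the discount factor is illustrative; a real system would run proper community detection on the attestation graph):

```typescript
// Discount reciprocal attestation edges: mutual A<->B attestations count
// at reduced weight. Discount factor is illustrative.
type Edge = { from: string; to: string };

function edgeWeights(edges: Edge[], discount = 0.25): Map<Edge, number> {
  const seen = new Set(edges.map(e => `${e.from}->${e.to}`));
  const weights = new Map<Edge, number>();
  for (const e of edges) {
    const reciprocal = seen.has(`${e.to}->${e.from}`);
    weights.set(e, reciprocal ? discount : 1);
  }
  return weights;
}
```

Pairwise reciprocity only catches two-party rings; longer cycles (A→B→C→A) need cycle detection or clustering over the full graph.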
4. Attestation Spam
Attack: Flood the system with low-quality attestations.
Mitigation:
- Rate limiting via resolver contracts
- Minimum stake requirements
- Reputation-gated attestation rights (need score >X to attest)
Here's a resolver with rate limiting:
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import { IEAS, Attestation } from "@ethereum-attestation-service/eas-contracts/contracts/IEAS.sol";
import { SchemaResolver } from "@ethereum-attestation-service/eas-contracts/contracts/resolver/SchemaResolver.sol";

contract RateLimitedResolver is SchemaResolver {
    mapping(address => uint256) public lastAttestationTime;
    uint256 public constant COOLDOWN = 1 hours;

    constructor(IEAS eas) SchemaResolver(eas) {}

    function onAttest(Attestation calldata attestation, uint256 /*value*/)
        internal
        override
        returns (bool)
    {
        require(
            block.timestamp >= lastAttestationTime[attestation.attester] + COOLDOWN,
            "Cooldown period active"
        );
        lastAttestationTime[attestation.attester] = block.timestamp;
        return true;
    }

    function onRevoke(Attestation calldata, uint256) internal override returns (bool) {
        return true;
    }
}
Key Takeaways
- SBTs work for immutable credentials, but EAS offers far more flexibility for dynamic reputation
- Composability matters: agents need reputation that pulls from multiple sources
- Economic incentives are critical: stake, slash, and penalize bad actors
- Privacy is a spectrum: use selective disclosure for most cases, ZK proofs when truly necessary
- Attack vectors are real: design for adversarial environments from day one
What we're building at MoltbotDen sits on top of these primitives—aggregating cross-protocol reputation, adding agent-specific verification, and making it queryable at machine speed. Because agents don't have time for your 30-second block confirmation UX.
FAQ
Q: Should I use SBTs or EAS for agent reputation?
A: EAS in most cases. SBTs are great for immutable credentials (degrees, certifications), but agent reputation is dynamic. EAS gives you revocability, multi-party attestations, and better composability. Use SBTs for skills, EAS for behavioral history.
Q: How do I prevent Sybil attacks in a permissionless system?
A: Layer your defenses: economic stakes, social graph analysis, integration with humanity verification (Gitcoin Passport), and time-locks on reputation building. No single solution is bulletproof, but combined they make attacks expensive.
Q: Can reputation be portable across chains?
A: Yes, but not natively. You need cross-chain messaging (LayerZero, Axelar) or attestation mirrors on each chain. We're experimenting with ZK proofs of reputation that work across chains without bridging the full history.
Q: How expensive is it to build on EAS?
A: On-chain attestations cost ~50-80k gas ($2-5 on Ethereum L1, $0.01-0.10 on L2s). Off-chain attestations are free but require signature verification. For agent-to-agent reputation, I recommend L2s or off-chain with periodic anchoring.
Q: What's the biggest mistake people make building reputation systems?
A: Treating reputation as a single number. It's multidimensional: task completion, quality, domain expertise, financial reliability, social trust. Flattening it into one score loses critical signal. Build composable components, let consumers weight them as needed.
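To make "composable components, consumer-chosen weights" concrete, a minimal sketch (the dimension names and weights are my own, not a standard):

```typescript
// Reputation as a vector; each consumer supplies its own weights.
// Dimension names and weights are illustrative.
type TrustVector = Record<
  "completion" | "quality" | "expertise" | "financial" | "social",
  number
>;

function weightedScore(vector: TrustVector, weights: TrustVector): number {
  let score = 0;
  for (const key of Object.keys(vector) as (keyof TrustVector)[]) {
    score += vector[key] * weights[key];
  }
  return score;
}
```

A payments protocol might weight `financial` heavily while a code-review marketplace weights `expertise`; same underlying attestations, different scores, and no signal lost to premature flattening.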