Can Your Digital Twin Be Trusted?
Last year at TEDx, I posed a deceptively simple question: can your digital twin be trusted? It wasn’t just a provocation—it came from lived experience. As co-founder of Hour One, one of the first avatar startups, I helped build the mechanics to ethically compensate people for licensing their likeness. That experience made something clear: without structural trust, digital identity cannot scale.
From this, I developed the Virtual Human Economy (VHE)—my thesis for a future where individuals profit from their AI-enabled identities, just as platforms like Airbnb unlocked value from underused homes. But here, the asset is identity—replicated and augmented through AI. That might mean licensing your voice on ElevenLabs, or turning your expertise into an AI-powered consultancy, like with Wizly. These are early, tangible examples of the VHE in motion.
For that vision to hold, trust must be engineered—not assumed. My TEDx talk framed this through three essential principles: Control, Transparency, and Accountability. These are not just ethical ideals—they are the functional architecture of a viable AI-powered identity economy.
Last month’s Content Authenticity Summit, hosted by the Content Authenticity Initiative (CAI) in NYC, showed just how much progress we’ve made—and what’s still missing. Here’s where we stand.
I. Control: Proving Permission At Every Layer
Control means two things: consenting to create your digital twin, and authorizing how it's used. At Hour One, we had to secure both: permission to create the avatar and boundaries for its commercial deployment.
At CAI, I tested where we are now. A Leica camera embedded C2PA metadata—an open provenance standard that cryptographically records how content is created and edited—into my headshot at the moment of capture. That image was resized, re-signed, and uploaded to LinkedIn with the full capture trail intact. You can see when, where, and how it was taken. A self-authenticating image.

This matters because it allows anyone to verify how content was created and altered—building a digital chain of custody.
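To make the chain-of-custody idea concrete, here is a minimal Python sketch of how such a chain can be checked. It is deliberately not the C2PA format or SDK: real Content Credentials are embedded in the asset and signed with certificates, while the field names, the HMAC stand-in for a signature, and the example action labels below are simplifying assumptions. The mechanics are the point: each step signs a record of what happened plus a hash of the result, so any tampering or missing step becomes detectable.

```python
# Simplified illustration of a provenance "chain of custody" check.
# NOT the C2PA format or SDK; field names and action labels are assumptions.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a real signing certificate's private key


def sign(record: dict) -> str:
    """Sign a record; HMAC stands in for the certificate signatures real manifests use."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()


def make_step(action: str, asset_bytes: bytes, prev_hash: str | None) -> dict:
    """Record one step in the asset's history and sign it."""
    record = {
        "action": action,  # label loosely modeled on C2PA action names
        "asset_hash": hashlib.sha256(asset_bytes).hexdigest(),
        "prev_hash": prev_hash,  # links this step to the one before it
    }
    record["signature"] = sign(record)
    return record


def verify_chain(steps: list[dict], final_bytes: bytes) -> bool:
    """Every record must be unmodified, correctly linked, and end at the asset we hold."""
    prev = None
    for step in steps:
        unsigned = {k: v for k, v in step.items() if k != "signature"}
        if step["signature"] != sign(unsigned):
            return False  # the record itself was altered after signing
        if step["prev_hash"] != prev:
            return False  # steps were dropped, reordered, or spliced in
        prev = step["asset_hash"]
    return prev == hashlib.sha256(final_bytes).hexdigest()  # chain must end at this asset


# Capture on camera, then resize and re-sign before upload:
original = b"raw-photo-bytes"
resized = b"resized-photo-bytes"
chain = [make_step("c2pa.created", original, None)]
chain.append(make_step("c2pa.resized", resized, chain[0]["asset_hash"]))

print(verify_chain(chain, resized))         # True: the history is intact
print(verify_chain(chain, b"other-bytes"))  # False: the asset no longer matches its record
```

Real C2PA verification works against embedded manifests and certificate trust lists rather than a shared key, but the failure modes it guards against are the same ones sketched here.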
But can we prove it was me who consented? Not quite. We can reasonably infer it: I was present, and the image is real and unedited. But metadata alone can’t confirm identity or intent.
As Drummond Reed noted:
"We’re moving from a world of trust by default to trust by verification—but achieving true first-person verification is still a significant challenge."
His First Person Project is working on that exact gap. Because with agentic AI—autonomous systems that make decisions and take actions on our behalf—the cost of ambiguity compounds. To build trust, we need to prove not just content authenticity, but human authorship and consent.
II. Transparency: From Metadata to Movement
Transparency isn’t about checking a box—it’s about understanding how a digital artifact came to be.
The CAI Summit offered a standout demo: I posed for a photo captured on a Leica camera, its C2PA metadata embedded at capture and preserved end to end until it appeared on LinkedIn. For the first time, you could hover over the image and view a full forensic trail.
This is what transparency looks like in practice. It means being able to ask: Where did this come from? What happened to it? Who touched it?
Andy Parsons, who leads Adobe’s Content Authenticity Initiative, put it plainly:
"Provenance is foundational for digital authenticity. But comprehensive provenance requires broad participation across the entire ecosystem. That’s still a major hurdle."
In other words: the tech works, but the system doesn't. Unless major platforms, creators, and intermediaries all opt in, transparency remains partial.
III. Accountability: A Landmark Law, with Leaks
Accountability is where trust meets consequence. If something goes wrong—if identity is misused, content is faked, consent is violated—then what?
The Take It Down Act, signed into law in 2025, is a historic first step. It criminalizes non-consensual intimate deepfakes and requires platforms to remove flagged content within 48 hours. It exists because people refused to accept “there’s nothing to be done.”
But as Hany Farid, AI forensics expert, pointed out: “Forty-eight hours is an eternity on the internet. There are no safeguards against false complaints; and the Act doesn’t cover sites creating their own abusive content—a massive loophole.”
These are structural flaws. They don’t undercut the importance of the law, but they do limit its effectiveness.
From Infrastructure to Action: Trust as a Prerequisite
So—can your digital twin be trusted?
If trust means full authorship, consent, and accountability—we’re not there yet. But we are closer.
Since I stood on that TEDx stage, provenance infrastructure has matured. Explicit control systems are emerging. Laws have passed.
But with the rise of agentic AI—systems that act independently on our behalf—the stakes are getting higher. Trust isn’t just about verifying pixels. It’s about proving that a person actually authorized the actions taken in their name. When those actions occur far from their source, closing that gap becomes more urgent.
That’s why the work of Drummond Reed’s First Person Project matters. It’s why standards like C2PA need broader adoption. And it’s why verified trust isn’t just a technical fix—it’s the foundation for a thriving AI economy, for individuals to truly own and benefit from their digital identities, and for AI to succeed in its promise of creating entirely new markets.
Andy Parsons captured the new reality:
"We went from a world of trust but verify, to a world where you verify before you trust."
With verified trust, individuals can scale, collaborate, and profit—ethically and securely—from their most valuable asset: themselves.
If we want to build an AI economy that serves people, verified trust isn’t optional—it’s the prerequisite. I’d love to hear what trust means to you in this new era.
For those curious, here’s what full content lineage looks like when verified through the CAI tool.