The Milli Vanilli Effect and the Collapse of Belief: When AI Fakes Everything, What Do We Trust?
We’re not just craving real. We’re forgetting how to be real. And design got us here.
Mark Cuban recently tweeted:
“Within the next 3 years, there will be so much AI, in particular AI video, people won’t know if what they see or hear is real. Which will lead to an explosion of face-to-face engagement, events and jobs… Call it the Milli Vanilli effect.”
It’s a clever reference, but it points to something more profound and more dangerous. Cuban is right that as generative AI floods the digital world with synthetic voices, faces, and perfectly faked video, people will crave something real. Human interaction will command a premium. But here’s the problem: we may have already eroded the social infrastructure we need to rebuild it.
Years of remote-first culture, algorithmic curation, surveillance capitalism, and platform optimization have left many people socially malnourished. Convenience has replaced connection. Identity has become performance. Trust, once rooted in shared experience, is now a calculation.
So even if people want to return to what’s real, they may no longer know how. And even if they do, they may no longer believe that what they’re seeing is true.
This isn’t just a societal glitch. It’s a design failure. The architecture of our tech ecosystem was built to maximize engagement and efficiency—not integrity.
That’s why I argued in “Information Integrity by Design: The Missing Piece of Values-Aligned Tech” that trustworthiness isn’t a byproduct; it has to be a design principle. Right now, we’re asking people to navigate a reality where deepfakes, voice clones, and AI hallucinations are accelerating, while the systems delivering that content have no built-in mechanisms to verify, contextualize, or affirm truth. That’s like handing someone a steering wheel in a self-driving car with no brakes and hoping they figure it out.
We need a radical redesign of how we build and govern technology, centered on information integrity from the start. That means auditable AI, verified media, contextual cues, and clear attribution baked into our platforms. It also means human-centered norms, not just user-centered features.
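To make that less abstract, here is a minimal sketch of what “verified media” with “clear attribution” could look like under the hood, assuming a simple Ed25519 signing scheme: a creator signs a provenance manifest (a content hash plus attribution) with a private key, and anyone holding the matching public key can detect tampering with either the media or its attribution. The function names and manifest fields here are illustrative assumptions of mine, not any real platform’s API or the C2PA standard itself.

```python
# Toy provenance scheme: bind attribution to content by signing
# (content hash, creator) together, so neither can change undetected.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_manifest(media: bytes, creator: str, key: Ed25519PrivateKey):
    """Create and sign a provenance manifest for a piece of media."""
    manifest = {
        "sha256": hashlib.sha256(media).hexdigest(),
        "creator": creator,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return manifest, key.sign(payload)


def verify_manifest(media: bytes, manifest: dict, signature: bytes,
                    public_key: Ed25519PublicKey) -> bool:
    """Accept media only if both content and attribution check out."""
    if hashlib.sha256(media).hexdigest() != manifest["sha256"]:
        return False  # content was altered after signing
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False  # attribution was altered, or wrong signer


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    media = b"original video bytes"
    manifest, sig = sign_manifest(media, "newsroom@example.org", key)
    print(verify_manifest(media, manifest, sig, key.public_key()))  # True
    print(verify_manifest(b"deepfake bytes", manifest, sig, key.public_key()))  # False
```

Real-world efforts such as the C2PA content-provenance specification work along similar lines; the open design question is whether platforms surface those verification signals to users by default rather than burying them.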
Because trust doesn’t scale by accident. It scales by design.
The “Milli Vanilli effect” might push people back into the world, but it won’t fix the trust collapse on its own. The next frontier isn’t just building more human connection. It’s rebuilding the architecture that allows us to believe what’s real in the first place.
We still have time to get this right. But that window is shrinking. And if we don’t bake integrity into the very bones of our tech ecosystem, we’ll find ourselves in a world where we’re all lip-syncing and no one’s listening.
What are you doing to protect your sense of truth in an increasingly synthetic world? What design shifts do you think are essential to make “authentic” not just desirable, but possible?