Should governments ban AI-generated humans to stop the collapse of social trust?

In May 2025, millions of people saw Billie Eilish at the Met Gala.
Or so they thought.
Photos and videos of the pop star in a striking gown went viral. Some fans raved. Others mocked. The media picked it up, influencers weighed in, and the internet did what it does best: react.
But there was one problem—Billie Eilish wasn’t there.
“I wasn’t there. That’s AI. I had a show in Europe that night. Let me be,” she posted.
What looked real was completely fake. And it fooled everyone.
This isn’t just about celebrity gossip, either.
Earlier this year, U.S. President Donald Trump shared an image on Truth Social of Abrego Garcia, a man deported in error, with gang tattoos photoshopped onto his hand. The letters “MS13” had been digitally added to fuel a political narrative. It wasn’t true, but it was convincing enough to gain traction.
“We are reaching a point where it’s becoming impossible to tell the difference between a real photo or video and a fake just by looking at it,” said Yuval Noah Harari.
Deepfakes and the Death of Reality
The use of AI and machine learning to create fake human videos isn’t new. Researchers and fringe hobbyists have been generating deepfakes for years. But with tools like OpenAI’s Sora and Google’s Veo 3, the gap between synthetic and real humans is closing fast. What used to require technical skill and time now takes a few prompts, and the results are shockingly realistic.
A recent study from the University of Waterloo found that people could only distinguish real images from AI-generated ones 61% of the time, barely better than a coin flip. The findings raise serious concerns about how much we can trust what we see and highlight the growing need for tools to detect synthetic content.
Fake humans, fake events, fake identities, all generated by AI, indistinguishable from reality.
And if we can no longer trust what we see, what happens to the foundations of our society?
We’ve entered the age of synthetic reality, and it raises one urgent question:
Should AI-Generated Humans Be Banned Like Counterfeit Money to Avoid the Collapse of Social Trust?
Harari draws a sharp comparison:
“Governments had very strict laws against faking money… because they knew that if you allow the circulation of fake money, people will lose trust in money, and the financial system will collapse.”
Today, with tools available to anyone with a laptop, it’s possible to create fake humans that pass as real, from facial expressions to tone of voice.
- We’ve seen fake photos of immigrants push false narratives.
- AI-generated influencers are building followings under false pretenses.
- Deepfake impersonations are scamming businesses and families.
Harari’s warning cuts through:
“Social trust is the foundation of society. If people can’t trust that other people are who they say they are, everything from politics to business to daily life begins to unravel. It should be illegal to fake human beings. We need to preserve social trust as much as financial trust.”
Parallels Between Counterfeit Currency and AI-Generated Humans
| Counterfeit Money | AI-Generated Humans (Deepfakes) |
|---|---|
| Undermines economic trust | Undermines social and civic trust |
| Created to deceive | Often created to deceive |
| Banned outright | Largely unregulated or loosely governed |
| Requires authentication | Needs similar authentication or traceability |
Should Governments Step In and Ban AI-Generated Humans?
It’s no longer a theoretical conversation.
When synthetic humans can be used to deceive, manipulate, or impersonate at scale, we have a problem that laws need to address.
Harari puts it plainly:
“If you allow the circulation of fake people, people will lose trust in other people—and society will collapse.”
Just as governments banned counterfeit money to protect their economies, should the creation of fake humans be banned to protect social trust?
What counts as fake? Where do we draw the line? And how do we keep things from spiraling further?
Why a Complete Ban Might Be Too Extreme
Banning AI-generated humans outright could also put an entire startup segment at risk. Over the past year, dozens of new companies have emerged offering AI-generated spokespersons, UGC videos, customer service avatars, influencers, and video content creators. These startups are riding the wave of synthetic media innovation, building tools that blur the line between human and machine, often with real business traction.
If governments impose strict bans without nuance, many of these businesses could be forced to shut down or pivot entirely. Investors might pull back, founders could face regulatory uncertainty, and the broader innovation ecosystem might stall. While some of these tools are vulnerable to abuse, others are simply trying to reduce costs or democratize access to video production.
The challenge is drawing a line that protects society without strangling innovation.
Not every use of synthetic humans is harmful.
A blanket ban could:
- Block legitimate use cases in film, gaming, education, and accessibility.
- Push abuse further underground to anonymous or offshore models.
- Confuse the issue and possibly censor creativity or parody.
Even Harari, while calling for bans on deception, isn’t arguing against AI. He’s calling for limits on pretending to be human.
The point isn’t to outlaw all AI humans. It’s to stop those meant to mislead.
What Can We Do Instead?
If a full ban doesn’t make sense, we still need strong guardrails:
1. Make Disclosure Mandatory:
Any AI-generated human content should carry a clear label, both visible on-screen and embedded in the file’s metadata.
2. Hold Platforms Accountable:
Social networks and publishers must flag fake human content, the way they do with spam or scams.
3. Criminalize Deceptive Use:
Using AI to impersonate someone without consent—whether real or made up—should be treated like identity theft.
4. Bake in Digital Fingerprints:
AI tools that generate human-like content should embed invisible signatures to trace the origin.
5. No Disguises Allowed:
AI systems should never pretend to be a person in conversation, marketing, or media. Period.
None of these will stop every deepfake. But they push back against abuse and send a clear message: truth matters.
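The “digital fingerprints” guardrail above can be sketched in code. Here is a minimal Python example, assuming a hypothetical secret key held by the generator, that attaches an HMAC tag to generated media so its origin can later be verified. The key, function names, and workflow are illustrative, not any real tool’s API.

```python
import hmac
import hashlib

# Hypothetical signing key held by the AI generator (illustrative only).
GENERATOR_KEY = b"example-generator-signing-key"

def fingerprint(media_bytes: bytes) -> str:
    """Compute a provenance tag binding the content to its generator."""
    return hmac.new(GENERATOR_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify(media_bytes: bytes, tag: str) -> bool:
    """Check whether a tag matches the content, using a constant-time compare."""
    return hmac.compare_digest(fingerprint(media_bytes), tag)

video = b"...synthetic video bytes..."
tag = fingerprint(video)
print(verify(video, tag))          # True: content traces back to this generator
print(verify(video + b"x", tag))   # False: altered or unlabeled content
```

In practice, provenance standards such as C2PA embed signed manifests inside the file itself, and some generators watermark the pixels directly; this metadata-style tag is only a simplified stand-in for that idea.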
Why the Government Has to Act
This isn’t about overregulation. This is about drawing the line between truth and fiction.
The real threat isn’t AI. It’s AI pretending to be human.
Just like fake cash undermines economies, fake people chip away at trust in institutions, in relationships, in reality itself.
Here’s what governments can do right now:
- Make it illegal to impersonate a human—real or fictional—without disclosure.
- Require labels on AI-generated faces and voices.
- Fine or penalize platforms that let synthetic humans spread without oversight.
- Treat malicious deepfakes like digital forgery or harassment.
Final Thought
We’re now at a point when a celebrity has to deny being at an event she never attended… When a president shares a fake photo to make a political point… When we can’t tell who’s real and who’s not…
We’re no longer debating the future. We’re living in it.
If we don’t draw the line now—in law, in tech, in public expectations—truth itself could become optional.