Your Face Is Not Freeware: Denmark, Deepfakes, and the Copyright of Being Human
Synthetic identity is here. What will you do when your face goes viral?
Last month, Denmark made history, quietly but significantly.
It introduced legislation that lets people copyright their own face.
Not a performance. Not a selfie. Not a brand logo built on your cheekbones.
Your face. As a standalone asset.
It’s a direct response to the sharp rise in deepfakes: AI-generated content that can place your image or voice in a scene you never lived, saying things you never said. Denmark’s proposed legislation, currently under review, would reframe your likeness not just as a personal trait, but as intellectual property.
It’s an important move. But also a telling one. Because this is what it looks like when law scrambles to catch up with computation.
Deepfakes, Meet Copyright Law
The power of Denmark’s proposal lies in its simplicity. Unlike defamation or invasion of privacy, which require proof of harm, copyright law is strict liability, meaning unauthorized use equals infringement.
This reframes deepfake impersonation as a copyright violation, not merely a reputational issue. That gives individuals a firmer legal foundation, especially in the EU, where IP protections carry real weight.
It also underscores how differently systems treat your likeness. In the U.S., identity is increasingly handled like freeware: accessible by design, governed by default, and often monetized without your say.
Freeware, a term from computing, refers to software that’s free to use but often paid for in hidden ways—through ads, surveillance, or limited control. Increasingly, our digital selves are treated the same way.
“Treating your likeness as property gives you power. Treating it as freeware gives everyone else access.”
While Denmark treats your face as protected property, much of the tech industry treats it as open-source fodder for datasets, deepfake engines, and viral content. Your face becomes the raw material, not the protected asset.
While it’s an important step toward enshrining your right to own your likeness, even in contexts you never agreed to, this is still a reactive fix. It won’t stop deepfakes from being created. It simply gives you a legal hammer if you know your image has been stolen, if you can afford to act, and if you’re in a jurisdiction that honors the claim.
It’s a step. Not the solution.
And all the while, the incentives to generate deepfakes keep growing, from ad clicks to influence ops. Without stronger regulation, the internet will continue to reward synthetic deception.
[Curious what it really means to copyright your face? The Legal Deep Dive below breaks down what Denmark’s move covers, where it falls short, and what happens when bad actors get involved.]
One Problem, Many Legal Playbooks
Denmark’s proposal is unique in its intellectual property framing. By giving people the ability to copyright their face, it turns synthetic impersonation into a rights violation, not just a reputational wound. But it’s not the only country responding to deepfakes.
Australia, the UK, South Korea, India, Canada, China, and the EU are also advancing a mix of legal strategies. Some countries are introducing deepfake-specific legislation while others are adapting existing privacy and cybercrime laws. Most focus on consent, disclosure, or criminal misuse. Ownership isn’t the common lens, but the goal is similar: reduce harm, restore agency, and deter abuse.
These approaches vary by jurisdiction, but the trend is clear: synthetic identity abuse is no longer a fringe concern. It’s becoming a global policy priority.
Then there’s the United States.
The U.S. approach is uniquely fragmented but globally consequential. As home to many of the world’s most powerful AI platforms and media companies, its regulatory choices ripple outward. A deepfake created in Los Angeles can go viral in Lagos. An AI tool trained on U.S. users might shape elections in Jakarta. When American companies set the norms, the impact stretches far beyond national borders.
Which is why the absence of an overarching federal right to control your likeness is particularly notable. Instead, protections emerge from a patchwork of state laws and narrow federal efforts, each addressing only part of the problem.
Illinois requires informed consent for biometric data. California restricts commercial use of name, image, or likeness. New York bans pornographic deepfakes. Tennessee limits political ones near elections. The federal Take It Down Act mandates removal of non-consensual intimate AI content—but only within specific contexts. [Want specifics? Jump to the full U.S. legal breakdown in the Deeper Dive section below.]
These laws offer relief but not resilience. They weren’t built for the scale or speed of generative AI. Most Americans must still discover the abuse themselves, prove harm, and navigate a legal maze crafted for another era.
That complexity, and America’s centrality to global tech innovation, make its regulatory gaps particularly dangerous. Without robust U.S. action, platforms may follow the path of least resistance, exporting risk and importing outrage. Deepfakes generated in or distributed from the U.S. don’t stay within its borders. They affect elections abroad, fuel global scams, and shape digital norms worldwide.
The Private Sector’s “Become Your Clone” Workaround
In the absence of legal protection, companies like Metaphysic are offering a workaround—ownership without oversight.
Their model? Empower people to create and own their synthetic selves before someone else does.
Actors like Tom Hanks and Anne Hathaway are already banking high-fidelity AI versions of themselves. It’s not just for posterity; it’s legal strategy. Register your digital likeness now, so you can defend it later.
It’s part tactical, part existential. If your face is going to be digital currency, better to mint it yourself than wait to be cloned.
But synthetic self-ownership doesn’t scale. It solves for fame, not fairness. It favors celebrities and public figures. For most people, it’s expensive, time-consuming, and legally complex.
More importantly, ownership is not immunity.
Could Denmark’s Law Become a Global Safe Harbor?
If Denmark allows non-citizens to copyright their likeness, it could become a legal refuge for artists, journalists, and vulnerable groups. We’ve seen this dynamic before: GDPR reshaped privacy norms far beyond Europe.
While the EU AI Act tackles platform obligations and content labeling, Denmark’s proposal zooms in on individual rights—complementing the broader regulatory push with a personal legal tool.
Of course, international enforcement is messy. Suing a pseudonymous deepfake creator in a country that doesn’t honor Danish IP law is a legal dead end. But if Denmark’s move sets a new global standard, even partial compliance could shift how we define digital identity.
What You Can Do (Before the Law Catches Up)
⚠️ Caveat: The tools below are shared as illustrative starting points—not endorsements or guarantees. I’m not claiming to use or avoid them, just pointing you toward what exists. Practices change fast, which is why privacy policies, terms of service, and independent reporting from journalists and civil society groups remain your best bet for understanding how these tools operate. Stay sharp, read the fine print, and choose what aligns with your values and comfort level.
1. Spot Deepfakes and Misleading Content
Start here:
If something feels off, pause. Ask: Where did this come from? Who benefits from me believing it?
Before resharing anything divisive or suspicious, check a second or third source.
Go deeper:
Use detection tools like Reality Defender, Sensity AI, or Deepware Scanner.
Try browser extensions like Videntifier, or explore open-source detectors on GitHub.
Educators and parents: teach young people how to question and verify online content, especially videos. Remind them of the laws against sharing nonconsensual sexual images, especially images of minors (see the Take It Down Act).
2. Protect Your Likeness
Start here:
Set up Google Alerts for your name to track public mentions.
Think twice before sharing high-res face scans or clean audio clips of your voice.
Add a watermark or small visual signature to public-facing photos and videos using tools like Canva or Photopea.
Go deeper:
Embed metadata or use invisible watermarks with tools like Adobe Content Credentials (a minimal code sketch follows this list).
If you’re a creator, public figure, or activist, consider tokenizing your likeness or using identity platforms like BrightID or Proof of Humanity.
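If you’re comfortable with a little code, here’s a minimal sketch of two of the ideas above: stamping a visible watermark on an image and embedding ownership metadata in the file. It assumes Python with the Pillow imaging library and the ExifTool command-line tool installed; the filenames, watermark text, and tag choices are illustrative placeholders, not a vetted workflow.

```python
# Minimal sketch: visible watermark + ownership metadata.
# Assumes Pillow (pip install Pillow) and the ExifTool CLI are installed.
# Filenames and watermark text below are placeholders.
import subprocess
from PIL import Image, ImageDraw, ImageFont

def watermark(src="photo.jpg", dst="photo_marked.jpg", text="(c) 2025 Your Name"):
    img = Image.open(src).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.load_default()
    # Measure the text so it can be pinned to the bottom-right corner.
    left, top, right, bottom = draw.textbbox((0, 0), text, font=font)
    w, h = right - left, bottom - top
    draw.text((img.width - w - 12, img.height - h - 12), text,
              font=font, fill=(255, 255, 255, 140))  # semi-transparent white
    Image.alpha_composite(img, overlay).convert("RGB").save(dst)
    # Write ownership tags into the file's metadata with ExifTool.
    subprocess.run(["exiftool", "-overwrite_original",
                    f"-Artist={text}", f"-Copyright={text}", dst], check=True)

watermark()
```

Neither step stops a determined scraper, but both make provenance easier to assert later, which is the point of the habit.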
3. Prepare to Respond If You’re Targeted
Start here:
Talk to a trusted friend, colleague, or mentor about how you’d respond if your likeness were misused.
Write a simple draft response you could quickly update and share if needed.
Go deeper:
Build a personal response plan: Who would you call? What platforms would you notify? Who can help you amplify or verify your statement?
Use tools like SpyCloud to monitor for dark-web exposure of your data or identity.
4. Preserve Evidence When Something Feels Wrong
Start here:
Screenshot what you see, including the post, user, comments, and timestamp.
Copy and save the URL.
Go deeper:
Use ExifTool to preserve metadata from downloaded files.
Archive content using Wayback Machine or archive.today (a scripted version is sketched after this list).
If needed, report to the hosting platform and consult a digital rights organization like the Cyber Civil Rights Initiative or the Electronic Frontier Foundation.
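For those who want to script this, the sketch below automates two of the steps above: requesting a Wayback Machine snapshot of a URL and recording a timestamped hash of a downloaded file. It assumes only the Python standard library and the Wayback Machine’s public save endpoint; the URL and file path are placeholders, and the endpoint’s behavior can change.

```python
# Rough sketch of evidence preservation using only the standard library.
# The URL and file path are placeholders; the Wayback Machine "save"
# endpoint is public but rate-limited and may change without notice.
import hashlib
import urllib.request
from datetime import datetime, timezone

def archive_page(url):
    """Ask the Wayback Machine to snapshot a page; returns the snapshot URL."""
    req = urllib.request.Request("https://web.archive.org/save/" + url,
                                 headers={"User-Agent": "evidence-preservation-sketch"})
    with urllib.request.urlopen(req, timeout=120) as resp:
        return resp.url

def fingerprint_file(path):
    """Return a SHA-256 hash and a UTC timestamp for a saved file."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest, datetime.now(timezone.utc).isoformat()

print(archive_page("https://example.com/suspicious-post"))
print(fingerprint_file("downloaded_clip.mp4"))
```

A hash plus timestamp won’t prove who created a file, but it does let you show the file existed in exactly this form when you found it.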
5. Support Policy That Works
Start here:
Share explainers, petitions, or articles about deepfakes and AI misuse.
Talk to your local representatives about stronger identity protections.
Go deeper:
Support legislation like the DEEPFAKES Accountability Act, which would require disclosure and consent for synthetic media.
Follow updates to the EU AI Act, which entered into force in August 2024 and phases in transparency obligations, including machine-readable labeling of AI-generated content.
Get involved with or follow the C2PA (Coalition for Content Provenance and Authenticity), which is building technical standards for trustworthy media.
Bottom Line
Most of us aren’t trying to license our face to a movie studio. We just want to keep it off a scam ad. Out of a fake porn clip. Away from AI-generated lies that could fool our friends, our boss, or our kids.
In this new reality, identity isn’t something you just have. It’s something you defend. The question isn’t only who owns your face, but who gets to decide what it means to be you.
Identity is no longer static. It’s contested. And the next phase of the internet will be shaped by whoever answers that question.
What protections do you wish existed before your face becomes someone else’s content?
If you found this helpful, subscribe to Command Line with Camille for sharp, clear thinking on AI, digital identity, and trust in the age of synthetic everything.
If you’re building or governing in this space, CAS Strategies can help you stay ahead of the next synthetic threat.
Legal Deep Dive: What Denmark’s Face Copyright Law Actually Means
Denmark is proposing a first-of-its-kind law: giving individuals the right to copyright their face. That sounds simple, but it’s a profound legal shift with real implications for how we fight deepfakes and digital impersonation. Here’s what’s happening, what it could mean, and where the law still needs refinement.
Why Copyright? Why Now?
Copyright usually protects creative works—songs, books, photos. Denmark’s proposal flips that script. It treats your face as a “work” in its own right. If passed, this would allow individuals to register and enforce their likeness like an author protects their novel.
The key power of this approach lies in strict liability. Unlike defamation or privacy laws, copyright doesn’t require you to prove damage or malicious intent. Unauthorized use alone is enough to trigger a legal violation. That’s a big deal when dealing with fast-moving, hard-to-trace harms like deepfakes.
Civil, Not Criminal—and Why That Matters
Denmark’s proposal is structured as civil, not criminal. That means individuals, not the government, would bring claims. If someone misuses your copyrighted face, you can take them to court and potentially win damages. But law enforcement won’t step in to investigate or prosecute the offender.
This has upsides. Civil claims are often faster to file, and they give people more direct control. They also avoid criminalizing creators or satirists who make honest mistakes or legitimate commentary.
But this approach comes with a tradeoff—especially when dealing with malicious actors.
Cybercriminals, deepfake scammers, and foreign influence operations are unlikely to be deterred by civil liability. They’re hard to identify, rarely within Danish jurisdiction, and may not fear a private lawsuit.
State-sponsored actors using deepfakes for political manipulation or psychological operations are almost certainly immune from civil consequences unless Denmark (or the EU) escalates with diplomatic or trade pressure.
In short, civil law works best against actors who are reachable, reputationally sensitive, or subject to platform enforcement. It struggles against bad-faith players who operate outside the law entirely.
That’s why some experts argue that Denmark, and others following suit, should eventually consider adding criminal penalties for the most egregious synthetic impersonation—especially when it involves minors, intimate content, or election interference.
Where Things Stand
As of July 2025, Denmark has introduced the likeness copyright proposal in Parliament, but it is still under review. The bill has gained traction in public debate, especially amid rising concerns about AI misuse and the EU’s push for stronger identity protections under the AI Act.
Key questions remain:
How will likenesses be registered?
What qualifies as infringement?
Will Denmark allow non-citizens to file claims?
How will takedowns and damages be handled?
The proposal is bold, but the implementation details will decide whether it’s a breakthrough or a symbolic gesture.
Challenges Ahead
Even if the law passes, several real-world challenges remain:
Jurisdiction Is Limited
Denmark can’t compel a deepfake creator in Russia or a troll farm in another region to show up in court. Enforcement will depend on international cooperation, platform pressure, and whether other countries adopt similar laws.

Proof Can Get Messy
If a deepfake blends multiple faces or creates a close approximation without direct copying, it may be hard to prove infringement without clear standards.

Platforms May Resist
Social media and video-hosting sites may hesitate to comply without clear takedown obligations or legal liability. Voluntary compliance is uneven.

False Claims Could Undermine Credibility
If anyone can register a face and claim infringement, platforms and courts could be flooded with bad-faith complaints or overreach that chills free expression.

The Law Reacts, It Doesn’t Prevent
Like most IP law, this framework only kicks in after harm occurs. Without detection tools, watermarking, and proactive monitoring, deepfakes will still spread before victims can act.
Recommendations for Denmark and Everyone Watching
Pair civil remedies with clear platform obligations so takedowns don’t rely entirely on lawsuits.
Create a fast-track review process for sensitive cases like child exploitation, political misinformation, or explicit content.
Enable international filing so individuals in countries with weak protections can use Danish law as a shield.
Consider criminal penalties for repeat offenders or malicious actors, especially when synthetic content is used to intimidate, manipulate, or mislead the public.
The Takeaway
Denmark is trying something both symbolic and strategic. It’s claiming that your face, your likeness, your digital identity should belong to you.
It’s not a perfect fix. It won’t stop every deepfake or catch every bad actor. But it rebalances the legal playing field, gives individuals a clearer right to assert, and may help shape how the rest of the world thinks about ownership and identity.
And for a problem this global, we need every legal tool we can get.
Deeper Dive: A Closer Look at U.S. Deepfake and Identity Laws
U.S. State Laws:
Illinois Biometric Information Privacy Act (“BIPA”)
Requires informed consent before collecting or storing fingerprints, facial scans, or voiceprints. Grants a private right of action with statutory damages.

California (Civil Code § 3344)
Protects the commercial use of name, image, voice, or likeness. Recent amendments extend post-mortem rights to AI replicas in audiovisual works. Employers also face limits on biometric data sharing.

New York (Civil Rights Law §§ 50–51 + 2020/2025 Deepfake Amendments)
Bars nonconsensual commercial use of image or voice. Explicitly outlaws pornographic deepfakes and requires contract disclosures for synthetic replicas.

Tennessee (AI Deepfake Election Ban)
Prohibits AI-generated political deepfakes within 30 days of an election, but only if they mislead voters about a candidate’s actions.
Federal Measures
Take It Down Act:
First federal law against non-consensual intimate imagery, including AI-generated deepfakes. Requires platforms to remove flagged content within 48 hours, with FTC enforcement.
FTC Act – Section 5:
Grants the FTC authority to act against deceptive or unfair AI practices. Being applied to synthetic content used in scams or misleading ads.

Computer Fraud and Abuse Act (CFAA):
Originally a hacking statute, now being tested in identity-related AI abuse cases.

18 U.S. Code § 1028 – Identity Theft and Fraud:
Prohibits fraudulent impersonation. Could apply when deepfakes are used to deceive or harm reputations.

Children’s Online Privacy Protection Act (COPPA):
Protects children’s personal information, including their likenesses, under age 13. May apply to AI-generated child avatars or voice simulations.