NO FAKES Act Raises Alarm: Protecting Likeness Rights or Restricting Internet Freedoms?

A revised version of the NO FAKES Act, originally designed to curb unauthorized AI-generated deepfakes, has ignited serious concerns among digital rights advocates. What was once seen as a narrowly focused bill to protect individuals from deceptive AI impersonations has evolved into what critics describe as a sweeping legislative overreach that could jeopardize free expression and innovation online.
The legislation, formally titled the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, initially targeted the unauthorized use of a person's likeness via AI-generated media. However, its expanded scope has raised fears of unintended consequences, especially for small tech platforms, developers, and content creators.
From Targeted Protections to Broad Censorship?
Digital freedom organizations, including the Electronic Frontier Foundation (EFF), argue that the bill has drifted far from its original intent. Instead of offering specific protections against harmful AI deepfakes, the revised version proposes what the EFF describes as a “federalized image-licensing system,” which could impact a wide range of digital tools, platforms, and creative content.
A central point of contention is the bill’s requirement for internet platforms to proactively prevent the re-uploading of content flagged by takedown notices. This could lead to the widespread implementation of AI content filters—an approach that has shown mixed results in previous copyright enforcement systems such as YouTube’s Content ID.
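To make the concern concrete, here is a minimal sketch of how such a re-upload filter might work, assuming a hypothetical platform that compares a perceptual hash of each new upload against fingerprints of media named in takedown notices. The file names, threshold, and hashing scheme are illustrative placeholders, not a description of any real system such as Content ID:

```python
# Minimal sketch of a fingerprint-based re-upload filter (hypothetical).
# Assumes the platform stores perceptual hashes of media named in
# takedown notices and rejects uploads whose hashes are too similar.
from PIL import Image


def average_hash(path: str, size: int = 8) -> str:
    """Shrink an image to a small grayscale grid and record which pixels
    are brighter than the average; near-duplicates yield similar bits."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    return "".join("1" if p > avg else "0" for p in pixels)


def hamming(a: str, b: str) -> int:
    """Count the bits that differ between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))


# Fingerprints of previously flagged media (placeholder file name).
blocklist = {average_hash("flagged_replica.png")}


def is_blocked(upload_path: str, threshold: int = 5) -> bool:
    """Reject an upload whose fingerprint is close to any flagged one.
    The check operates on pixels alone, with no notion of context."""
    h = average_hash(upload_path)
    return any(hamming(h, flagged) <= threshold for flagged in blocklist)


print(is_blocked("new_upload.png"))  # placeholder upload
```

Because the matching happens purely at the pixel level, a parody, news report, or piece of commentary that reuses the same frames is indistinguishable from a verbatim re-upload, which is precisely the gap that has dogged earlier copyright filters.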
Innovation at Risk
Perhaps most troubling for the AI industry is how the bill could target the very tools used to develop generative technologies. If a platform or piece of software is deemed “primarily designed” for creating unauthorized replicas, or judged to have few other commercially significant uses, it could face legal consequences.
Startups experimenting with AI-generated art, voice, or imagery may find themselves vulnerable to legal threats before gaining traction. This kind of legal uncertainty could chill innovation and reinforce the dominance of larger tech companies with the legal and financial means to comply.
“This is like banning a word processor because someone might misuse it,” noted one digital rights analyst.
The Cost of Compliance
The bill would also require platforms, regardless of size, to implement content recognition filters capable of identifying and blocking potentially infringing media. While the legislation includes exceptions for satire, parody, and commentary, enforcing such nuanced distinctions algorithmically remains notoriously difficult.
Smaller platforms may find these compliance costs unaffordable, leading them to over-censor or exit the market entirely—effectively creating a digital environment that favors the largest players while reducing space for alternative or independent voices.
Anonymous Speech Under Threat
Another controversial aspect of the bill is its approach to anonymity. The revised NO FAKES Act would allow individuals to obtain subpoenas without judicial oversight to unmask users accused of creating unauthorized AI replicas. Critics warn this could be misused to silence whistleblowers or political critics by exposing their identities based on mere accusations.
Advocacy groups warn that such mechanisms could chill public discourse and suppress legitimate forms of criticism, especially if leveraged by powerful interests.
A Redundant Push?
The proposed law arrives shortly after the Take It Down Act, which already targets the non-consensual publication of intimate imagery, including AI-generated content. Rather than evaluating the impact of existing legislation, Congress appears intent on introducing further regulation—raising questions about whether a more measured, evidence-based approach might be warranted.
A Defining Moment for Internet Regulation
As the NO FAKES Act progresses through the legislative process, the stakes are high. At its core lies a critical debate: how can lawmakers protect individuals from AI-generated harm without imposing restrictions that undermine free expression, innovation, and digital competition?
For developers, creators, and users alike, the coming weeks may help determine whether the internet remains a space for open creativity—or becomes subject to preemptive censorship and legal uncertainty.