Technology

Women Sue AI Companies Over Non-Consensual Deepfake Images

Three Arizona women are suing AI companies they say used their social media photos to create explicit sexual images without consent. The case tests whether decades-old internet protections apply to AI systems.

By Martin Holloway · Published 7d ago · 5 min read · Based on 4 sources

Three women filed a lawsuit on January 31, 2026, in Arizona courts against AI companies they say used their social media photos to create and sell sexual images without their permission. The defendants named in the case include individuals Beau Schultz and Jackson Webb, and companies CreatorCore, AI ModelForge, FAL – Features & Labels, Inc., and Phyziro, LLC, according to court documents.

The three women, identified in the filing by the pseudonyms M.G., H.R., and H.B. to protect their privacy, are in their early twenties and post lifestyle content on Instagram. One is from Kansas City, KCTV5 reported. The lawsuit claims the defendants took their publicly shared photos and used AI to generate explicit sexual images without consent.

How the Operation Worked

The case focuses on AI ModelForge, described in the lawsuit as a platform that both generates sexual content and teaches users how to create it themselves. According to the complaint, the defendants ran several connected companies that each handled a different piece of the business: some created the images, others hosted them, and others provided the underlying AI tools.

The plaintiffs are asking the court to order the defendants to stop making these images or shut down their platforms. They are also seeking monetary damages from the AI companies, AZ Central reported.

The Law Catches Up — Slowly

Arizona updated its revenge porn law in 2025 to include AI-generated images, adding language about "realistic pictorial representation" to cover synthetic media. Arizona is not alone; several states have tried to close similar legal gaps as AI image tools became easier to access.

The lawsuit shows a gap between the letter of the law and how companies actually behave. Court documents describe a "take it down" request page on the defendants' websites, suggesting they acknowledged the new rules. Yet the lawsuit alleges they kept generating non-consensual images anyway. This pattern — creating a compliance checklist without actually changing how the business operates — is worth flagging as a broader challenge in AI regulation. Companies often implement just enough process to point to when regulators ask questions, while their core operations stay the same.

A Problem With Online Law

The case runs into a complication rooted in older internet law. When Congress passed the Communications Decency Act in 1996, Section 230 of that law protected websites from being sued for what their users posted. It was meant to let the early internet grow without legal gridlock.

Now, Arizona and other states have borrowed that same legal language when writing rules about AI-generated sexual images, carving out protections for "interactive computer services," the same term used in Section 230. But here is the problem: AI systems don't just host what users upload; they actively generate new content. It is not clear whether laws written for passive platforms apply to systems that create things. As the case proceeds, a judge may need to decide whether companies can claim the old internet protections when they are the ones manufacturing the harmful content.

How We Got Here

This is not the first time technology has outpaced law. In the late 1990s and early 2000s, the internet made it easy to share non-consensual intimate images, and the legal system took years to catch up. Platforms hid behind Section 230 while victims had little recourse. Today's AI systems change the problem's scale — a person or small group can now generate thousands of fake sexual images of someone without ever touching a camera. The underlying legal question is similar, but the speed and automation make it much more dangerous.

The lawsuit also gets at a tension that comes up whenever new technology becomes cheaper and easier to use. Tools that let more people create things also let more people do harm. The difference here is that previous cases mostly targeted companies that hosted bad content. This lawsuit goes after the companies that built the tools to create it in the first place.

Who Is Fighting Back

The plaintiffs' lawyers are Nick Brand of the Donlon Group and Cristina Perez Hesano, managing partner of Perez Law Group, according to the firm. Their strategy is notable: they named not just the consumer-facing website where the images were sold, but also the infrastructure companies that provided the AI models and computing power. This approach reflects a lesson from earlier cases, where shutting down one site just meant the operation moved somewhere else.

By targeting the entire supply chain, the lawyers are trying to cut off the whole business model rather than just one piece of it. The case also raises a novel legal question: can courts hold the makers of AI systems responsible for what those systems produce, or only the people who use them?

What This Means Going Forward

The outcome will likely shape how AI companies design their products and guard against their misuse. Right now, the industry has no single standard for blocking AI-generated sexual content. Some companies use basic image filters; others use more sophisticated detection systems. But no industry consensus has emerged.
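To make "basic image filter" concrete, here is a minimal, hypothetical sketch of a generation-time safety gate in Python. Nothing here describes the defendants' systems or any specific company's filter; the classifier stub, threshold, and function names are assumptions chosen purely for illustration.

```python
from dataclasses import dataclass

# Hypothetical sketch of a generation-time safety gate. A production system
# would replace score_image with a trained classifier and typically layer it
# with prompt filtering, opt-out lists, and human review.

@dataclass
class ModerationResult:
    allowed: bool   # whether the image may be released to the user
    score: float    # classifier's explicit-content probability
    reason: str     # short explanation for logging/appeals

def score_image(image_bytes: bytes) -> float:
    """Stub classifier: returns the probability an image is explicit.

    A real filter would run a model here; this stub exists only to keep
    the example self-contained and runnable.
    """
    return 0.0  # placeholder: treats every image as safe

def moderate(image_bytes: bytes, threshold: float = 0.5) -> ModerationResult:
    """Block a generated image when its explicit-content score crosses the threshold."""
    score = score_image(image_bytes)
    if score >= threshold:
        return ModerationResult(False, score, "explicit-content score above threshold")
    return ModerationResult(True, score, "passed filter")

if __name__ == "__main__":
    print(moderate(b"...image bytes..."))
```

Even this toy version shows why filters alone are a weak standard: everything turns on how good the classifier is and where the threshold sits, which is exactly where company practices diverge today.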

This case also sends a signal to the companies that provide the underlying AI infrastructure — not just the consumer apps, but the cloud services and model-hosting platforms. They may face liability too, which could change how they set rules for who can use their services and how.

The legal arguments in this case will also feed into the broader debate over AI regulation. Congress and federal agencies are working on comprehensive rules for AI, and non-consensual synthetic sexual images are one of the clearest examples of an AI-enabled harm that needs immediate attention.