Arizona Women Sue AI Companies Over Non-Consensual Deepfake Content
Three women filed a lawsuit in Arizona against AI companies for creating non-consensual sexual imagery using their social media photos, testing new state laws and challenging AI platform liability protections.

Three women filed a lawsuit on January 31, 2026, in Arizona courts against AI companies they allege used their social media photos to create and monetize non-consensual sexual imagery. The case names defendants Beau Schultz and Jackson Webb, along with entities CreatorCore, AI ModelForge, FAL – Features & Labels, Inc., and Phyziro, LLC, according to court documents.
The plaintiffs, identified under pseudonyms M.G., H.R., and H.B. for privacy protection, are women in their early twenties from Arizona and California who regularly post lifestyle content on Instagram. One plaintiff is a Kansas City native, KCTV5 reported. The lawsuit alleges the defendants harvested their publicly posted photos to generate explicit synthetic media without consent.
The Technical Infrastructure
The suit centers on AI ModelForge, which the plaintiffs claim operates as both a content generation platform and an instructional service. According to the complaint, the platform teaches users to create non-consensual AI-generated sexual content and monetize the output. The defendants allegedly maintained multiple interconnected entities, each handling a different aspect of the business model.
The plaintiffs seek court orders requiring the defendants to either cease generating non-consensual imagery or shut down their platforms entirely. They also demand financial accountability from the AI platform operators, as reported by AZ Central.
Legal Framework and Compliance Theater
Arizona updated its revenge porn statute in 2025 to explicitly cover AI-generated images, introducing the term "realistic pictorial representation" to encompass synthetic media. The amendment represents one of several state-level attempts to address gaps in existing non-consensual imagery laws as generative AI tools proliferate.
The defendants appear to have implemented only minimal compliance measures after the federal TAKE IT DOWN Act, which requires platforms to honor removal requests for non-consensual intimate imagery, became law in 2025. Court documents indicate they created a "take it down" request page but continued generating non-consensual imagery, suggesting a perfunctory approach to legal requirements rather than substantive policy changes.
This pattern of surface-level compliance while maintaining core operations reflects a broader challenge in AI governance. Companies often implement just enough process to claim regulatory adherence while preserving their underlying business models.
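To make that distinction concrete, the sketch below contrasts the two approaches in Python. Everything here is a hypothetical illustration with invented names; none of it comes from the court filings.

```python
# Hypothetical sketch: takedown-page-only compliance vs. a generation-time
# control. All names, structures, and logic are illustrative assumptions.

takedown_registry: set[str] = set()   # subjects with approved removal requests
hosted_outputs: dict[str, str] = {}   # published URL -> depicted subject ID

def handle_takedown_request(subject_id: str) -> int:
    """Perfunctory approach: delete already-published files and stop there."""
    takedown_registry.add(subject_id)
    matches = [url for url, sid in hosted_outputs.items() if sid == subject_id]
    for url in matches:
        del hosted_outputs[url]       # removal happens only after the harm
    return len(matches)

def generate_image(prompt: str, subject_id: str) -> str:
    """Substantive approach: enforce the same registry before generating."""
    if subject_id in takedown_registry:
        raise PermissionError("subject has opted out; generation refused")
    url = f"https://example.invalid/{subject_id}/{abs(hash(prompt))}.png"
    hosted_outputs[url] = subject_id  # stand-in for an actual model call
    return url
```

A platform that implements only the first function can truthfully claim it honors removal requests while its generation pipeline keeps producing new imagery of the same person.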
Section 230 Shield Expansion
A complicating factor in the case involves statutory language that states, including Arizona, have begun incorporating into their laws. These provisions create blanket exemptions for "interactive computer services," terminology borrowed directly from Section 230 of the Communications Decency Act. The federal law shields platforms from liability for user-generated content, but its application to AI-generated synthetic media remains legally untested at scale.
The inclusion of Section 230 language in state revenge porn statutes could create enforcement challenges. Platform operators might argue their services qualify for these protections, even when they actively facilitate the creation of non-consensual content rather than merely hosting user uploads.
More broadly, the case represents a critical test of how existing internet law frameworks apply to AI systems that generate rather than simply distribute content. The distinction between passive hosting and active generation may prove legally significant as courts work through these novel applications.
Historical Context and Industry Response
We have seen this pattern before, when the early commercial internet enabled new forms of non-consensual content distribution in the late 1990s and early 2000s. Then, as now, legal frameworks lagged behind technological capabilities, leaving victims with limited recourse while platforms claimed safe harbor protections. The key difference today lies in the synthetic nature of the content and the automated scale at which it can be produced.
The case highlights the tension between AI democratization and harm prevention. Tools that lower barriers to content creation inevitably enable both legitimate use cases and malicious applications. Unlike previous waves of platform liability cases, this lawsuit targets not just distribution but the fundamental generation capabilities of AI systems.
Legal Representation and Broader Strategy
The plaintiffs are represented by Nick Brand of the Donlon Group and Cristina Perez Hesano, managing partner of Perez Law Group, according to the firm. The legal team's approach suggests a strategy targeting the entire ecosystem rather than individual bad actors, naming both the platform operators and the underlying AI service providers.
This comprehensive defendant list may reflect lessons learned from earlier platform liability cases, where shutting down one service often resulted in operations simply migrating to new entities or jurisdictions. By targeting the technical infrastructure providers alongside the consumer-facing platforms, the plaintiffs aim to disrupt the entire supply chain.
The case also tests whether courts will treat AI-generated content differently from traditional user-uploaded material. If successful, it could establish precedent for holding AI system operators directly liable for harmful outputs, rather than relying solely on downstream platform moderation.
Technical and Legal Precedent
The outcome will likely influence how AI companies structure their services and implement safeguards. Current industry practice varies widely, from basic content filtering to more sophisticated detection systems, but no standard approaches have emerged for preventing non-consensual synthetic media generation.
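For illustration, a layered check along those lines might look like the following sketch. The keyword list, hash-based opt-out registry, and classifier stub are all assumptions for the example; as noted above, no industry-standard pipeline exists.

```python
import hashlib

# Hypothetical layered safeguard for a generation endpoint. Each layer and
# threshold here is an illustrative assumption, not an established standard.

BLOCKED_TERMS = {"nude", "explicit"}   # trivial keyword layer
opt_out_hashes: set[str] = set()       # fingerprints of opted-out reference photos

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash; real systems would match near-duplicates,
    # not just exact bytes.
    return hashlib.sha256(image_bytes).hexdigest()

def sexual_content_score(prompt: str) -> float:
    # Stand-in for an ML classifier returning a risk score in [0, 1].
    return 1.0 if any(t in prompt.lower() for t in BLOCKED_TERMS) else 0.0

def check_generation_request(prompt: str, reference_image: bytes | None) -> None:
    """Raise before generation if the request trips any safeguard layer."""
    if reference_image is not None:
        if fingerprint(reference_image) in opt_out_hashes:
            raise PermissionError("reference image matches an opt-out record")
        if sexual_content_score(prompt) > 0.5:
            raise PermissionError("sexualized prompt paired with a real person's photo")
```

Each layer is easy to evade in isolation (keyword lists miss synonyms, exact hashes miss cropped or edited photos), which is part of why no single approach has become standard.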
For enterprise AI providers, the case underscores the importance of robust acceptable use policies and technical controls. The involvement of infrastructure companies like FAL suggests liability may extend beyond consumer-facing applications to underlying compute and model hosting services.
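At the infrastructure layer, such a control might be a deployment gate rather than a content filter. The sketch below is a hypothetical illustration; the fields and policy rule are assumptions and do not describe FAL's actual practices.

```python
from dataclasses import dataclass

# Hypothetical acceptable-use gate for a model-hosting provider. The fields
# and the policy rule are illustrative assumptions only.

@dataclass
class ModelDeployment:
    model_id: str
    customer_id: str
    generates_images: bool
    identity_conditioned: bool   # e.g., fine-tuned to depict a specific person
    consent_attestation: bool    # customer attests the depicted person consented

def aup_allows(deployment: ModelDeployment) -> bool:
    """Refuse to host person-specific image models without a consent attestation."""
    if deployment.generates_images and deployment.identity_conditioned:
        return deployment.consent_attestation
    return True
```

Gating at deployment time moves enforcement upstream of individual prompts, which matters if liability extends to compute and hosting services as the complaint suggests.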
The legal arguments developed in this case will also inform federal AI regulation efforts. As Congress and regulatory agencies consider comprehensive AI governance frameworks, non-consensual synthetic media represents one of the clearest examples of AI-enabled harm requiring immediate policy attention.