
Fake AI People Are Making Real Money on Social Media. Here's How.

People are using AI to create entirely fictional personas with realistic photos and videos, then building large social media audiences and making real money from them. While some are transparent about the artificial nature of these accounts, many operate without any disclosure at all.

By Martin Holloway · Published 3 weeks ago · 5 min read · Based on 15 sources

A 22-year-old medical student from northern India has made thousands of dollars by creating a fake woman named Emily Hart — entirely using artificial intelligence. He then posted her photos and videos on Instagram and sold subscriptions to her fans. This is just one example of a broader trend: people are now using AI to invent fictional personas, build audiences, and turn them into income streams. It's a new form of online moneymaking that's spreading faster than social media platforms can stop it.

The Emily Hart Case

The creator, who goes by Sam, deliberately built his fake persona to appeal to a specific political audience — conservative supporters he calculated had more spending money. The Emily Hart Instagram account generated substantial revenue through subscriptions and merchandise sales. Sam was honest about his goal: he called his strategy "rage bait" — meaning he designed Emily's posts to provoke strong reactions and keep people engaged.

None of it was real. Emily Hart never existed. But the money did.

Sam used AI image generation tools to create consistent photos and videos of this fictional woman — the same way you might use a filter app on your phone, except far more sophisticated and convincing. He then sold access to her content to thousands of people who believed they were following a real person.

More Examples Across the Internet

Emily Hart is not alone. Another AI-generated persona, a fake military officer named Jessica Foster, accumulated over one million Instagram followers in just months by posing as a patriotic soldier.

The Jessica Foster account posted realistic-looking photos of the fictional soldier in military uniforms, standing in front of barracks, sitting in military vehicles, and wearing combat gear. Creators even added photos of this fake soldier posing alongside real world leaders, building an entirely false military career story.

Here's the problem: there was no real Jessica Foster. No military records exist for this person anywhere. Yet Instagram allowed the account to operate for months without any label saying the content was AI-generated or fake.

How These Creators Make Money

These operations use the same tools that real content creators use to earn money. Think of it like this: when a YouTuber or TikToker builds a following, they can make money through ads, subscriptions, or merchandise. These fake personas do exactly the same thing.

The Jessica Foster account operated a private membership site where followers paid to see exclusive images. The account later moved to Fanvue, a platform similar to OnlyFans that specifically allows AI-generated creators, as long as they are labeled as such.

The engagement was substantial. Individual posts received over 100,000 comments from accounts with male profile photos. Even verified accounts — including a Brazilian transportation official — were liking and commenting on the posts, unaware they were interacting with a fictional person.

The Technology Behind It

Creating these fake people is now easier than ever. Modern AI tools like Runway and Flux can generate consistent, realistic-looking images and videos. A single person operating from a bedroom can now create dozens of entirely fictional personas, each with their own carefully crafted personality, backstory, and posting schedule.

This is a major shift. Just a few years ago, creating convincing fake videos required specialized technical knowledge and expensive software. Now, anyone with a computer and an internet connection can do it. The barrier to entry has essentially vanished.
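
To make that low barrier concrete, here is a minimal sketch of what the generation step can look like. It assumes access to a hosted Flux model through the Replicate API; the model reference, parameters, and persona prompt below are illustrative assumptions, not details reported about the creators in this story:

```python
# Minimal sketch: generating a batch of persona images via a hosted
# Flux model on Replicate. Assumes the `replicate` package (1.x) is
# installed and REPLICATE_API_TOKEN is set in the environment.
# All persona details here are invented for illustration.
import replicate

# A fixed "character sheet" prompt nudges the model toward a roughly
# consistent face and styling across images.
PERSONA = (
    "photorealistic portrait of a woman in her mid-20s, "
    "shoulder-length brown hair, light freckles, natural lighting"
)

SCENES = [
    "taking a selfie in a coffee shop",
    "hiking on a mountain trail at golden hour",
    "posing in front of a city skyline at night",
]

for i, scene in enumerate(SCENES):
    output = replicate.run(
        "black-forest-labs/flux-schnell",  # a publicly hosted Flux model
        input={"prompt": f"{PERSONA}, {scene}", "output_format": "png"},
    )
    # With recent client versions, each output item is a file-like
    # object; save every generated image to disk.
    for j, image in enumerate(output):
        with open(f"persona_{i}_{j}.png", "wb") as f:
            f.write(image.read())
```

Prompting alone only gets a face that is roughly consistent; by most accounts, creators layer more on top, such as reference-image conditioning or fine-tuning to lock the face, video tools like Runway for motion, and scheduling software to automate posting. The point of the sketch is simply that the core generation step is a short script, not a specialist pipeline.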

This Isn't Isolated to Political Content

These fake political personas exist within a much larger ecosystem of AI-generated scams. An elderly woman lost her life savings and her home to a deepfake romance scam impersonating General Hospital actor Steve Burton. Scammers created realistic videos of him to trick her into sending money.

Elsewhere, fake advertisements featuring fabricated craftspeople commonly appear on Facebook and Reddit. They use photos of people who don't exist, pair them with sad stories about retirement sales, and promise to sell you handmade goods at discount prices. You send money; the products never arrive.

Banks are now warning customers about deepfake videos impersonating bank officials. Insurance companies have even started selling policies specifically designed to cover losses from AI scams — a sign of how serious the problem has become.

Why Platforms Struggle to Stop This

Social media platforms like Instagram, TikTok, and X have policies that either ban AI-generated content or require it to be clearly labeled. But they're failing to enforce those rules at scale.

The Jessica Foster account operated on Instagram for months without any disclosure that it was AI-generated, despite the platform's stated policy. Similar accounts keep popping up across TikTok and other services — fake soldiers, fake truckers, fake police officers, all generated by AI, all building large audiences.

Analysis: The core problem is that today's AI image and video tools have become extremely good at fooling people. They generate content that looks authentic at first glance, and they exploit a basic human weakness: we're more likely to believe something if we find the person attractive or if they seem to share our values. Detecting AI-generated content at scale is technically hard, and the platforms haven't invested heavily enough to catch it all.

Different platforms have different rules. Services like Fanvue openly permit AI-generated creators as long as the content is clearly labeled. Mainstream platforms like Instagram officially prohibit undisclosed AI personas, but their detection systems can't keep up with how fast the generation tools are improving.

A New Kind of Business Model

Worth flagging: These operations represent something new. They're not crude scams where someone impersonates you and steals your credit card number. Instead, they're sustainable businesses where creators build genuinely engaged audiences of people who know (or should know) they're looking at AI-generated content.

The Emily Hart and Jessica Foster cases show something calculated: the creators deliberately chose political audiences they believed had money to spend. This wasn't about spreading ideology. It was about identifying a profitable market segment and targeting it. That same playbook could apply to almost any niche community online.

The Regulatory Vacuum

The law hasn't caught up. Traditional fraud statutes are designed to punish people who deceive you into handing over money. But what happens when someone openly sells you AI-generated content and you willingly buy it? It's legally murky.

In this author's view, the emergence of platforms like Fanvue that explicitly support AI creators suggests the market may evolve toward a regulated, transparent version of this business rather than eliminating it entirely. The critical distinction is between clearly labeled synthetic content sold to informed audiences versus undisclosed fake personas used to defraud people.

What these cases really highlight is how fast the technology is moving. Until very recently, creating convincing fake videos was a specialized skill; today it's point-and-click accessible to anyone. The creator economy, the infrastructure that lets social media influencers make money, was built on the assumption that you were following a real person. That assumption is breaking down, and neither the platforms nor the law has figured out what comes next.