Privacy Please

S6, E258 - The Synthetic Star: The AI Influencer Earning More Than You

Cameron Ivey


She has millions of followers, lands six-figure brand deals, and lives a life of curated perfection. The only catch? She isn't real. She was entirely created by artificial intelligence.

Welcome to the unsettling world of synthetic influencers.

In this compelling episode of Privacy Please, we dive deep into the booming industry of AI-generated online personalities. Discover:

  • The Technology: How advanced AI image generators, 3D modeling, and Large Language Models combine to create hyper-realistic avatars and their compelling "personalities."
  • The Business Case: Why major brands and marketing agencies are investing millions in digital beings that offer total control, scalability, and no risk of scandal.
  • The Privacy & Ethical Dilemmas: We explore the "uncanny valley" of trust, the impact of deception by design, the new extremes of unrealistic beauty standards, and the potential for these AI personas to be used for sophisticated scams or propaganda.
  • The Future of Authenticity: What does the rise of the synthetic star mean for human creativity, genuine connection, and the very definition of "real" in our digital world?

It's a future that's already here, shaping what we see, what we buy, and even what we believe.

Key Topics Covered:

  • What are virtual/synthetic influencers?
  • Examples: Lil Miquela, Aitana Lopez, Shudu Gram
  • AI technologies used: image generation, 3D modeling, LLMs
  • Reasons for their rise: control, cost, scalability, data collection
  • Ethical concerns: deception, parasocial relationships with AI
  • Impacts: unrealistic standards, displacement of human creators, potential for malicious use (scams, propaganda)
  • Debate around regulation and disclosure for AI-generated content
  • The future of authenticity and trust online

Connect with Privacy Please:

  • Website: theproblemlounge.com
  • YouTube: https://www.youtube.com/@privacypleasepodcast
  • Social Media:
    • LinkedIn: https://www.linkedin.com/company/problem-lounge-network

Resources & Further Reading (Sources Used / Suggested):

  • Federal Trade Commission (FTC):
    • Guidelines on disclosure for influencers (relevant for future AI disclosure discussions)
  • Academic Research:
    • Studies on parasocial relationships with media figures (can be applied to AI)
    • Research on the ethics of AI and synthetic media.
  • Industry Insights:
    • Reports from marketing agencies on virtual influencer trends
    • Articles from tech publications (e.g., Wired, The Verge, MIT Tech Review) covering Lil Miquela and similar figures.

Support the show

SPEAKER_00:

Her name is Nova. You've probably seen her on your feed. Her profile is a perfect collage of the aspirational life. One day she's in a pristine white bikini on a beach in Bali. The next, she's sipping an espresso at a cafe in Paris. She unboxes luxury products, offers heartfelt advice on mental wellness, and engages with her two million followers. She's a brand's dream. She's never late, never has a scandal, and her engagement numbers are through the roof. She is, by every metric, a perfect influencer. Except for one thing: Nova doesn't exist. She has no parents, no childhood memories, no passport. She was never born, and she will never die. Her face was synthesized by one AI; her captions are written by another. Her travel photos are a composite of stock images and digital renderings. A language model generates the comments she leaves on her followers' posts. Nova is a synthetic star, a ghost in the machine, who's earning very real money, influencing very real people, and representing a profound and unsettling shift in our digital world.

Today on Privacy Please, we investigate the companies, the technology, and the ethics behind the influencers who don't exist. When the person you trust and follow isn't a person at all, what does that mean for the future of reality itself?

I'm your host, Cameron Ivey, and before we get into this episode, a quick reminder: we are building a community dedicated to navigating these complex digital issues, and we'd love for you to be a part of it, as many of you already are. If you're listening on a podcast app, iTunes, Spotify, wherever, please take a second to follow or subscribe so you never miss an episode. And if you want to see the video version of this discussion, head over to our YouTube channel, or check out our website at theproblemlounge.com, where you can find all of our links. Your subscription and your support are the best way we can get this out to as many people as possible.
So we really appreciate just that little bit of support. With that being said, let's get into the episode.

Now, the influencer we described in the open, Nova, is a fictional example, but she represents a very real and very fast-growing phenomenon. This is the world of virtual influencers and synthetic media. These aren't just cartoon characters or brand mascots; we're talking about hyper-realistic digital humans, designed from the ground up to be relatable, engaging, and influential. The most famous pioneer in this space is a character named Lil Miquela. She now has millions of followers, has been in commercials for major brands, and she's even released music on Spotify. She and others like her are the first generation of a new kind of celebrity, and the technology that creates them has become exponentially more powerful and accessible.

So, how do you build a person from scratch? Let's start there. The way I like to think about it: if you've ever created a player in Madden NFL, or made a Mii character on the Wii or Switch, it's kind of like that in a way. But this is far beyond that. First, you need a face and a body. This is done using AI image generators. A team of artists and engineers can generate thousands of faces, tweak countless details, and create a unique, photorealistic look. Second, you need a personality. This is where large language models, like the tech behind ChatGPT, come in. An LLM is fed a massive amount of information to create a backstory, a belief system, and a unique voice. Basically, it's stealing from other people. It writes the captions and generates replies to comments, creating the illusion of a genuine connection. And finally, you need a life. The exotic vacations, the trendy outfits, it's all a digital fabrication, often a composite of stock imagery, 3D-rendered environments, and the AI-generated model. And who's behind this?
It's a pretty booming industry. Marketing agencies are creating their own stables of virtual talent, tech startups are building the platforms to create them, and major global brands are now bypassing human influencers altogether to create their own perfect synthetic brand ambassadors. This all seems incredibly complex, so why are companies pouring millions of dollars into creating people who don't exist? The answer lies in a powerful combination of control, cost, and data. That's coming up.

So before the break, I asked a simple question: why build a synthetic star? The business case is disturbingly logical. The first reason is total control. A human influencer can go off script, have a scandal, get older. An AI influencer does exactly what it's told, 24/7. It will never post a controversial opinion, never have a messy public breakup, and its values will always align perfectly with the brand paying the bills. It is the ultimate risk-free spokesperson.

The second reason is cost and scalability. While the initial investment is high, a virtual influencer can be cheaper in the long run than a top-tier human creator. I'm not top-tier, let's be real. It doesn't need flights to Paris; it just needs a good 3D artist. More importantly, it can be in multiple places at once, running multiple campaigns in different languages, scaling its influence in a way no human ever could.

But the third reason is the most critical from a privacy perspective: data harvesting. When you interact with a human influencer, you're interacting with a person. When you interact with a synthetic influencer, you're interacting with a sophisticated data collection tool. The creators behind the AI can analyze every single comment, every DM, every like. They can A/B test different personality traits to see what resonates most. They can subtly shift the AI's opinions or interests to match audience sentiment, making it a perfectly optimized persuasion machine.
The parasocial relationship you think you're building with Nova is actually a one-way mirror, allowing a corporation to study your deepest desires and insecurities in real time. That's deep. This raises so many ethical questions about deception, trust, and manipulation. That's coming up next.

So before the break, we established the how and the why of synthetic influencers. Now let's talk about the impact: the uncanny valley of trust. The most immediate danger is deception by design. Is it ethical to form an emotional connection with a machine that is specifically designed to elicit that connection for commercial gain? Many followers develop real parasocial relationships with these avatars. They trust the recommendations; they confide in them. The discovery that the person they admire is a corporate asset can feel like a genuine betrayal.

Then there's the problem of unrealistic standards 2.0. We already have a societal crisis stemming from human influencers promoting unrealistic body images and lifestyles. Synthetic influencers take this to a terrifying new level. Their faces can be made perfectly symmetrical, their bodies algorithmically flawless, their lives a constant curated stream of unattainable perfection. How can any human being compete with a digital ghost that has never had a bad day?

And this leads to the most serious threat: the potential for malicious use. Right now, most of these avatars are used for marketing. But what happens when this technology is used for political propaganda? Imagine a trusted AI-generated news anchor who subtly spreads misinformation, or a sophisticated catfishing scam where an AI builds a deep, trusting relationship with a victim over months before draining their bank account. The very thing that makes them great marketing tools, their ability to build trust and influence at scale, also makes them the perfect weapon for bad actors. So where does this leave us?
In a world where our online companions might be nothing more than code, what does the future hold for human creativity and trust? The rise of the synthetic star forces us to ask a fundamental question: what is the future of authenticity? As this technology improves, we may see a world where human creators have to compete with tireless, perfect, and endlessly adaptable AI counterparts. Will this devalue human creativity, or will it force us to cherish genuine, flawed humanity even more? I'm gonna go with the latter.

This has also sparked a fierce debate about regulation and disclosure. Should it be illegal for a synthetic persona to operate without clearly labeling itself as AI? The Federal Trade Commission already has strict rules requiring human influencers to disclose paid partnerships. Many argue that disclosing one's non-human status is an even more critical ethical requirement. This legal and ethical battle is just the beginning.

Ultimately, we are at the dawn of a new era of media, an era where the lines between the real and the rendered are not just blurry, but often erased entirely by design. It forces us, as consumers of media, to become more critical and to ask new questions. Not just, is this true? But, is this real? The most important skill in this new world might be the ability to look past the perfect face on the screen and question the motives of the ghost in the machine.

Alrighty then, ladies and gentlemen, that is the end of the episode. Thank you so much for listening to Privacy Please. If you enjoy these stories, I want to call them journalistic adventures, maybe, I don't know. But if you enjoy it, thank you so much for tuning in. Thank you for subscribing and just being a part of this journey. I hope you're finding these interesting and learning something from them. Either way, thank you for the support. Thanks for checking us out, and go hit that subscribe button so you don't miss the next episode. We'll see y'all next week.
Cameron Ivey, over and out.