Digital Doppelgängers: The Tilly Problem and What Happens When AI Clocks In For Us
Another episode in my ongoing series “Arguments with Algorithms”
So Hollywood is having a collective meltdown right now, and honestly? They’re right to freak out.
Meet Tilly Norwood. She’s an actress. She’s completely AI-generated. And apparently multiple talent agents are interested in signing her. Yeah, you read that right. An AI avatar, not a real person, just pixels and algorithms, is getting Hollywood representation while actual human actors are struggling to book auditions.
SAG-AFTRA was pretty clear about its feelings: “To be clear, ‘Tilly Norwood’ is not an actor, it’s a character generated by a computer program that was trained on the work of countless professional performers — without permission or compensation.” And honestly, that’s the whole debate in one sentence.
The Tilly Problem Is Everyone’s Problem
Tilly represents a fundamental shift in how we think about work, likeness, and compensation. Because if Hollywood can replace actors with AI avatars, what’s stopping every other industry from doing the same? So many jobs, and so much of our daily lives, essentially come down to our face, likeness, voice, and presence doing the crucial work of communicating with other humans and persuading them to do stuff or buy stuff, right?
The entertainment industry is just one example.
California passed AB 2602, which prevents the unauthorized use of digital replicas of individuals’ voices or likenesses in contracts, requiring specific consent and representation during negotiations.
What does this actually mean? The law is designed to protect individuals from unfair contracts and the possibility of being replaced by their digital replica. So if a studio wants to scan your face, voice, and mannerisms to create a digital version of you, they can’t just slip it into page 47 of your contract in tiny font. You need to explicitly consent, understand what you’re consenting to, and theoretically have legal representation to negotiate fair terms.
So basically, in California at least, you can’t accidentally sign away the rights to your digital self without knowing it. The law recognizes that your likeness has value and that value belongs to YOU.
These AI avatar systems learned how to create realistic human performances by watching and analyzing real human performances. They scraped YouTube videos, movies, TV shows, and social media content to learn how humans move, speak, emote, and exist.
Think about it this way: if I wanted to learn acting, I might watch thousands of performances by talented actors. That’s fine - that’s how humans learn. But if I then used what I learned to CREATE AN EXACT REPLICA of those actors to replace them in their jobs, that would be, at minimum, an ethical violation, and arguably also a violation of the right of publicity.
Quick legal detour: there’s this thing called the “right of publicity” that basically says you own the commercial value of your own identity. Your name, your face, your voice are your property, and other people can’t use them to make money without your permission. Michael Jordan famously won $8.9 million from a grocery store chain that ran a congratulatory ad when he was inducted into the Basketball Hall of Fame - they used his name without permission, and the court said nope, that’s commercial speech, pay up.
What If Your Coworker Is Actually Just... Not There?
Okay, now let’s get to the part that keeps me up at night. What happens when AI avatars aren’t just for Hollywood anymore?
Imagine if your daily 9 a.m. standing Zoom meeting with your annoying co-worker Sarah is replaced by a standing meeting with her even more annoying AI avatar! Nodding along, occasionally chiming in with comments, repeating ‘you’re on mute,’ ‘let’s circle back,’ ‘let’s not boil the ocean here,’ and other platitudes that make you want to pull your hair out.
Sounds like a Black Mirror episode? It’s not, really. The technology already exists. And honestly, I’m not even sure how I feel about what this does to interpersonal communication between, say, a boss and an employee. It’s so creepy having to get feedback from a ‘boss’ whose system prompt told it to be empathetic.
I hate pointless meetings, don’t get me wrong. But I think what makes work tolerable, even enjoyable, is the bad jokes, the ‘FML’ Slacks, and the watercooler chats about Sarah. [LOL, do not be alarmed - I don’t work with any Sarahs.]
I suppose we could end up in a WALL-E world where we outsource all work to our avatars, but then how do we build expertise? How do we learn?
Right now, when I work with someone, I’m building trust based on our interactions. I learn their communication style, their reliability, their expertise. But if I’m actually interacting with their AI avatar half the time, what am I learning? How do I know when I’m getting the actual person versus the automated version?
What This Means for Your Digital Self
Okay, stepping back from the existential workplace stuff for a second, here’s what you should actually think about regarding your own digital likeness:
Be extremely careful about what you’re signing. If a platform or service asks for the right to create a digital replica of you, Read The Terms.
Look, I’m not anti-technology. Regular readers know I’m deep in the AI addiction zone. I think these tools are genuinely revolutionary.
But AI avatars represent something different from other AI applications. This isn’t about automating tasks or generating text. It’s about replicating human presence and potentially replacing human participation in work and society.
The Tilly controversy is important not because one AI actress is going to destroy Hollywood (she’s not), but because it’s making us confront questions we’ve been avoiding:
Who owns the value of human likeness?
How do we maintain human connection in increasingly automated workplaces?
What does it mean to collaborate, build trust, and form relationships when we’re not sure who or what we’re actually interacting with?
What do you think? Would you use an AI avatar for work? Would you want to know if your coworkers were using them? Drop a comment - I promise it’s actually me reading them, not an avatar. (For now, anyway.)