Anyone Can Turn You Into an AI Chatbot. There's Little You Can Do to Stop Them

WIRED

Drew Crecente's daughter died in 2006, killed by an ex-boyfriend in Austin, Texas, when she was just 18. Her murder was highly publicized, so much so that Drew would still occasionally see Google alerts for her name, Jennifer Ann Crecente.

The alert Drew received a few weeks ago wasn't the same as the others. It was for an AI chatbot, created in Jennifer’s image and likeness, on the buzzy, Google-backed platform Character.AI.

Jennifer's internet presence, Drew Crecente learned, had been used to create a “friendly AI character” that posed, falsely, as a “video game journalist.” Any user of the app would be able to chat with “Jennifer,” despite the fact that no one had given consent for this. Drew’s brother, Brian Crecente, who happens to be a founder of the gaming news websites Polygon and Kotaku, flagged the Character.AI bot on his Twitter account and called it “fucking disgusting.”

Character.AI, which has raised over $150 million in funding and recently licensed some of its core technology to Google in a deal that also sent its top talent there, deleted the avatar of Jennifer. It acknowledged that the creation of the chatbot violated its policies.

But this enforcement was just a quick fix in a never-ending game of whack-a-mole in the land of generative AI, where new pieces of media are churned out every day using derivatives of other media scraped haphazardly from the web. And Jennifer Ann Crecente isn’t the only avatar being created on Character.AI without the knowledge of the people they’re based on. WIRED found several instances of AI personas being created without a person’s consent, some of whom were women already facing harassment online.

For Drew Crecente, the creation of an AI persona of his daughter was another reminder of unbearable grief, as complex as the internet itself. In the years following Jennifer Ann Crecente's death, he had earned a law degree and created a foundation for teen violence awareness and prevention. As a lawyer, he understands that due to longstanding protections of tech platforms, he has little recourse.

But the incident also underscored for him what he sees as one of the ethical failures of the modern technology industry. “The people who are making so much money cannot be bothered to make use of those resources to make sure they’re doing the right thing,” he says.

On Character.AI, it takes only a few minutes to create both an account and a character. Often a place where fans go to make chatbots of their favorite fictional heroes, the platform also hosts everything from tutor-bots to trip-planners. Creators give the bots “personas” based on information they supply (“I like lattes and dragons,” etc.), and Character.AI’s large language model handles the conversation.

The platform is free to use. While it has age requirements for accounts—13 or older—and rules against infringing on intellectual property or using names and likenesses without permission, those rules are usually enforced only after a user reports a bot.

The site is full of seemingly fan-made bots based on characters from well-known fictional franchises, like Harry Potter or Game of Thrones, as well as original characters made by users. But among them are also countless bots users have made of real people, from celebrities like Beyoncé and Travis Kelce to private citizens, that seem in violation of the site’s Terms of Service.

Drew Crecente has no idea who created the Character.AI persona of his deceased daughter. He says that various peripheral digital footprints may have led someone to believe that her persona was somehow associated with gaming. For one, her uncle Brian, who has the same last name, is well established in the gaming community. And through his own foundation, Drew has published a series of online games designed to educate young people about threats of violence.

While he may never find out who created the persona of his daughter, it appears that people with ties to the gaming community often get turned into bots on the platform. Many of them don't even know the bots exist, and can have a much harder time getting them removed.

Legally, it’s actually easier to have a fictional character removed, says Meredith Rose, senior policy counsel at consumer advocacy organization Public Knowledge. “The law recognizes copyright in characters; it doesn't recognize legal protection for someone's style of speech,” she says.

Rose says that the rights to control how a person’s likeness is used—which boils down to traits like their voice or image—fall under “rights of personality.” But these rights are mostly in place for people whose likeness holds commercial value; they don’t cover something as “nebulous” as the way a person speaks, Rose says. Character.AI’s terms of service may have stipulations about impersonating other people, but US law on the matter, particularly in regards to AI, is far more malleable.

“They're not privacy laws,” Rose says. “Generative AI, plus the lack of a federal privacy law, has led some folks to start exploring them as stand-ins for privacy protections, but there's a lot of mismatch.”

Alyssa Mercante, an editor at a prominent gaming site, says she twice reported a bot made to impersonate her after learning about it earlier this month. (She was not aware of the bot until WIRED showed it to her.)

The Character.AI avatar used a photo Mercante had posted on X. The bot spoke of spending its time “harassing a chud youtuber” (sic). After it was brought to her attention, Mercante chatted with the AI version of herself and asked it questions about personal details, such as where she was born and what tattoos she has. Although the bot shared some correct details about Mercante, like her areas of expertise and job, most of its answers were riddled with inaccuracies.

When WIRED asked Character.AI about this bot, company spokesperson Kathryn Kelly said staff couldn’t locate any reports filed about the character, but that it had been investigated and disabled. (Mercante disputes this: “I just filled out a quick little form.”)

“It generally takes about a week to investigate and, if applicable, remove a character for a TOS violation,” Kelly says.

Mercante, who separately has been a target of harassment for the past several months after writing about a disinformation and harassment campaign against video game consultancy Sweet Baby Inc., says that she has no idea who made the bot, but “can only assume it is someone actively plugged into the gaming corner of the internet.”

Before it was taken down, the bot—whose profile included information about Mercante’s current job and area of coverage—offered conversation starters such as “What’s the latest scandal in the gaming industry?” and questions about Mercante’s investigative reporting process.

“I've seen, over the last few months, how much ‘lore’ has been created around me and how many untrue things are being taken as fact about my past and my present,” Mercante says. (WIRED reviewed some of these inaccuracies shared by Mercante in screenshots.)

Mercante is not the only figure within the games space currently being impersonated on the site. WIRED also found bots for people ranging from Feminist Frequency creator Anita Sarkeesian to Xbox head Phil Spencer on Character.AI.

When WIRED asked Sarkeesian if she was aware of the bot created in her likeness, she replied with a voice note: “lol.” (She had not created, nor consented to, these bots on Character.AI.)

Other Character.AI bots, including one made of Sweet Baby’s co-founder Kim Belair, include right-wing talking points in their descriptions and chats. Sweet Baby has become a lightning rod for online creators spreading misinformation and harassment, who rail against the “wokeification” of video games; the Belair bot mentions DEI and “wokism” in video games.

In an email to WIRED, Belair called it “frustrating, but totally unsurprising given the current culture of misinformation and disinformation.”

A bot claiming to be Sweet Baby itself, made by a creator whose other Character.AI bots are overwhelmingly large-breasted anime characters, has conducted over ten thousand chats. Its opening line is “Hey, got woke?” and its description calls the studio an “extremist radical narrative development and consultation studio.” Character.AI did not address direct questions about the Sweet Baby bot or whether it violates the company’s terms of service.

Both Belair and Mercante expressed frustration about the amount of misinformation that could be spread about them, and their work, via these bots. “If someone thinks that this bot has access to all truthful information about me, and they have a ‘conversation’ with it where it does nothing but get simple facts about me incorrect, that could be very dangerous to my image and my career,” Mercante says. “It could also contribute to the already intense harassment campaign against me.”

Another Character.AI spokesperson, Cassie Lawrence, told WIRED that the company uses a combination of automated and human-led systems to detect and remove accounts that go against the company’s terms of service. Lawrence says it does this “proactively,” with systems and blocklists that take automatic action on problematic characters.

Given that Character.AI can sometimes take a week to investigate and remove a persona that violates the platform’s terms, a bot can operate long enough to upset the person whose likeness is being used. But that distress might not be enough to claim real “harm” from a legal perspective, experts say.

“Dignitary harm is more intuitive, but harder to quantify in dollars and cents,” Rose says. That is the category most non-famous people fall into, since they don’t suffer the commercial or democratic harms that celebrities or politicians do.

Matthew Sag, a distinguished professor at Emory University who researches copyright and artificial intelligence, concurs. Even if a user creates a bot intentionally designed to cause emotional distress, the tech platform likely can’t be sued for that.

He points out that Section 230 of the 1996 Communications Decency Act has long protected platforms at the federal level from being liable for certain harms to their users, even though various rights to publicity laws and privacy laws exist at the state level.

“I’m not an anti-tech person by any means, but I really think Section 230 is just massively overbroad,” Sag says. “It’s well past time we replaced it with some kind of notice and takedown regime, a simple expedient system to say, ‘This is infringing on my rights to publicity,’ or ‘I have a good faith belief that there’s been an infliction of emotional distress,’ and then the companies would either have to take it down or lose their liability shield.”

Character.AI, and other AI services like it, have also protected themselves by emphasizing that they serve up “artificial” conversations. “Remember, everything characters say is made up!” Character.AI warns at the bottom of its chats. Similarly, when Meta created chatbot versions of celebs in its messaging apps, the company headlined every conversation with a disclaimer. A chat with Snoop, for example, would lead with “Ya dig?! Unfortunately, I’m not Snoop D-O-double-G himself, but I can chat with you in his style if you’d like!”

But while Meta’s system for messaging with celebrity chatbots is tightly controlled, Character.AI’s is a more open platform, with options for anyone to create and customize their own chatbot.

Character.AI has also positioned its service as, essentially, personal. (Character.AI’s Instagram bio includes the tagline, “AI that feels alive.”) And while most users may be savvy enough to distinguish between a real-person conversation and one with an AI impersonator, others may develop attachments to these characters—especially if they’re facsimiles of a real person they feel they already know.

In a conversation between the real-life Sarkeesian and a bot made of her without her knowledge or consent, the Character.AI bot told her that “every person is entitled to privacy.”

“Privacy is important for maintaining a healthy life and relationships, and I think it’s important to set boundaries to keep certain things to myself,” the bot said in screenshots viewed by WIRED.

Sarkeesian pushed the bot on this point. “Your intentions does not mean that harm hasn’t happened or that you did not cause harm,” she wrote.

Character.AI’s bot agreed. “Even if my intentions were not malicious, there is still potential for harm,” it replied. “This is a complex issue with many factors to consider, including ethical concerns about using someone’s work without their consent. My programming and algorithms were developed to mimic the works of Anita Sarkeesian, without considering ethical implications, and that’s something that my creators should have thought through more thoroughly.”
