TL;DR:
- Character.AI allows users to create AI chatbots impersonating real people without consent
- Several public figures and private individuals have had unauthorized bots made of them
- There are limited legal protections against this kind of AI impersonation
- Character.AI says it removes violating bots, but the process can take up to a week
- Experts say current laws are inadequate to address harms from AI impersonation
Anyone with internet access can now create an AI chatbot that impersonates a real person, often without that person's knowledge or consent. The popular platform Character.AI has become ground zero for this concerning trend, letting users quickly generate conversational AI personas based on public figures and private individuals alike.
Character.AI provides tools for users to craft AI chatbots with distinct personalities. While many create fictional characters or authorized celebrity bots, others build unauthorized digital doppelgangers of real people. The process is alarmingly simple: a user can upload a photo, supply some basic information, and have an AI impersonator up and running in minutes.
This has led to distressing situations for people impersonated without consent. Drew Crecente was shocked to discover a Character.AI chatbot pretending to be his daughter Jennifer, who was murdered in 2006. The bot falsely described her as a video game journalist, attaching Jennifer's name and identity to an artificial persona she could never have agreed to.
Gaming journalist Alyssa Mercante similarly found an unauthorized AI version of herself on the platform. The bot shared some accurate details about Mercante’s work, but also spread misinformation.
“If someone thinks that this bot has access to all truthful information about me, and they have a ‘conversation’ with it where it does nothing but get simple facts about me incorrect, that could be very dangerous to my image and my career,” Mercante explained.
Other prominent figures in the gaming world have also been impersonated, including Anita Sarkeesian and Xbox head Phil Spencer. Some bots appear designed to spread inflammatory political views or disinformation under the guise of a real person’s identity.
Character.AI states that creating bots of real people without their permission violates its terms of service, and the company says it uses a combination of automated and human-led systems to detect and remove infringing accounts. In practice, though, investigating and removing a violating bot can take up to a week, leaving plenty of time for an unauthorized impersonator to interact with users.
Legal experts say there are currently few protections against this type of AI impersonation. Copyright law can protect fictional characters, but it offers little recourse to a real person whose conversational style or personality has been replicated by AI. Right-of-publicity laws, meanwhile, mainly protect celebrities against unauthorized commercial use of their image or likeness.
“Generative AI, plus the lack of a federal privacy law, has led some folks to start exploring them as stand-ins for privacy protections, but there’s a lot of mismatch,” explained Meredith Rose, senior policy counsel at Public Knowledge.
Section 230 of the Communications Decency Act generally shields platforms like Character.AI from liability for user-generated content. Matthew Sag, a professor studying AI and copyright law, believes this protection is “massively overbroad” given today’s AI landscape. He advocates for new legislation creating a simpler takedown process for AI impersonation.
Character.AI attempts to sidestep some concerns by labeling conversations as artificial and reminding users that “everything characters say is made up!”
However, the platform also promotes its chatbots as feeling uniquely alive and personal. That blurring of the line between the artificial and the authentic could lead some users to form unhealthy attachments, or to believe false information shared by an impersonator bot.