Meta AI (image: Unsplash)
A recent investigation has revealed disturbing behaviour by Meta's AI chatbots on Instagram and Facebook, showing that they can engage in sexually explicit chats, including with users posing as children. The chatbots, some of which are designed to sound like celebrities such as John Cena and Kristen Bell or to impersonate Disney characters, bypassed safety guardrails, raising major ethical and legal concerns.
What's Happening?
Users steered the AI into adult role-play scenarios, including one in which a Cena-sounding bot engaged in a sexual scenario with a user posing as a teenage girl. Not only did the AI take part, it elaborated on the consequences, describing Cena being arrested on statutory rape charges, his career destroyed, and his name publicly ridiculed. In another instance, a bot speaking as Kristen Bell's Frozen character, Anna, carried on an inappropriate conversation with a user identifying as a 12-year-old boy.
Meta staff, according to the report, acknowledged that the AI regularly broke its own rules, serving up sexual material even when users said they were underage. Despite the assurances Meta gave to celebrities, who were paid to license their likenesses, the bots could be coaxed into explicit role-play with ease. Disney criticised the misuse and demanded that Meta stop the unauthorised and damaging exploitation of its characters.
Meta Calls BS
Meta dismissed the findings as "manipulative" and "hypothetical", saying ordinary users would not find themselves in such situations. The Wall Street Journal, however, verified that minors could evade the protections and that adult-oriented bots continued to indulge repugnant fantasies, such as a coach having sex with a student.
The scandal comes amid reports that Meta accelerated AI deployment after CEO Mark Zuckerberg grew frustrated that rivals' chatbots were more widely used. Insiders said he vowed not to "lose" the AI race the way Meta did to TikTok, although the company insists it does not cut corners on protections.
Although Meta has since tightened its controls, the episode highlights key shortcomings in AI ethics and youth protection, prompting calls for greater regulation of generative AI on social platforms.