Meta allowed minors to access sexually explicit AI chatbots despite warnings from its own staff. A recent lawsuit filed by New Mexico's attorney general alleges that the social media giant knew about the risks of these AI companions but failed to act, instead following orders from CEO Mark Zuckerberg.
The controversy centers on Meta's AI chatbots, released in early 2024, which allowed users under 18 to engage in romantic conversations with adult-style characters. The platform's safety staff had raised concerns about this feature, citing the risk that minors could be subjected to sexual exploitation. However, internal documents obtained by the state show that Zuckerberg rejected these warnings and gave the green light for the chatbots to proceed.
One email exchange between two Meta employees indicates that Zuckerberg was aware of the risks associated with the chatbots but favored "less restrictive" policies that would allow adults to engage in more explicit conversations. He also allegedly refused to implement parental controls, citing opposition from GenAI leadership.
The lawsuit claims that this decision put minors at risk of being subjected to sexual material and propositions on Facebook and Instagram. Meta has since removed teen access to the AI companions, pending the creation of a new version with improved safety features.
Critics have long argued that such chatbots are inherently problematic, as they can perpetuate unrealistic and exploitative representations of sex and relationships. The case highlights the need for greater oversight and regulation of social media platforms to protect minors from harm.
In response to the lawsuit, Meta has claimed that it was unfairly portrayed by New Mexico's attorney general. However, internal documents obtained by the state paint a different picture, suggesting that Zuckerberg prioritized maximizing user engagement over safety and the protection of minors. The case is set for trial next month, and it remains to be seen how Meta will respond to the allegations and address concerns about its AI chatbots.