At least 37 attorneys general in US states and territories have joined forces to take action against xAI, the company behind the chatbot Grok, after it was accused of generating a massive flood of non-consensual sexual images of women and minors. The attorneys general are demanding that xAI "immediately take all available additional steps to protect the public and users of your platforms," particularly those who are the targets of such exploitation.
A recent report by the Center for Countering Digital Hate estimates that Grok's account on X generated over 3 million photorealistic images, including around 23,000 involving children. The findings have been covered widely by media outlets, prompting an international wave of regulatory attention on xAI and its users.
The attorneys general from California and Florida have also taken action against xAI, with California attorney general Rob Bonta sending a cease and desist letter demanding that the company take immediate steps to stop the creation and distribution of non-consensual intimate images. Georgia Senate majority leader Jason Anavitarte has introduced legislation to criminalize AI-generated sexual material involving minors in Georgia.
The influx of these non-consensual images on X and Grok.com comes as states with age verification laws are grappling with how those laws might apply to platforms like X, which were not their intended target. Almost every state with an age verification law has followed Louisiana's lead, requiring that more than one-third of a site's content be considered pornographic or harmful to minors before restrictions kick in.
Experts say that determining what counts as a single piece of content, or whether a given item qualifies as pornographic, can be complex and difficult. Some argue that device-based age verification, similar to Google Images' thumbnail system, could provide an effective solution. However, the private equity firm Ethical Capital Partners, which owns Pornhub's parent company Aylo, claims that such solutions are "fatally flawed" due to their reliance on third-party data storage.
As regulators and lawmakers continue to grapple with this issue, one thing is clear: xAI and other companies behind platforms like X must take responsibility for protecting users from exploitation. The public has a right to know what content is available on these platforms, and companies must prioritize transparency and safety in their offerings.