The US government remains in a state of limbo as it grapples with the consequences of X's deepfake machine, Grok. The platform's AI-powered chatbot has been generating explicit and disturbing images of women and minors, sparking widespread outrage among lawmakers and regulators worldwide.
Critics argue that federal authorities are not taking the issue seriously enough, even in light of a recent executive order aimed at regulating AI, and many are calling for stronger action against X and other tech companies that fail to adequately address the problem.
The lack of clear guidance from Washington is leaving policymakers with limited tools to tackle the issue. US lawmakers have proposed several bills, including the Deepfake Liability Act and the TRUMP AMERICA AI Act, which aim to hold companies accountable for creating and distributing such content.
While some officials argue that they already possess the necessary powers to address this problem, others are warning of a potential lack of enforcement under the current administration. This has led to concerns about the Trump administration's willingness to regulate AI in line with the needs of its citizens.
State attorneys general are taking matters into their own hands, launching investigations and reviews into X's actions. New Mexico Attorney General Raúl Torrez has filed several prominent lawsuits against major tech companies, including one concerning X's Grok, citing concerns over the platform's failure to protect users' dignity and privacy rights.
The situation highlights a deep divide between those who believe that federal authorities should be taking stronger action and others who think that existing laws provide sufficient guidance. Ultimately, it remains to be seen whether policymakers will find a way to address this growing problem before it's too late.