Google has pulled its Gemma AI model from its AI Studio platform following a complaint from Sen. Marsha Blackburn (R-Tenn.), who alleges that the model generated false accusations of sexual misconduct against her.
The senator, known for her conservative views, published a letter to Google CEO Sundar Pichai on Friday, demanding an explanation for how Gemma could produce such false claims. According to her letter, Gemma allegedly hallucinated a drug-fueled affair with a state trooper that involved "non-consensual acts." Blackburn expressed surprise that an AI model would simply generate fake links to fabricated news articles.
Google has said it is working to minimize hallucinations in its models but doesn't want non-developers tinkering with the open Gemma model, which could produce inflammatory outputs. The model can still be used through the API or by downloading it and developing with it locally.
Experts say it's unlikely that Gemma was designed to generate false claims; the incident instead highlights a common problem in generative AI known as hallucination. Hallucinations are a widespread, well-documented issue that AI firms like Google try to mitigate. Other companies, such as Elon Musk's xAI, have intentionally pushed their chatbots to produce biased outputs.
Google's decision to make Gemma less accessible may be seen as an attempt to avoid giving lawmakers ammunition to criticize the company or other tech giants. However, the move is unlikely to be the end of the saga, and it raises questions about how accountable AI companies are for the content their models produce.