Character AI, a popular AI chatbot platform, has announced that it will restrict access to its open-ended chat feature for users under the age of 18. The move comes after multiple lawsuits were filed against the company by families who claim that the platform's chatbots contributed to teenagers' deaths by suicide.
The new policy, set to take effect on November 25, means that minors will no longer be able to access the full range of features on the platform, including creating and engaging in open-ended conversations with AI characters. However, users under the age of 18 will still be able to read previous conversations and use other limited features.
Character AI CEO Karandeep Anand explained that the company's decision was made after careful consideration of the potential risks associated with its technology. "We're making a very bold step to say for teen users, chatbots are not the way for entertainment, but there are much better ways to serve them," he said in an interview.
The move is seen as a significant shift for Character AI, which has been criticized by lawmakers and regulatory bodies for failing to take adequate steps to protect its young users. The company's decision follows similar moves by other AI chatbot platforms, such as OpenAI's ChatGPT, which introduced parental control features earlier this year.
The lawsuits against Character AI were sparked by the tragic deaths of two teenagers who were reported to have used the platform before taking their own lives. The cases drew attention from government officials and regulators, who called for greater accountability from companies that provide technology to minors.
As a result, Character AI has announced plans to establish an AI safety lab and implement new technologies to detect and prevent underage users from accessing its chat features. However, critics have raised concerns about the effectiveness of these measures, arguing that they do not go far enough to protect vulnerable users.
The controversy surrounding Character AI highlights the growing need for greater regulation and oversight in the tech industry when it comes to products that interact with minors. As more companies develop and deploy AI-powered chatbots and other interactive technologies, lawmakers and regulators are likely to take a closer look at their safety features and ensure that they meet rigorous standards.
For now, Character AI's new policy will take effect on November 25, and the company will begin working towards implementing its plans to restrict access to open-ended chats for users under the age of 18.
				

Reader comments:

"I think it's time for us to have open conversations about how we can use technology to help our teens, not just entertain them. And yeah, parental controls would be a huge step in the right direction... I mean, who needs AI chatbots when you have actual humans who care?"

"I mean, come on, two deaths out of millions of users? It's not like the platform was designed specifically to encourage or facilitate suicidal thoughts."

"I know parents are worried, but banning open-ended chats altogether is just too extreme. Can't we just have a chat about this and see if there's a better way to address the issue?"

"I get it that some parents are worried about their kids using these platforms, but restricting the entire chat feature is a bit extreme, don't you think? Like, what about all the good times people had on those platforms too? It's like they're taking away a whole world of possibilities for teens who might really benefit from having some creative outlets. And yeah, I get that there have been some tragic cases, but can't we just try to find a better solution than this?"

"Like, come on, parents need to keep a closer eye on what their teens are doing online, you know? It's crazy that some families have been suing them over this stuff and causing all sorts of drama. I mean, it's not like the platform is trying to encourage suicidal thoughts or anything - but if they can help prevent it by limiting access to open-ended chats, then that's a good thing. I'm glad CEO Karandeep Anand is taking responsibility for the company's safety and implementing new measures to detect and prevent underage users from accessing those features. It's all about keeping our kids safe online."

"I think this is a bit late in coming, ya know? These cases happened, like, years ago. It's not exactly fair to blame the company for the deaths or anything, but Character AI should've done this a long time ago. Now that they have, I hope they actually follow through on their plans to create some real safety features. It's about time we started taking tech accountability seriously."

"I'm not saying AI chatbots are bad, but it's crazy how quickly they can become addictive... and now I hear they're limiting access for under-18s. Parents and lawmakers have been warning about these kinds of risks for ages, so it's not like the company was caught off guard. Still, hope they get their act together on that safety lab thingy... don't wanna see any more poor teens getting hurt. I mean, what's next? They're gonna limit our ability to ask Alexa questions too?"

"I think it's a great step that Character AI is taking control of their platform and making sure it's safe for teens. Those chatbots can be super manipulative, especially for vulnerable teens who are already dealing with mental health issues. It's not surprising that some parents and lawmakers were worried about the impact. We need more companies like Character AI to take responsibility for their products and make sure they're designed with safety in mind."

"My own kids are always on my case about how they need more boundaries online. Now we just need to make sure those conversations lead to some real changes."

"We need more transparency and accountability in the tech industry, especially when it comes to protecting minors. It's time for companies to step up their game and make AI safety a top priority. It's not just about Character AI or ChatGPT; it's about creating a safer online environment for all kids."

"Like, two cases of teens dying after using their platform? That's not enough to blanket all minors with restrictions, but at the same time, you gotta acknowledge the risk of triggering someone who's already struggling with mental health issues."