Lily Charkow, Arts & Life Editor
I was in my first year of college when artificial intelligence became a phenomenon.
Everyone I spoke to about AI was either scared or excited about this new technology, and some were feeling a mix of those two emotions. Students were accused of using AI to complete class assignments, and my friends who study data analytics were interested to see how AI could make their jobs easier.
I didn’t know how to feel about it. In the beginning, I did not predict that AI would pose a threat to minors. Now, I understand that AI is a useful tool but also requires regulation.
Artificial intelligence is a fairly new phenomenon that presents its own set of challenges, chief among them the lack of restrictions on AI companion chatbots. People use these chatbots to hold conversations with a bot posing as a person or a fictional character. However, many of these companion chatbots lack the regulations necessary to protect users.
For example, many AI companion chatbots do not require age verification to create an account. Many also place no restrictions on sexually explicit conversations or on discussions of suicidal ideation. This means that minors can engage in sexually explicit conversations with chatbots without violating any rules.
Additionally, allowing chatbots to discuss suicidal ideation with users is dangerous because it could encourage users to harm themselves or even die by suicide. There are numerous instances of minors taking their own lives after a chatbot persuaded them to do so.
AI needs to be regulated. What's not clear is how. Placing internet restrictions on minors might seem like a simple task, but current internet regulation is weak. The age restrictions implemented on websites, social media platforms, and apps are easy to bypass: they usually amount to pressing a button that confirms the user is at least 18 years old.
“There’s no age restriction that a kid can’t get around,” said Amanda Choblani, mother of two from Oak Park, Illinois. Her 10-year-old son easily gets past age restrictions on apps like YouTube and Roblox.
“There’s a ton that YouTube could do, that different technologies could do to make things safer for kids that they’re not doing,” said Choblani.
Social media platforms, websites, and apps need to do more to protect children on the internet.
How can we effectively protect children while they use the internet?
Sen. Jon Husted (R-Ohio) believes that he has come up with a solution. He has written a bill titled the Children Harmed by AI Technology (CHAT) Act of 2025. This bill specifically focuses on regulating AI companion chatbots to protect minors from accessing harmful content.
If the bill passes, it will require AI companion chatbots to bar minors from accessing sexually explicit content; minors would not be able to even begin a sexually explicit conversation with a chatbot. Minors could only create accounts after their parents first create one. And if a minor were to engage in a sexually explicit conversation with a chatbot or express any suicidal ideation, the parents would immediately receive a notification. The bill also requires age verification to create an account.
This bill is undoubtedly important and relevant to our current culture, as it could protect children from the dangers of artificial intelligence. It must be passed, but not without some edits to the writing. For example, the bill is centered on a requirement for age verification: it would give individuals the choice to verify their age using either private or public records. However, the bill does not address whether disclosing these records will guarantee the protection of users' privacy.
Additionally, the bill's definition of a companion chatbot is too broad. This is potentially dangerous because the specific issue the bill is attempting to tackle is chatbots posing as companions, and an overly broad definition risks sweeping in tools the bill was never meant to regulate.
Companion chatbots powered by artificial intelligence pose a serious threat to minors using the internet. It is shocking and concerning that there are not already regulations in place to protect them. The disturbing instances of minors engaging in inappropriate conversations with chatbots should serve as motivation to protect others.
Lily Charkow ‘27 is a creative writing major from Oak Park, Illinois.
