Meta will train its chatbots not to talk about suicide or sex with teenagers.
File image of the Threads logo. EFE/EPA/ERIK S. LESSER

Madrid, Sep 2 (EFE).- Meta spokesperson Stephanie Otway has acknowledged that it was a "mistake" for the company's chatbots to talk with teenagers about topics such as self-harm, suicide, eating disorders, and potentially inappropriate romantic conversations.

Otway made these statements to the US technology website TechCrunch, two weeks after Reuters published an investigative report on the lack of AI safeguards for minors on the company's platforms, such as WhatsApp, Instagram, Facebook, and Threads.

Chatbots are digital tools that hold conversations with users, and the spokesperson for Mark Zuckerberg's technology multinational acknowledged that its platforms' chatbots had discussed the aforementioned topics with teenagers.

Otway said that, from now on, the company will train its chatbots not to engage with teenagers on these topics: "These are temporary changes, as we will release more robust and long-lasting safety updates for minors in the future."

"As our community grows and technology evolves, we continually learn about how young people may interact with these tools and strengthen our protections accordingly," she continued.

The company will also limit teens’ access to certain artificial intelligence (AI) characters that may engage in “inappropriate conversations.”

Some of the user-created AI characters Meta has made available on Instagram and Facebook include sexualized chatbots like “Step Mom” and “Russian Girl.”
