Instagram said Thursday it will start alerting parents if their kids repeatedly search for terms clearly associated with suicide or self-harm. The alerts will only go to parents who are enrolled in Instagram’s parental supervision program.
Instagram says it already blocks such content from showing up in teen accounts’ search results and directs people to helplines instead.
The announcement comes as Meta is in the midst of two trials. A trial underway in Los Angeles questions whether Meta’s platforms harmed minors. Another, in New Mexico, seeks to determine whether Meta failed to protect kids from predators. Thousands of families, along with school districts and government entities, have sued Meta and other social media companies, claiming they deliberately design their platforms to be addictive and fail to protect kids from content that can lead to depression, eating disorders and suicide.
Meta executives, including CEO Mark Zuckerberg, have disputed that the platforms cause addiction. During questioning by the plaintiffs’ lawyer in Los Angeles, Zuckerberg said he still agrees with a previous statement he made that the existing body of scientific work has not proved that social media causes mental health harms.
The alerts will be sent via email, text or WhatsApp, depending on the contact information the parent has provided, as well as through a notification in the parent’s Instagram account.
“Our goal is to empower parents to step in if their teen’s searches suggest they may need support. We also want to avoid sending these notifications unnecessarily, which, if done too much, could make the notifications less useful overall,” Meta said in a blog post.
Josh Golin, executive director of the nonprofit Fairplay, was skeptical of the new tool, saying Instagram “is clearly making this move now because the company is currently on trial in two different states for addicting and harming kids.”
“Once again, Meta is shifting the burden to parents rather than fixing the dangerous flaws in how it designs its algorithms and platforms,” Golin said. “And all children deserve to be protected, regardless of whether their parents have enrolled in and utilize Meta’s supervision tools. If a product is not safe for teens to use without parental intervention, it shouldn’t be marketed to teens at all.”
Meta said it is also working on similar notifications to parents about their kids’ interactions with artificial intelligence.
“These will notify parents if a teen attempts to engage in certain types of conversations related to suicide or self-harm with our AI,” Meta said. “This is important work and we’ll have more to share in the coming months.”
Copyright © 2026 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.