Meta will introduce a system on Instagram to notify parents when teenagers repeatedly search for self-harm or suicide content. Alerts will trigger after multiple searches in a short time. Meta links the feature to its Teen Account supervision tools and says it strengthens online safety for young users.
Previously, Instagram blocked harmful searches and directed teens to external support. Meta now adds direct parental notifications for extra oversight. Teen Accounts in the UK, US, Australia, and Canada will start receiving alerts next week. The company plans a global rollout in the months ahead.
Molly Rose Foundation Raises Concerns
The Molly Rose Foundation has criticized the alert system. Chief executive Andy Burrows warns that automatic notifications could have unintended consequences. He says alerts may cause panic rather than provide helpful guidance.
The foundation was established by the family of Molly Russell, who died by suicide in 2017 at age 14 after viewing self-harm and suicide content online, including on Instagram. Burrows says parents naturally want to know if their child is struggling. He argues sudden alerts could leave families shocked and unprepared for sensitive conversations.
Meta says it will attach expert resources to each alert. The company says these materials aim to guide parents through difficult discussions. Ian Russell, who chairs the foundation, questions whether these tools are enough. He says a parent receiving a notification at work could panic. Written guidance alone may not prevent immediate distress.
Experts Call for Broader Protections
Charities argue the alerts expose deeper platform risks. Ged Flynn, chief executive of Papyrus Prevention of Young Suicide, welcomes the alerts but says stronger preventive measures are needed. He says young people still encounter dangerous online content.
Flynn notes parents contact his organization daily, worried about children’s online activity. Families want platforms to prevent harmful material from appearing, not only alert them afterward.
Leanda Barrington-Leach, executive director of 5Rights Foundation, urges Meta to redesign its systems with child safety as default. Burrows cites research showing Instagram still recommends harmful content about depression, self-harm, and suicide to vulnerable teens.
He insists platforms must address systemic risks instead of shifting responsibility to parents. Meta disputes the foundation’s September report, saying it misrepresents its teen safety and parental support efforts.
Governments and Regulators Increase Pressure
Instagram designed Teen Account alerts to detect sudden changes in search behavior. Meta says the system builds on existing safety measures. The platform already hides self-harm and suicide material and blocks related searches.
Parents will receive alerts via email, text, WhatsApp, or within the app; Meta selects the delivery method based on the contact information parents have provided. The company acknowledges alerts may occasionally trigger without serious cause, but says it prefers caution when protecting young users.
Sameer Hinduja, co-director of the Cyberbullying Research Center, says alerts will naturally alarm parents. He stresses practical guidance must follow immediately. Companies cannot leave families alone with fear, and Hinduja believes Meta recognizes this responsibility.
Instagram plans to expand alerts to interactions with its AI chatbot. The company notes teens increasingly turn to artificial intelligence tools for support. Governments worldwide continue pressuring social media firms to improve child safety.
Australia has banned social media for children under 16. Spain, France, and the UK are considering similar restrictions. Regulators closely monitor how tech companies engage young users. Meta chief executive Mark Zuckerberg and Instagram head Adam Mosseri recently appeared in a US court defending the company against allegations it targeted underage users.