Instagram has announced a new feature that will alert parents when their teenage children repeatedly search for content related to self-harm or suicide within a short period. The move comes as governments face mounting pressure to follow Australia’s ban on social media use by those under 16.
Instagram, owned by Meta Platforms Inc., said it will send alerts to parents who have enabled supervision settings if their teens attempt to access self-harm or suicide-related material. The notifications begin rolling out next week for parents in Canada, the United States, Britain, and Australia.
The platform framed the alerts as part of its commitment to protecting teens from harmful content, pointing to its strict policies against any material that promotes or glorifies self-harm or suicide. Instagram already blocks such searches and redirects users to support resources.
Governments are increasingly focused on safeguarding children online, with concern intensifying after the AI chatbot Grok was used to create non-consensual sexualized images. Britain and Australia have already taken steps to protect minors online, and Spain, Greece, and Slovenia are exploring measures to restrict minors’ access to social media.
In the UK, efforts to keep children off pornography websites have raised privacy concerns for adults and sparked tensions with the US over the limits of free speech and regulation. Instagram places users under 16 into “teen accounts,” which require parental consent to change settings; parents can also enable additional monitoring features with their teenager’s agreement, shielding young users from sensitive and inappropriate content.
